
DeepSeek—China’s Vision of World Perception

By Advanced AI Editor | July 10, 2025


By Maciej Gaca

When DeepSeek-R1 debuted at a conference in Hangzhou this February, the atmosphere was both electrifying and unsettling. There were loud cries of delight at the possibilities it opened up for programmers and companies, as well as nervousness on the stock exchanges where Western technology companies are listed. There were quiet sighs too, from experts fearing that a new information weapon might be hiding under the guise of “democratisation.” History teaches us, after all, that technology can facilitate the concentration of power just as easily as it can emancipation.

DeepSeek-R1 did not have to reach for tanks or prisons to monopolise the discussion in China. Official messages were embedded in it during the training process. As a result, instead of confronting different viewpoints, the model itself promotes a single, state-sanctioned version of history, and users receive a ready-made, contradiction-free story presented as unquestionable fact. This is a subtler process than traditional censorship, and a more engaging one, because users willingly reach for content that the model has selected according to political guidelines.

DeepSeek-R1 was met with great enthusiasm on Chinese social media. On the largest platforms, Zhihu and Weibo, computer science students and novice programmers enthusiastically described the model’s lightning-fast responses, its effectiveness at solving complex algorithmic tasks, and the impressive quality of the images it created, as evidenced by the many entries in the column series “从0到1了解DeepSeek” (“getting to know DeepSeek from 0 to 1”, a series of short Zhihu articles presenting the model’s functions and capabilities). Over time, however, the technological experiment became a sociological observation: users noticed that when asked about political events, R1 consistently avoided any reference to the Tiananmen Square protests or critical analyses of Beijing’s handling of Taiwan and Xinjiang; it reproduced only official party narratives.

The breakthrough came with safety reports, notably a study published on arXiv, “Safety Evaluation of DeepSeek Models in Chinese Contexts”, showing the model’s 100% effectiveness in simulated disinformation attacks and its near-complete rejection of content that deviated from the state line. Posts on internal educational forums explained how to “bypass” DeepSeek’s self-censorship, but accounts that distributed links to independent sources were instantly blocked. This change in mood – from admiration for the architecture and computing power to a bitter conclusion about the ideological penetration of the neural network’s weights (the numbers the model adjusts during training to better “understand” and favour certain information) – reveals that young Chinese users increasingly see DeepSeek’s “openness” not as a true democratisation of technology but as a sophisticated mechanism for maintaining a single, official vision of the world.

Ultimately, what is at stake is not just technical supremacy but the foundation of our shared cognitive space. If every powerful actor – Beijing, Washington, Brussels – introduces its own “objective” AI-generated version of history, younger generations will find themselves at a crossroads of alternative “truths”, isolated in hermetic information bubbles. Without international mechanisms of mutual accountability, transparent audits of training data, and open procedures for verifying algorithms, even the most reliable open-source projects can become vehicles for narrative tyranny. And then we are one step away from turning a historical dispute into an armed conflict, and from completely eroding trust in the very concept of information.

OpenAI’s ChatGPT, Google’s Bard and Meta’s LLaMA draw their data from a wide range of sources: from international news organisations such as CNN, AFP and Al-Jazeera, through academic repositories in many languages, to archives of rarely cited periodicals and informal discussion forums. Only after the initial training, during which the model “swallows” entire web pages, does the arduous work of fine-tuning begin: successive rounds of human evaluation, analysis of deviations from neutrality, and attempts to restore balance. Of course, it has not been possible to eliminate every extreme.

Researchers from Munich and Copenhagen have shown that ChatGPT sometimes tilts towards pro-ecological and left-libertarian narratives, while Bing Chat is slightly more favourable to the tech industry. Nevertheless, each is regularly audited – by Sweden’s FOI, Norway’s NUPI and France’s Fondation pour l’Innovation Politique – and these audits describe with surgical precision where the training data comes from and what rules govern how human evaluators rate the answers. Thanks to this, the reports can be scrutinised both by a defender of free speech and by an activist fighting discrimination, and each will find arguments to accuse a model of overrepresenting some sources or underrepresenting minority voices.

In contrast to the openness of Western solutions, DeepSeek-R1 operates “in secret” inside educational chatbots and government apps in Asia and Europe, and the effect is all the more perfidious: instead of bypassing censorship, the model reinforces it, wrapping the user in a single, uniform narrative. These are not ordinary recommendation algorithms but airtight information bubbles in which every story, news item and piece of advice must fit the official line. Eli Pariser, the American internet activist and author of The Filter Bubble, warned a decade ago that algorithms that personalise content can cut us off from opposing views. Today, when technology tempts us with the appearance of objectivity, that isolation is even more dangerous. Young internet users, fed an endless stream of TikTok or WeChat, rarely verify information. One-click answers replace critical questions, and the bubble becomes their entire world.

The Prospect Foundation, in its study “Narrative-Building Trumps Island-Building for Beijing in Sandy Cay”, warns that the competition for dominant AI models threatens to spark a real “narrative war.” Similar conclusions are drawn in a report by the Taiwan Foundation for Democracy on disinformation during the 2024 presidential election: analysts showed that algorithms driven by conflicting state data sets from China, the US and Europe are creating isolated “information islands” in which young recipients become accustomed to competing versions of reality and are less inclined to verify them.

Analysts emphasise that while local regulations may tighten requirements for a single platform or service, they will not stop the fragmentation of information if every government deploys its own AI model. The French cybersecurity agency ANSSI, in a white paper on AI published in 2023, demands transparency about the origin of training data, arguing that only a full list of sources allows users to understand what materials shape a model’s behaviour. Sweden’s SÄPO insists on multi-party audits by independent expert teams, since a thorough analysis of a model’s code and behaviour, especially on sensitive questions, can reveal hidden biases or mechanisms that filter the truth.

Both institutions also point to the need to educate the young generation in the critical reception of AI-generated content. It is worth introducing school classes devoted to “algorithmic texts”, that is, learning how models formulate their answers and how to compare them with independent sources of information. Without such preparation, society will be condemned to accept competing, isolated narratives as indisputable facts. Experience shows that every technological revolution favours the concentration of power. DeepSeek-R1, managed and paid for within Beijing’s bureaucratic structures, is today becoming a subtler tool of centralisation than traditional network censorship or blockades of media access.

When the model independently selects and edits historical narratives through its neural network weights, it builds a performative story of the state that over time comes to be regarded as “natural” reality. This is a seemingly bloodless cascade: no one shouts “Guards!” when the algorithm writes successive versions of history into the code, and society begins to live according to these predefined patterns.

Ultimately, what is at stake is no longer the fight for technological supremacy but the very foundation of our collective understanding of reality – the space in which we establish what we consider to be fact. Without clearly defined rules of accountability, mandatory audits and transparent verification criteria, even the most “open-source” models can be used to impose particular versions of the world. As Yuval Noah Harari warns (cf. 21 Lessons for the 21st Century, 2018), if we do not build mechanisms to protect against the fragmentation of truth into atoms, we will find ourselves in a world where conflicting narratives, each equally convincing, compete like feuding tribes, undermining the very meaning of debate.

In turn, Yanis Varoufakis (Another Now, 2020) reminds us that in this chaos of alternative “truths”, international solidarity is weakening. Instead of facing global challenges together, we are sinking ever deeper into isolated information bubbles. Klaus Schwab and his World Economic Forum preach the slogan of “shared responsibility” for technological advancement, but it is hard not to notice how often this serves Beijing’s centralist aspirations. Under the banner of inclusiveness, the WEF becomes a platform where authoritarian regimes, including China’s, can present their digital infrastructure as “innovation for the common good” while simultaneously reinforcing systems of mass surveillance.

If we want to avoid such a scenario, empty slogans about openness will not suffice. We need real international agreements that enforce standards for the origin of data, for model training processes and for their controlled exploitation, as well as national laws that attach tough legal consequences to the misuse of AI. Only in this way will technology cease to be a tool of particular interests and become infrastructure on which a democratic society can be built, rather than a war of narratives. —INFA


