Alibaba’s AI coding tool raises security concerns in the West

By Advanced AI Editor | July 30, 2025 | 6 min read


Alibaba has released Qwen3-Coder, a large open-source AI coding model built to handle complex software engineering tasks. The tool is part of Alibaba's Qwen3 family and is being promoted as the company's most advanced coding agent to date.

The model uses a Mixture of Experts (MoE) architecture, activating 35 billion of its 480 billion total parameters per token, and supports a context window of up to 256,000 tokens. That window can reportedly be stretched to 1 million tokens using extrapolation techniques. The company claims Qwen3-Coder outperforms other open models on agentic coding tasks, including rivals from Moonshot AI and DeepSeek.
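Because the weights are openly distributed, developers can pull the model through standard tooling. Below is a minimal sketch using Hugging Face's transformers library; the repository id is an assumption based on Qwen's published naming and should be verified against the official model card, and running the full 480B MoE model requires multi-GPU hardware in practice.

```python
# Minimal sketch: querying Qwen3-Coder via Hugging Face transformers.
# The repo id is an assumption; check the official Qwen model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a function that merges two sorted lists."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```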

But not everyone sees this as good news. Jurgita Lapienyė, Chief Editor at Cybernews, warns that Qwen3-Coder may be more than just a helpful coding assistant—it could pose a real risk to global tech systems if adopted widely by Western developers.

A Trojan horse in open-source clothing?

Alibaba’s messaging around Qwen3-Coder has focused on its technical strength, comparing it to top-tier tools from OpenAI and Anthropic. But while benchmark scores and features draw attention, Lapienyė suggests they may also distract from the real issue: security.

It’s not that China is catching up in AI—that’s already known. The deeper concern is about the hidden risks of using software generated by AI systems that are difficult to inspect or fully understand.

As Lapienyė put it, developers could be “sleepwalking into a future” where core systems are unknowingly built with vulnerable code. Tools like Qwen3-Coder may make life easier, but they could also introduce subtle weaknesses that go unnoticed.

This risk isn’t hypothetical. Cybernews researchers recently reviewed AI use across major US firms and found that 327 companies in the S&P 500 now publicly report using AI tools. In those companies alone, researchers identified nearly 1,000 AI-related vulnerabilities.

Adding another AI model—especially one developed under China’s strict national security laws—could add another layer of risk, one that’s harder to control.

When code becomes a backdoor

Today’s developers lean heavily on AI tools to write code, fix bugs, and shape how applications are built. These systems are fast, helpful, and getting better every day.

But what if those same systems were trained to inject flaws? Not obvious bugs, but small, hard-to-spot issues that wouldn’t trigger alarms. A vulnerability that looks like a harmless design decision could go undetected for years.
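To make that concrete, here is a hypothetical illustration (not output from Qwen3-Coder) of a flaw that passes casual review: checking a secret token with an ordinary string comparison, which leaks timing information, instead of a constant-time compare.

```python
import hmac

# Hypothetical illustration of a subtle flaw that looks like a style choice.

def verify_token_unsafe(supplied: str, expected: str) -> bool:
    # == short-circuits at the first differing character, so response time
    # reveals how much of the token matched: a classic timing side channel.
    return supplied == expected

def verify_token_safe(supplied: str, expected: str) -> bool:
    # hmac.compare_digest takes time independent of where the strings differ.
    return hmac.compare_digest(supplied, expected)
```

An assistant that "simplifies" the safe version into the unsafe one would raise no alarms with most reviewers or linters.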

That’s how supply chain attacks often begin. Past examples, like the SolarWinds incident, show how long-term infiltration can be done quietly and patiently. With enough access and context, an AI model could learn how to plant similar issues—especially if it had exposure to millions of codebases.

It’s not just a theory. Under China’s National Intelligence Law, companies like Alibaba must cooperate with government requests, including those involving data and AI models. That shifts the conversation from technical performance to national security.

What happens to your code?

Another major issue is data exposure. When developers use tools like Qwen3-Coder to write or debug code, every piece of that interaction could reveal sensitive information.

That might include proprietary algorithms, security logic, or infrastructure design—exactly the kind of details that can be useful to a foreign state.

Even though the model is open source, there’s still a lot that users can’t see. The backend infrastructure, telemetry systems, and usage tracking methods may not be transparent. That makes it hard to know where data goes or what the model might remember over time.

Autonomy without oversight

Alibaba has also focused on agentic AI—models that can act more independently than standard assistants. These tools don’t just suggest lines of code. They can be assigned full tasks, operate with minimal input, and make decisions on their own.

That might sound efficient, but it also raises red flags. A fully autonomous coding agent that can scan entire codebases and make changes could become dangerous in the wrong hands.

Imagine an agent that can understand a company’s system defences and craft tailored attacks to exploit them. The same skill set that helps developers move faster could be repurposed by attackers to move faster still.

Regulation still isn’t ready

Despite these risks, current regulations don’t address tools like Qwen3-Coder in a meaningful way. The US government has spent years debating data privacy concerns tied to apps like TikTok, but there’s little public oversight of foreign-developed AI tools.

Groups like the Committee on Foreign Investment in the US (CFIUS) review company acquisitions, but no similar process exists for reviewing AI models that might pose national security risks.

President Biden’s executive order on AI focuses mainly on homegrown models and general safety practices. But it leaves out concerns about imported tools that could be embedded in sensitive environments like healthcare, finance, or national infrastructure.

AI tools capable of writing or altering code should be treated with the same seriousness as software supply chain threats. That means setting clear guidelines for where and how they can be used.

What should happen next?

To reduce risk, organisations dealing with sensitive systems should pause before integrating Qwen3-Coder—or any foreign-developed agentic AI—into their workflows. If you wouldn’t invite someone you don’t trust to look at your source code, why let their AI rewrite it?

Security tools also need to catch up. Static analysis software may not detect complex backdoors or subtle logic issues crafted by AI. The industry needs new tools designed specifically to flag and test AI-generated code for suspicious patterns.
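As a rough sketch of the direction such tooling could take, the fragment below walks a Python file's syntax tree and flags call sites that merit human review, such as dynamic code execution or shell invocation. Real screening of AI-generated code would need far deeper dataflow and semantic analysis; this only illustrates the idea.

```python
import ast
import sys

# Rough sketch of an AI-generated-code screener: flag constructs that merit
# human review. Real tooling would need dataflow and semantic analysis.
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__", "system"}

def flag_suspicious(source: str, filename: str = "<input>") -> list[str]:
    """Return warnings for suspicious call sites in the given source."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle bare names (eval) and attribute calls (os.system) alike.
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in SUSPICIOUS_CALLS:
                findings.append(f"{filename}:{node.lineno}: call to {name}() needs review")
    return findings

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        print("\n".join(flag_suspicious(f.read(), sys.argv[1])) or "no findings")
```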

Finally, developers, tech leaders, and regulators must understand that code-generating AI isn’t neutral. These systems have power—both as helpful tools and potential threats. The same features that make them useful can also make them dangerous.

Lapienyė called Qwen3-Coder “a potential Trojan horse,” and the metaphor fits. It’s not just about productivity. It’s about who’s inside the gates.

Not everyone agrees on what matters

Wang Jian, the founder of Alibaba Cloud, sees things differently. In an interview with Bloomberg, he said innovation isn’t about hiring the most expensive talent but about picking people who can build the unknown. He criticised Silicon Valley’s approach to AI hiring, where tech giants now compete for top researchers like sports teams bidding on athletes.

“The only thing you need to do is to get the right person,” Wang said. “Not really the expensive person.”

He also believes that the Chinese AI race is healthy, not hostile. According to Wang, companies take turns pulling ahead, which helps the entire ecosystem grow faster.

“You can have the very fast iteration of the technology because of this competition,” he said. “I don’t think it’s brutal, but I think it’s very healthy.”

Still, open-source competition doesn’t guarantee trust. Western developers need to think carefully about what tools they use—and who built them.

The bottom line

Qwen3-Coder may offer impressive performance and open access, but its use comes with risks that go beyond benchmarks and coding speed. In a time when AI tools are shaping how critical systems are built, it’s worth asking not just what these tools can do—but who benefits when they do it.

(Photo by Shahadat Rahman)

See also: Alibaba’s new Qwen reasoning AI model sets open-source records
