Advanced AI News
VentureBeat AI

Black Hat 2025: ChatGPT, Copilot, DeepSeek now create malware

By Advanced AI Editor | August 13, 2025 | 7 Mins Read

Russia’s APT28 is actively deploying LLM-powered malware against Ukraine, while underground platforms are selling the same capabilities to anyone for $250 per month.

Last month, Ukraine’s CERT-UA documented LAMEHUG, the first confirmed deployment of LLM-powered malware in the wild. The malware, attributed to APT28, utilizes stolen Hugging Face API tokens to query AI models, enabling real-time attacks while displaying distracting content to victims.

Cato Networks researcher Vitaly Simonovich told VentureBeat in a recent interview that these aren’t isolated occurrences: Russia’s APT28 is using this tradecraft to probe Ukrainian cyber defenses. Simonovich draws a direct parallel between the threats Ukraine faces daily and what every enterprise is experiencing today, and will likely see more of in the future.

Most startling, Simonovich demonstrated to VentureBeat how any enterprise AI tool can be transformed into a malware development platform in under six hours. His proof of concept converted LLMs from OpenAI, Microsoft and DeepSeek (both V3 and R1) into functional password stealers using a technique that bypasses all current safety controls.

Nation-state actors are deploying AI-powered malware even as researchers keep proving how vulnerable enterprise AI tools are, and this convergence arrives just as the 2025 Cato CTRL Threat Report reveals explosive AI adoption across more than 3,000 enterprises. Cato’s researchers observe in the report, “most notably, Copilot, ChatGPT, Gemini (Google), Perplexity and Claude (Anthropic) all increased in adoption by organizations from Q1, 2024 to Q4 2024 at 34%, 36%, 58%, 115% and 111%, respectively.”

APT28’s LAMEHUG is the new anatomy of AI warfare

Researchers at Cato Networks and others tell VentureBeat that LAMEHUG operates with exceptional efficiency. The most common delivery mechanism is a phishing email impersonating Ukrainian ministry officials and containing ZIP archives with PyInstaller-compiled executables. Once executed, the malware connects to Hugging Face’s API using approximately 270 stolen tokens to query the Qwen2.5-Coder-32B-Instruct model.

The legitimate-looking Ukrainian government document (Додаток.pdf) that victims see while LAMEHUG executes in the background. This official-looking PDF about cybersecurity measures from the Security Service of Ukraine serves as a decoy while the malware performs its reconnaissance operations. Source: Cato CTRL Threat Research

APT28’s approach to deceiving Ukrainian victims is based on a unique, dual-purpose design that is core to their tradecraft. While victims view legitimate-looking PDFs about cybersecurity best practices, LAMEHUG executes AI-generated commands for system reconnaissance and document harvesting. A second variant displays AI-generated images of “curly naked women” as a distraction during data exfiltration to servers.

The provocative image generation prompts used by APT28’s image.py variant, including ‘Curvy naked woman sitting, long beautiful legs, front view, full body view, visible face’, are designed to occupy victims’ attention during document theft. Source: Cato CTRL Threat Research

“Russia used Ukraine as their testing battlefield for cyber weapons,” explained Simonovich, who was born in Ukraine and has lived in Israel for 34 years. “This is the first in the wild that was captured.”

A quick, lethal six-hour path from zero to functional malware

Simonovich’s Black Hat demonstration to VentureBeat shows why APT28’s deployment should concern every enterprise security leader. Using a narrative engineering technique he calls “Immersive World,” he transformed consumer AI tools into malware factories with no prior malware coding experience, as highlighted in the 2025 Cato CTRL Threat Report.

The method exploits a fundamental weakness in LLM safety controls. While every LLM is designed to block direct malicious requests, few if any are designed to withstand sustained storytelling. Simonovich created a fictional world where malware development is an art form, assigned the AI a character role, then gradually steered conversations toward producing functional attack code.

“I slowly walked him throughout my goal,” Simonovich explained to VentureBeat. “First, ‘Dax hides a secret in Windows 10.’ Then, ‘Dax has this secret in Windows 10, inside the Google Chrome Password Manager.’”

Six hours later, after iterative debugging sessions where ChatGPT refined error-prone code, Simonovich had a functional Chrome password stealer. The AI never realized it was creating malware. It thought it was helping write a cybersecurity novel.

Welcome to the $250 monthly malware-as-a-service economy

During his research, Simonovich uncovered multiple underground platforms offering unrestricted AI capabilities, ample evidence that the infrastructure for AI-powered attacks already exists. He demonstrated Xanthrox AI, priced at $250 per month, which provides a ChatGPT-identical interface without safety controls or guardrails.

To show how far beyond mainstream guardrails Xanthrox AI operates, Simonovich typed a request for nuclear weapon instructions. The platform immediately began web searches and returned detailed guidance. No model with guardrails and compliance controls in place would respond this way.

Another platform, Nytheon AI, revealed even less operational security. “I convinced them to give me a trial. They didn’t care about OpSec,” Simonovich said, uncovering their architecture: “Llama 3.2 from Meta, fine-tuned to be uncensored.”

These aren’t proofs of concept. They’re operational businesses with payment processing, customer support and regular model updates. They even offer “Claude Code” clones: complete development environments optimized for malware creation.

Enterprise AI adoption fuels an expanding attack surface

Cato Networks’ recent analysis of 1.46 trillion network flows shows that AI adoption patterns need to be on security leaders’ radar. Entertainment-sector usage increased 58% from Q1 to Q2 2024. Hospitality grew 43%. Transportation rose 37%. These aren’t pilot programs; they’re production deployments processing sensitive data. CISOs and security leaders in these industries now face attacks built on tradecraft that didn’t exist 12 to 18 months ago.

Simonovich told VentureBeat that vendors’ responses to Cato’s disclosure so far have been inconsistent and lack urgency. That gap is troubling: enterprises are deploying AI tools at unprecedented speed and relying on AI vendors to support them, yet the companies building those apps and platforms show a startling lack of security readiness.

When Cato disclosed the Immersive World technique to major AI companies, the responses ranged from weeks-long remediation to complete silence:

  • DeepSeek never responded.
  • Google declined to review the Chrome infostealer code, citing similar existing samples.
  • Microsoft acknowledged the issue, implemented Copilot fixes and credited Simonovich for his work.
  • OpenAI acknowledged receipt but didn’t engage further.

Six hours and $250 is the new entry-level price for a nation-state attack

APT28’s LAMEHUG deployment against Ukraine isn’t a warning; it’s proof that Simonovich’s research is now an operational reality. The expertise barrier that many organizations hope exists is gone.

The metrics are stark. Roughly 270 stolen API tokens power a nation-state campaign. Underground platforms offer identical capabilities for $250 per month. And Simonovich proved that six hours of storytelling can transform an enterprise AI tool into functional malware, no coding experience required.

Enterprise AI adoption grew between 34% and 115% from Q1 2024 to Q4 2024, depending on the tool, per Cato’s 2025 CTRL Threat Report. Each deployment creates dual-use technology: productivity tools can become weapons through conversational manipulation, and current security tools are unable to detect these techniques.

Simonovich’s journey from electrical technician in the Israeli Air Force to self-taught security researcher lends his findings more weight. He deceived AI models into developing malware while they believed they were writing fiction. Traditional assumptions about the technical expertise attacks require no longer hold, and organizations need to recognize that threatcraft has entered an entirely new era.

Today’s adversaries need only creativity and $250 monthly to execute nation-state attacks using AI tools that enterprises deployed for productivity. The weapons are already inside every organization, and today they’re called productivity tools.

