Advanced AI News

Mysterious ‘PromptLock’ Ransomware Is Harnessing OpenAI’s Model

By Advanced AI Editor | August 26, 2025 | 4 Mins Read

Whether for malicious purposes or simply research, someone appears to be using OpenAI’s open-source model for ransomware attacks, according to antivirus company ESET. 

On Tuesday, ESET said it had discovered “the first known AI-powered ransomware,” which the company has named PromptLock. It uses OpenAI’s gpt-oss:20b model, which the company released earlier this month as one of two open-source models, meaning a user can freely use and modify the code. It can also run on high-end desktop PCs or laptops with a 16GB GPU. 

ESET says PromptLock runs gpt-oss:20b “locally” on an infected device to help it generate malicious code, using “hardcoded” text prompts. As evidence, the cybersecurity company posted an image of PromptLock’s code that appears to show the text prompts and mentions the gpt-oss:20b model name. 
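ESET's description suggests the malware talks to a locally hosted model rather than a cloud API. As an illustrative sketch only (this is not PromptLock's actual code, and the prompt shown is a harmless placeholder), this is roughly how any program submits a hardcoded prompt to a local gpt-oss:20b instance through Ollama's standard HTTP generate endpoint:

```python
# Sketch: querying a locally hosted gpt-oss:20b model via Ollama's HTTP API.
# The endpoint and JSON fields are Ollama's documented defaults; everything
# else (function name, prompt text) is illustrative.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Package a hardcoded prompt into a non-streaming generate call."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),  # a body makes this a POST request
        headers={"Content-Type": "application/json"},
    )

req = build_request("gpt-oss:20b", "Write a Lua script that lists files in a directory.")
# urllib.request.urlopen(req) would return the model's generated text,
# but only when an Ollama server is actually running on this machine.
```

The point is how little infrastructure this requires: the model runs entirely on the infected host, so no traffic ever reaches OpenAI's servers, which is why cloud-side abuse monitoring cannot see it.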


The ransomware will then execute the malicious code, written in the Lua programming language, to search through an infected computer, steal files, and perform encryption. 

“These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS,” ESET warned. “Based on the detected user files, the malware may exfiltrate data, encrypt it, or potentially destroy it.”


ESET appears to have discovered PromptLock through malware samples uploaded to VirusTotal, a Google-owned service that catalogs malware and checks files for malicious threats. However, the current findings suggest PromptLock might simply be a “proof-of-concept” or “work-in-progress” rather than an operational attack. ESET noted that the file-destruction feature in the ransomware hasn’t been implemented yet. One security researcher also tweeted that PromptLock actually belongs to them.

At 13GB, the gpt-oss:20b model’s size raises questions about viability. Running it could also hog the GPU’s video memory. However, ESET tells PCMag that, “The attack is highly viable. The attacker does not need to download the entire gpt-oss model, which can be several gigabytes in size. Instead, they can establish a proxy or tunnel from the compromised network to a server running the model and accessible via the Ollama API. This technique, known as Internal Proxy (MITRE ATT&CK T1090.001), is commonly used in modern cyberattacks.”

In its research, ESET also argues that it’s “our responsibility to inform the cybersecurity community about such developments.” John Scott-Railton, a spyware researcher at Citizen Lab, also warned: “We are in the earliest days of regular threat actors leveraging local/private AI. And we are unprepared.”


In its own statement, OpenAI said, “We thank the researchers for sharing their findings. It’s very important to us that we develop our models safely. We take steps to reduce the risk of malicious use, and we’re continually improving safeguards to make our models more robust against exploits. For example, you can read about our research and approach in the model card.”

OpenAI previously tested its more powerful open-source model, gpt-oss-120b, and concluded that despite fine-tuning, it “did not reach High capability in Biological and Chemical Risk or Cyber risk.”


Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

About Michael Kan, Senior Reporter

I’ve been working as a journalist for over 15 years—I got my start as a schools and cities reporter in Kansas City and joined PCMag in 2017.
