OpenAI Bans ChatGPT Accounts Used by Russian, Iranian and Chinese Hacker Groups

By Advanced AI Bot | June 9, 2025

OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.

“The [Russian-speaking] actor used our models to assist with developing and refining Windows malware, debugging code across multiple languages, and setting up their command-and-control infrastructure,” OpenAI said in its threat intelligence report. “The actor demonstrated knowledge of Windows internals and exhibited some operational security behaviors.”

The Go-based malware campaign has been codenamed ScopeCreep by the artificial intelligence (AI) company. There is no evidence that the activity was widespread.

According to OpenAI, the threat actor used temporary email accounts to sign up for ChatGPT, using each account for a single conversation to make one incremental improvement to the malware before abandoning it and moving on to the next.

This practice of using a network of accounts to fine-tune their code highlights the adversary’s focus on operational security (OPSEC), OpenAI added.

The attackers then distributed the AI-assisted malware through a publicly available code repository that impersonated a legitimate video game crosshair overlay tool called Crosshair X. Users who ended up downloading the trojanized version of the software had their systems infected by a malware loader that would then proceed to retrieve additional payloads from an external server and execute them.

“From there, the malware was designed to initiate a multi-stage process to escalate privileges, establish stealthy persistence, notify the threat actor, and exfiltrate sensitive data while evading detection,” OpenAI said.

“The malware is designed to escalate privileges by relaunching with ShellExecuteW and attempts to evade detection by using PowerShell to programmatically exclude itself from Windows Defender, suppressing console windows, and inserting timing delays.”
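
For defenders, one practical response to the technique described in that quote is to audit Windows Defender exclusions, since entries an administrator never created can indicate exactly this kind of tampering. Below is a minimal, illustrative Go sketch of such an audit (our own illustration, not code from OpenAI’s report); it assumes the golang.org/x/sys/windows/registry package, runs only on Windows, and typically needs elevated privileges to read the key.

    package main

    import (
        "fmt"
        "log"

        "golang.org/x/sys/windows/registry"
    )

    func main() {
        // Defender stores each path exclusion as a value name under this key.
        // Reading it generally requires administrative rights.
        k, err := registry.OpenKey(registry.LOCAL_MACHINE,
            `SOFTWARE\Microsoft\Windows Defender\Exclusions\Paths`, registry.READ)
        if err != nil {
            log.Fatalf("open exclusions key: %v", err)
        }
        defer k.Close()

        names, err := k.ReadValueNames(0) // n <= 0 returns all value names
        if err != nil {
            log.Fatalf("read value names: %v", err)
        }
        if len(names) == 0 {
            fmt.Println("no Windows Defender path exclusions configured")
            return
        }
        for _, name := range names {
            // Unexpected entries here are worth investigating.
            fmt.Println("Defender path exclusion:", name)
        }
    }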

Other tactics incorporated into ScopeCreep include the use of Base64 encoding to obfuscate payloads, DLL side-loading techniques, and SOCKS5 proxies to conceal the operators’ source IP addresses.
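
Base64, it bears noting, is an encoding rather than encryption: it hides strings from casual inspection but is trivially reversible, which is why analysts can usually recover such obfuscated strings during triage. A minimal Go illustration, using a hypothetical placeholder value rather than anything from the ScopeCreep samples:

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        // A stand-in for a string an operator might want to hide from casual review.
        original := "hypothetical-c2.example.com"

        // Encoding obscures the value among a binary's static strings...
        encoded := base64.StdEncoding.EncodeToString([]byte(original))
        fmt.Println("encoded:", encoded)

        // ...but reversing it takes a single call.
        decoded, err := base64.StdEncoding.DecodeString(encoded)
        if err != nil {
            fmt.Println("decode error:", err)
            return
        }
        fmt.Println("decoded:", string(decoded))
    }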

The end goal of the malware is to harvest credentials, tokens, and cookies stored in web browsers, and exfiltrate them to the attacker. It’s also capable of sending alerts to a Telegram channel operated by the threat actors when new victims are compromised.

OpenAI noted that the threat actor asked its models to debug a Go code snippet related to an HTTPS request, and sought help with integrating the Telegram API and with using PowerShell commands via Go to modify Windows Defender settings, specifically adding antivirus exclusions.
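
Taken in isolation, requests like these resemble everyday developer queries. A routine Go HTTPS snippet of the sort described might look like the hypothetical sketch below (the URL and details are placeholders, not taken from the report); that ordinariness is part of what made the account-rotation behavior described earlier notable.

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // An ordinary HTTPS GET with a timeout; nothing about such a
        // snippet signals intent one way or the other.
        client := &http.Client{Timeout: 10 * time.Second}

        resp, err := client.Get("https://example.com/") // placeholder URL
        if err != nil {
            log.Fatalf("request failed: %v", err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatalf("read body: %v", err)
        }
        fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
    }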

The second group of ChatGPT accounts disabled by OpenAI is said to be associated with two hacking groups attributed to China: APT5 (aka Bronze Fleetwood, Keyhole Panda, Manganese, and UNC2630) and APT15 (aka Flea, Nylon Typhoon, Playful Taurus, Royal APT, and Vixen Panda).

One subset engaged with the AI chatbot on matters related to open-source research into various entities of interest and technical topics, as well as to modify scripts and troubleshoot system configurations.

“Another subset of the threat actors appeared to be attempting to engage in development of support activities including Linux system administration, software development, and infrastructure setup,” OpenAI said. “For these activities, the threat actors used our models to troubleshoot configurations, modify software, and perform research on implementation details.”

This consisted of asking for assistance with building software packages for offline deployment and for advice on configuring firewalls and name servers. The threat actors also engaged in both web and Android app development activities.

In addition, the China-linked clusters weaponized ChatGPT to work on a brute-force script that can break into FTP servers, to research the use of large language models (LLMs) to automate penetration testing, and to develop code for managing a fleet of Android devices that programmatically post or like content on social media platforms like Facebook, Instagram, TikTok, and X.

Some of the other observed malicious activity clusters that harnessed ChatGPT in nefarious ways are listed below:

  • A network, consistent with the North Korea IT worker scheme, that used OpenAI’s models to drive deceptive employment campaigns by developing materials that could likely advance its fraudulent attempts to apply for IT, software engineering, and other remote jobs around the world
  • Sneer Review, a likely China-origin activity that used OpenAI’s models to bulk generate social media posts in English, Chinese, and Urdu on topics of geopolitical relevance to the country for sharing on Facebook, Reddit, TikTok, and X
  • Operation High Five, a Philippines-origin activity that used OpenAI’s models to generate bulk volumes of short comments in English and Taglish on topics related to politics and current events in the Philippines for sharing on Facebook and TikTok
  • Operation VAGue Focus, a China-origin activity that used OpenAI’s models to generate social media posts for sharing on X while posing as journalists and geopolitical analysts, to ask questions about computer network attack and exploitation tools, and to translate emails and messages from Chinese to English as part of suspected social engineering attempts
  • Operation Helgoland Bite, a likely Russia-origin activity that used OpenAI’s models to generate Russian-language content about the 2025 German election that criticized the U.S. and NATO, for sharing on Telegram and X
  • Operation Uncle Spam, a China-origin activity that used OpenAI’s models to generate polarized social media content supporting both sides of divisive topics within U.S. political discourse for sharing on Bluesky and X
  • Storm-2035, an Iranian influence operation that used OpenAI’s models to generate short comments in English and Spanish expressing support for Latino rights, Scottish independence, Irish reunification, and Palestinian rights, and praising Iran’s military and diplomatic prowess, for sharing on X by inauthentic accounts posing as residents of the U.S., U.K., Ireland, and Venezuela
  • Operation Wrong Number, a likely Cambodian-origin activity related to China-run task scam syndicates that used OpenAI’s models to generate short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole advertising high salaries for trivial tasks such as liking social media posts

“Some of these companies operated by charging new recruits substantial joining fees, then using a portion of those funds to pay existing ‘employees’ just enough to maintain their engagement,” OpenAI’s Ben Nimmo, Albert Zhang, Sophia Farquhar, Max Murphy, and Kimo Bumanglag said. “This structure is characteristic of task scams.”
