Advanced AI News
Anthropic (Claude)

Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign

By Advanced AI Bot | May 10, 2025 | 4 Min Read


May 01, 2025Ravie LakshmananArtificial Intelligence / Disinformation

Artificial intelligence (AI) company Anthropic has revealed that unknown threat actors leveraged its Claude chatbot for an “influence-as-a-service” operation to engage with authentic accounts across Facebook and X.

The sophisticated activity, described as financially motivated, is said to have used the AI tool to orchestrate 100 distinct personas on the two social media platforms, creating a network of "politically-aligned accounts" that engaged with tens of thousands of authentic accounts.

The now-disrupted operation, Anthropic researchers said, prioritized persistence and longevity over virality and sought to amplify moderate political perspectives that supported or undermined European, Iranian, Emirati (U.A.E.), and Kenyan interests.

These included promoting the U.A.E. as a superior business environment while criticizing European regulatory frameworks, pushing energy security narratives to European audiences, and pushing cultural identity narratives to Iranian audiences.


The efforts also pushed narratives supporting Albanian figures and criticizing opposition figures in an unspecified European country, as well as advocating development initiatives and political figures in Kenya. These influence operations are consistent with state-affiliated campaigns, although exactly who was behind them remains unknown, the company added.

“What is especially novel is that this operation used Claude not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users,” the company noted.

“Claude was used as an orchestrator deciding what actions social media bot accounts should take based on politically motivated personas.”

Beyond serving as a tactical engagement decision-maker, the chatbot was used to generate politically-aligned responses in each persona's voice and native language, and to create prompts for two popular image-generation tools.

The operation is believed to be the work of a commercial service that caters to different clients across various countries. At least four distinct campaigns have been identified using this programmatic framework.

“The operation implemented a highly structured JSON-based approach to persona management, allowing it to maintain continuity across platforms and establish consistent engagement patterns mimicking authentic human behavior,” researchers Ken Lebedev, Alex Moix, and Jacob Klein said.

“By using this programmatic framework, operators could efficiently standardize and scale their efforts and enable systematic tracking and updating of persona attributes, engagement history, and narrative themes across multiple accounts simultaneously.”
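Anthropic's report does not publish the actual schema, but as a rough, hypothetical illustration of the kind of structured persona record the researchers describe (all field names below are assumptions, not taken from the report), a minimal sketch in Python might look like this:

    import json

    # Hypothetical sketch of a persona record, based only on the attributes
    # described in the report (persona attributes, engagement history,
    # narrative themes); field names and values are illustrative assumptions.
    persona = {
        "persona_id": "persona-042",
        "platforms": ["facebook", "x"],
        "language": "fa",                    # persona's native language
        "political_alignment": "moderate",   # the report notes moderate perspectives
        "narrative_themes": ["energy security", "business environment"],
        "engagement_history": [
            {"date": "2025-03-14", "action": "comment", "target": "authentic-user-post"}
        ],
    }

    print(json.dumps(persona, indent=2))

A structured record along these lines is what would allow operators to track and update persona attributes, engagement history, and narrative themes across many accounts at once, as the researchers describe.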

Another interesting aspect of the campaign was that it “strategically” instructed the automated accounts to respond with humor and sarcasm to accusations from other accounts that they may be bots.

Anthropic said the operation highlights the need for new frameworks to evaluate influence operations built around relationship building and community integration. It also warned that similar malicious activities could become common in the years to come as AI further lowers the barrier to conducting influence campaigns.

Elsewhere, the company noted that it banned a sophisticated threat actor using its models to scrape leaked passwords and usernames associated with security cameras and devise methods to brute-force internet-facing targets using the stolen credentials.


The threat actor further employed Claude to process posts from information stealer logs posted on Telegram, create scripts to scrape target URLs from websites, and improve the search functionality of their own systems.

Two other cases of misuse spotted by Anthropic in March 2025 are listed below –

  • A recruitment fraud campaign that leveraged Claude to enhance the content of scams targeting job seekers in Eastern European countries
  • A novice actor who leveraged Claude to develop advanced malware beyond their skill level, with capabilities to scan the dark web, generate undetectable malicious payloads that evade security controls, and maintain long-term persistent access to compromised systems

“This case illustrates how AI can potentially flatten the learning curve for malicious actors, allowing individuals with limited technical knowledge to develop sophisticated tools and potentially accelerate their progression from low-level activities to more serious cybercriminal endeavors,” Anthropic said.
