A new kind of cybercrime has emerged in the online job market, and it is powered by Claude AI. Behind what looks like normal hiring, investigators have uncovered organized systems of fraudulent remote jobs, run by North Korean workers who use artificial intelligence (AI) to fake skills, pass interviews, and keep high-paying roles.
According to Anthropic’s latest threat intelligence report, these jobs are organized by the state to bring in money, helping North Korea bypass international sanctions. The money flows directly into national programs, including the country’s weapons development.
By lowering the barrier to complex technical work, Claude AI allows untrained workers to act like skilled engineers, turning simple job platforms into pipelines for global fraud.
This article explores how these schemes work, why North Korea is using them, and what investigators are doing to stop them.
Key Takeaways
North Korean workers rely on Claude AI to build fake profiles, pass job interviews, and keep roles they cannot perform on their own.
The purpose of these fraudulent remote jobs is clear: to evade sanctions and fund national weapons programs.
Most daily tasks, such as coding or writing work messages, are completed with AI support.
Accounts tied to the schemes were removed, and new monitoring tools were introduced, yet remote job scams continue to threaten companies.
Reports in malware news reveal that North Korea’s AI-driven job fraud connects closely with larger cyber threats, including stolen data and ransomware attacks.
How North Korean Operatives Use Claude AI for Fraudulent Remote Jobs
According to a threat intelligence report published by Anthropic, Claude AI has become a central tool in North Korea’s scheme of fraudulent remote jobs.
These operations are set up to bring money back to the state by placing remote workers inside foreign technology companies. Much of this income is then used to support the country’s weapons programs, which face limits under international sanctions.
Many of the workers do not have the skills usually needed for these jobs:
They cannot write code on their own.
They struggle to fix technical problems.
They find it hard to communicate in English in a professional way.
Instead, they rely on AI prompts to look capable and to keep their jobs.
In earlier years, fraudulent jobs connected to North Korea worked differently. The state depended on a smaller number of workers who had gone through many years of technical training. Universities such as Kim Il Sung University and Kim Chaek University of Technology prepared students for these roles, which took time and limited the number of workers who could be placed abroad.
With Anthropic’s Claude, this barrier has disappeared:
People with little training can pass interviews by using AI-generated answers.
Workers can keep positions at large companies without the expected background.
These fraudulent remote job operations can now scale more quickly, as AI removes the need for years of specialist study.
This shows a clear misuse of AI. Instead of employing skilled professionals, companies end up paying workers who act mainly as intermediaries between the employer and Claude AI.
How Fraudulent Operations Unfold Step by Step
These schemes move through different stages. At every point, Claude AI is used to guide the worker and cover gaps in skill.
This process explains how remote fraud jobs can succeed even when the people behind them lack the needed knowledge.
Phase 1: Persona Building
The first step is to create a convincing identity. Workers use Claude to:
Write detailed resumes that match job requirements.
Build portfolios with project examples that look realistic.
Add career histories and cultural details to make the profile sound authentic.
By doing this, the operator appears to be an experienced professional, even if the experience is invented.
Phase 2: Applications & Interviews
The next stage is applying for roles and preparing for interviews. Claude AI helps by:
Tailoring resumes and cover letters for each job.
Writing answers for interview questions, both technical and general.
Offering real-time support during coding tests.
This makes it possible for untrained workers to pass difficult hiring processes, leading to more remote job scams in companies around the world.
Phase 3: On the Job
The strongest dependence appears once the worker is hired.
They turn to Claude for almost every task, including:
Frontend development: 61% of activity. Workers build user interfaces with frameworks such as React, Vue, or Angular. They also create small components and adjust website layouts, tasks they cannot manage without AI guidance.
Programming and scripting: 26% of activity. This includes writing Python scripts, solving coding tasks, and working with basic algorithms. Claude provides step-by-step solutions that allow workers to finish jobs they do not fully understand (a representative sketch of this kind of task follows this breakdown).
Interview preparation: 10% of activity. Even after being hired, operators continue to rehearse answers with AI support. They practice technical replies and generate responses for assessments that may come up inside the company.
Backend development: 3% of activity. Workers create server-side code, set up APIs, and maintain simple backend systems. These jobs also require Claude to explain technical steps and write usable code.
This means that almost nine out of ten tasks (61% frontend plus 26% programming, 87% in total) fall into frontend work or programming. These areas demand constant support, showing heavy dependence on Claude and providing clear examples of AI misuse.
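To make the statistics concrete, here is a minimal sketch of the kind of routine scripting task the report describes operators delegating wholesale to Claude. The task, file names, and function below are hypothetical illustrations chosen for this article, not examples drawn from Anthropic’s report.

```python
# Hypothetical example of a routine task an operator might hand to Claude
# verbatim: deduplicating rows in a CSV export. Paths and names are
# invented for illustration; they do not come from Anthropic's report.
import csv
from pathlib import Path


def deduplicate_rows(src: Path, dst: Path) -> int:
    """Copy src to dst, dropping exact duplicate rows; return rows kept."""
    seen: set[tuple[str, ...]] = set()
    kept = 0
    with src.open(newline="") as fin, dst.open("w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin):
            key = tuple(row)
            if key not in seen:
                seen.add(key)
                writer.writerow(row)
                kept += 1
    return kept


if __name__ == "__main__":
    kept = deduplicate_rows(Path("input.csv"), Path("cleaned.csv"))
    print(f"Kept {kept} unique rows")
```

The point is not the code itself but its triviality: a trained developer writes something like this in minutes, while the operators described in the report reportedly cannot complete even tasks at this level without prompting Claude at every step.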
Phase 4: Revenue Stream
The final stage is about income. Because Claude handles the actual tasks, one person can hold several jobs at once and collect multiple salaries, creating a scale of fraud that was not possible before. As a purely illustrative calculation, an operator quietly holding four remote roles at $100,000 each would channel $400,000 a year back to the state.
According to investigators, the money adds up to hundreds of millions of dollars each year, strengthening North Korea’s funds through this growing system of fraudulent remote jobs.
Mitigation Measures
Once investigators uncovered these schemes, action was taken to slow them down. Accounts linked to the activity were removed, blocking workers from continuing under the same profiles.
Anthropic also put in place stronger ways to track and respond to suspicious behavior. These new measures include:
Collecting and saving records of known fraudulent cases.
Sharing and linking data with threat intelligence partners.
Looking for repeated patterns that reveal unusual or dishonest use of Claude.
This approach makes it easier to see when someone is depending too heavily on the tool to complete their tasks.
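What such pattern checks might look like in their simplest form is sketched below. The signals, field names, and thresholds are assumptions invented for illustration; Anthropic has not published its actual detection logic.

```python
# Simplified, hypothetical sketch of dependence-pattern flagging.
# The signals and thresholds are illustrative assumptions, not
# Anthropic's real monitoring rules.
from dataclasses import dataclass


@dataclass
class UsageProfile:
    prompts_per_workday: float      # average prompts sent during working hours
    interview_prompt_share: float   # fraction of prompts rehearsing interview answers
    distinct_employer_domains: int  # employer domains referenced in prompts


def dependence_flags(profile: UsageProfile) -> list[str]:
    """Return human-readable flags for patterns resembling outsourced work."""
    flags = []
    if profile.prompts_per_workday > 200:
        flags.append("near-continuous prompting during work hours")
    if profile.interview_prompt_share > 0.10:
        flags.append("interview rehearsal continuing after hiring")
    if profile.distinct_employer_domains >= 3:
        flags.append("prompts reference multiple concurrent employers")
    return flags


for flag in dependence_flags(UsageProfile(340, 0.15, 4)):
    print("FLAG:", flag)
```

A production system would combine many weaker signals with human review rather than rely on hard thresholds like these, but the sketch shows why heavy, round-the-clock dependence on the tool is itself a detectable pattern.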
The measures do not fully stop the issue, but they create more barriers for people running fraudulent remote jobs. They also help investigators share clearer AI misuse examples, which can guide companies in protecting themselves against future risks.
The Bottom Line
The threat intelligence report shows that misuse of AI has become a clear trend. Tools like Claude AI are now built into many stages of cybercrime. People with little skill can use them to pass interviews, handle daily coding tasks, and manage professional communication. This has turned fraudulent remote jobs into a larger risk for companies worldwide.
Stronger security measures and better sharing of threat intelligence are needed to make these scams harder to run and to reduce their impact in the future.
FAQs
What is AI fraud?
AI fraud happens when tools like Claude AI are used to deceive or cheat. It can include fake resumes, remote job scams, or other kinds of artificial intelligence fraud where technology hides a lack of real skills or creates false identities to gain money or access.

What is the biggest drawback of AI?
The greatest drawback is misuse. Systems such as Claude by Anthropic make it easier for unskilled actors to run scams. This allows remote job scams and other forms of AI fraud to spread, creating bigger risks for both companies and individuals.

How can companies detect AI fraud?
Signs include resumes that look perfect but lack depth, repeated patterns in emails, or workers who need prompts for simple tasks. Companies can detect artificial intelligence fraud by noticing AI misuse examples that show clear reliance on Claude instead of genuine ability.

What harm does AI misuse cause?
Misuse can lead to financial loss, lower trust, and cyber risks. AI fraud harms companies through fake hires, while reports in malware news highlight new attacks shaped by AI. These cases show how artificial intelligence fraud can grow quickly and affect many people.

What is a real example of AI misuse?
A clear case involves Claude being used by North Korean workers to secure tech jobs they cannot handle.
References
Threat Intelligence Report: August 2025 (Anthropic)