Security risks of AI-generated code and how to manage them

By Advanced AI Bot | May 29, 2025


Large language model-based coding assistants, such as GitHub Copilot and Amazon CodeWhisperer, have revolutionized the software development landscape. These AI tools dramatically boost productivity by generating boilerplate code, suggesting complex algorithms and explaining unfamiliar codebases. In fact, research by digital consultancy Publicis Sapient found teams can see up to a 50% reduction in network engineering time using AI-generated code.

However, as AI content generators become embedded in development workflows, security concerns emerge. Consider the following:

Does AI-generated code introduce new vulnerabilities?
Can security teams trust code that developers might not fully understand?
How do teams maintain security oversight when code creation becomes increasingly automated?

Let’s explore AI-generated code security risks for DevSecOps teams and how application security (AppSec) teams can ensure the code used doesn’t introduce vulnerabilities.

The security risks of AI-generated coding assistants

In February 2025, Andrej Karpathy, a former research scientist and founding member of OpenAI, described a “new kind of coding … where you fully give in to the vibes, embrace exponentials and forget that the code even exists.” This tongue-in-cheek description of vibe coding prompted a flurry of comments from cybersecurity professionals expressing concerns about a potential rise in vulnerable software due to unchecked use of coding assistants based on large language models (LLMs).

The following are five security risks of using AI-generated code.

Code based on public domain training

The foremost security risk of AI-generated code is that coding assistants have been trained on codebases in the public domain, many of which contain vulnerable code. Without any guardrails, they reproduce vulnerable code in new applications. A recent academic paper found that at least 48% of AI-generated code suggestions contained vulnerabilities.

Code generated without considering security

AI coding tools do not understand security intent; they reproduce code that looks correct because similar code is prevalent in the training data set. This is analogous to copy-pasting code from developer forums and expecting it to be secure.
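To make this concrete, here is a minimal, hypothetical Python sketch (not actual assistant output) contrasting the injection-prone pattern that is common in public code with the parameterized query a reviewer should insist on:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # The pattern an assistant often reproduces from public examples:
    # building SQL by string interpolation, which enables SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles the value safely.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both versions look plausible and pass a quick glance, which is exactly why generated suggestions need the same review as any other untrusted code.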

Code could use deprecated or vulnerable dependencies

A related concern is that coding assistants might pull vulnerable or deprecated dependencies into new projects in their attempts to solve coding tasks. Left ungoverned, this can lead to significant supply chain vulnerabilities.

Code used is assumed to be vetted and secure

Another risk is that developers could become overconfident in AI-generated code. Many developers mistakenly assume that AI code suggestions are vetted and secure. A Snyk survey revealed that nearly 80% of developers and practitioners thought AI-generated code was more secure than human-written code, a dangerous trend.

Remember that AI-generated code is only as good as its training data and input prompts. LLMs have a knowledge cutoff and lack awareness of new and emergent vulnerability patterns. Similarly, if a prompt fails to specify a security requirement, the generated code might lack basic security controls or protections.

Code could use another company’s IP or code base illegally

Coding assistants present significant intellectual property (IP) and data privacy concerns. Coding assistants might generate large chunks of licensed open source code verbatim, which leads to IP contamination in the new codebase. Some tools protect against the reuse of large chunks of public domain code, but AI can suggest copyrighted code or proprietary algorithms without such protection. To get useful suggestions, developers might prompt these tools with proprietary code or confidential logic. That input could be stored or later used in model training, potentially leaking secrets.

The security benefits of AI-generated coding assistants

Many of the AI-generated code security risks are self-evident, leading to speculation about a looming crisis in the software industry. The benefits are significant too, however, and might outweigh the downsides.

Reduced development time

AI pair programming with coding assistants can speed up development by handling boilerplate code, potentially reducing human error. Developers can generate code for repetitive tasks quickly, freeing time to focus on security-critical logic. Simply reducing the cognitive load of producing repetitive or error-prone code can result in significantly less vulnerable code.

Providing security suggestions

AI models trained on vast code corpora might recall secure coding techniques that a developer could overlook. For instance, users can prompt ChatGPT to include security features, such as input validation, proper authentication or rate limiting, in its code suggestions. ChatGPT can also recognize vulnerabilities when asked — for example, a developer can tell ChatGPT to review code for SQL injection or other flaws, and it attempts to identify issues and suggest fixes. This on-demand security expertise can help developers catch common mistakes earlier in the software development lifecycle.
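As a rough illustration of the kind of control a developer might prompt for, the following is a minimal per-client rate limiter in Python. It is a hypothetical sketch, not output from any particular tool:

```python
import time
from collections import defaultdict

class SimpleRateLimiter:
    """Allow at most max_calls requests per window seconds for each client."""

    def __init__(self, max_calls: int = 10, window: float = 60.0):
        self.max_calls = max_calls
        self.window = window
        self._calls = defaultdict(list)  # client_id -> recent request timestamps

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        # Keep only the timestamps still inside the sliding window.
        recent = [t for t in self._calls[client_id] if now - t < self.window]
        if len(recent) >= self.max_calls:
            self._calls[client_id] = recent
            return False
        recent.append(now)
        self._calls[client_id] = recent
        return True
```

Whatever the assistant produces, the team still needs to verify edge cases, such as behavior under concurrent access, before trusting it in production.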

Security reviews

Probably the biggest impact coding assistants can have on the security posture of new codebases is acting as an expert reviewer, or second pair of eyes, over code they can parse. Prompting an assistant, preferably a different one than was used to generate the code, to review from a security perspective augments a security professional's efforts by quickly covering a lot of ground.
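A sketch of what such a review step might look like, assuming the openai Python package and an API key are available (any assistant with an API would work, and the model name here is only an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_for_security(diff_text: str) -> str:
    """Ask a second model, not the one that wrote the code, to review a diff."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; substitute your reviewer of choice
        messages=[
            {"role": "system",
             "content": "You are an application security reviewer. Flag injection, "
                        "authentication, secrets and dependency risks in the diff."},
            {"role": "user",
             "content": f"Review this diff for security issues:\n{diff_text}"},
        ],
    )
    return response.choices[0].message.content
```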

AI coding platforms are evolving to prioritize security. GitHub Copilot, for example, introduced an AI-based vulnerability filtering system that blocks insecure code patterns. At the same time, the Cursor AI editor can integrate with security scanners, such as Aikido Security, to flag issues as code is written, highlighting vulnerabilities or leaked secrets within the integrated development environment (IDE) itself.

Best practices for secure adoption of coding assistants

Follow these best practices to ensure the secure use of code assistants:

Treat AI suggestions as unreviewed code. Never assume AI-generated code is secure. Treat it with the same scrutiny as a snippet from an unknown developer. Before merging, always perform code reviews, linting and security testing on AI-written code. In practice, this means running static application security testing (SAST) tools, dependency checks and manual review on any code from Copilot or ChatGPT, just as with any human-written code.
Maintain human oversight and judgment. Use AI as an assistant, not a replacement. Make sure developers remain in the loop, understanding and vetting what the AI code generator produces. Encourage a culture of skepticism.
Use AI deliberately for security. Turn the tool’s strengths into an advantage for AppSec. For example, prompt the AI to focus on security, such as “Explain any security implications of this code” or “Generate this function using secure coding practices (input validation, error handling, etc.).” Remember that any AI output is a starting point; the development team must vet and integrate it correctly.
Enable and embrace security features. Take advantage of the AI tool’s built-in safeguards. For example, if using Copilot, enable the vulnerability filtering and license blocking options to automatically reduce risky suggestions.
Integrate security scanning in the workflow. Augment AI coding with automated security tests in the DevSecOps pipeline. For instance, use IDE plugins or continuous integration pipelines that run static analysis on new code contributions to flag insecure patterns, whether written by a human or AI (see the sketch after this list). Some modern setups integrate AI and SAST; for example, the Cursor IDE's integration with Aikido Security can scan code in real time for secrets and vulnerabilities as it's being written.
Establish policies for AI use. Organizations should develop clear guidelines that outline how developers can use AI code tools. Define what types of data can and cannot be shared in prompts to prevent leakage of crown-jewel secrets.
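As an example of the scanning step mentioned above, here is a minimal pre-merge gate, assuming the open source scanners bandit (Python SAST) and pip-audit (dependency vulnerability check) are installed; substitute whichever tools your pipeline already uses:

```python
"""Minimal pre-merge security gate for branches containing AI-generated code."""
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src"],  # static analysis of the source tree
    ["pip-audit"],            # report dependencies with known vulnerabilities
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

The point is not the specific tools but that every AI-assisted contribution passes the same automated checks as human-written code before it merges.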

By recognizing both the benefits and the risks of AI code generation, developers and security professionals can strike a balance. Tools such as Copilot, ChatGPT and Cursor can boost productivity and even enhance security through quick access to best practices and automated checks. But without the proper checks and mindset, they can just as easily introduce new vulnerabilities.

In summary, AI coding tools can improve AppSec, but only if they are integrated with strong DevSecOps practices. Pair the AI’s speed with human oversight and automated security checks to ensure nothing critical slips through.

Colin Domoney is a software security consultant who evangelizes DevSecOps and helps developers secure their software. He previously worked for Veracode and 42Crunch and authored a book on API security. He is currently a CTO and co-founder, and an independent security consultant.


