Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Story, Stability AI collaborate to help creators make money from their work in the AI ecosystem

Basecamp Research leverages Microsoft and Nvidia AI to…

Meta Just Escalated the AI Talent War With OpenAI

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Industry AI
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
Industry Applications

PromptArmor To Protect Lawyers From GenAI Vendor Security Risks – Artificial Lawyer

By Advanced AI EditorApril 7, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email



PromptArmor is a startup out of California that tests genAI vendors for security risks such as ‘indirect prompt injection’, and 26 risk vectors overall. They’re focusing on legal, health, and other key sectors, and Artificial Lawyer caught up with them to learn more.

San Francisco-based co-founder Shankar Krishnan told AL: ‘If a law firm is evaluating an AI vendor, they would send it to us. We would give them back a detailed risk report on the AI components of that vendor. We do that by testing those vendors for risks such as ‘indirect prompt injection’ which is a new security risk for LLM applications, specifically.

‘We check for 26 risk vectors, all mapped to leading security frameworks like the ‘OWASP LLM top 10’, ‘mitre Atlas’, ‘NIST AI RMF’, and others. Our speciality is our technology which productizes the scan aspect of testing these LLM applications.’

And before you ask, this site has never heard of NIST AI RMF either…..but prompt injection is a term that’s done the legal tech rounds before.

So, to give you a sense of what they’re all about, here is a statement from the UK-based Alan Turing Institute from last year: ‘Prompt injection is one of the most urgent issues facing state-of-the-art generative AI models.

‘The UK’s National Cyber Security Centre has flagged it as a critical risk, while the US National Institute for Standards and Technology has described it as ‘generative AI’s greatest security flaw’.

‘Simply defined, prompt injection occurs ‘when an attacker manipulates a large language model (LLM) through crafted inputs, causing the LLM to unknowingly execute the attacker’s intentions’, as the Open Worldwide Application Security Project puts it. This can lead to the manipulation of the system’s decision-making, the distribution of disinformation to the user, the disclosure of sensitive information, the orchestration of intricate phishing attacks and the execution of malicious code.

‘Indirect prompt injection is the insertion of malicious information into the data sources of a GenAI system by hiding instructions in the data it accesses, such as incoming emails or saved documents. Unlike direct prompt injection, it does not require direct access to the GenAI system, instead presenting a risk across the range of data sources that a GenAI system uses to provide context.’

So, there you go, serious stuff, and just one aspect of what PromptArmor covers. Of course, do such vulnerabilities crop up in legal tech products? Is this likely? The short answer is: we don’t know. Why don’t we know? The answer is because as far as this site is aware no-one has ever published a public list of such incidents which include well-known legal AI tools.

The company, which is backed by Y Combinator, among others, added that in terms of what they offer, they provide ‘continuous monitoring’ and that there are already some law firms that ‘send us their entire repository of vendors’.  

‘We monitor those vendors for new AI features they are adding, or if they have introduced AI for the first time,’ Krishnan said.

And, in an example that may pique many firms’ interest, they added that: ‘We also scan for if there are privacy policy changes or terms changes (e.g. they are now training on your data), model changes (e.g. they have switched from Anthropic to OpenAI) and other relevant news.’

OK, all well and good. But, as noted above, AL has to ask: do law firms really need this? Is this something law firms need to think about? Krishnan underlined that they do.

‘Law firms need this because their innovation teams are bringing in AI vendors. Security teams don’t have the AI expertise to evaluate these vendors for novel AI security risk, so a lot of them get stuck in the PoC phase, or take longer to review.

‘Security teams also have a myriad of other things to do. We help security  teams assess AI vendors faster so innovation teams can bring them in, faster, creating a win-win. The gap between innovation and security at law firms has been well documented, and we think of ourselves as bridging this gap.’

Well, there you go. But, now the cost. Krishnan noted that: ‘Cost is tiered, based on the number of vendors, and we discount based on if they want assessments, continuous monitoring, or both.’

As mentioned, is this a big deal for law firms? It’s hard to estimate. Do well-known products out there have potential security gaps made possible by this new approach to AI? The short answer is: we don’t really know. Do products and companies that people don’t know well suffer the same or worse risks? Again, we don’t know.

So, given that most law firms are quite rightly risk-averse, then perhaps – rather as with genAI performance standards – we need to have an open chat about these issues as well. Maybe it’s something that can be quickly understood and resolved. But, now is probably a good time to get clarity on it as more and more products enter law firms’ tech stacks.

You can find more information about PromptArmor here.

—

What are your views and experiences with this field? Have you stopped using a genAI vendor because of the above risks? Is this even something you’ve tested for?



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleTesla’s Giga Texas vehicles now drive themselves to outbound lot
Next Article Former Google CEO suggests building data centers in remote locations in case of nation-state attacks to slow down AI
Advanced AI Editor
  • Website

Related Posts

Tesla rolling out Robotaxi pilot in SF Bay Area this weekend: report

July 25, 2025

Alibaba’s new Qwen reasoning AI model sets open-source records

July 25, 2025

Tesla is ready with a perfect counter to the end of US EV tax credits

July 25, 2025
Leave A Reply

Latest Posts

David Geffen Sued By Estranged Husband for Breach of Contract

Auction House Will Sell Egyptian Artifact Despite Concern From Experts

Anish Kapoor Lists New York Apartment for $17.75 M.

Street Fighter 6 Community Rocked by AI Art Controversy

Latest Posts

Story, Stability AI collaborate to help creators make money from their work in the AI ecosystem

July 26, 2025

Basecamp Research leverages Microsoft and Nvidia AI to…

July 26, 2025

Meta Just Escalated the AI Talent War With OpenAI

July 26, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • Story, Stability AI collaborate to help creators make money from their work in the AI ecosystem
  • Basecamp Research leverages Microsoft and Nvidia AI to…
  • Meta Just Escalated the AI Talent War With OpenAI
  • Accenture Becomes Inaugural Member of Corporate Affiliate Program at Stanford HAI
  • Perplexity AI Picks Bharat Over Big Tech

Recent Comments

  1. 4rabet mirror on Former Tesla AI czar Andrej Karpathy coins ‘vibe coding’: Here’s what it means
  2. Janine Bethel on OpenAI research reveals that simply teaching AI a little ‘misinformation’ can turn it into an entirely unethical ‘out-of-the-way AI’
  3. 打开Binance账户 on Tanka CEO Kisson Lin to talk AI-native startups at Sessions: AI
  4. Sign up to get 100 USDT on The Do LaB On Capturing Lightning In A Bottle
  5. binance Anmeldebonus on David Patterson: Computer Architecture and Data Storage | Lex Fridman Podcast #104

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.