Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Stanford HAI’s 2025 AI Index Reveals Record Growth in AI Capabilities, Investment, and Regulation

New MIT CSAIL study suggests that AI won’t steal as many jobs as expected

Carnegie Mellon Debuts Initiative to Combine Disparate AI Research — Campus Technology

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Amazon AWS AI
    • Anthropic (Claude)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • Cohere
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Advanced AI News
Home » PromptArmor To Protect Lawyers From GenAI Vendor Security Risks – Artificial Lawyer
Industry Applications

PromptArmor To Protect Lawyers From GenAI Vendor Security Risks – Artificial Lawyer

Advanced AI BotBy Advanced AI BotApril 7, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email



PromptArmor is a startup out of California that tests genAI vendors for security risks such as ‘indirect prompt injection’, and 26 risk vectors overall. They’re focusing on legal, health, and other key sectors, and Artificial Lawyer caught up with them to learn more.

San Francisco-based co-founder Shankar Krishnan told AL: ‘If a law firm is evaluating an AI vendor, they would send it to us. We would give them back a detailed risk report on the AI components of that vendor. We do that by testing those vendors for risks such as ‘indirect prompt injection’ which is a new security risk for LLM applications, specifically.

‘We check for 26 risk vectors, all mapped to leading security frameworks like the ‘OWASP LLM top 10’, ‘mitre Atlas’, ‘NIST AI RMF’, and others. Our speciality is our technology which productizes the scan aspect of testing these LLM applications.’

And before you ask, this site has never heard of NIST AI RMF either…..but prompt injection is a term that’s done the legal tech rounds before.

So, to give you a sense of what they’re all about, here is a statement from the UK-based Alan Turing Institute from last year: ‘Prompt injection is one of the most urgent issues facing state-of-the-art generative AI models.

‘The UK’s National Cyber Security Centre has flagged it as a critical risk, while the US National Institute for Standards and Technology has described it as ‘generative AI’s greatest security flaw’.

‘Simply defined, prompt injection occurs ‘when an attacker manipulates a large language model (LLM) through crafted inputs, causing the LLM to unknowingly execute the attacker’s intentions’, as the Open Worldwide Application Security Project puts it. This can lead to the manipulation of the system’s decision-making, the distribution of disinformation to the user, the disclosure of sensitive information, the orchestration of intricate phishing attacks and the execution of malicious code.

‘Indirect prompt injection is the insertion of malicious information into the data sources of a GenAI system by hiding instructions in the data it accesses, such as incoming emails or saved documents. Unlike direct prompt injection, it does not require direct access to the GenAI system, instead presenting a risk across the range of data sources that a GenAI system uses to provide context.’

So, there you go, serious stuff, and just one aspect of what PromptArmor covers. Of course, do such vulnerabilities crop up in legal tech products? Is this likely? The short answer is: we don’t know. Why don’t we know? The answer is because as far as this site is aware no-one has ever published a public list of such incidents which include well-known legal AI tools.

The company, which is backed by Y Combinator, among others, added that in terms of what they offer, they provide ‘continuous monitoring’ and that there are already some law firms that ‘send us their entire repository of vendors’.  

‘We monitor those vendors for new AI features they are adding, or if they have introduced AI for the first time,’ Krishnan said.

And, in an example that may pique many firms’ interest, they added that: ‘We also scan for if there are privacy policy changes or terms changes (e.g. they are now training on your data), model changes (e.g. they have switched from Anthropic to OpenAI) and other relevant news.’

OK, all well and good. But, as noted above, AL has to ask: do law firms really need this? Is this something law firms need to think about? Krishnan underlined that they do.

‘Law firms need this because their innovation teams are bringing in AI vendors. Security teams don’t have the AI expertise to evaluate these vendors for novel AI security risk, so a lot of them get stuck in the PoC phase, or take longer to review.

‘Security teams also have a myriad of other things to do. We help security  teams assess AI vendors faster so innovation teams can bring them in, faster, creating a win-win. The gap between innovation and security at law firms has been well documented, and we think of ourselves as bridging this gap.’

Well, there you go. But, now the cost. Krishnan noted that: ‘Cost is tiered, based on the number of vendors, and we discount based on if they want assessments, continuous monitoring, or both.’

As mentioned, is this a big deal for law firms? It’s hard to estimate. Do well-known products out there have potential security gaps made possible by this new approach to AI? The short answer is: we don’t really know. Do products and companies that people don’t know well suffer the same or worse risks? Again, we don’t know.

So, given that most law firms are quite rightly risk-averse, then perhaps – rather as with genAI performance standards – we need to have an open chat about these issues as well. Maybe it’s something that can be quickly understood and resolved. But, now is probably a good time to get clarity on it as more and more products enter law firms’ tech stacks.

You can find more information about PromptArmor here.

—

What are your views and experiences with this field? Have you stopped using a genAI vendor because of the above risks? Is this even something you’ve tested for?



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleTesla’s Giga Texas vehicles now drive themselves to outbound lot
Next Article Former Google CEO suggests building data centers in remote locations in case of nation-state attacks to slow down AI
Advanced AI Bot
  • Website

Related Posts

AI could unleash ‘deep societal upheavals’ that many elites are ignoring, Palantir CEO Alex Karp warns

June 7, 2025

Morgan Stanley upgrades mining stock as best pick to play rare earths

June 7, 2025

‘Bitcoin Family’ changed security after recent crypto kidnappings

June 7, 2025
Leave A Reply Cancel Reply

Latest Posts

16 Iconic Wild Animals Photos Celebrating Remembering Wildlife

The Timeless Willie Nelson On Positive Thinking

Jiaxing Train Station By Architect Ma Yansong Is A Model Of People-Centric, Green Urban Design

Midwestern Grotto Tradition Celebrated In Sheboygan, WI

Latest Posts

Stanford HAI’s 2025 AI Index Reveals Record Growth in AI Capabilities, Investment, and Regulation

June 8, 2025

New MIT CSAIL study suggests that AI won’t steal as many jobs as expected

June 8, 2025

Carnegie Mellon Debuts Initiative to Combine Disparate AI Research — Campus Technology

June 8, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.