Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

United States, China, and United Kingdom Lead the Global AI Ranking According to Stanford HAI’s Global AI Vibrancy Tool

Foundation AI: Cisco launches AI model for integration in security applications

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Amazon AWS AI
    • Anthropic (Claude)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • Cohere
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Advanced AI News
Home » Databricks, Noma Tackle CISOs’ AI Inference Nightmare
VentureBeat AI

Databricks, Noma Tackle CISOs’ AI Inference Nightmare

Advanced AI BotBy Advanced AI BotJune 5, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

CISOs know precisely where their AI nightmare unfolds fastest. It’s inference, the vulnerable stage where live models meet real-world data, leaving enterprises exposed to prompt injection, data leaks, and model jailbreaks.

Databricks Ventures and Noma Security are confronting these inference-stage threats head-on. Backed by a fresh $32 million Series A round led by Ballistic Ventures and Glilot Capital, with strong support from Databricks Ventures, the partnership aims to address the critical security gaps that have hindered enterprise AI deployments.

“The number one reason enterprises hesitate to deploy AI at scale fully is security,” said Niv Braun, CEO of Noma Security, in an exclusive interview with VentureBeat. “With Databricks, we’re embedding real-time threat analytics, advanced inference-layer protections, and proactive AI red teaming directly into enterprise workflows. Our joint approach enables organizations to accelerate their AI ambitions safely and confidently finally,” Braun said.

Securing AI inference demands real-time analytics and runtime defense, Gartner finds

Traditional cybersecurity prioritizes perimeter defenses, leaving AI inference vulnerabilities dangerously overlooked. Andrew Ferguson, Vice President at Databricks Ventures, highlighted this critical security gap in an exclusive interview with VentureBeat, emphasizing customer urgency regarding inference-layer security. “Our customers clearly indicated that securing AI inference in real-time is crucial, and Noma uniquely delivers that capability,” Ferguson said. “Noma directly addresses the inference security gap with continuous monitoring and precise runtime controls.”

Braun expanded on this critical need. “We built our runtime protection specifically for increasingly complex AI interactions,” Braun explained. “Real-time threat analytics at the inference stage ensure enterprises maintain robust runtime defenses, minimizing unauthorized data exposure and adversarial model manipulation.”

Gartner’s recent analysis confirms that enterprise demand for advanced AI Trust, Risk, and Security Management (TRiSM) capabilities is surging. Gartner predicts that through 2026, over 80% of unauthorized AI incidents will result from internal misuse rather than external threats, reinforcing the urgency for integrated governance and real-time AI security.

Gartner’s AI TRiSM framework illustrates comprehensive security layers essential for managing enterprise AI risk effectively. (Source: Gartner)

Noma’s proactive red teaming aims to ensure AI integrity from the outset

Noma’s proactive red teaming approach is strategically central to identifying vulnerabilities long before AI models reach production, Braun told VentureBeat. By simulating sophisticated adversarial attacks during pre-production testing, Noma exposes and addresses risks early, significantly enhancing the robustness of runtime protection.

During his interview with VentureBeat, Braun elaborated on the strategic value of proactive red teaming: “Red teaming is essential. We proactively uncover vulnerabilities pre-production, ensuring AI integrity from day one.”

“Reducing time to production without compromising security requires avoiding over-engineering. We design testing methodologies that directly inform runtime protections, helping enterprises move securely and efficiently from testing to deployment”, Braun advised.

Braun elaborated further on the complexity of modern AI interactions and the depth required in proactive red teaming methods. He stressed that this process must evolve alongside increasingly sophisticated AI models, particularly those of the generative type: “Our runtime protection was specifically built to handle increasingly complex AI interactions,” Braun explained. “Each detector we employ integrates multiple security layers, including advanced NLP models and language-modeling capabilities, ensuring we provide comprehensive security at every inference step.”

The red team exercises not only validate the models but also strengthen enterprise confidence in deploying advanced AI systems safely at scale, directly aligning with the expectations of leading enterprise Chief Information Security Officers (CISOs).

How Databricks and Noma Block Critical AI Inference Threats

Securing AI inference from emerging threats has become a top priority for CISOs as enterprises scale their AI model pipelines. “The number one reason enterprises hesitate to deploy AI at scale fully is security,” emphasized Braun. Ferguson echoed this urgency, noting, “Our customers have clearly indicated securing AI inference in real-time is critical, and Noma uniquely delivers on that need.”

Together, Databricks and Noma offer integrated, real-time protection against sophisticated threats, including prompt injection, data leaks, and model jailbreaks, while aligning closely with standards such as Databricks’ DASF 2.0 and OWASP guidelines for robust governance and compliance.

The table below summarizes key AI inference threats and how the Databricks-Noma partnership mitigates them:

Threat VectorDescriptionPotential ImpactNoma-Databricks MitigationPrompt InjectionMalicious inputs are overriding model instructions.Unauthorized data exposure and harmful content generation.Prompt scanning with multilayered detectors (Noma); Input validation via DASF 2.0 (Databricks).Sensitive Data LeakageAccidental exposure of confidential data.Compliance breaches, loss of intellectual property.Real-time sensitive data detection and masking (Noma); Unity Catalog governance and encryption (Databricks).Model JailbreakingBypassing embedded safety mechanisms in AI models.Generation of inappropriate or malicious outputs.Runtime jailbreak detection and enforcement (Noma); MLflow model governance (Databricks).Agent Tool ExploitationMisuse of integrated AI agent functionalities.Unauthorized system access and privilege escalation.Real-time monitoring of agent interactions (Noma); Controlled deployment environments (Databricks).Agent Memory PoisoningInjection of false data into persistent agent memory.Compromised decision-making, misinformation.AI-SPM integrity checks and memory security (Noma); Delta Lake data versioning (Databricks).Indirect Prompt InjectionEmbedding malicious instructions in trusted inputs.Agent hijacking, unauthorized task execution.Real-time input scanning for malicious patterns (Noma); Secure data ingestion pipelines (Databricks).

How Databricks Lakehouse architecture supports AI governance and security

Databricks’ Lakehouse architecture combines the structured governance capabilities of traditional data warehouses with the scalability of data lakes, centralizing analytics, machine learning, and AI workloads within a single, governed environment.

By embedding governance directly into the data lifecycle, Lakehouse architecture addresses compliance and security risks, particularly during the inference and runtime stages, aligning closely with industry frameworks such as OWASP and MITRE ATLAS.

During our interview, Braun highlighted the platform’s alignment with the stringent regulatory demands he’s seeing in sales cycles and with existing customers. “We automatically map our security controls onto widely adopted frameworks like OWASP and MITRE ATLAS. This allows our customers to confidently comply with critical regulations such as the EU AI Act and ISO 42001. Governance isn’t just about checking boxes. It’s about embedding transparency and compliance directly into operational workflows”.

Databricks Lakehouse integrates governance and analytics to securely manage AI workloads. (Source: Gartner)

How Databricks and Noma plan to secure enterprise AI at scale

Enterprise AI adoption is accelerating, but as deployments expand, so do security risks, especially at the model inference stage.

The partnership between Databricks and Noma Security addresses this directly by providing integrated governance and real-time threat detection, with a focus on securing AI workflows from development through production.

Ferguson explained the rationale behind this combined approach clearly: “Enterprise AI requires comprehensive security at every stage, especially at runtime. Our partnership with Noma integrates proactive threat analytics directly into AI operations, giving enterprises the security coverage they need to scale their AI deployments confidently”.

Daily insights on business use cases with VB Daily

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Read our Privacy Policy

Thanks for subscribing. Check out more VB newsletters here.

An error occured.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleAnthropic unveils custom AI models for U.S. national security customers
Next Article Inside Art Basel’s New Qatar Fair and the Race to Dominate the Gulf Art Market
Advanced AI Bot
  • Website

Related Posts

Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions

June 6, 2025

Voice AI that actually converts: New TTS model boosts sales 15% for major brands

June 6, 2025

Google claims Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance

June 5, 2025
Leave A Reply Cancel Reply

Latest Posts

Artists Accuse Dealer Reco Sturgis of Withholding Payments and Artworks

A Soulful Step Into Story, Self And Sound

Klaus Biesenbach Says America Has ‘Political Correctness’ Issue

Casa Sanlorenzo Anchors New Arts And Culture Venue In Venice

Latest Posts

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

June 6, 2025

United States, China, and United Kingdom Lead the Global AI Ranking According to Stanford HAI’s Global AI Vibrancy Tool

June 6, 2025

Foundation AI: Cisco launches AI model for integration in security applications

June 6, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.