Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

C3.ai Stock Dips Following Palantir Technologies Earnings: What’s Going On? – C3.ai (NYSE:AI)

China’s Tech Giants Stockpile Nvidia H20 AI Chips Ahead of U.S. Export Ban

AI-generated images are a legal mess – and still a very human process

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Amazon AWS AI
    • Anthropic (Claude)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • Cohere
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Advanced AI News
Home » RSAC 2025: Cisco and Meta put open-source AI at the heart of threat defense
Mozilla Foundation AI

RSAC 2025: Cisco and Meta put open-source AI at the heart of threat defense

Advanced AI BotBy Advanced AI BotMay 8, 2025No Comments8 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

With cyberattacks accelerating at machine speed, open-source large language models (LLMs) have quickly become the infrastructure that enables startups and global cybersecurity leaders to develop and deploy adaptive, cost-effective defenses against threats that evolve faster than human analysts can respond.

Open-source LLMs’ initial advantages of faster time-to-market, greater adaptability and lower cost have created a scalable, secure foundation for delivering infrastructure. At last week’s RSAC 2025 conference, Cisco, Meta and ProjectDiscovery announced new open-source LLMs and a community-driven attack surface innovation that together define the future of open-source in cybersecurity.   

One of the key takeaways from this year’s RSAC is the shift in open-source LLMs to extend and strengthen infrastructure at scale.

Open-source AI is on the verge of delivering what many cybersecurity leaders have called on for years, which is the ability of the many cybersecurity providers to join forces against increasingly complex threats. The vision of being collaborators in creating a unified, open-source LLM and infrastructure is a step closer, given the announcements at RSAC.

Cisco’s Chief Product Officer Jeetu Patel emphasized in his keynote, “The true enemy is not our competitor. It is actually the adversary. And we want to make sure that we can provide all kinds of tools and have the ecosystem band together so that we can actually collectively fight the adversary.”

Patel explained the urgency of taking on such a complex challenge, saying, “AI is fundamentally changing everything, and cybersecurity is at the heart of it all. We’re no longer dealing with human-scale threats; these attacks are occurring at machine scale.”

Cisco’s Foundation-sec-8B LLM defines a new era of open-source AI

Cisco’s newly established Foundation AI group originates from the company’s recent acquisition of Robust Intelligence. Foundation AI’s focus is on delivering domain-specific AI infrastructure tailored explicitly to cybersecurity applications, which are among the most challenging to solve. Built on Meta’s Llama 3.1 architecture, this 8-billion parameter, open-weight Large Language Model isn’t a retrofitted general-purpose AI. It was purpose-built, meticulously trained on a cybersecurity-specific dataset curated in-house by Cisco Foundation AI.

“By their nature, the problems in this charter are some of the most difficult ones in AI today. To make the technology accessible, we decided that most of the work we do in Foundation AI should be open. Open innovation allows for compounding effects across the industry, and it plays a particularly important role in the cybersecurity domain,” writes Yaron Singer, VP of AI and Security at Foundation.

With open-source anchoring Foundation AI, Cisco has designed an efficient architectural approach for cybersecurity providers who typically compete with each other, selling comparable solutions, to become collaborators in creating more unified, hardened defenses.

Singer writes, “Whether you’re embedding it into existing tools or building entirely new workflows, foundation-sec-8b adapts to your organization’s unique needs.” Cisco’s blog post announcing the model recommends that security teams apply foundation-sec-8b across the security lifecycle. Potential use cases Cisco recommends for the model include SOC acceleration, proactive threat defense, engineering enablement, AI-assisted code reviews, validating configurations and custom integration.

Foundation-sec-8B’s weights and tokenizer have been open-sourced under the permissive Apache 2.0 license on Hugging Face, allowing enterprise-level customization and deployment without vendor lock-in, maintaining compliance and privacy controls. Cisco’s blog also notes plans to open-source the training pipeline, further fostering community-driven innovation.

Cybersecurity is in the LLM’s DNA

Cisco chose to create a cybersecurity-specific model optimized for the needs of SOC, DevSecOps and large-scale security teams. Retrofitting an existing, generic AI model wouldn’t get them to their goal, so the Foundation AI team engineered its training using a large-scale, expansive and well-curated cybersecurity-specific dataset.

By taking a more precision-focused approach to building the model, the Foundation AI team was able to ensure that the model deeply understands real-world cyber threats, vulnerabilities and defensive strategies.

Key training datasets included the following:

Vulnerability Databases: Including detailed CVEs (Common Vulnerabilities and Exposures) and CWEs (Common Weakness Enumerations) to pinpoint known threats and weaknesses.

Threat Behavior Mappings: Structured from proven security frameworks such as MITRE ATT&CK, providing context on attacker methodologies and behaviors.

Threat Intelligence Reports: Comprehensive insights derived from global cybersecurity events and emerging threats.

Red-Team Playbooks: Tactical plans outlining real-world adversarial techniques and penetration strategies.

Real-World Incident Summaries: Documented analyses of cybersecurity breaches, incidents, and their mitigation paths.

Compliance and Security Guidelines: Established best practices from leading standards bodies, including the National Institute of Standards and Technology (NIST) frameworks and the Open Worldwide Application Security Project (OWASP) secure coding principles.

This tailored training regimen positions Foundation-sec-8B uniquely to excel at complex cybersecurity tasks, offering significantly enhanced accuracy, deeper contextual understanding and quicker threat response capabilities than general-purpose alternatives.

Benchmarking Foundation-sec-8B LLM

Cisco’s technical benchmarks show Foundation-sec-8B delivers cybersecurity performance comparable to significantly larger models:

BenchmarkFoundation-sec-8BLlama-3.1-8BLlama-3.1-70BCTI-MCQA67.3964.1468.23CTI-RCM75.2666.4372.66

By designing the foundation model to be cybersecurity-specific, Cisco is enabling SOC teams to gain greater efficiency with advanced threat analytics without having to pay high infrastructure costs to get it.

Cisco’s broader strategic vision, detailed in its blog, Foundation AI: Robust Intelligence for Cybersecurity, addresses common AI integration challenges, including limited domain alignment of general-purpose models, insufficient datasets and legacy system integration difficulties. Foundation-sec-8B is specifically designed to navigate these barriers, running efficiently on minimal hardware configurations, typically requiring just one or two Nvidia A100 GPUs.

Meta also underscored its open-source strategy at RSAC 2025, expanding its AI Defenders Suite to strengthen security across generative AI infrastructure. Their open-source toolkit now includes Llama Guard 4, a multimodal classifier detecting policy violations across text and images, improving compliance monitoring within AI workflows.

Also introduced is LlamaFirewall, an open-source, real-time security framework integrating modular capabilities that includes PromptGuard 2, which is used to detect prompt injections and jailbreak attempts. Also launched as part of LlamaFirewall are Agent Alignment Checks that monitor and protect AI agent decision-making processes along with CodeShield, which is designed to inspect generated code to identify and mitigate vulnerabilities.

Meta also enhanced Prompt Guard 2, offering two open-source variants that further strengthen the future of open-source AI-based infrastructure. They include a high-accuracy 86M-parameter model and a leaner, lower-latency 22M-parameter alternative optimized for minimal resource use.

Additionally, Meta launched the open-source benchmarking suite CyberSec Eval 4, which was developed in partnership with CrowdStrike. It features CyberSOC Eval, benchmarking AI effectiveness in realistic Security Operations Center (SOC) scenarios and AutoPatchBench, which is used to evaluate autonomous AI capabilities for identifying and fixing software vulnerabilities.

Meta also launched the Llama Defenders Program, which provides early access to open-AI-based security tools, including sensitive-document classifiers and audio threat detection. Private Processing is a privacy-first, on-device AI piloted within WhatsApp.

At RSAC 2025, ProjectDiscovery won the award for the “Most Innovative Startup” in the Innovation Sandbox, highlighting its commitment to open-source cybersecurity. Its flagship tool, Nuclei, is a customizable, open-source vulnerability scanner driven by a global community that rapidly identifies vulnerabilities across APIs, websites, cloud environments and networks.

Nuclei’s extensive YAML-based templating library includes over 11,000 detection patterns, 3,000 directly tied to specific CVEs, enabling real-time threat identification. Andy Cao, COO at ProjectDiscovery, emphasized open-source’s strategic importance, stating: “Winning the 20th annual RSAC Innovation Sandbox proves open-source models can succeed in cybersecurity. It reflects the power of our community-driven approach to democratizing security.”

ProjectDiscovery’s success aligns with Gartner’s 2024 Hype Cycle for Open-Source Software, which positions open-source AI and cybersecurity tools in the “Innovation Trigger” phase. Gartner recommends that organizations establish open-source program offices (OSPOs), adopt software bill-of-materials (SBOM) frameworks, and ensure regulatory compliance through effective governance practices.

Actionable insights for security leaders

Cisco’s Foundation-sec-8B, Meta’s expanded AI Defenders Suite and ProjectDiscovery’s Nuclei together demonstrated that cybersecurity innovation thrives most when openness, collaboration and specialized domain expertise align across company boundaries. These companies and others like them are setting the stage for any cybersecurity provider to be an active collaborator in creating cybersecurity defenses that deliver greater efficacy at lower costs.

As Patel emphasized during his keynote, “These aren’t fantasies. These are real-life examples that will be delivered because we now have bespoke security models that will be affordable for everyone. Better security efficacy is going to come at a fraction of the cost with state-of-the-art reasoning.”

Daily insights on business use cases with VB Daily

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Read our Privacy Policy

Thanks for subscribing. Check out more VB newsletters here.

An error occured.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleStudy: AI-Powered Research Prowess Now Outstrips Human Experts, Raising Bioweapon Risks
Next Article EU Commission: “AI Gigafactories” to strengthen Europe as a business location
Advanced AI Bot
  • Website

Related Posts

Cisco Unveils Foundation AI for Enhanced Security Integration

May 8, 2025

RSAC 2025: Cisco and Meta put open-source AI at the heart of threat defense

May 8, 2025

Cisco Unveils Foundation AI for Enhanced Security Integration

May 8, 2025
Leave A Reply Cancel Reply

Latest Posts

Beyond ‘Love,’ The Enduring Legacy Of Robert Indiana Resonates Deeply Through Pace Gallery Representation

Ancient Greek Author and Title of Charred Herculaneum Scroll Revealed

Bonhams To Auction Museum Quality Work from The Holly Solomon Collection.

Justin Bateman Turns Stones Into Ephemeral Art

Latest Posts

C3.ai Stock Dips Following Palantir Technologies Earnings: What’s Going On? – C3.ai (NYSE:AI)

May 8, 2025

China’s Tech Giants Stockpile Nvidia H20 AI Chips Ahead of U.S. Export Ban

May 8, 2025

AI-generated images are a legal mess – and still a very human process

May 8, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.