Advanced AI News
Center for AI Safety

How autonomous truck developers are pushing forward AI safety research boundaries

By Advanced AI Editor · June 24, 2025 · 3 Mins Read


How Stanford is improving AV safety

Stanford is enlisting leading AI experts to better characterize machine learning behavior in safety-critical applications. Lopez explained two key techniques that will help AV safety: out-of-distribution detection and adaptive stress testing.

Out-of-distribution detection

One way Stanford can help improve safety is by analyzing training data.

According to Lopez, every Level 4 autonomous vehicle company trains its perception models on millions of relevant images. However, there is always a chance that the model will encounter something new it cannot recognize; it may then become confused and behave unpredictably.

“How can you anticipate every possible thing that a machine learning algorithm is going to encounter?” Lopez asked. “This problem I’m describing is a problem that every single Level 4 autonomous vehicle company is facing.”

Stanford is developing systems that take images from real-world operations alongside images from the model’s training set and feed both into a large language model. With this data, the LLM can determine whether certain real-world images might confuse the autonomous driver. This technique is called out-of-training-distribution input detection, or simply out-of-distribution detection.

“Immediately, the large language model will be able to tell you, ‘This image that you have from the road may not be very well represented in your training set. You better go update your training set and include some images of this Joshua tree, or tumbleweed, or billboard with a picture of a stop sign,’” Lopez said.

For autonomous trucks, the technique can help identify gaps in AI training data.

“We’re leveraging that capability to characterize our perception machine learning models to try to continuously understand whether we have complete training data sets or whether we need to update our training data set.”

Lopez said that Marco Pavone, a Stanford associate professor and member of the Center for AI Safety, is doing leading-edge research on out-of-distribution detection.
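The pipeline Lopez describes routes images through a large language model. As a rough intuition for what “not well represented in your training set” means, here is a minimal sketch that uses a simpler stand-in for the LLM’s judgment: nearest-neighbor distance between image embeddings. The embedding dimension, threshold, and random stand-in data below are all illustrative assumptions, not details of Stanford’s system.

```python
# Minimal embedding-based out-of-distribution detection sketch.
# Assumptions (not from the article): images are already embedded into
# fixed-length vectors by some perception backbone; a cosine-distance
# threshold stands in for the LLM-based judgment described above.
import numpy as np

def cosine_distance(query: np.ndarray, pool: np.ndarray) -> np.ndarray:
    """Cosine distance between one query vector and a matrix of vectors."""
    query = query / np.linalg.norm(query)
    pool = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    return 1.0 - pool @ query

def flag_out_of_distribution(
    road_embeddings: np.ndarray,      # (n, d) embeddings from real-world operation
    training_embeddings: np.ndarray,  # (m, d) embeddings from the training set
    threshold: float = 0.35,          # illustrative cutoff; tuned in practice
) -> list[int]:
    """Return indices of road images poorly represented in the training set."""
    flagged = []
    for i, query in enumerate(road_embeddings):
        # Distance to the closest training example; large => unfamiliar scene.
        nearest = cosine_distance(query, training_embeddings).min()
        if nearest > threshold:
            flagged.append(i)
    return flagged

# Demo with random stand-ins (random vectors all look unfamiliar, so all flag).
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 512))
road = rng.normal(size=(3, 512))
print(flag_out_of_distribution(road, train))
```

Flagged images would then be reviewed and, as Lopez suggests, folded back into the training set so the model learns the previously unfamiliar scene.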

Adaptive stress testing

Sensors are prone to countless types of interference. Dirty or malfunctioning sensors can spell trouble when a truck’s driving system depends on sensor data to operate safely.

“Your perception system is going to be messy; it’s going to be noisy. There’s going to be camera obstructions, environmental conditions, fog, dust, rain. That’s going to make it difficult to get a very accurate and clear picture of the world,” Lopez explained. “If the path planner has noisy information about the world, it’s prone to make mistakes about the true world-state.”

Adaptive stress testing simulates sensor disturbances to better understand the autonomous driver’s behavior under various conditions and ensure it can still navigate safely.

“We’re trying to reproduce those types of conditions … and ensuring that our path planner can still create a safe path through that scene, even with these noisy disturbances added to the scene model.”

Stanford associate professor Mykel Kochenderfer helped develop the technique. Adaptive stress testing is already making significant contributions to safety: it helps inform the Federal Aviation Administration’s collision avoidance systems for commercial aircraft.
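Published work on adaptive stress testing frames the search for failures as a sequential decision problem and rewards failures caused by plausible, high-likelihood disturbances. The toy sketch below captures that objective with plain random search over sensor-noise sequences: a simple braking policy reads a noisy range sensor, and the test keeps the most plausible noise sequence that produces a collision. The simulator, policy, and parameters are illustrative assumptions, not Torc’s or Stanford’s systems, and random search stands in for the reinforcement-learning search used in the published method.

```python
# Toy adaptive stress testing sketch: search for plausible sensor-noise
# sequences that make a simple braking policy collide with an obstacle.
import math
import random

DT = 0.1      # simulation step (s)
SIGMA = 4.0   # std dev of range-sensor noise (m); exaggerated for illustration

def simulate(noise_seq):
    """Run the braking policy under a fixed noise sequence.
    Returns (collided, log-likelihood of the noise up to a constant)."""
    position, speed, obstacle = 0.0, 20.0, 100.0  # m, m/s, m
    log_likelihood = 0.0
    for noise in noise_seq:
        measured_gap = (obstacle - position) + noise   # noisy range reading
        accel = -8.0 if measured_gap < 30.0 else 0.0   # brake on short *measured* gap
        speed = max(0.0, speed + accel * DT)
        position += speed * DT
        log_likelihood += -0.5 * (noise / SIGMA) ** 2  # Gaussian log-density, constant dropped
        if position >= obstacle:
            return True, log_likelihood
    return False, log_likelihood

def adaptive_stress_test(trials=20_000, horizon=100):
    """Random-search stand-in for AST: keep the most plausible failure found."""
    best_ll, best_seq = -math.inf, None
    for _ in range(trials):
        noise_seq = [random.gauss(0.0, SIGMA) for _ in range(horizon)]
        collided, ll = simulate(noise_seq)
        if collided and ll > best_ll:
            best_ll, best_seq = ll, noise_seq
    return best_ll, best_seq

best_ll, best_seq = adaptive_stress_test()
print("no failure found" if best_seq is None
      else f"most plausible failure found, log-likelihood {best_ll:.1f}")
```

The key design point, in the published formulation, is that the tester is scored not just on finding a failure but on how likely its disturbances are, which steers the search toward realistic failure modes (a sensor reading the gap slightly long at the worst moment) rather than absurd ones.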



