How Stanford is improving AV safety
Stanford employs leading AI experts to better characterize machine learning behavior in safety-critical applications. Lopez explained two key techniques that can help improve AV safety: out-of-distribution detection and adaptive stress testing.
Out-of-distribution detection
One way Stanford can help improve safety is by analyzing training data.
According to Lopez, every Level 4 autonomous vehicle company trains its perception models on millions and millions of relevant images. However, there is always a chance that the model will encounter something new that it cannot recognize—it could then become confused and behave unpredictably.
“How can you anticipate every possible thing that a machine learning algorithm is going to encounter?” Lopez asked. “This problem I’m describing is a problem that every single Level 4 autonomous vehicle company is facing.”
Stanford is developing systems that take images from real-world operations alongside images from the model’s training set and feed both into a large language model. With this data, the LLM can determine whether certain real-world images might confuse the autonomous driver. This technique is called out-of-training-distribution input detection, or simply out-of-distribution detection.
“Immediately, the large language model will be able to tell you, ‘This image that you have from the road may not be very well represented in your training set. You better go update your training set and include some images of this Joshua tree, or tumbleweed, or billboard with a picture of a stop sign,’” Lopez said.
For autonomous trucks, the technique can help identify gaps in AI training data.
“We’re leveraging that capability to characterize our perception machine learning models to try to continuously understand whether we have complete training data sets or whether we need to update our training data set,” Lopez said.
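The article describes an LLM-based approach; the underlying idea of out-of-distribution detection can also be illustrated more simply with embedding distances. The sketch below (an assumption for illustration, not Stanford’s actual system) flags an input as out-of-distribution when its feature embedding is far from its nearest neighbors in the training set:

```python
import numpy as np

def ood_score(query_embedding, train_embeddings, k=5):
    """Mean distance from a query to its k nearest training embeddings.

    A high score suggests the input is poorly represented in the
    training data and may confuse the perception model.
    """
    dists = np.linalg.norm(train_embeddings - query_embedding, axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[:k].mean())

def is_out_of_distribution(query_embedding, train_embeddings, threshold, k=5):
    """Flag the input if its OOD score exceeds a chosen threshold."""
    return ood_score(query_embedding, train_embeddings, k) > threshold

# Toy usage: three training embeddings in 2-D feature space.
train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
familiar = np.array([0.1, 0.1])   # close to training data
novel = np.array([10.0, 10.0])    # far from anything seen in training
```

In practice the embeddings would come from the perception model itself (or a separate encoder), and the threshold would be calibrated on held-out data; a flagged image is a candidate for adding to the training set.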
Lopez said that Marco Pavone, a Stanford associate professor and member of the Center for AI Safety, is doing leading-edge research on out-of-distribution detection.
Adaptive stress testing
Sensors are prone to countless types of interference. Dirty or malfunctioning sensors can spell trouble when a truck’s driving system depends on sensor data to operate safely.
“Your perception system is going to be messy; it’s going to be noisy. There’s going to be camera obstructions, environmental conditions, fog, dust, rain. That’s going to make it difficult to get a very accurate and clear picture of the world,” Lopez explained. “If the path planner has noisy information about the world, it’s prone to make mistakes about the true world-state.”
Adaptive stress testing simulates sensor disturbances to better understand the autonomous driver’s behavior under various conditions and ensure it can still navigate safely.
“We’re trying to reproduce those types of conditions … and ensuring that our path planner can still create a safe path through that scene, even with these noisy disturbances added to the scene model.”
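Adaptive stress testing searches over possible disturbance sequences to find the ones most likely to drive the system toward failure. The real technique uses reinforcement learning or Monte Carlo tree search with a likelihood-weighted objective; the sketch below is a deliberately simplified stand-in that uses plain random search over sensor-noise sequences against a hypothetical toy lane-keeping simulator:

```python
import random

def simulate(noise_seq):
    """Toy lane-keeping simulator (hypothetical): each step, sensor noise
    corrupts the vehicle's perceived lateral position, and a simple
    controller corrects toward the lane center based on that reading."""
    position = 0.0
    for noise in noise_seq:
        perceived = position + noise     # noisy sensor reading
        position += -0.5 * perceived     # corrective steering step
    return abs(position)                 # final lateral deviation (failure proxy)

def adaptive_stress_test(horizon=10, iterations=200, seed=0):
    """Search for the noise sequence that maximizes lateral deviation.

    Stands in for AST's adaptive search: instead of MCTS or RL, it
    samples random disturbance sequences and keeps the worst one found.
    """
    rng = random.Random(seed)
    best_seq, best_dev = None, -1.0
    for _ in range(iterations):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        dev = simulate(seq)
        if dev > best_dev:
            best_seq, best_dev = seq, dev
    return best_seq, best_dev
```

The worst-case sequence found this way points engineers at the disturbance patterns the path planner handles least gracefully, which is the information AST is designed to surface.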
Stanford associate professor Mykel Kochenderfer helped develop the technique. Adaptive stress testing is already making significant contributions to safety: It helps inform the Federal Aviation Administration’s collision avoidance solutions for commercial aircraft.