Autonomous truck developers are racing to commercialization, but safety is taking a front seat. On their trek toward driverless trucks, developers are also helping push the boundaries of AI safety research.
Torc announced that it is now a supporting member of the Stanford Center for AI Safety, helping the California university's center develop public solutions to one of the fundamental challenges of machine learning applications: uncertainty.
“Every machine learning model that’s being used in this industry encounters the same sort of issues around uncertainty,” Jerry Lopez, senior director of safety assurance for Torc, told FleetOwner. “These are probabilistic models, where they’re taking in a bunch of real-world data and they’re making probabilistic estimations about what the world-state is and what the world-state will be in the future.”
Autonomous developers, such as Torc, want to resolve this safety problem. The Stanford Center for AI Safety is developing rigorous techniques and open-source software for building safer AI systems. The membership will help the groups tackle their common problem through leading research.
See also: Torc launches first autonomous trucking hub
A shared problem: Autonomous vehicle safety
At the Center for AI Safety, Stanford professors and graduate students conduct advanced research to ensure that machine learning can be used reliably in safety-critical applications.
Machine learning is central to autonomous vehicle development. By making statistical predictions and inferences, it enables the driving system to identify pedestrians, read traffic signs, anticipate other vehicles’ trajectories, and more.
“Every Level 4 autonomous vehicle company is incorporating machine learning models into safety-critical applications for decision-making, perception of the world, and that sort of thing,” Lopez explained.
However, using machine learning also presents an engineering problem: ensuring that the AI application is safe. Machine learning applications can often lead to opaque systems where developers cannot anticipate real-world failures.
Torc will sponsor, collaborate in, and coauthor research with the center, gaining direct access to research findings as they happen. Torc will also have access to the center’s symposiums, seminars, and other member benefits.
These benefits will support Torc’s development process and improve its understanding of how machine learning models behave in various situations. The partnership follows Torc’s plans to fully commercialize autonomous trucks for long-haul applications in 2027.
How Stanford is improving AV safety
Stanford’s leading AI experts are working to better characterize how machine learning models behave in safety-critical applications. Lopez explained two key techniques that will help AV safety: out-of-distribution detection and adaptive stress testing.
Out-of-distribution detection
One way Stanford can help improve safety is by analyzing training data.
According to Lopez, every Level 4 autonomous vehicle company trains its perception models on millions and millions of relevant images. However, there is always a chance that the model will run into something new that it cannot recognize; it could then become confused and behave unpredictably.
“How can you anticipate every possible thing that a machine learning algorithm is going to encounter?” Lopez asked. “This problem I’m describing is a problem that every single Level 4 autonomous vehicle company is facing.”
Stanford is developing models that take images from real-world operations and images from the model’s training set and feed both into a large language model. With this data, the LLM can determine whether certain real-world images might confuse the autonomous driver. This technique is called out-of-training-distribution input detection, or simply out-of-distribution detection.
“Immediately, the large language model will be able to tell you, ‘This image that you have from the road may not be very well represented in your training set. You better go update your training set and include some images of this Joshua tree, or tumbleweed, or billboard with a picture of a stop sign,’” Lopez said.
For autonomous trucks, the technique can help identify gaps in AI training data.
“We’re leveraging that capability to characterize our perception machine learning models to try to continuously understand whether we have complete training data sets or whether we need to update our training data set,” Lopez said.
Lopez said that Marco Pavone, a Stanford associate professor and member of the Center for AI Safety, is doing leading-edge research on out-of-distribution detection.
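The article describes the technique only at a high level, and the exact pipeline is not spelled out here. As a rough illustration of the underlying idea, the sketch below flags a road image as out-of-distribution when its embedding is not close to anything in the training set. The encoder, the similarity threshold, and the function names are assumptions made for the example, not Torc’s or Stanford’s implementation.

```python
import numpy as np

def max_cosine_similarity(query: np.ndarray, bank: np.ndarray) -> float:
    """Highest cosine similarity between a query embedding and a bank of embeddings."""
    query = query / np.linalg.norm(query)
    bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return float(np.max(bank @ query))

def is_out_of_distribution(road_embedding: np.ndarray,
                           training_embeddings: np.ndarray,
                           threshold: float = 0.85) -> bool:
    """Flag a road image whose nearest training image is not similar enough.

    Both inputs are assumed to come from the same image encoder (for example,
    a vision-language model); the 0.85 threshold is an arbitrary illustration.
    """
    return max_cosine_similarity(road_embedding, training_embeddings) < threshold

# Illustrative usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
training = rng.normal(size=(1000, 512))   # embeddings of the training set
road = rng.normal(size=512)               # embedding of one image from the road
if is_out_of_distribution(road, training):
    print("Image may be under-represented in the training set; consider adding it.")
```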
Adaptive stress testing
Sensors are prone to countless types of interference. Dirty or malfunctioning sensors can spell trouble when a truck’s driving system depends on sensor data to operate safely.
“Your perception system is going to be messy; it’s going to be noisy. There’s going to be camera obstructions, environmental conditions, fog, dust, rain. That’s going to make it difficult to get a very accurate and clear picture of the world,” Lopez explained. “If the path planner has noisy information about the world, it’s prone to make mistakes about the true world-state.”
Adaptive stress testing simulates sensor disturbances to better understand the autonomous driver’s behavior under various conditions and ensure it can still navigate safely.
“We’re trying to reproduce those types of conditions … and ensuring that our path planner can still create a safe path through that scene, even with these noisy disturbances added to the scene model,” Lopez said.
Stanford’s associate professor Mykel Kochenderfer helped develop the technique. Adaptive stress testing is already making significant contributions to safety: It helps inform the Federal Aviation Administration’s collision avoidance solutions for commercial aircraft.
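The article conveys only the intuition; the published adaptive stress testing method frames disturbance generation as a sequential decision problem and uses reinforcement-learning-style search over a simulator to find the most likely path to failure. The sketch below captures the flavor with a much simpler random search over injected sensor noise; the scene representation, the planner safety check, and the noise model are all hypothetical stand-ins, not the actual technique used by Torc or Stanford.

```python
import numpy as np

rng = np.random.default_rng(1)

def plan_is_safe(true_scene: np.ndarray, noisy_scene: np.ndarray) -> bool:
    """Hypothetical stand-in for running the path planner on noisy perception
    output and checking the resulting path against the true scene."""
    # Toy criterion: the planner "fails" if the perception error grows too large.
    return float(np.max(np.abs(noisy_scene - true_scene))) < 2.0

def stress_test(true_scene: np.ndarray, trials: int = 10_000):
    """Search for a sensor-disturbance pattern that makes the planner unsafe.

    This sketch simply samples random disturbances; real adaptive stress
    testing searches for the most likely failure-inducing disturbance sequence.
    """
    for _ in range(trials):
        disturbance = rng.normal(scale=0.8, size=true_scene.shape)  # simulated sensor noise
        noisy_scene = true_scene + disturbance
        if not plan_is_safe(true_scene, noisy_scene):
            return disturbance
    return None  # no failure found within the budget

scene = np.zeros(16)                      # toy "true world-state"
failure = stress_test(scene)
if failure is not None:
    print("Found a disturbance that breaks the planner; add it to the test suite.")
```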