The model uses its own uncertainty to estimate the risk of potential collisions or other traffic disruptions at such intersections. It weighs several critical factors, including all nearby visual obstructions, sensor noise and errors, the speed of other cars, and even the attentiveness of other drivers.
Based on the measured risk, the system may advise the car to stop, pull into traffic, or nudge forward to gather more data.
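The decision logic described above can be sketched roughly as follows. This is an illustrative toy, not the researchers' actual predictive-control model: the factor names, weights, and thresholds are all assumptions chosen to show the shape of a "score the risk, then pick stop / go / nudge" rule.

```python
def estimate_risk(occlusion, sensor_noise, oncoming_speed, driver_inattention):
    """Combine normalized risk factors (each in [0, 1]) into a single score.

    The factors mirror those named in the article (visual obstructions,
    sensor noise, other cars' speed, driver attentiveness); the weights
    below are purely illustrative, not from the actual model.
    """
    weights = {"occlusion": 0.35, "noise": 0.15, "speed": 0.30, "attention": 0.20}
    return (weights["occlusion"] * occlusion
            + weights["noise"] * sensor_noise
            + weights["speed"] * oncoming_speed
            + weights["attention"] * driver_inattention)


def advise(risk, stop_threshold=0.6, go_threshold=0.3):
    """Map the risk score to one of the three actions described above.

    High risk -> stop; low risk -> proceed; in between, creep forward
    to gather more sensor data before committing. Thresholds are
    hypothetical.
    """
    if risk >= stop_threshold:
        return "stop"
    if risk <= go_threshold:
        return "pull into traffic"
    return "nudge forward to gather more data"
```

For example, heavy occlusion combined with fast cross traffic pushes the score above the stop threshold, while a clear, slow intersection falls below the go threshold; the middle band is where nudging forward to reduce uncertainty pays off.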
This contrasts with existing automated systems, which require direct visibility of the objects a vehicle must avoid and can therefore fail when their line of sight is blocked by nearby buildings or other obstructions.
“When you approach an intersection there is potential danger for collision. Cameras and other sensors require line of sight. If there are occlusions, they don’t have enough visibility to assess whether it’s likely that something is coming,” commented Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.
“In this work, we use a predictive-control model that’s more robust to uncertainty, to help vehicles safely navigate these challenging road situations.”
The system has been tested in more than 100 trials of remote-controlled cars turning left at a busy, obstructed intersection in a mock city, with other cars constantly driving through the cross street. Experiments involved fully autonomous cars and cars driven by humans but assisted by the system.
Across the trials, the system helped the cars avoid collisions 70 to 100 percent of the time, depending on conditions. Other similar models implemented in the same remote-controlled cars sometimes couldn't complete a single trial run without a collision.
The researchers plan to advance the system by incorporating other challenging risk factors into the model, such as the presence of pedestrians in and around the road junction.