Advanced AI for self-driving cars

“The problem with self-driving cars,” said Samer Hijazi, engineering director at Cadence, “is that there has been over-promise and under-delivery. Currently, advanced driver assistance systems (ADAS) are still only at Level 3 ‘conditional automation’ on the SAE J3016 scale. We need to reach Level 5 for self-driving cars to be fully automated.”

Convolutional neural networks (CNNs) are layered systems of interconnected artificial neurons that exchange messages with each other and are widely used in pattern recognition applications.

A neural network consists of multiple layers of feature-detecting neurons, which respond to different combinations of inputs from the previous layers. The connections in a neural network have numeric weights that get tuned during the training process, so that the network responds correctly to an image or pattern.
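To make this concrete, here is a minimal sketch of such a layered network in Python using PyTorch. The network name, layer sizes and training snippet are illustrative assumptions, not a Cadence design: convolutional layers act as the feature detectors, and the numeric connection weights are tuned by gradient descent during training.

```python
import torch
import torch.nn as nn

# A minimal CNN sketch: stacked feature-detecting layers whose numeric
# weights are tuned during training (illustrative only, not a
# production ADAS network).
class TinyCNN(nn.Module):
    def __init__(self, num_classes=43):  # 43 classes, as in GTSRB
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level feature detectors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combines earlier features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)               # 32x32 input -> 32 feature maps of 8x8
        return self.classifier(x.flatten(1))

# The connection weights are adjusted so the network responds
# correctly to an image or pattern:
model = TinyCNN()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 32, 32)   # a dummy batch of 32x32 RGB images
labels = torch.randint(0, 43, (4,))  # dummy class labels
loss = loss_fn(model(images), labels)
loss.backward()                      # compute gradients w.r.t. the weights
optimiser.step()                     # tune the weights
```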

CNNs have demonstrated higher correct detection rates than other detection algorithms – in some benchmarks, higher even than humans.

However, several unsolved problems and limiting factors still stand in the way of wide adoption of CNN-based ADAS applications. The computational complexity of CNNs adds to overall ADAS complexity, and there is a lack of knowledge about data preparation and network optimisation. Also needed are improved programmable acceleration for embedded systems, better recognition accuracy, greater energy and memory bandwidth efficiency, and better platforms for network structure optimisation.

“CNN is not well understood by embedded engineers,” Hijazi explained, “and there is a high degree of redundancy in the designed networks.”

Cadence and other companies are working on software and hardware products to improve CNN applications.

Based on the German Traffic Sign Recognition Benchmark (GTSRB) dataset, Cadence has developed various algorithms in MATLAB for traffic sign recognition. Using a proprietary hierarchical CNN approach, the company says it has achieved a correct detection rate of 99.58% – better than the previously established baseline and said to be the best achieved on GTSRB to date.
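Cadence has not published the details of its hierarchical approach, but the general idea of hierarchical classification can be sketched: a coarse network first assigns a sign to a family (for example speed limit, prohibition, warning), then a family-specific head distinguishes the exact sign. The two-stage structure, family names and class counts below are illustrative assumptions, not Cadence’s actual design.

```python
import torch
import torch.nn as nn

# Hypothetical two-stage hierarchical classifier for traffic signs.
# Stage 1 picks a coarse sign family; stage 2 routes the image to a
# family-specific head to pick the exact class. The families and
# per-family class counts are assumptions chosen to sum to GTSRB's 43.
FAMILIES = ["speed_limit", "prohibition", "warning", "mandatory"]
CLASSES_PER_FAMILY = {"speed_limit": 9, "prohibition": 12,
                      "warning": 15, "mandatory": 7}

def make_backbone():
    # Shared convolutional feature extractor for 32x32 RGB inputs.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
    )

class HierarchicalClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = make_backbone()
        feat = 32 * 8 * 8
        self.family_head = nn.Linear(feat, len(FAMILIES))
        self.fine_heads = nn.ModuleDict(
            {f: nn.Linear(feat, n) for f, n in CLASSES_PER_FAMILY.items()}
        )

    def forward(self, x):
        feats = self.backbone(x)
        family_idx = self.family_head(feats).argmax(dim=1)
        # Route each image to the head for its predicted family.
        fine = [self.fine_heads[FAMILIES[i]](feats[j:j + 1])
                for j, i in enumerate(family_idx.tolist())]
        return family_idx, fine

model = HierarchicalClassifier()
family, fine_logits = model(torch.randn(2, 3, 32, 32))
```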

The company hopes to optimise neural networks (NNs) by creating new basic NN structures and by using iterative network optimisation and advanced quantisation processes.
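Cadence’s optimisation process is proprietary, but the core idea of quantisation can be illustrated: trained floating-point weights are mapped to low-precision integers, cutting memory footprint and bandwidth at a small cost in accuracy. Below is a minimal numpy sketch of symmetric 8-bit quantisation – a generic, assumed scheme, not Cadence’s.

```python
import numpy as np

def quantise_int8(weights):
    """Symmetric 8-bit quantisation: map float weights to int8 plus a scale."""
    scale = np.abs(weights).max() / 127.0   # largest magnitude maps to +/-127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    # Approximate reconstruction used at inference time.
    return q.astype(np.float32) * scale

w = np.random.randn(32, 16).astype(np.float32)   # dummy trained weights
q, scale = quantise_int8(w)

print("float32 size:", w.nbytes, "bytes")        # 2048 bytes
print("int8 size:   ", q.nbytes, "bytes")        # 512 bytes (4x smaller)
print("max error:   ", np.abs(w - dequantise(q, scale)).max())
```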

Today, the most advanced ADAS chip consumes 3W, making it highly energy inefficient for an embedded automotive application.

To improve this performance, Cadence has developed the Vision P6, a low-power DSP suitable for embedded neural network applications.

Compared to commercially available GPUs, the Vision P6 can achieve twice the frame rate at much lower power consumption on a typical neural network. The DSP implements on-the-fly data compression, which reduces its memory footprint and bandwidth requirements; it supports integer, fixed- and floating-point data types, as well as the OpenCV and OpenVX libraries.
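The bandwidth benefit of compact data types is easy to quantify with back-of-the-envelope arithmetic. The resolution, channel count and frame rate below are assumptions chosen for illustration; they are not Vision P6 specifications.

```python
# Rough bandwidth arithmetic for moving image data at video rate.
# Resolution, channels and frame rate are illustrative assumptions,
# not Vision P6 specifications.
width, height, channels = 1280, 720, 3
fps = 30
values_per_second = width * height * channels * fps

for name, bytes_per_value in [("float32", 4), ("int16", 2), ("int8", 1)]:
    mb_per_s = values_per_second * bytes_per_value / 1e6
    print(f"{name}: {mb_per_s:.0f} MB/s")
# float32: 332 MB/s, int16: 166 MB/s, int8: 83 MB/s -- narrower types
# (and on-the-fly compression) directly cut memory bandwidth.
```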

When asked about the future, Hijazi said the automotive company with the most road data will do best. New value chains will emerge, and energy and bandwidth efficiency will improve severalfold in network and engine designs, leading to peta-MAC embedded neural networks and, hopefully, to Level 5 capable autonomous vehicles.