NVIDIA and ARM bring deep learning to IoT devices

The partnership will see NVIDIA and ARM integrate the open-source NVIDIA Deep Learning Accelerator (NVDLA) architecture into ARM’s Project Trillium platform for machine learning. The collaboration aims to make it simple for IoT chip companies to integrate AI into their designs and deliver products that are both intelligent and affordable.

“Inferencing will become a core capability of every IoT device in the future,” said Deepu Talla, vice president and general manager of Autonomous Machines at NVIDIA. “Our partnership with ARM will help drive this wave of adoption by making it easy for hundreds of chip companies to incorporate deep learning technology.”

“Accelerating AI at the edge is critical in enabling ARM’s vision of connecting a trillion IoT devices,” explained Rene Haas, executive vice president and president of the IP Group at ARM. “Today we are one step closer to that vision by incorporating NVDLA into the Arm Project Trillium platform, as our entire ecosystem will immediately benefit from the expertise and capabilities our two companies bring in AI and IoT.”

Based on NVIDIA® Xavier™, the world’s most powerful system on a chip for autonomous machines, NVDLA is a free, open architecture that promotes a standard way to design deep learning inference accelerators. NVDLA’s modular architecture is scalable, highly configurable and designed to simplify integration and portability.
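That configurability is the point: the same NVDLA architecture can be instantiated at very different sizes depending on the target device. The sketch below is purely illustrative, with parameter names invented for this example rather than taken from the NVDLA specification, but it conveys the kind of knobs (MAC array size, on-chip buffer size, supported precisions) a chip designer might tune:

```python
# Hypothetical illustration of NVDLA-style configurability; the profile and
# parameter names below are invented for this sketch, not from the NVDLA spec.
NVDLA_PROFILES = {
    "iot_sensor": {        # tiny instance for battery-powered endpoints
        "mac_units": 64,
        "conv_buffer_kib": 128,
        "precisions": ["int8"],
    },
    "smart_camera": {      # larger instance for vision at the edge
        "mac_units": 1024,
        "conv_buffer_kib": 512,
        "precisions": ["int8", "fp16"],
    },
}

def peak_macs_per_cycle(profile: dict) -> int:
    """Back-of-the-envelope peak throughput: one MAC per unit per cycle."""
    return profile["mac_units"]

if __name__ == "__main__":
    for name, profile in NVDLA_PROFILES.items():
        print(f"{name}: {peak_macs_per_cycle(profile)} MACs/cycle")
```

The trade-off a designer makes is the usual one: a smaller instance costs less silicon area and power, while a larger one delivers more inference throughput per clock cycle.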

NVDLA brings a number of benefits that will help speed the adoption of deep learning inference. It is supported by a suite of powerful developer tools, including upcoming versions of NVIDIA TensorRT, NVIDIA’s software for optimizing and running deep learning inference.
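TensorRT sits on the software side of this story: it takes a trained network and optimizes it for the target inference engine. As a rough illustration only (assuming a TensorRT 8.x Python installation and an ONNX model file named model.onnx; exact APIs vary between releases, and the NVDLA-specific tooling mentioned above is not shown), a typical build step looks something like this:

```python
# Minimal sketch: build a TensorRT engine from an ONNX model (TensorRT 8.x).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse the trained model; report any parser errors.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # request reduced-precision inference

# Serialize an optimized engine that can be deserialized later for inference.
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```

The serialized engine can then be loaded at runtime on the deployment device, which is the step an NVDLA-based design would accelerate in hardware.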

The open-source design allows for cutting-edge features to be added regularly, including contributions from the research community.

The integration of NVDLA with Project Trillium will enable deep learning developers to leverage ARM’s flexibility and scalability across a wide range of IoT devices.

“This is a win/win for IoT, mobile and embedded chip companies looking to design accelerated AI inferencing solutions,” said Karl Freund, lead analyst for deep learning at Moor Insights & Strategy. “NVIDIA is the clear leader in ML training and ARM is the leader in IoT end points, so it makes a lot of sense for them to partner on IP.”