Supermicro collaborates with Intel on large-scale distributed AI training systems

Intel’s NNP-T is a purpose-built AI training ASIC that supports the growing compute demands of deep learning models.

Commenting on the collaboration, Charles Liang, president and CEO of Supermicro, said: “Striking a balance among computing, communication, and memory, the validated NNP-T ASICs on Supermicro systems can train large AI models with near-linear scaling efficiency via intra- and inter-chassis links.”
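For context, scaling efficiency in distributed training is conventionally defined as the achieved speedup divided by the number of accelerators used, i.e. efficiency(N) = T_1 / (N × T_N), where T_1 is the training time on a single accelerator and T_N the time on N accelerators. “Near-linear” scaling means this ratio stays close to 1 as more chassis are added; this is the standard definition, not an NNP-T-specific metric.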

The Intel Nervana NNP-T addresses memory constraints and is designed to scale out across systems and racks more easily than today’s solutions. As part of the validation process, Supermicro integrated eight NNP-T processors, dual 2nd Generation Intel Xeon Scalable processors, and up to 6TB of DDR4 memory per node, supporting both PCIe card and OAM form factors.

Supermicro NNP-T systems are expected to be available in mid-2020.

“Supermicro has validated our Deep Learning (DL) solution and is helping us prove the Nervana NNP-T system architecture, including card and server design, interconnect, and rack,” said Naveen Rao, corporate vice president and general manager, Artificial Intelligence Products Group, Intel.

With high compute utilisation and a high-efficiency memory architecture for complex deep learning models, the Supermicro NNP-T AI System is built to address two key real-world considerations: accelerating the training of increasingly complex AI models and doing so within a given power budget. The system enables faster AI model training on images and speech, more efficient oil and gas exploration, more accurate medical image analytics, and faster development of autonomous driving models.