FPGAs power deep learning platform

“We exploit the flexibility of Intel FPGAs to incorporate new innovations rapidly, while offering performance comparable to, or greater than, many ASIC-based deep learning processing units,” said Doug Burger, distinguished engineer at Microsoft Research NExT.

Microsoft notes that many silicon AI accelerators require multiple requests to be grouped together (called ‘batching’) to achieve high performance, which adds latency to each individual request. Using FPGA technology, Project Brainwave achieved a performance of more than 39 Tflops on a single request, with no batching required.
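To see why batching boosts throughput at the cost of per-request latency, consider a minimal sketch of an accelerator's cost model. The numbers below are hypothetical illustrations, not Brainwave measurements: each invocation pays a fixed overhead (kernel launch, data transfer), so packing more requests into one invocation amortises that overhead, but every request then waits for the whole batch to finish.

```python
# Hypothetical cost model: NOT Brainwave figures, just an illustration
# of the batching trade-off the article describes.
FIXED_OVERHEAD_MS = 5.0   # fixed per-invocation cost (launch, transfers)
PER_REQUEST_MS = 1.0      # marginal compute time per request

def latency_ms(batch_size: int) -> float:
    """Time until every request in the batch has completed."""
    return FIXED_OVERHEAD_MS + PER_REQUEST_MS * batch_size

def throughput_rps(batch_size: int) -> float:
    """Requests completed per second at this batch size."""
    return batch_size / (latency_ms(batch_size) / 1000.0)

for b in (1, 8, 64):
    print(f"batch={b:3d}  latency={latency_ms(b):5.1f} ms  "
          f"throughput={throughput_rps(b):7.1f} req/s")
```

Larger batches raise throughput sharply but also lengthen the wait for any single request, which is why an architecture that performs well at batch size 1, as Microsoft claims for Brainwave, matters for real-time inference.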

Dan McNamara, general manager of Intel’s Programmable Solutions Group, noted: “Microsoft’s need for real-time inference across a wide range of data types demands a high-performance AI hardware accelerator with software-programmable flexibility. Microsoft selected Stratix FPGAs for their added communication blocks on the chip, as well as synthesisable logic, to provide high performance in deep learning across many types of data.”
