Said to be the fastest GPU IP ever released by Imagination, the IMG A-Series evolves the PowerVR GPU architecture to address the graphics and compute needs of next-generation devices. Designed to be “The GPU of Everything”, the IMG A-Series is intended for multiple markets, from automotive, AIoT and computing through to DTV/STB/OTT, mobile and server.
The IMG A-Series provides a multi-dimensional approach to performance scalability which ranges from 1 pixel per clock (PPC) parts for the entry-level market right up to 2 TFLOP cores for performance devices, and beyond that to multi-core solutions for cloud applications.
Commenting on the launch, Dr. Ron Black, CEO of Imagination Technologies, said: “IMG A-Series is our most important GPU launch since we delivered the first mobile PowerVR GPU 15 years ago and the best GPU IP for mobile ever made. It offers the best performance over sustained time periods and at low power budgets across all markets. It really is the GPU of everything.”
The IMG A-Series has been designed to deliver significant improvements at the same clock and process, offering 2.5x the performance, 8x faster machine learning processing and 60% lower power than currently shipping PowerVR devices.
Compared to other GPU IP solutions currently available, the IMG A-Series delivers higher performance, lower power (at the same clock and process), lower bandwidth (at the same cache size) and a smaller silicon footprint. Its architecture also offers strong differentiators, such as guaranteed 50% image compression (lossless in most cases, or visually lossless in exceptional cases).
According to Jon Peddie, principal and founder, Jon Peddie Research, “The simple fact is that for mobile SoCs the market leader owns its own GPU technology and is increasing market share at a rate of 5% year-on-year. In order to stop the losses to their own potential share the other mobile SoC companies need a compelling GPU that will deliver some real competition. Imagination’s A-Series can do that.”
IMG A-Series is already licensed for multiple markets and the first SoC devices are expected in 2020.
The IMG A-Series is available in a range of high-performance configurations:
- IMG AXT-64-2048 for flagship performance; 2.0 TFLOPS, 64 Gpixels/s and 8 TOPS of AI performance
- IMG AXT-48-1536 for premium mobile; 1.5 TFLOPS, 48 Gpixels/s and 6 TOPS
- IMG AXT-32-1024 for high-performance mobile and automotive; 1.0 TFLOPS, 32 Gpixels/s and 4 TOPS
- IMG AXT-16-512 for high-mid-performance mobile and automotive; 0.5 TFLOPS, 16 Gpixels/s and 2 TOPS
- IMG AXM-8-256 for mid-range mobile; 0.25 TFLOPS, 8 Gpixels/s and 1 TOPS
For lower-cost segments, the IMG A-Series delivers best-in-class area, cost and efficiency:
- IMG AXE-2-16 for premium IoT, entry DTV/STB, display and other fillrate-driven applications; 2 PPC, 16 GFLOPS and 2 Gpixels/s
- IMG AXE-1-16 for entry-level mobile and IoT, the fastest Vulkan-capable GPU in its class; 1 PPC, 16 GFLOPS and 1 Gpixel/s
The IMG A-Series features Imagination’s HyperLane technology: individual hardware control lanes, each isolated in memory, that allow different tasks to be submitted to the GPU simultaneously for secure GPU multitasking.
With Dynamic Performance Control, the GPU can spread its performance across these multiple tasks, executing them all, while maximising GPU utilisation. Priority HyperLanes deliver a dynamic split; for example, prioritising graphics and delivering all the required performance for that application while an AI task runs alongside using the remaining performance. HyperLane technology can also isolate protected content for rights management. All IMG A-Series GPUs support up to eight HyperLanes.
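To make the dynamic-split idea concrete, here is a minimal Python sketch of priority-ordered lane scheduling. It is an illustrative assumption only: the names (HyperLane, schedule_cycle) and capacity figures are invented for this example and are not part of any published Imagination API.

```python
# Hypothetical illustration: a priority lane takes the capacity it needs each
# cycle, and lower-priority lanes share whatever capacity remains.
from dataclasses import dataclass

@dataclass
class HyperLane:
    name: str
    priority: int   # lower number = higher priority
    demand: float   # fraction of GPU capacity requested this cycle

def schedule_cycle(lanes, capacity=1.0):
    """Grant capacity in priority order; spare capacity flows down the list."""
    grants = {}
    for lane in sorted(lanes, key=lambda l: l.priority):
        granted = min(lane.demand, capacity)
        grants[lane.name] = granted
        capacity -= granted
    return grants

lanes = [
    HyperLane("graphics", priority=0, demand=0.7),     # gets all it asks for
    HyperLane("ai_inference", priority=1, demand=0.5), # runs in the remaining 30%
]
print(schedule_cycle(lanes))  # {'graphics': 0.7, 'ai_inference': 0.3}
```

In this toy model the graphics lane always receives its full request, mirroring the description above of a priority HyperLane delivering all required performance while an AI task consumes only what is left over.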
HyperLane also enables a new feature, AI Synergy. This option enables SoC designers to take advantage of the compute capability in the IMG A-Series to accelerate their AI workloads. Through AI Synergy, the GPU delivers graphics performance while using its spare resources for programmable AI alongside a fixed-function, highly optimised Imagination neural network accelerator. AI Synergy delivers programmable AI in minimal silicon area, while a unified software stack provides flexibility and strong performance.
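The sketch below outlines the AI Synergy split in the same hedged spirit: standard operators are routed to the fixed-function neural network accelerator (NNA), while custom or programmable operators fall back to spare GPU compute. The operator names and the dispatch function are hypothetical, chosen only to illustrate the heterogeneous-offload pattern behind a unified software stack.

```python
# Hypothetical sketch (not Imagination's API): route each network operator to
# the NNA when it is supported there, otherwise to spare GPU compute.
NNA_SUPPORTED = {"conv2d", "depthwise_conv", "pooling", "fully_connected"}

def dispatch(op_name):
    """Return the execution target for a single operator."""
    return "NNA" if op_name in NNA_SUPPORTED else "GPU compute (spare ALUs)"

network = ["conv2d", "custom_activation", "pooling", "fully_connected"]
for op in network:
    print(f"{op:>18} -> {dispatch(op)}")
```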