CEA-Leti and Stanford target edge-AI apps with NVM memory cell

Target applications for the new chip include energy-efficient smart-sensor nodes that support artificial intelligence on the Internet of Things, or “edge AI”.

The proof-of-concept chip has been validated for a variety of applications (machine learning, control, security). It monolithically integrates two heterogeneous technologies: 18 kilobytes (KB) of on-chip resistive RAM (RRAM) fabricated on top of commercial 130nm silicon CMOS, which hosts a 16-bit general-purpose microcontroller core with 8KB of SRAM.

According to the researchers, the chip delivers 10x better energy efficiency (at similar speed) than standard embedded flash, thanks to its low operation energy. It also offers ultra-fast, energy-efficient transitions between on and off modes.

To save energy, smart-sensor nodes must turn themselves off, so non-volatility (the ability of a memory to retain data when power is off) is becoming an essential on-chip memory characteristic for edge nodes. Storing 2.3 bits per RRAM cell increases memory density, and this denser non-volatile integration yields better application results: 2.3x better neural network inference accuracy, for example, compared with an equivalent 1-bit/cell memory, the team states.
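For a sense of where a figure like 2.3 bits/cell comes from, the sketch below works through the capacity arithmetic. It assumes, purely for illustration, that each cell can be programmed to one of five distinguishable conductance levels (log2 5 ≈ 2.32 bits); the article does not state the chip's actual level count or encoding scheme.

```python
import math

def bits_per_cell(levels: int) -> float:
    """Information capacity of one cell with `levels` distinguishable states."""
    return math.log2(levels)

def cells_needed(total_bits: int, levels: int) -> int:
    """Cells required to store `total_bits` at a given number of levels per cell."""
    return math.ceil(total_bits / bits_per_cell(levels))

if __name__ == "__main__":
    payload_bits = 18 * 1024 * 8                    # the chip's 18KB RRAM capacity, in bits
    single_level = cells_needed(payload_bits, 2)    # 1-bit/cell baseline (2 levels)
    multi_level = cells_needed(payload_bits, 5)     # ~2.32 bits/cell (5 levels, assumed)
    print(f"1-bit/cell:   {single_level} cells")
    print(f"2.3-bit/cell: {multi_level} cells "
          f"({single_level / multi_level:.2f}x denser)")
```

The point of the arithmetic is simply that packing more distinguishable levels into each cell shrinks the array needed for the same capacity, which is what the team means by denser NVM integration.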

NVM technologies such as RRAM, however, suffer from write failures, which have a catastrophic impact at the application level and significantly diminish the memory's usefulness. The CEA-Leti and Stanford team created a new technique called ENDURER that is said to overcome this challenge, giving the chip a 10-year functional lifetime when continuously running inference on the Modified National Institute of Standards and Technology (MNIST) database, for example.
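The article does not describe how ENDURER works internally. As a rough illustration of the kind of problem such a resilience layer must solve, the sketch below shows a generic write-verify loop that remaps persistently failing addresses to spare cells; the `FlakyNVM` model, retry count, and spare-cell remapping are illustrative assumptions, not the published scheme.

```python
import random

class FlakyNVM:
    """Toy model of an NVM array whose writes occasionally fail (illustrative only)."""

    def __init__(self, size: int, fail_prob: float = 0.01) -> None:
        self.cells = [0] * size
        self.fail_prob = fail_prob

    def write(self, addr: int, value: int) -> None:
        # A failed write silently leaves a corrupted value in the cell.
        failed = random.random() < self.fail_prob
        self.cells[addr] = (value ^ 1) if failed else value

    def read(self, addr: int) -> int:
        return self.cells[addr]


class ResilientWriter:
    """Generic write-verify with remapping of failing addresses to spare cells."""

    def __init__(self, mem: FlakyNVM, spare_start: int, retries: int = 3) -> None:
        self.mem = mem
        self.retries = retries
        self.remap = {}              # logical address -> spare address
        self.next_spare = spare_start

    def write(self, addr: int, value: int) -> None:
        target = self.remap.get(addr, addr)
        for _ in range(self.retries):
            self.mem.write(target, value)
            if self.mem.read(target) == value:   # read back to verify the write
                return
        # Persistent failure: retire the location and move the data to a spare cell.
        self.remap[addr] = self.next_spare
        self.next_spare += 1
        self.mem.write(self.remap[addr], value)

    def read(self, addr: int) -> int:
        return self.mem.read(self.remap.get(addr, addr))
```

The actual ENDURER technique additionally has to sustain a 10-year lifetime under continuous inference workloads; the sketch only shows the general pattern of detecting failed writes and routing around them.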

“The Stanford/CEA-Leti team demonstrated a complete chip that stores multiple bits per on-chip RRAM cell. Stored information is correctly processed when compared with previous demonstrations using standalone RRAM or a few cells in an RRAM array,” explains Thomas Ernst, Leti’s chief scientist for silicon components and technologies. “This multi-bit storage improves the accuracy of neural network inference, a vital component of AI.”

Professor Subhasish Mitra of Stanford says the chip demonstrates several industry firsts for RRAM technology: new algorithms that achieve multiple bits per RRAM cell at the full-memory level, new techniques that exploit RRAM features as well as application characteristics to demonstrate the effectiveness of multi-bit-per-cell RRAM at the computing-system level, and new resilience techniques that achieve a useful lifetime for RRAM-based computing systems.