Intel fellow calls for standardised density metric

While there was general agreement on feature size at earlier nodes, more recent processes are being described differently, Bohr said. “Node names have become a poor indicator of where a process stands on the Moore’s Law curve.

“Customers should be able to readily compare various process offerings of a chip maker and those of different chip makers. The challenge is in the increasing complexity of semiconductor processes and in the variety of designs,” he noted.

While there are a number of ways to compare various processes, Bohr said: “What is really needed is an absolute measure of transistors in a given area. At the other extreme, simply taking the total transistor count of a chip and dividing by its area is not meaningful because of the large number of design decisions that can affect it – factors such as cache sizes and performance targets can cause great variations in this value.”

So he suggested resurrecting a previously used metric based on the transistor density of standard logic cells, with weighting factors that account for typical designs.

“While there is a large variety of standard cells in any library,” he admitted, “we can take one ubiquitous one – a two-input NAND cell (four transistors) – and one that is more complex, but also very common – a scan flip-flop (SFF).”

This approach, Bohr said, leads to a previously accepted formula for transistor density:

0.6 × (NAND2 transistor count / NAND2 cell area) + 0.4 × (SFF transistor count / SFF cell area) = transistors/mm²

where the weightings of 0.6 and 0.4 reflect the ratio of very small and very large cells in typical designs.
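To make the arithmetic concrete, the short Python sketch below evaluates the formula. The cell areas and the scan flip-flop transistor count used here are hypothetical placeholders chosen purely for illustration; they are not figures for any published process.

def logic_transistor_density(nand2_tr, nand2_area_um2, sff_tr, sff_area_um2):
    """Weighted logic transistor density, following Bohr's formula.

    Cell areas are given in square microns; the result is transistors per mm²
    (1 transistor per µm² equals 1 million transistors per mm²).
    """
    per_um2 = 0.6 * (nand2_tr / nand2_area_um2) + 0.4 * (sff_tr / sff_area_um2)
    return per_um2 * 1e6  # convert transistors/µm² to transistors/mm²


# Hypothetical cell data, for illustration only
density = logic_transistor_density(
    nand2_tr=4, nand2_area_um2=0.05,   # 2-input NAND: 4 transistors
    sff_tr=24, sff_area_um2=0.30,      # scan flip-flop: transistor count assumed
)
print(f"{density / 1e6:.1f} million transistors per mm²")

With these made-up inputs the weighted average works out to 80 million transistors per square millimetre – the kind of single figure Bohr wants chip makers to disclose for each process.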

“Every chip maker, when referring to a process node, should disclose its logic transistor density in units of millions of transistors per square millimetre as measured by this simple formula. By adopting these metrics, the industry can clear up the node naming confusion and focus on driving Moore’s Law forward,” he concluded.