SAN FRANCISCO – Experts agree printed circuit boards and processors will eventually need optical interconnects. But just when and how the industry will get there is still unclear.
The International Technology Roadmap for Semiconductors plots a widening gap between the growth of processor performance and the I/O bandwidth available to feed it. Similarly, data sent over the Internet is rising by a factor of 100 every 10-12 years--about the pace of growth in processor performance--but the links between processors and the optical backbone of the Net are not expanding at the same rate.
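As a rough check on that figure, a factor of 100 over 10 to 12 years works out to roughly 47 to 58 percent compound growth per year. The short sketch below derives those annual rates; only the 100x-per-10-12-years number comes from the roadmap discussion above.

```python
# Back-of-the-envelope: annual growth implied by "100x every 10-12 years".
# The 100x-per-decade figure is from the article; the per-year rates are derived.
for years in (10, 12):
    annual = 100 ** (1 / years)  # compound annual growth factor
    print(f"100x over {years} years -> {annual:.2f}x per year "
          f"(~{(annual - 1) * 100:.0f}% annual growth)")
```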
“A byte-per-flop gap is opening up that’s a major limit on architectures, and it gets worse and worse as we head to 2020 and beyond,” said David Miller, an optical researcher at Stanford, speaking on an evening panel at the International Solid-State Circuits Conference here.
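Miller’s byte-per-flop gap can be illustrated with a toy projection. In the sketch below, every number (the 0.1 bytes-per-flop starting ratio, compute doubling every two years, I/O doubling every four) is a hypothetical placeholder chosen only to show how the ratio decays when I/O grows more slowly than compute; none of the values were given on the panel.

```python
# Toy projection of the bytes-per-flop ratio when I/O lags compute.
# All starting values and doubling times are hypothetical illustrations.
flops = 1e15          # assumed compute in 2012: 1 petaflop/s
io_bytes = 1e14       # assumed I/O bandwidth in 2012: 0.1 bytes per flop
for year in range(2012, 2024, 4):
    print(f"{year}: {io_bytes / flops:.3f} bytes/flop")
    flops *= 2 ** (4 / 2)     # compute doubles every 2 years
    io_bytes *= 2 ** (4 / 4)  # I/O doubles every 4 years
```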
“We are getting close to the point where optical is more attractive,” said John Stonick, a Synopsys scientist and chair of the panel. “But optical interconnects have always been the next thing, and the big questions are when and how we get there,” he said.
Keishi Ohashi, an optical expert from NEC Corp., said optical may follow a path like that of hard disk technologies. It took nearly twenty years for powerful magneto-resistive heads to emerge from their labor-intensive beginnings in wound magnetic coils, he said.
Bert Offrein, an optical expert with IBM Research, noted milestones and challenges IBM has seen.
In 2008, the company built Roadrunner, the first petaflop computer, using optical interconnects to each server board.
Last year IBM created its Power 775 high-end server, which brought optical links to the processor, fusing 56 fiber cables to modules with 56 transceivers, integrated with help from Avago, on each IBM CPU.
“It required 100 additional assembly steps to bring the optics to the chips, in addition to building the transceivers themselves,” said Offrein. “That’s justified for some high-performance systems, but for general servers we need something easier,” he said.
Board makers are experimenting with many ways to integrate optical links and are prototyping 10 Gbit/s channels now, but none are ready for commercial production, he said. One future vision is of 3-D chip stacks that include an optical layer, but that may require tiny mechanical mirrors or waveguides in the substrate, challenges companies such as IBM are wrestling with in research labs, he added.
Mario Paniccia, head of a silicon photonics research lab at Intel, called for flexible solutions that could address a range of applications from high-volume consumer markets to high-end exascale servers.
Paniccia’s lab last year demonstrated transmitter and receiver chips made in “CMOS-like” processes that sent the equivalent of 50 Gbits/s of data over fiber using four 12.5G channels. “It was not a product, but it gave us confidence,” he said.
Future versions of the technology could increase channel speeds and the number of parallel lanes to hit rates of 400 Gbits/s and someday even a terabit/s, he added.
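The arithmetic behind those targets is simple lane aggregation: the total rate is the number of parallel channels times the per-channel rate. In the sketch below, the 4 x 12.5G configuration matches the demo described above, while the 400G and terabit rows use hypothetical lane counts, offered only as examples of how such rates could be composed.

```python
# Aggregate link rate = number of parallel lanes x per-lane rate.
# The 4 x 12.5G line matches Intel's demo; the other rows are
# hypothetical ways to compose the higher targets.
configs = [
    (4, 12.5),   # Intel's 50 Gbit/s demo: four 12.5G channels
    (16, 25.0),  # hypothetical: one way to reach 400 Gbit/s
    (40, 25.0),  # hypothetical: one way to reach 1 Tbit/s
]
for lanes, gbps in configs:
    print(f"{lanes} lanes x {gbps} Gbit/s = {lanes * gbps:.0f} Gbit/s")
```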
Big questions remain unanswered, including what sort of chip-level connectors such links should use, Paniccia said. In addition, optical links will always consume more power than electronic ones at similar data rates, he added.
Ultimately, “I think silicon photonics will happen in increments as applications need more bandwidth, distance or new form factors,” he said.
Stanford’s Miller said Intel’s work is a bright spot in the field. Most optical companies, such as Finisar and Luxtera, lack the financial clout to take on the heavy-duty work needed in silicon photonics.
Miller pointed to promising research that indicates plenty of headroom for optical technology. For example, he discussed early work on quantum-well waveguides and optical antennas the size of a transistor gate.
Others are looking for ways to use germanium to embed light sources in chips without hitting the thermal limits of silicon. Researchers at Berkeley are using external light sources and integrating photonic modulators on chip, he said.