Microchips’ Optical Future
As the United States seeks to reinvigorate its job market and move past economic recession, MIT News examines manufacturing’s role in the country’s economic future through this series on work at the Institute around manufacturing.
Computer chips are one area where the United States still enjoys a significant manufacturing lead over the rest of the world. In 2011, five of the top 10 chipmakers by revenue were U.S. companies, and Intel, the largest of them by a wide margin, has seven manufacturing facilities in the United States, versus only three overseas.
The most recent of those to open, however, is in China, and while that may have been a strategic rather than economic decision — an attempt to gain leverage in the Chinese computer market — both the Chinese and Indian governments have invested heavily in their countries’ chip-making capacities. In order to maintain its manufacturing edge, the United States will need to continue developing new technologies at a torrid pace. And one of those new technologies will almost certainly be an integrated optoelectronic chip — a chip that uses light rather than electricity to move data.
As chips’ computational power increases, they need higher-bandwidth connections — whether between servers in a server farm, between a chip and main memory, or between the individual cores on a single chip. But with electrical connections, increasing bandwidth means increasing power. A 2006 study by Japan’s Ministry of Economy, Trade and Industry predicted that by 2025, information technology in Japan alone would consume nearly 250 billion kilowatt-hours’ worth of electricity per year, or roughly what the entire country of Australia consumes today.
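The scaling argument is simple arithmetic: a link's power draw is its data rate multiplied by the energy it spends moving each bit, so as bandwidth climbs, power climbs with it unless the per-bit energy falls. The sketch below, with purely illustrative energy-per-bit figures rather than measurements from any system discussed here, makes that relationship explicit.

```python
# Back-of-envelope sketch: link power = data rate x energy per bit.
# The energy-per-bit values below are illustrative assumptions only.

def link_power_watts(gigabits_per_second: float, picojoules_per_bit: float) -> float:
    """Power (watts) for a link moving the given data rate at the given energy cost per bit."""
    bits_per_second = gigabits_per_second * 1e9
    joules_per_bit = picojoules_per_bit * 1e-12
    return bits_per_second * joules_per_bit

# A hypothetical 1-terabit-per-second aggregate connection at three assumed per-bit costs:
for pj_per_bit in (10.0, 1.0, 0.1):
    print(f"{pj_per_bit:>4} pJ/bit -> {link_power_watts(1000, pj_per_bit):.1f} W")
```

At a fixed per-bit cost, doubling the bandwidth doubles the power; the only way off that curve is to move each bit more cheaply, which is the appeal of optics.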
Optoelectronic chips could drastically reduce future computers’ power consumption. But to produce the optoelectronic chips used today in telecommunications networks, chipmakers manufacture optical devices — such as lasers, photodetectors and modulators — separately and then attach them to silicon chips. That approach wouldn’t work with conventional microprocessors, which require a much denser concentration of higher-performance components.
The most intuitive way to add optics to a microprocessor’s electronics would be to build both directly on the same piece of silicon, a technique known as monolithic integration.
In a 2010 paper in the journal Management Science, Erica Fuchs, an assistant professor of engineering and public policy at Carnegie Mellon University who received her PhD from MIT’s Engineering Systems Division in 2006, and Randolph Kirchain, a principal research scientist at MIT’s Materials Systems Laboratory, found that monolithically integrated chips were actually cheaper to produce in the United States than in low-wage countries.
“The designers and the engineers with the capabilities to produce those technologies didn’t want to move to developing East Asia,” Fuchs says. “Those engineers are in the U.S., and that’s where you would need to manufacture.”
During the telecom boom of the late 1990s, Fuchs says, telecommunications companies investigated the possibility of producing monolithically integrated communications chips. But when the bubble burst, they fell back on the less technically demanding process of piecemeal assembly, which was practical overseas. That yielded chips that were cheaper but also much larger.
While large chips are fine in telecommunications systems, they’re not an option in laptops or cellphones. The materials used in today’s optical devices, however, are incompatible with the processes currently used to produce microprocessors, making monolithic integration a stiff challenge.
Making the case
According to Vladimir Stojanovic, an associate professor of electrical engineering, microprocessor manufacturers are all the more reluctant to pursue monolithic integration because they’ve pushed up against the physical limits of the transistor design that has remained more or less consistent for more than 50 years. “It never was the case that from one generation to another, you’d be completely redesigning the device,” Stojanovic says. U.S. chip manufacturers are so concerned with keeping up with Moore’s Law — the doubling of the number of transistors on a chip roughly every 18 months — that integrating optics is on the back burner. “You’re trying to push really hard on the transistor,” Stojanovic says, “and then somebody else is telling you, ‘Oh, but you need to worry about all these extra constraints on photonics if you integrate in the same front end.’”
To try to get U.S. chip manufacturers to pay more attention to optics, Stojanovic and professor of electrical engineering Rajeev Ram have been leading an effort to develop techniques for monolithically integrating optical components into computer chips without disrupting existing manufacturing processes. They’ve gotten very close: Using IBM’s chip-fabrication facilities, they’ve produced chips with photodetectors, ring resonators (which filter out particular wavelengths of light) and waveguides (which conduct light across the chip), all of which are controlled by on-chip circuitry. The one production step that can’t be performed in the fabrication facility is etching a channel under the waveguides, to prevent light from leaking out of them.
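As a rough illustration of how a ring resonator acts as a wavelength filter: light circulates around the ring, and only wavelengths that fit a whole number of times into the ring's optical path build up and are picked off. The sketch below uses made-up dimensions and a fixed effective index, not parameters of the chips described in this article.

```python
import math

def resonant_wavelengths(radius_um: float, n_eff: float, band=(1.50, 1.60)):
    """Wavelengths (micrometers) within `band` that resonate in a ring of the
    given radius, assuming a fixed effective index n_eff (dispersion ignored)."""
    optical_path = n_eff * 2 * math.pi * radius_um   # effective index x circumference
    # Resonance condition: m * wavelength = optical_path, for integer mode number m
    m_min = math.ceil(optical_path / band[1])
    m_max = math.floor(optical_path / band[0])
    return [optical_path / m for m in range(m_min, m_max + 1)]

# Example with illustrative values: a 5-micron ring with effective index 2.4,
# filtering channels near the 1.55-micron telecom band
print(resonant_wavelengths(5.0, 2.4))
```

Tuning the ring's dimensions shifts which wavelengths it selects, which is what lets an array of rings pick individual data channels off a shared waveguide.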
But Stojanovic acknowledges that optimizing the performance of these optical components would probably require some modification to existing processes. In that respect, the uncertainty about the future of transistor design may actually offer an opportunity. It could be easier to add optical components to a chip being designed from the ground up than to one whose design is fixed. “That’s the moment it has to come in,” Stojanovic says, “at the moment where everything’s in flux, and soft.”