Thursday, October 26, 2006

Fibre-to-the-CPU


Intel and the University of California, Santa Barbara last month announced an electrically pumped hybrid laser, a key component for integrated silicon photonics. NGN asked Intel some follow-up questions about the possible applications of such technology. Here is the response of Sean Koehl, technology strategist in Intel's tera-scale computing research programme.


NGN: Can Intel explain the two or three bottlenecks that it already sees coming (even if five years off) where such optical interconnect technology will be needed, and where electrical interfaces will no longer do?


SK: Storage Area Networks (SANs) already rely on optical interconnects, with the state of the art currently at 4Gbit/s and increasing. For server rack-to-rack communication in data centers, there is already a mix of optical and electrical interconnects, but this is moving more and more to optical as 10Gbit/s Ethernet becomes prevalent.

Within a server, board-board communications will be an increasing bottleneck and therefore an opportunity for more optical interconnects.

Longer term, as systems enter the tera-scale era (teraflop processors operating on terabytes of data), processor-to-processor and processor-to-memory bandwidth requirements will scale to the point where even the best copper Input/Output (I/O) will have difficulty providing the required bandwidth. This is further out, but it is also the highest-volume opportunity for silicon photonics, and thus requires a significant price/performance advantage for optical I/O.

NGN: The hybrid laser technology looks suited to applications where data rate and distance are issues, but Intel seems to be focused more on high-performance servers with many CPU cores and boards. That appears to be more an issue of interface density and data-rate density than of data rate and distance.

SK: All three are important benefits of optical: data rate, distance, and density.
  1. Data rate: copper will have difficulty scaling beyond 10Gbit/s, while optical technology already exists at 40Gbit/s.
  2. Distance: though copper is getting faster, the distance these links can span is beginning to shrink significantly. Within a data center, distance is no issue with optical. This could not only improve existing links, but also enable new architectures by providing distance independence.
  3. Density: it is possible to multiplex many 10Gbit/s or 40Gbit/s channels on a single fiber, so terabyte bandwidths are straightforward to achieve on a single fiber. This is where fiber has the biggest advantage: aggregate bandwidth. Copper requires more and more pins or ports to compete in this respect.
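The aggregate-bandwidth arithmetic behind the density point can be sketched as follows. The channel count and per-lane copper rate here are illustrative assumptions, not Intel figures:

```python
# Illustrative WDM (wavelength-division multiplexing) arithmetic:
# many wavelength channels share a single fibre.
channel_rate_gbps = 40   # per-wavelength data rate (from the interview)
num_wavelengths = 25     # assumed DWDM channel count, for illustration

aggregate_gbps = channel_rate_gbps * num_wavelengths
print(f"{num_wavelengths} x {channel_rate_gbps} Gbit/s "
      f"= {aggregate_gbps / 1000} Tbit/s on one fibre")

# By contrast, copper at an assumed 20 Gbit/s per lane would need
# this many parallel lanes (i.e. pins) for the same aggregate:
lanes_needed = aggregate_gbps // 20
print(f"Equivalent copper lanes at 20 Gbit/s: {lanes_needed}")
```

With these assumed numbers, 25 wavelengths reach 1Tbit/s on one fibre, where copper would need 50 parallel lanes; scaling the channel count further is the "more and more pins or ports" trade-off the answer describes.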

NGN: What are the key bottlenecks that will pop up first and need such optical technology? (Is it backplane technology? Is it CPU-to-CPU or CPU-to-memory?) And can Intel add some numbers here: the data rates and interface densities at which electrical runs out of steam?


SK: There is no exact number, because distance and speed for electrical are highly related. You can always push an electrical solution farther, but you pay in power, complexity (by adding additional lines), and cost. The issue is the overall price/performance of the current electrical solution versus the proposed optical one.

That said, for chip-to-chip interconnects, going beyond 20Gbit/s per line looks very challenging. We also have leading research on copper-based I/O that is showing great results, but above this speed optical will start to look more attractive.


For networking, 100Gbit/s Ethernet looks to be a key speed for optical. Ethernet is expected to continue the "factor of 10" scaling it has shown in the past, and at 100Gbit/s any copper solution would be extremely challenging.
