
Understanding In-Package Optical I/O vs Co-Packaged Optics

Introduction

As data centers evolve to support more powerful artificial intelligence, high-performance computing, and cloud applications, there is a growing need for faster and more efficient data communication solutions. Traditional copper electrical interconnects face limitations in bandwidth, latency, energy efficiency, and reach. Recent advancements in silicon photonics, which implement optical components on silicon chips using semiconductor fabrication techniques, have opened up new possibilities for optical interconnects to address these challenges.

However, there is often confusion around two key photonic interconnect technologies - co-packaged optics (CPO) and in-package optical I/O. While both leverage silicon photonics, they are distinct solutions targeting different applications. This tutorial will clarify the differences between CPO and in-package optical I/O, their key characteristics, and their respective roles in future data center architectures.

Silicon Photonics Basics

Silicon photonics enables the creation of photonic integrated circuits (PICs) that use light to transmit and process data, overcoming many limitations of electrical interconnects. PICs combine optical components like modulators, waveguides, and detectors with electronic circuits on the same silicon chip.

By leveraging standard semiconductor manufacturing processes, silicon photonics can produce PICs at high volumes and low costs compared to traditional optical components. These semiconductor economies of scale unlock new opportunities for using optical interconnects not just for long-reach networking, but for chip-to-chip, board-to-board, and rack-to-rack communication within data centers as well.

Co-Packaged Optics (CPO)

The incumbent technology for connecting switches, routers, and servers in data centers today is the pluggable optical transceiver. These modules integrate a PIC along with other electronic chips and optics into a package roughly the size of a pack of gum. Pluggables provide convenient board-to-board and rack-to-rack optical connectivity: the module plugs into a port on the equipment faceplate, and fiber optic cables connect directly to the module.

However, as data rates and bandwidth demands increase, pluggable transceivers will face challenges in terms of cost, power consumption, bandwidth density, footprint, and latency. Co-packaged optics (CPO) has emerged as an evolutionary alternative.

In a CPO implementation, the electronic driver chip and PIC from a pluggable module are combined into a single packaged module. This enables a smaller footprint and higher bandwidth density compared to pluggables. The CPO modules can be placed in closer proximity to the compute or switch chips, reducing the power and latency of the electrical connections.

Figure 1 plots the product of shoreline bandwidth density and energy efficiency for CPO and other interconnect solutions as a function of reach. While not as dense or efficient as in-package optical I/O (discussed next), CPO offers improved density, efficiency, and reach over pluggables at the package level.

Figure 1. A product of shoreline bandwidth density and energy efficiency of various interconnect solutions plotted versus reach. Shoreline bandwidth density and energy efficiency relates to how efficiently a chip, board, or package uses the limited real estate for interface connections along its edge. Different interfaces also deliver different reach capabilities. Courtesy of Ayar Labs.
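
To make the units in this figure of merit concrete, the short sketch below multiplies shoreline bandwidth density (Gbps per mm of package edge) by energy efficiency expressed as bandwidth per watt (Gbps/W, the inverse of pJ/bit). The numbers are hypothetical placeholders chosen only to reflect the relative claims made in this article; they are not values read from the chart.

# Illustrative calculation of Figure 1's figure of merit.
def shoreline_figure_of_merit(gbps_per_mm: float, pj_per_bit: float) -> float:
    """Product of shoreline bandwidth density (Gbps/mm) and energy efficiency
    expressed as bandwidth per watt; 1 pJ/bit equals 1 W per 1000 Gbps."""
    gbps_per_watt = 1000.0 / pj_per_bit  # convert pJ/bit into Gbps/W
    return gbps_per_mm * gbps_per_watt

# Hypothetical placeholder values (Gbps/mm, pJ/bit), consistent only with the
# relative claims in this article, not with measured data.
solutions = {
    "pluggable transceiver": (50, 25.0),
    "co-packaged optics": (100, 15.0),
    "in-package optical I/O": (1000, 1.0),
}

for name, (density, energy) in solutions.items():
    fom = shoreline_figure_of_merit(density, energy)
    print(f"{name:>24}: {fom:>10,.0f}  (Gbps/mm x Gbps/W)")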

CPO is suited for networking applications within the data center, evolving the massively parallel pluggable transceiver market to higher densities while maintaining the modularity and tailored data rates of pluggables. Major industry groups, such as the Co-Packaged Optics Collaboration and the Optical Internetworking Forum, are developing standards and specifications for CPO implementations.

In-Package Optical I/O

While CPO is an evolution of pluggable transceivers, in-package optical I/O represents a revolutionary new solution. It places the optical interconnect in the same physical package as the chips used for compute functions like CPUs, GPUs, ASICs, or FPGAs.

Specifically, an optical I/O chiplet that converts electrical signals to optical signals (and vice versa) is integrated alongside the compute chiplets in a single multi-chip package using advanced packaging technologies such as 2.5D or 3D integration. This optical chiplet enables direct optical chip-to-chip communication within the package, to other packages on the same board, between boards, across racks, and throughout the data center infrastructure.

Figure 2 illustrates the concept of in-package optical I/O integration. Optical links replace electrical paths at the package level, enabling photonic communication that is extremely high bandwidth, low latency, energy efficient, and reach-insensitive across an entire distributed computing system.

Figure 2. In-package optical I/O integrates optical interconnect into the same package as the chips used for compute functions. This approach enables chip-to-chip connectivity based on photons instead of electricity and, potentially, more efficient distributed compute systems. Courtesy of Ayar Labs.

In-package optical I/O is designed to create direct high-performance connections between compute chips to enable new data center architectures like disaggregated memory pools and AI "scale-out" fabrics. For example, NVIDIA is scaling its NVLink interconnect from linking 8 GPUs in a single server to optically connecting up to 256 GPUs as one giant distributed accelerator.

These emerging architectures require linking massive numbers of chips with minimal latency overhead and maximum bandwidth density - a new regime that traditional electrical interconnects struggle with but optical I/O can handle seamlessly.

Inside In-Package Optical I/O

The in-package optical I/O chiplets leverage advanced silicon photonics packaging to densely integrate optical components like waveguides, modulators, and detectors alongside electronic circuits for drivers, amplifiers, equalization, and control.

A key innovation is the use of micro-ring resonators to enable wavelength division multiplexing (WDM) over the optical fibers, as illustrated in Figure 3. These extremely compact micro-ring devices selectively modulate and demodulate multiple wavelengths over each fiber, allowing massively parallel high-bandwidth data transmission.

Figure 3. Micro-ring resonators allow multiple wavelengths per fiber, and multiple fibers per chip. Courtesy of Ayar Labs.

For example, a current-generation optical I/O chiplet achieves 4096 Gbps of bidirectional bandwidth using 64 wavelengths across 8 optical ports/fibers. This density and parallelism enable high energy efficiency, scaling bandwidth by using more wavelengths per fiber rather than by increasing laser power or electronic signaling rates.
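
The arithmetic behind those figures can be checked with a minimal sketch. It assumes the 64 wavelengths are split evenly as 8 per fiber, a breakdown inferred from the stated totals rather than from a published specification, and it borrows the roughly 1 pJ/bit efficiency cited later in this article to estimate I/O power.

# Back-of-the-envelope WDM arithmetic for the chiplet described above.
fibers = 8                      # optical ports/fibers per chiplet
wavelengths_per_fiber = 8       # assumed even split of the 64 total wavelengths
total_wavelengths = fibers * wavelengths_per_fiber           # 64 channels

total_bidirectional_gbps = 4096                              # stated aggregate bandwidth
gbps_per_wavelength = total_bidirectional_gbps / total_wavelengths
print(f"{total_wavelengths} wavelengths x {gbps_per_wavelength:.0f} Gbps each "
      f"= {total_bidirectional_gbps} Gbps bidirectional")

# At roughly 1 pJ/bit (the efficiency cited in the comparison below), the
# corresponding optical I/O power is only a few watts.
pj_per_bit = 1.0
power_watts = total_bidirectional_gbps * 1e9 * pj_per_bit * 1e-12
print(f"~{power_watts:.1f} W of optical I/O power at {pj_per_bit} pJ/bit")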

The photonic components are fabricated using standard CMOS processes and integrated with the electronic control circuitry into a single multi-chip package. Chiplet interface standards and protocols, such as Universal Chiplet Interconnect Express (UCIe) and NVLink, are being adopted to ensure cross-compatibility.

By placing the optical I/O circuitry in-package with the compute chips, the energy and latency overheads of off-package electrical I/O are avoided, while direct optical connectivity is still possible between chiplets, packages, and boards, scaling across an entire distributed system.

CPO vs In-Package Optics - Performance Comparison

To better understand the different applications for CPO and in-package optical I/O, it is instructive to compare their key performance metrics (collected in the short sketch after Figure 4):

  • Shoreline bandwidth density: For a given package edge length, in-package optical I/O provides over 10x higher bandwidth density than CPO, as seen in Figure 1. This extreme edge density enables interconnect bandwidth to scale alongside future compute chip densities.

  • Area bandwidth density: Figure 4 shows the bandwidth density per square millimeter of package area, which is critical for area-constrained high-performance compute packages. Optical I/O matches advanced electrical interfaces like UCIe while CPO lags behind.

  • Energy efficiency: The optical I/O chiplets achieve energy efficiencies better than 1 pJ/bit at bandwidths up to terabits per second by leveraging WDM parallelism. CPO modules still consume 10 to 100 times more energy per bit.

  • Latency: Optical I/O provides single-digit nanosecond latencies similar to on-board copper, while pluggable and CPO solutions incur latencies of 100 ns or more from forward error correction coding.

  • Cost efficiency: Thanks to tight integration and semiconductor scaling curves, optical I/O is projected to surpass 10 Gbps per dollar. CPO is expected to follow pluggable cost trends, limited to less than 1 Gbps per dollar.

  • Reach: Within a data center, optical I/O links can span hundreds of meters, while CPO reach is restricted to rack scale, up to tens of meters.

Figure 4. A product of area bandwidth density and energy efficiency of various interconnect solutions plotted versus reach. Courtesy of Ayar Labs.
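
As a quick reference, the sketch below gathers the figures cited in the list above into a single structure. The entries simply restate this article's approximate claims; they are not benchmark results.

# Approximate comparison figures as cited in this article (not benchmark data).
comparison = {
    "shoreline bandwidth density": {
        "in-package optical I/O": ">10x CPO (per mm of package edge)",
        "CPO": "baseline",
    },
    "energy efficiency": {
        "in-package optical I/O": "< 1 pJ/bit",
        "CPO": "10-100x more energy per bit",
    },
    "latency": {
        "in-package optical I/O": "single-digit ns",
        "CPO": "100+ ns (forward error correction)",
    },
    "cost efficiency": {
        "in-package optical I/O": "projected > 10 Gbps/dollar",
        "CPO": "< 1 Gbps/dollar (pluggable cost trend)",
    },
    "reach": {
        "in-package optical I/O": "hundreds of meters",
        "CPO": "rack scale, up to tens of meters",
    },
}

for metric, values in comparison.items():
    print(f"{metric}:")
    for tech, value in values.items():
        print(f"  {tech:>24}: {value}")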

These metrics highlight the diverging value propositions - CPO provides an evolutionary density and efficiency uplift for data center networking while optical I/O enables revolutionary distributed compute architectures by placing terabits of optical bandwidth and single-digit nanosecond latencies inside the package.

CPO targets improvements for traditional networking, switching, and I/O between current servers. In-package optical I/O is optimized for emerging AI/HPC/disaggregated architectures where compute chips need to be tightly coupled.

Future Prospects

Both co-packaged optics and in-package optical I/O represent important milestones in the adoption of silicon photonic interconnects within data centers. CPO will help address near-term needs for higher bandwidth module-to-module connectivity by evolving today's pluggable transceiver market. Meanwhile, in-package optical I/O is laying the foundation for future distributed computing systems where racks of CPUs/GPUs/memory can be tightly coupled as one giant AI accelerator fabric.

As these architectures continue gaining traction, we will likely see both technologies persist and co-exist - CPO for traditional box-to-box networking and optical I/O as the high-performance chiplet-to-chiplet interconnect within these increasingly photonic data center infrastructures. Silicon photonics has unlocked exciting new possibilities and we are just beginning to scratch the surface of its potential impact.
