Intel's Optical Compute Interconnect (OCI) - Enabling Massive AI Scaling

Introduction

As artificial intelligence (AI) applications become increasingly pervasive, driving the global economy and influencing society, the need for larger and more efficient machine learning (ML) models is growing rapidly. To accelerate these AI workloads, future compute fabrics must scale exponentially in I/O bandwidth, reach longer distances to support larger accelerator clusters, and enable resource-efficient architectures such as GPU disaggregation and memory pooling.

Traditional electrical I/O over copper traces offers high bandwidth density and low power, but only over very short reaches of around one meter. Current pluggable optical transceivers extend that reach, but at cost and power levels that are unsustainable for the massive scaling AI requires. Intel has developed a revolutionary solution: the Optical Compute Interconnect (OCI) chiplet.

Intel’s Optical Compute Interconnect (OCI)

The OCI Chiplet

Intel's 4 Tbps bidirectional OCI chiplet is a fully integrated optical I/O solution based on Intel's in-house silicon photonics technology. It contains:

  1. A silicon photonics integrated circuit (PIC) with integrated lasers

  2. An electrical IC with RF through-silicon vias (TSVs)

  3. A path to incorporate a detachable, reusable optical connector

Figure 1. Intel 4 Tbps Optical Compute Interconnect (OCI) Chiplet

This OCI chiplet can be co-packaged with next-generation CPUs, GPUs, IPUs, and other high-bandwidth SoCs. The first implementation provides the following (a quick arithmetic check of these figures appears after the list):

  • 4 Tbps bidirectional bandwidth (compatible with PCIe Gen5)

  • 64 lanes of 32 Gbps data in each direction

  • Reach greater than 100 meters

  • 4x higher shoreline density than PCIe Gen6

  • Energy efficiency of about 5 pJ/bit

  • Ultra-low latency of under 10 ns
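
To put these figures in perspective, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies out the lane count, lane rate, and energy-per-bit values listed above; the 8-wavelengths-per-fiber layout follows Figure 3, and the implied 8-fibers-per-direction count is derived for illustration, not a published specification.

```python
# Back-of-the-envelope check of the first-generation OCI figures listed above.
LANES_PER_DIRECTION = 64       # 64 optical lanes in each direction
LANE_RATE_GBPS = 32            # 32 Gbps per lane (PCIe Gen5-class signaling)
WAVELENGTHS_PER_FIBER = 8      # 8 DWDM wavelengths per fiber (see Figure 3)
ENERGY_PJ_PER_BIT = 5          # ~5 pJ/bit energy efficiency from the list above

per_direction_tbps = LANES_PER_DIRECTION * LANE_RATE_GBPS / 1000
bidirectional_tbps = 2 * per_direction_tbps
fibers_per_direction = LANES_PER_DIRECTION // WAVELENGTHS_PER_FIBER

# Power implied by the energy-per-bit figure when both directions run at full rate
power_watts = bidirectional_tbps * 1e12 * ENERGY_PJ_PER_BIT * 1e-12

print(f"Per-direction bandwidth:   {per_direction_tbps:.3f} Tbps")  # 2.048 Tbps
print(f"Bidirectional bandwidth:   {bidirectional_tbps:.3f} Tbps")  # 4.096 Tbps (~4 Tbps)
print(f"Fibers per direction:      {fibers_per_direction}")         # 8
print(f"Implied optical I/O power: {power_watts:.1f} W")            # ~20.5 W at full load
```

The roughly 20 W result is just the product of the two quoted numbers at full bidirectional load; it is an illustration of the arithmetic, not an Intel power specification.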

Figure 2. Concept Intel CPU with co-packaged OCI

At OFC 2024, Intel demonstrated this first-generation OCI chiplet co-packaged with a prototype Intel CPU, running live, error-free traffic over fiber [1]. The platform roadmap extends to 32 Tbps OCI chiplets.

Silicon Photonics Innovation

At the heart of the OCI chiplet is Intel's pioneering silicon photonics technology. The single PIC can support up to 8 Tbps of bidirectional bandwidth by integrating the following (a scaling sketch follows Figure 3):

  • DWDM laser arrays

  • Optical amplifiers

  • Complete optical subsystems on-chip

Figure 3. OCI Transmitter: Optical spectrum of 8 wavelengths on one standard single-mode fiber
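
Figure 3 shows 8 DWDM wavelengths carried on one standard single-mode fiber. The sketch below spells out the scaling arithmetic behind that approach: the first configuration mirrors the first-generation chiplet described above, while the other two rows are purely hypothetical combinations, included only to illustrate how wavelength count and per-wavelength data rate could scale toward the 8 Tbps PIC capability and the 32 Tbps roadmap point.

```python
# Illustrative DWDM scaling arithmetic: bandwidth = fiber pairs x wavelengths x line rate.
def bidirectional_tbps(fiber_pairs: int, wavelengths: int, gbps_per_wavelength: int) -> float:
    """Aggregate bidirectional bandwidth in Tbps for a wavelength-multiplexed link."""
    per_direction_gbps = fiber_pairs * wavelengths * gbps_per_wavelength
    return 2 * per_direction_gbps / 1000

configs = [
    # (description, fiber pairs, wavelengths per fiber, Gbps per wavelength)
    ("First-gen OCI: 8 fiber pairs x 8 wavelengths x 32 Gbps", 8, 8, 32),     # ~4 Tbps
    ("Hypothetical: same fiber count at 64 Gbps per wavelength", 8, 8, 64),   # ~8 Tbps
    ("Hypothetical: 16 fiber pairs x 16 wavelengths x 64 Gbps", 16, 16, 64),  # ~33 Tbps
]

for label, pairs, wavelengths, rate in configs:
    print(f"{label}: {bidirectional_tbps(pairs, wavelengths, rate):.1f} Tbps")
```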

This integrated approach provides orders of magnitude higher reliability than conventional InP lasers. Intel has already shipped over 8 million of these PICs with 32 million integrated lasers in pluggable optical transceivers for datacenter networking.

A key advantage is that OCI uses standard, widely deployed single-mode fiber (SMF-28) instead of polarization-maintaining fiber (PMF), which is rarely deployed because vibration and fiber movement can degrade its performance.

Conclusion

Enabling Ubiquitous AI

By combining Intel's strengths in silicon, optics, packaging, and platform integration, the OCI chiplet provides a complete, next-generation compute interconnect solution. Its unprecedented bandwidth, efficiency, reach, and scalability will help make ubiquitous AI a reality by enabling:

  • Massive scaling of AI/ML models and accelerator clusters

  • More efficient architectures like GPU disaggregation

  • Higher performance and reliability than existing solutions

OCI exemplifies how Intel can deliver innovative technologies across the full compute stack to drive the future of AI. The exponential growth of AI demands revolutionary interconnect solutions - and OCI is Intel's pioneering answer.


Reference

[1] C. Urricariet, "Intel® Shows OCI Optical I/O Chiplet Co-packaged with CPU at OFC2024, Targeting Explosive AI Scaling," Intel Community, March 21, 2024. [Online]. Available: https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-Shows-OCI-Optical-I-O-Chiplet-Co-packaged-with-CPU-at/post/1582541.
