
Co-packaged Datacenter Optics - Opportunities and Challenges

Introduction

High-capacity, high-density, power-efficient, and cost-effective optical links are critical for modern datacenter infrastructure. Over the past decade, the capacity of both Ethernet switches and optical modules has increased dramatically, from 0.64 Tb/s to 25.6 Tb/s for switches, and from 10 Gb/s to 400 Gb/s for optical modules. However, the current roadmap for pluggable datacenter optical modules does not extend beyond 800 Gb/s per module. Scaling optical links further faces significant challenges in terms of power consumption, cost per capacity, and bandwidth density.

These challenges have motivated a shift towards a new deployment model called co-packaged optics (CPO), which integrates the optical components much closer to the main switching ASIC. This tutorial article discusses the factors driving the transition to CPO, the technical requirements and enabling technologies, the paths to industry adoption, and the potential impact on datacenter network architecture.

The Evolution of Datacenter Switch ASICs

The rise of merchant silicon has substantially changed the Ethernet switch market, which used to be dominated by proprietary hardware and software. Nowadays, network equipment is often based on ASICs designed and sold by third-party silicon vendors. This has led to a more open ecosystem, where network operators have a broader palette of options to tailor their networks.

The evolution of Ethernet switch ASICs has followed a remarkable trajectory, doubling capacity roughly every 2 years by alternating between increasing the number of serdes lanes and the signaling rate per lane (Table 1, Figure 1). The latest generation of switch ASICs offers 25.6 Tb/s of capacity, achieved either with 512 serdes operating at 50 Gb/s or with 256 serdes operating at 100 Gb/s, both using PAM4 modulation.

Looking ahead, the next generation of switch ASICs will likely adopt 100 Gb/s (50 GBd PAM4) serdes, enabling an aggregate throughput of 51.2 Tb/s with 512 lanes. However, the energy efficiency of these high-speed serdes will be critical, as their combined power consumption could exceed 300 W in a 51.2 Tb/s switch ASIC.
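These capacity and power figures follow from simple arithmetic: aggregate capacity is the serdes count times the per-lane rate, and serdes power is throughput times energy per bit. The short sketch below reproduces the two rightmost generations of Table 1; the 6 pJ/bit value is an illustrative assumption chosen to be consistent with the ~300 W estimate above, not a vendor specification.

```python
# Back-of-the-envelope switch ASIC capacity and serdes power.
# The energy-per-bit value is an illustrative assumption, not a vendor figure.

def switch_capacity_tbps(serdes_count: int, lane_rate_gbps: float) -> float:
    """Aggregate switching capacity in Tb/s."""
    return serdes_count * lane_rate_gbps / 1000


def serdes_power_w(capacity_tbps: float, energy_pj_per_bit: float) -> float:
    """Total serdes power in watts: throughput (bit/s) times energy per bit (J)."""
    return capacity_tbps * 1e12 * energy_pj_per_bit * 1e-12


if __name__ == "__main__":
    # The two rightmost Table 1 generations: 256 x 100G and 512 x 100G.
    for lanes, rate in [(256, 100), (512, 100)]:
        cap = switch_capacity_tbps(lanes, rate)
        power = serdes_power_w(cap, energy_pj_per_bit=6)  # assumed ~6 pJ/bit
        print(f"{lanes} x {rate}G -> {cap:.1f} Tb/s, ~{power:.0f} W of serdes power")
```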


TABLE 1 Switch ASIC evolution. The two rightmost columns are projections. Data source: Broadcom press releases and [1]

Year                2010   2012   2014   2016   2018   2020   2020   2022?
Serdes count        64     128    128    256    256    512    256    512
Serdes rate (Gb/s)  10     10     25     25     50     50     100    100
Serdes rate (GBd)   10     10     25     25     25     25     50     50
Modulation          NRZ    NRZ    NRZ    NRZ    PAM4   PAM4   PAM4   PAM4
Capacity (Tb/s)     0.64   1.28   3.2    6.4    12.8   25.6   25.6   51.2
CMOS node (nm)      40     40     28     16     16     7      7      5


FIGURE 1 Switch ASIC evolution. Data source: Broadcom press releases and [1]
The Roadmap of Datacenter Optics

Faceplate-pluggable (FPP) optical modules have become the dominant deployment model for datacenter optical links. FPP modules offer several advantages, including flexibility in matching the optical interface to the use case, the ability to perform field upgrades and servicing, and the emergence of multi-source agreements (MSAs) that enable interoperability and a multi-vendor ecosystem.

The optical interfaces for datacenter optics are defined by IEEE 802.3 Ethernet standards and various MSAs. For 400G Ethernet, the key PMD options include parallel multimode fiber (SR16, SR8), duplex single-mode fiber with wavelength-division multiplexing (FR8, LR8, ER8), and parallel single-mode fiber (DR4) (Table 2). The 400GBASE-DR4 and -FR4 PMDs are the focus for intra-datacenter connectivity.

In addition to the optical interfaces, datacenter optics also require electrical interfaces, which are typically based on the Common Electrical I/O (CEI) specifications defined by the Optical Internetworking Forum. These include LR, MR, VSR, XSR, and USR serdes, offering a trade-off between reach, power, and flexibility (Table 3).

TABLE 2 400GBASE optical port types. Source: IEEE 802.3. Entries marked with an asterisk are not yet finalized (IEEE 802.3cu, 802.3cm)

PMD       Signalling     Medium         Centre λ             Reach
SR16      25 GBd NRZ     Parallel MMF   850 nm               100 m
SR8*      25 GBd PAM4    Parallel MMF   850 nm               100 m
FR8       25 GBd PAM4    Duplex SMF     1273-1309 nm LWDM    2 km
LR8       25 GBd PAM4    Duplex SMF     1273-1309 nm LWDM    10 km
ER8       25 GBd PAM4    Duplex SMF     1273-1309 nm LWDM    40 km
SR4.2*    25 GBd PAM4    Parallel MMF   850, 910 nm SWDM     100 m
DR4       50 GBd PAM4    Parallel SMF   1311 nm              500 m
FR4*      50 GBd PAM4    Duplex SMF     1271-1331 nm CWDM    2 km
LR4-6*    50 GBd PAM4    Duplex SMF     1271-1331 nm CWDM    6 km
LR4*      50 GBd PAM4    Duplex SMF     1271-1331 nm CWDM    10 km

TABLE 3 OIF-CEI-112G link types defined by the Optical Internetworking Forum [2]. IL = insertion loss, DAC = direct attach copper, C2C = chip-to-chip across backplane, C2M = chip-to-module, C2OE = chip-to-optical-engine, D2D = die-to-die, FPP = faceplate-pluggable, CPO = co-packaged optics

                 LR                         MR                       VSR                        XSR                     USR
Reach            2 m twinax, 2 connectors   50 cm PCB, 1 connector   10+2 cm PCB, 1 connector   50 mm pkg substrate     <10 mm
IL @ Nyquist     30 dB                      20 dB                    12 dB                      10 dB                   -
BER (pre-FEC)    10^-4                      10^-6                    10^-6                      10^-9                   -
Use case         DAC, C2C                   C2C                      C2M                        C2OE, D2D (2.5D)        D2D (2.5D or 3D)
Area             Largest                    -                        -                          -                       Smallest
Power            Highest                    -                        -                          -                       Lowest
Flexibility      High                       Good                     Fair                       Low                     Very low
DAC              Yes                        No                       No                         No                      No
FPP              Yes                        Yes                      Partially                  No                      No
CPO              Yes (adoption)             Possible                 Possible                   Yes (optimization)      Future

Challenges Facing Datacenter Optics

Despite the success of FPP modules, several challenges are emerging that threaten to limit further scaling:

1. Roadmap beyond 800G: There is currently no FPP form factor that supports more than 800G of capacity. Increasing the channel count beyond eight is challenging due to connector density, while raising the lane rate to 200 Gb/s and beyond poses formidable signal integrity challenges.

2. Cost: The cost per capacity of optical modules has been decreasing, but the cost of optics now exceeds that of the switch ASIC itself, accounting for over 50% of the total cost of a fully populated switch (Figure 2). This is because switch ASICs have benefited more from Moore's law, while optical modules remain complex micro-optical systems with high packaging costs.

3. Power: The power consumed by the electrical channels between the switch ASIC and the optical modules is a growing concern, as it is projected to exceed the power of the switch core itself (Figure 4). Increasing the signaling rate from 50 Gb/s to 100 Gb/s will roughly double the insertion loss of these electrical channels, exacerbating the power challenge (a first-order loss model is sketched after this list).

4. Density: The bandwidth density at the switch faceplate is also becoming a limiting factor. The transition from 12.8 Tb/s to 25.6 Tb/s required doubling the enclosure height from 1RU to 2RU to accommodate the increased number of pluggable modules (Table 4). Beyond 51.2 Tb/s, the faceplate-pluggable model may not be able to provide sufficient bandwidth density.
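Returning to the power point above, the claim that doubling the lane rate roughly doubles the channel insertion loss can be illustrated with a first-order PCB loss model in which skin-effect loss grows with the square root of frequency and dielectric loss grows linearly with frequency. The coefficients in the sketch below are hypothetical and chosen only to show the scaling; real channels depend on materials, trace geometry, and connectors.

```python
# First-order PCB channel loss model: skin effect ~ sqrt(f), dielectric ~ f.
# The coefficients are illustrative assumptions, not measured data.
import math

SKIN_DB_PER_SQRT_GHZ = 0.8   # hypothetical skin-effect coefficient
DIEL_DB_PER_GHZ = 0.35       # hypothetical dielectric-loss coefficient


def channel_loss_db(nyquist_ghz: float) -> float:
    """Approximate end-to-end insertion loss at the Nyquist frequency."""
    return SKIN_DB_PER_SQRT_GHZ * math.sqrt(nyquist_ghz) + DIEL_DB_PER_GHZ * nyquist_ghz


if __name__ == "__main__":
    # 50 Gb/s PAM4 (~25 GBd) -> ~13 GHz Nyquist; 100 Gb/s PAM4 (~50 GBd) -> ~27 GHz.
    for rate_gbps, nyq_ghz in [(50, 13.3), (100, 26.6)]:
        print(f"{rate_gbps} Gb/s PAM4: ~{channel_loss_db(nyq_ghz):.1f} dB at {nyq_ghz} GHz Nyquist")
```

With these assumed coefficients the loss grows from roughly 8 dB to roughly 13 dB, i.e. it nearly doubles, with the dielectric term dominating at the higher rate.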

FIGURE 2 Cost trends of Ethernet switches and optical modules from 2010 to 2023; the values for 2020–2023 are projections. Based on data from reports by LightCounting [3] and Dell'Oro [4]

TABLE 4 Faceplate bandwidth density, assuming 32 modules per RU. The last two rows require new FPP form factors

Switch lanes   Lane rate   Switch capacity   Module lanes   Module capacity   Faceplate modules   Size
128            25G         3.2 T             4              100G              32                  1RU
256            25G         6.4 T             4              100G              64                  2RU
256            50G         12.8 T            8              400G              32                  1RU
512            50G         25.6 T            8              400G              64                  2RU
256            100G        25.6 T            8              800G              32                  1RU
512            100G        51.2 T            8              800G              64                  2RU
1024           100G        102.4 T           8              800G              128                 4RU
1024           100G        102.4 T           16             1.6 T             64                  2RU
512            200G        102.4 T           8              1.6 T             64                  2RU
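The faceplate columns of Table 4 follow mechanically from the switch lane count and the number of lanes per module. The sketch below reproduces a few rows under the table's assumption of 32 modules per rack unit; the choice of rows is arbitrary.

```python
# Reproduce the faceplate columns of Table 4: modules needed to break out a
# switch, and the rack units they occupy at 32 modules per RU.
import math

MODULES_PER_RU = 32


def faceplate(switch_lanes: int, lane_rate_gbps: int, module_lanes: int):
    """Return (module count, rack units) for a fully populated switch."""
    modules = switch_lanes // module_lanes
    rack_units = math.ceil(modules / MODULES_PER_RU)
    return modules, rack_units


if __name__ == "__main__":
    # (switch lanes, lane rate in Gb/s, lanes per module) for three Table 4 rows.
    for lanes, rate, mod_lanes in [(512, 100, 8), (1024, 100, 8), (512, 200, 8)]:
        modules, rus = faceplate(lanes, rate, mod_lanes)
        print(f"{lanes * rate / 1000:.1f} T switch -> "
              f"{modules} x {mod_lanes * rate}G modules -> {rus} RU")
```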

Moving Optics Inside: Co-Packaged Optics

To address these challenges, the industry is exploring the concept of moving the optics from the faceplate into the switch enclosure and closer to the main switching ASIC, an approach known as co-packaged optics (CPO). This can be seen as an evolution from the current faceplate-pluggable model, through on-board optics, towards fully integrated 2.5D and 3D CPO (Figure 6).


FIGURE 6 CPO roadmap illustrating increasing levels of integration of optics and switch ASIC. This representation focuses on the linear distance between the optics and the ASIC; note, however, that one of the key metrics for CPO solutions is bandwidth density along the switch perimeter.


The key advantages of CPO are the ability to:

  1. Drastically shorten the electrical channels between the switch ASIC and the optics, reducing power consumption.

  2. Leverage advanced packaging and integration techniques to reduce the cost per capacity.

  3. Increase the bandwidth density by eliminating the constraints of faceplate-mounted pluggable modules.

However, moving to CPO also requires overcoming several challenges:

  1. Maintaining the flexibility of pluggable optics, including the ability to use different optical PMDs and support copper DACs for short-reach connectivity.

  2. Ensuring field accessibility for servicing and upgrades.

  3. Establishing a multi-vendor ecosystem through standardization or multi-source agreements.

CPO Requirements and Enabling Technologies

To gain broad adoption, CPO solutions must deliver substantial (>50%) improvements in energy efficiency (target: 10-15 pJ/bit) and cost per capacity compared to pluggable optics. This requires addressing several key technical requirements:

  1. Optical engine integration: Highly integrated photonic integrated circuits (PICs) with a high density of optical channels (e.g., 16-32 channels per engine) and a small footprint (≤100 mm x 100 mm) are needed to achieve the required bandwidth density.

  2. Electrical interface: The CPO engines should use the XSR electrical interface (10 dB insertion loss budget) to minimize power consumption, while the switch should retain some LR-capable serdes to maintain compatibility with copper DACs and coherent optics.

  3. Packaging and thermal management: Advanced 2.5D and 3D packaging techniques are required to integrate the PIC, driver/TIA ICs, and optical inputs/outputs within a compact footprint while managing the thermal environment.

  4. Laser integration: An external, faceplate-mounted laser source architecture is preferred to address reliability concerns, although this comes with additional optical loss, footprint, and cost.

  5. Standardization: Establishing common form factors, electrical and optical interfaces, and management protocols through industry collaborations and MSAs is critical to enable a multi-vendor ecosystem.
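As a rough sanity check on these targets, the sketch below translates the 10-15 pJ/bit goal into total optical I/O power for a fully loaded 51.2 Tb/s switch and compares it with an assumed 30 pJ/bit for a pluggable module plus its host serdes; the 30 pJ/bit baseline is an illustrative assumption, not a measured figure.

```python
# Translate the CPO energy-efficiency target into optical I/O power at 51.2 Tb/s
# and compare it with an assumed pluggable-optics baseline (illustrative only).

SWITCH_CAPACITY_TBPS = 51.2
PLUGGABLE_PJ_PER_BIT = 30  # hypothetical: pluggable module plus host serdes


def io_power_w(capacity_tbps: float, pj_per_bit: float) -> float:
    """Power in watts; Tb/s x pJ/bit = W because the 1e12 factors cancel."""
    return capacity_tbps * pj_per_bit


if __name__ == "__main__":
    baseline = io_power_w(SWITCH_CAPACITY_TBPS, PLUGGABLE_PJ_PER_BIT)
    print(f"Assumed pluggable baseline: ~{baseline:.0f} W")
    for target in (10, 15):
        watts = io_power_w(SWITCH_CAPACITY_TBPS, target)
        saving = 100 * (1 - watts / baseline)
        print(f"CPO at {target} pJ/bit: ~{watts:.0f} W (~{saving:.0f}% lower)")
```

Under these assumptions, meeting the 10-15 pJ/bit target corresponds roughly to the >50% power improvement cited above.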

Photonic integration is a key enabler for realizing these CPO requirements. While various integration platforms exist, including silicon photonics, indium phosphide, and silicon nitride, silicon photonics stands out as a promising candidate due to its ability to provide a broad range of photonic functionality, high levels of integration, and the potential for volume manufacturing.

Impact on Network Architecture

The transition to CPO will have a significant impact on datacenter network architecture. As CPO enables higher port densities and lower power consumption, it opens up new possibilities for network simplification and consolidation.

A potential incremental adoption path for CPO could be to start with the spine switches in a spine-leaf fabric, using 400GbE-DR4 and -FR4 CPO engines (Figure 7). As the next step, the uplinks from the leaf switches could migrate to CPO, potentially using a hybrid approach with a mix of CPO and pluggable optics. Finally, the leaf switch downlinks could also transition to CPO, leading to switches with different types of CPO engines (e.g., FR4 for uplinks, DR4 for downlinks).

Additionally, the TOR layer could be impacted, as CPO enables the consolidation of many small TOR switches into a single large switch. This could dramatically reduce the total number of switch boxes in the network, simplifying management and improving overall cost and power efficiency. However, the TOR layer may still require some pluggable ports to support copper DACs for server connectivity.


FIGURE 7 Gradual CPO adoption and network architecture evolution go hand‐in‐hand

Beyond the TOR-spine-leaf architecture, the full switch ASIC radix exposed by CPO could potentially allow the TOR layer to be eliminated altogether, leading to a more streamlined "leaf-spine" network topology with fiber-to-the-server (FTTS) connectivity. This would require cost-effective shorter-reach CPO links, potentially based on multimode fiber or reach-optimized single-mode fiber PMDs.
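To gauge the scale such a flattened fabric could reach, the sketch below estimates the number of server-facing ports in a non-blocking two-tier leaf-spine network as a function of switch radix, assuming each leaf dedicates half of its ports to spine uplinks; the radix values are illustrative (e.g., a 51.2 Tb/s ASIC exposed as 256 x 200G ports).

```python
# Server-facing ports in a non-blocking two-tier leaf-spine fabric, assuming
# each leaf splits its radix evenly between downlinks and spine uplinks.

def leaf_spine_endpoints(radix: int) -> int:
    """Maximum server ports: (downlinks per leaf) x (number of leaves <= spine radix)."""
    return (radix // 2) * radix


if __name__ == "__main__":
    for radix in (64, 128, 256):
        print(f"radix {radix}: up to {leaf_spine_endpoints(radix)} server-facing ports")
```

Even at a radix of 256, a two-tier fabric of this kind can serve tens of thousands of end points, which is why a flattened topology with FTTS connectivity becomes plausible once CPO makes the full ASIC radix available over fiber.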

Conclusion

Faceplate-pluggable optical modules have served the datacenter industry well, but their roadmap is reaching its limits in terms of power, cost, and density. Co-packaged optics (CPO) represents a promising alternative that can address these challenges by integrating the optics much closer to the main switching ASIC.

While some technical challenges remain, the adoption of CPO is more hindered by the inertia of moving an entire industry away from the deeply entrenched pluggable optics model. Proving the potential Total Cost of Ownership (TCO) savings of at least 50% over pluggable optics is essential to driving broad CPO adoption.

Key enablers for CPO include advanced photonic integration, particularly silicon photonics, as well as collaborative efforts to establish industry standards and multi-source agreements. As CPO gains traction, it will also enable the simplification and consolidation of datacenter network architectures, potentially leading to more streamlined "leaf-spine" topologies with fiber-to-the-server connectivity.

The transition to CPO represents a significant shift in the datacenter optics landscape, and its successful adoption can have far-reaching implications for the performance, cost, and efficiency of future datacenter infrastructure.


References

[1] Verheyde, A.: Broadcom ships first 25.6Tbps switch on 7nm. Tom's Hardware (2019). https://www.tomshardware.com/uk/news/broadcom-ships-first-256tbps-switch-on-7nm

[2] Optical Internetworking Forum: Common electrical I/O (CEI) 112G. OIF (2017). https://www.oiforum.com/technical-work/hot-topics/common-electrical-interface-cei-112g-2/

[3] LightCounting: Ethernet optics report. LightCounting (2020)

[4] Dell'Oro: Datacenter Ethernet switch report. Dell'Oro (2019)

