
The Path to Zettascale Computing: Energy Efficiency is Key

Abstract

In her ISSCC plenary keynote, AMD CEO Dr. Lisa Su provides a comprehensive overview of the rapid gains in computing performance over recent decades, while underscoring the critical need for improved energy efficiency as we push toward exascale and eventually zettascale capabilities. She highlights architectural innovation, advanced 2.5D and 3D packaging, domain-specific accelerated computing, unified memory architectures, and extensive use of AI-driven optimization as high-potential opportunities for major efficiency improvements. However, she stresses that continued innovation across the full hierarchy, from materials to circuits, microarchitecture, system architecture and software, will be essential to achieve the orders-of-magnitude gains in energy efficiency required to reach zettascale computing.

Introduction

In her plenary address at the prestigious 2023 International Solid-State Circuits Conference [1], Dr. Lisa Su, CEO of AMD, took the audience on an enlightening tour through seven decades of astounding advances in computing performance. She reflected on the tremendous pace of progress, from earlier processors with around a billion transistors to state-of-the-art GPUs now packing more than 50 billion transistors and delivering petaflops of performance. However, Dr. Su cautioned that while capabilities continue to grow at a rapid, exponential pace, energy efficiency is not scaling nearly as quickly. She emphasized that this gap presents the primary grand challenge in pushing beyond the exascale milestone towards zettascale computing over the next decade.
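
To make the scale of that gap concrete, here is a back-of-the-envelope sketch in Python. The efficiency figures are illustrative assumptions chosen only for the arithmetic, not numbers taken from the keynote; the point is simply that closing the gap requires orders-of-magnitude gains in performance per watt.

```python
# Back-of-the-envelope estimate of the zettascale efficiency gap.
# The efficiency values below are illustrative assumptions for this sketch,
# not figures quoted from the keynote.

ZETTAFLOP = 1e21  # floating-point operations per second at zettascale

def system_power_megawatts(flops: float, gflops_per_watt: float) -> float:
    """Power (in MW) needed to sustain `flops` at a given efficiency."""
    watts = flops / (gflops_per_watt * 1e9)
    return watts / 1e6

# Assume a present-day system efficiency of ~50 GFLOPS/W (hypothetical value):
print(system_power_megawatts(ZETTAFLOP, 50))      # about 20,000 MW; clearly infeasible
# Fitting a zettascale machine into a ~100 MW envelope would require roughly
# 10,000 GFLOPS/W, i.e. a several-hundred-fold efficiency improvement:
print(system_power_megawatts(ZETTAFLOP, 10_000))  # about 100 MW
```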

After providing illuminating historical context, Dr. Su methodically outlined promising architectural approaches, packaging innovations, and system-level co-design techniques with the potential to significantly enhance energy efficiency. She also suggested revolutionary gains could come from utilizing artificial intelligence much more pervasively to optimize designs and workflows. However, Dr. Su stressed that closing the efficiency gap will require extensive creative innovation across the entire hierarchy of computing technology, from fundamental materials research through circuits, microarchitecture, system architecture and software. She expressed optimism that through close collaboration across industry, academia and government, we can uncover solutions to overcome the daunting obstacles on the path to unprecedented scales of high-performance computing.


Figure: Optical communication for energy-efficient long reach

Key Technologies to Unlock Energy Efficient Computing

Dr. Su shone a spotlight on several key technologies that she believes can dramatically improve energy efficiency in the future. First, she highlighted advanced 2.5D and 3D packaging, which enables tight vertical integration of processing and memory elements and yields major gains by reducing costly data movement (a rough illustration follows below). Second, she called out optimized domain-specific architectures, like accelerators tailored for AI workloads, which can deliver order-of-magnitude efficiency improvements versus general-purpose designs. Next, Dr. Su emphasized unified memory architectures that simplify data access across diverse processing elements as another promising approach. She also pointed to emerging co-packaged optics technology, which can alleviate interconnect bandwidth bottlenecks while boosting energy efficiency. Finally, Dr. Su noted that continuing to push materials, devices, circuits and processor microarchitectures will remain critical to realizing the full benefits of these system-level innovations.
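
As a rough illustration of why tighter packaging pays off, the sketch below compares the energy cost of moving the same amount of data over progressively shorter paths. The picojoule-per-bit figures are order-of-magnitude assumptions made for this example only, not values cited in the talk.

```python
# Illustrative comparison of data-movement energy for different memory paths.
# The pJ/bit figures are rough order-of-magnitude assumptions for this sketch;
# real values depend heavily on process technology and implementation.

ENERGY_PJ_PER_BIT = {
    "off-package DRAM": 15.0,        # assumed: long board-level traces
    "2.5D HBM on interposer": 4.0,   # assumed: short in-package links
    "3D-stacked memory": 0.5,        # assumed: vertical through-silicon vias
}

def transfer_energy_joules(gigabytes: float, pj_per_bit: float) -> float:
    """Energy to move `gigabytes` of data at a given per-bit cost."""
    bits = gigabytes * 1e9 * 8
    return bits * pj_per_bit * 1e-12

# Moving 1 TB of data under each assumption:
for path, cost in ENERGY_PJ_PER_BIT.items():
    print(f"{path:>24}: {transfer_energy_joules(1000, cost):8.1f} J")
```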

Figure: Future system-in-package architecture

Revolutionary Potential of AI and Specialized Architectures

In addition to discussing specific hardware technologies, Dr. Su suggested that revolutionary system-level efficiency gains could come from applying artificial intelligence much more extensively across the computing stack. On the system-architecture side, she proposed using AI surrogate models and hybrid workflows to optimize complex workloads such as physics simulations. This technique partitions a problem between efficient AI-based approximations and sparse use of costly, high-fidelity supercomputer simulation (a simplified sketch follows below). On the design side, Dr. Su advocated leveraging AI throughout the development flow, from design optimization to automated chip floor-planning. She added that specialized architectures for emerging workloads such as AI, augmented reality and the metaverse will also be critical to maximizing efficiency.
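
A minimal sketch of that hybrid partitioning idea follows, assuming a hypothetical surrogate_model, high_fidelity_sim, and confidence threshold; none of these names or values come from the keynote. The sketch only illustrates routing most evaluations to a cheap approximation and reserving the expensive solver for low-confidence cases.

```python
# Minimal sketch of a hybrid surrogate/high-fidelity workflow.
# `surrogate_model`, `high_fidelity_sim`, and the confidence threshold are
# hypothetical stand-ins, not APIs from the keynote or any library.
import random

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for trusting the surrogate

def surrogate_model(x):
    """Cheap AI approximation: returns (prediction, confidence)."""
    return x * 1.01, random.uniform(0.5, 1.0)

def high_fidelity_sim(x):
    """Expensive, accurate simulation (placeholder for a real solver)."""
    return x * 1.0

def evaluate(points):
    """Use the surrogate where it is confident; fall back to full simulation."""
    results, expensive_calls = [], 0
    for x in points:
        pred, conf = surrogate_model(x)
        if conf < CONFIDENCE_THRESHOLD:
            pred = high_fidelity_sim(x)  # sparse use of the costly solver
            expensive_calls += 1
        results.append(pred)
    print(f"high-fidelity calls: {expensive_calls}/{len(points)}")
    return results

evaluate([float(i) for i in range(100)])
```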

Conclusion

In closing, Dr. Su strongly reiterated the urgent imperative to place energy-efficient computing front and center as we push towards exascale in the near term and zettascale beyond. She acknowledged the difficult obstacles that remain, but expressed strong confidence that the combined innovation taking place across materials, devices, circuits, architecture and algorithms will unleash new breakthroughs. Dr. Su encouraged the audience to look beyond their own domains and seek out collaborative opportunities to attack this grand challenge. By fostering an ecosystem of diverse expertise united by the common goal of energy-efficient computing, she believes we can unlock the next era of high-performance systems.

Reference

[1] L. Su, "Innovation For the Next Decade of Compute Efficiency," ISSCC 2023 Plenary Keynote.
