Title

Realizing the AMD Exascale Heterogeneous Processor Vision

Conference

Published in the Proceedings of the 51st Annual International Symposium on Computer Architecture (ISCA 2024), June 2024 (industry session acceptance rate: 4/26 ≈ 15%)

Authors

Alan Smith, Gabriel H. Loh, Michael J. Schulte, Mike Ignatowski, Samuel Naffziger, Mike Mantor, Mark Fowler, Nathan Kalyanasundharam, Vamsi Alla, Nicholas Malaya, Joseph L. Greathouse, Eric Chapman, Raja Swaminathan

Abstract

AMD had previously detailed its exascale research journey, from initial targets and requirements to the development and evolution of its vision of a high-performance computing (HPC) accelerated processing unit (APU), dubbed the Exascale Heterogeneous Processor or EHP. At the conclusion of that work, the lessons learned were integrated into the design of the node architecture that went into the Frontier supercomputer, the world's first exascale machine. However, while the Frontier node architecture embodied many attributes of the EHP concept, advanced heterogeneous integration capabilities at the time were not yet mature enough to realize our vision of a fully integrated APU for HPC and AI. In this paper, we finish the EHP's story by digging deeper into why an APU was not the right solution at the time of our first exascale architecture, what the shortcomings of previous EHP concepts were, and how AMD further evolved the concept into the AMD Instinct™ MI300A APU. MI300A is the culmination of years of AMD development in advanced packaging technologies and in its APU hardware and software, and it is the next step in our highly effective chiplet strategy to not only deliver a groundbreaking design for exascale computing, but also meet the demands of new large-language-model and generative AI applications.

Paper

PDF

Copyright © 2024 Advanced Micro Devices, Inc. The hosted version of this paper is the authors' version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ISCA '24.