Demystifying AI Platform Design for Distributed Inference of Next-Generation LLM models, by Abhimanyu Bambhaniya and 7 other authors
Abstract: Large language models (LLMs) have shown remarkable performance across a wide range of applications, often outperforming human experts. However, deploying these gigantic models efficiently for diverse inference use cases requires carefully designed hardware platforms with ample computing, memory, and network resources. With constant innovation in LLM serving optimizations and model architectures evolving at breakneck speed, the hardware requirements to meet Service Level Objectives (SLOs) remain an open research question.
To answer this question, we present GenZ, an analytical tool for efficiently navigating the relationships among diverse LLM architectures (Dense, GQA, MoE, Mamba), LLM serving optimizations (chunking, speculative decoding, quantization), and AI platform design parameters. GenZ estimates LLM inference performance metrics for a given scenario, and we have validated it against real hardware platforms running a variety of LLM models. We use GenZ to identify the compute, memory capacity, memory bandwidth, network latency, and network bandwidth requirements across diverse LLM inference use cases. We also study diverse architectural choices in use today (inspired by LLM serving platforms from several vendors). The trends and insights derived from GenZ can guide both AI engineers deploying LLMs and computer architects designing next-generation AI hardware accelerators and platforms. Ultimately, this work sheds light on the platform design considerations for unlocking the full potential of large language models across a spectrum of applications. The source code is available at this https URL. GenZ can also be tried at this https URL in a web browser without any setup.
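To give a flavor of what an analytical inference-performance estimate looks like, the sketch below computes a roofline-style per-token decode latency for a dense transformer from platform parameters (peak compute and memory bandwidth). This is a minimal illustration under simplifying assumptions (weights read once per token, ~2 FLOPs per parameter, no KV-cache or network terms); it is not GenZ's actual model, and all names are illustrative.

```python
def decode_latency_s(params_billion: float,
                     bytes_per_param: float,
                     peak_flops: float,
                     mem_bw_bytes_per_s: float) -> float:
    """Roofline-style per-token decode latency estimate (illustrative only).

    Per-token decode of a dense model is typically memory-bound: every
    weight is streamed from memory once per generated token, while the
    matmuls perform roughly 2 FLOPs per parameter.
    """
    weight_bytes = params_billion * 1e9 * bytes_per_param
    flops = 2.0 * params_billion * 1e9
    t_mem = weight_bytes / mem_bw_bytes_per_s      # time to stream weights
    t_compute = flops / peak_flops                 # time to do the math
    return max(t_mem, t_compute)                   # bound by the slower side

# Example: a hypothetical 70B-parameter model in FP16 (2 bytes/param)
# on a platform with 1 PFLOP/s peak compute and 2 TB/s memory bandwidth.
lat = decode_latency_s(70, 2, 1e15, 2e12)
print(f"{lat * 1e3:.1f} ms/token")  # memory-bound: 70.0 ms/token
```

Even this toy model shows why decode throughput tracks memory bandwidth rather than peak FLOPs, which is one reason a tool like GenZ separates compute, memory, and network requirements per use case.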
Submission history
From: Abhimanyu Rajeshkumar Bambhaniya
[v1] Mon, 3 Jun 2024 18:00:50 UTC (9,260 KB)
[v2] Tue, 29 Apr 2025 23:25:27 UTC (4,930 KB)
[v3] Thu, 15 May 2025 02:46:53 UTC (4,930 KB)