Large Language Models (LLMs) have achieved remarkable progress in generative
AI, evolving into sophisticated and versatile tools that are widely adopted
across domains and applications. However, the substantial memory overhead
caused by their vast number of parameters, combined with the high computational
demands of the attention mechanism, poses significant challenges in achieving
low latency and high throughput for LLM inference services. Recent research
has made rapid progress on these challenges across the serving stack. This
paper provides a comprehensive survey of these
methods, covering fundamental instance-level approaches, in-depth cluster-level
strategies, emerging scenarios, and other miscellaneous but important
areas. At the instance level, we review model placement, request scheduling,
decoding length prediction, storage management, and the disaggregation
paradigm. At the cluster level, we explore GPU cluster deployment,
multi-instance load balancing, and cloud service solutions. For emerging
scenarios, we organize the discussion around specific tasks, modules, and
auxiliary methods. To ensure a holistic overview, we also highlight several
niche yet critical areas. Finally, we outline potential research directions to
further advance the field of LLM inference serving.