With the rapid improvement in the general capabilities of LLMs, LLM personalization, i.e., building LLM systems that generate personalized responses or services tailored to distinct user personas, has become an increasingly important research and engineering problem. However, while many challenging new benchmarks have been released to evaluate general and reasoning capabilities, high-quality benchmarks for evaluating LLM personalization remain scarce, and this gap greatly hinders progress in the field. To address this, we introduce PersonaFeedback, a new benchmark that
directly evaluates LLMs’ ability to provide personalized responses given pre-defined user personas
and queries. Unlike existing benchmarks that require models to infer implicit user personas from
historical interactions, PersonaFeedback decouples persona inference from personalization,
focusing on evaluating the model’s ability to generate responses tailored to explicit personas.
PersonaFeedback consists of 8298 human-annotated test cases, which are categorized into
easy, medium, and hard tiers based on the contextual complexity of the user personas and the
difficulty in distinguishing subtle differences between two personalized responses. We conduct
comprehensive evaluations across a wide range of models. The empirical results reveal that even state-of-the-art LLMs capable of solving complex real-world reasoning tasks can fall short on the hard tier of PersonaFeedback, where even human evaluators may find the distinctions challenging.
Furthermore, we conduct an in-depth analysis of failure modes across various types of systems,
demonstrating that current retrieval-augmented frameworks should not be regarded as a de facto solution for personalization tasks. All benchmark data, annotation protocols, and the evaluation pipeline will be made publicly available to facilitate future research on LLM personalization.
Dataset: https://huggingface.co/datasets/PersonalAILab/PersonaFeedback
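A minimal sketch of loading the benchmark from the Hugging Face Hub with the `datasets` library; the split name and field names used below are illustrative assumptions, not the dataset's documented schema:

```python
# Minimal sketch: load PersonaFeedback from the Hugging Face Hub.
# NOTE: the split ("test") and field names ("tier", "persona", "query")
# are assumptions for illustration; consult the dataset card for the actual schema.
from datasets import load_dataset

dataset = load_dataset("PersonalAILab/PersonaFeedback", split="test")

# Inspect a few examples.
for example in dataset.select(range(3)):
    print(example.get("tier"), example.get("persona"), example.get("query"))
```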