Recent advances in large language models (LLMs) have enabled the development
of AI agents that exhibit increasingly human-like behaviors, including
planning, adaptation, and social dynamics across diverse, interactive, and
open-ended scenarios. These behaviors are not solely the product of the
internal architectures of the underlying models, but emerge from their
integration into agentic systems operating within specific contexts, where
environmental factors, social cues, and interaction feedback shape behavior
over time. This evolution necessitates a new scientific perspective: AI Agent
Behavioral Science. Rather than focusing only on internal mechanisms, this
perspective emphasizes the systematic observation of behavior, the design of
interventions to test hypotheses, and the theory-guided interpretation of how AI
agents act, adapt, and interact over time. We systematize a growing body of
research across individual-agent, multi-agent, and human-agent interaction
settings, and further demonstrate how this perspective informs responsible AI
by treating fairness, safety, interpretability, accountability, and privacy as
behavioral properties. By unifying recent findings and laying out future
directions, we position AI Agent Behavioral Science as a necessary complement
to traditional model-centric approaches, providing essential tools for
understanding, evaluating, and governing the real-world behavior of
increasingly autonomous AI systems.