CAMS integrates an agentic framework with urban-knowledgeable large language models to simulate human mobility more realistically by modeling individual and collective patterns.
Human mobility simulation plays a crucial role in various real-world
applications. Recently, to address the limitations of traditional data-driven
approaches, researchers have explored leveraging the commonsense knowledge and
reasoning capabilities of large language models (LLMs) to accelerate human
mobility simulation. However, these methods suffer from several critical
shortcomings, including inadequate modeling of urban spaces and poor
integration with both individual mobility patterns and collective mobility
distributions. To address these challenges, we propose CityGPT-Powered
Agentic framework for Mobility Simulation
(CAMS), an agentic framework that leverages the language based urban
foundation model to simulate human mobility in urban space. CAMS
comprises three core modules: MobExtractor, which extracts template
mobility patterns and synthesizes new ones based on user profiles; GeoGenerator,
which generates anchor points informed by collective knowledge and produces
candidate urban geospatial knowledge using an enhanced version of CityGPT; and
TrajEnhancer, which retrieves spatial knowledge based on mobility patterns and
generates trajectories aligned with real trajectory preferences via DPO.
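The three-module pipeline described above can be sketched roughly as follows. This is a minimal illustrative assumption, not CAMS's actual API: every class, function, and coordinate here is a placeholder, and each stub stands in for an LLM-driven component (pattern synthesis, CityGPT grounding, DPO-aligned generation).

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-module CAMS pipeline; all names are
# illustrative assumptions, not the paper's implementation.

@dataclass
class UserProfile:
    age: int
    occupation: str

def mob_extractor(profile: UserProfile) -> list[str]:
    """MobExtractor stub: derive an activity-sequence mobility pattern from a profile."""
    # A real implementation would extract template patterns and synthesize new ones.
    return ["home", "work", "lunch", "work", "home"]

def geo_generator(pattern: list[str]) -> dict[str, tuple[float, float]]:
    """GeoGenerator stub: ground each activity to a candidate anchor location."""
    # A real implementation would query an urban foundation model (e.g., CityGPT).
    base = (39.90, 116.40)  # placeholder city-center coordinate
    return {act: (round(base[0] + i * 0.01, 2), base[1])
            for i, act in enumerate(dict.fromkeys(pattern))}

def traj_enhancer(pattern: list[str], anchors: dict) -> list[tuple[float, float]]:
    """TrajEnhancer stub: assemble a trajectory from the pattern and its anchors."""
    # A real implementation would align generation with real-trajectory preferences (DPO).
    return [anchors[act] for act in pattern]

def simulate(profile: UserProfile) -> list[tuple[float, float]]:
    pattern = mob_extractor(profile)
    anchors = geo_generator(pattern)
    return traj_enhancer(pattern, anchors)
```

The sketch only shows the data flow between modules: a profile yields a pattern, the pattern is grounded to anchor points, and the final trajectory is assembled from both.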
Experiments on real-world datasets show that CAMS achieves superior
performance without relying on externally provided geospatial information.
Moreover, by holistically modeling both individual mobility patterns and
collective mobility constraints, CAMS generates more realistic and
plausible trajectories. Overall, CAMS establishes a new paradigm
that integrates the agentic framework with urban-knowledgeable LLMs for human
mobility simulation.