In the last few years, artificial intelligence has burst out of the ivory tower. Now AI writes computer code, moves robots on command, drives cars that dodge human traffic, and even plays a role in decisions about hiring, housing, and criminal charges.
As AI floods into our daily lives, companies have run into a serious problem: their longstanding tools for measuring AI performance and safety no longer suffice.
Last year, scientists at Google DeepMind contacted SFI with a question: how do we evaluate AI models that are engaging actively with the world?
It’s a question that demands a complexity lens, says SFI Vice President for Applied Complexity Will Tracy.
“Because the deployment of AI involves so many complex-systems questions — about intelligence, institutions, society, human behavior, evolutionary transitions — SFI’s Office of Applied Complexity is exploring this area over the next two years. By engaging with leaders on the frontlines of AI development and deployment, we can promote complexity thinking and inform their understanding of these dynamic systems,” he explains.
Tracy organized the “Measuring AI in the World” studio at SFI from March 12–14. The Applied Complexity Studio Program brings leaders from industry, government, and civil society to SFI to examine the far-reaching implications of complexity science for seemingly narrow industry problems.
Most frontier labs — companies at the forefront of building the world’s most powerful and adaptable AI models — sent representatives to the studio. The studio also included participants from governments, non-profits, and foundations working on AI.
Google DeepMind Principal Scientist William Isaac, Research Scientist Kristian Lum, and Staff Research Scientist Laura Weidinger served as co-organizers.
“I sincerely believe the convening at SFI will serve as a catalyst for this emerging area of research and practice over the coming years,” says Isaac. “Given the cross-cutting, interdisciplinary nature of the topic, it could only have been held at a place like SFI. This expert group is pushing the frontier of the field and solidifying the core intellectual questions for the next few years, especially as AI systems move into the real world in critical domains.”
During the studio, SFI researchers discussed how to gauge the impact of AI from unexpected angles, focusing on a complex-systems approach.
“We live in algorithmically infused societies, where the algorithms are based on AI technology. To measure the impact of AI on society — its risks and its rewards — we must adopt the mindset of complexity researchers,” says SFI External Professor Tina Eliassi-Rad (Northeastern University), who spoke at the event. “At the studio, we discussed the importance of measuring the impact of feedback loops, non-equilibria, and self-organization on human-AI ecosystems. All are important concepts from complexity science.”
SFI researchers covered topics such as the psychology of human intelligence, challenging the popular idea that frontier AI possesses a robust world model. Another presentation suggested that lessons from evolutionary transitions in both cities and organisms could inform our understanding of AI’s impact.
“The field of AI has made the biggest, most grandiose claims first — for example, ‘this large language model is already conscious.’ But we don’t yet have a complete science of animal consciousness!” says SFI Professor Chris Kempes. “During the studio, I pointed out that extraordinary claims require extraordinary evidence. We need to pull back the claims, or find ways to make new types of quantitative measurements.”
Frontier lab representatives say the studio led to tangible progress on the AI-measurement dilemma.
“Evaluation — long considered a solved issue in machine learning — is now one of the central challenges in modern AI,” says participant Maximilian Nickel, a scientist at Meta AI. “The SFI studio was a great success, advancing not only our understanding but also fostering a vibrant community around a critical epistemic question: how can we truly know how good our models are?”
To continue exploring that question, some participants reunited this summer in London at an ACtioN Roundtable discussion, again co-hosted by Google DeepMind and SFI. Held within DeepMind’s global headquarters, the June Roundtable used complexity science to organize a discussion of AI’s possible impact on social and economic distributions. Discussion ranged from how science gets done, to which sectors will see the most labor disruption, to how education will need to evolve.