In 2024, a year when AI-manipulated media was expected to play a role in elections across the globe, PAI continued its work to promote truth and transparency across the digital media and information ecosystem.
Early in the year, PAI launched a cross-sector Community of Practice to explore different approaches to the challenges posed by the use of AI tools in elections, creating space for shared learning.
Demonstrating the application of PAI’s Synthetic Media Framework, PAI published 16 in-depth case studies from AI-developing companies like Adobe and OpenAI, media organizations such as CBC and BBC, platforms such as Meta and Microsoft, and civil society organizations like Thorn and WITNESS. The case studies were a requirement for Framework supporters, who explored how best practices for the responsible development, creation, and sharing of AI-generated media could be applied to real-world use cases.
The first set of case studies from Framework supporters, along with accompanying analysis, focused on transparency, consent, and harmful and responsible use cases. The second set examined an underexplored area of synthetic media governance: direct disclosure, the use of labels and other signals to convey to audiences how content has been created or modified with AI. Drawing on insights from these cases, PAI developed policy recommendations. If responsible synthetic media best practices, such as disclosure, are not implemented alongside safety recommendations for open-source model builders, synthetic media may lead to real-world harm, such as the manipulation of democratic and political processes.