Partnership on AI

From Deepfakes to Disclosure: PAI Framework Insights from Three Global Case Studies

By Advanced AI Bot · March 19, 2025 · 8 min read


While many are harnessing AI for productivity and creativity, AI’s rapid advancement has accelerated the potential for real-world harm. AI-generated content, including audio and video deepfakes, has been used in elections to spread false information and manipulate public perception of candidates, undermining trust in democratic processes. Attacks on vulnerable groups, particularly women, through the creation and spread of deepnudes and other nonconsensual intimate imagery have left communities shaken and organizations scrambling to mitigate future harms.

To mitigate the spread of misleading AI-generated content, organizations have begun to deploy transparency measures. Recently, policymakers in China and Spain announced efforts to require labels on AI-generated content circulated online. Although governments and organizations are taking steps in the right direction to regulate AI-generated content, more comprehensive action is urgently needed at a global scale. PAI is working to bring together organizations across civil society, industry, government, and academia to develop comprehensive guidelines that further public trust in AI, protect users, and advance audience understanding of synthetic content.

Although governments and organizations are taking steps in the right direction to regulate AI-generated content, more comprehensive action is urgently needed at a global scale.

Launched in 2023, PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action provides timely and normative guidance for the use, distribution, and creation of synthetic media. The Framework supports Builders of AI tools, and Creators and Distributors of synthetic content in aligning on best practices to advance the use of synthetic media and protect users. The Framework is supported by 18 organizations, each of which has submitted a case study exploring the Framework’s application in the real world.

As we approach the conclusion of our case study collection in its current format, we are excited to publish the final round of case studies from Google, and civil society organizations Meedan and Code for Africa. These three case studies explore how synthetic media can impact elections and political content, how disclosure can limit misleading, gendered content, and how transparency signals help users make informed decisions about content, all vital considerations when governing synthetic media responsibly.

Code for Africa Explores Synthetic Content’s Impact on Elections

In May 2024, weeks before the South African general elections, one political party’s use of generative AI tools sparked controversy: it distributed a video showing South Africa’s flag burning. Although the video was AI-generated, a lack of disclosure led to outrage from voters and a statement by the South African president that the video was treasonous.

The burden to interpret generative AI content should not be placed on audiences themselves, but on the institutions building, creating, and distributing content.

In its case study, Code for Africa argues for full disclosure of all AI-generated or edited content, increased training of newsroom staff on how to use generative AI tools, updated journalistic policies that take into account advancements in AI, and increased transparency of editorial policies and journalistic standards with users. Notably, they emphasize that the burden to interpret generative AI content should not be placed on audiences themselves, but on the institutions building, creating, and distributing content.

Although these recommendations could not have prevented the video’s creation and dissemination, the case study highlights the importance of direct disclosure, as recommended in our Framework. Direct disclosure by the video’s creator could have mitigated some of the public backlash and subsequent fallout. Through the use of direct disclosure, such as labeling, people viewing the content would have been able to distinguish between fact and AI-generated media, keeping them focused on the important message.

Read the case study

Google’s Approach to Direct Disclosure

Google, understanding the importance of user feedback when implementing direct disclosure mechanisms, conducted research to identify which mechanisms would be most effective and useful for users. The findings shaped Google’s approach to direct disclosure by clarifying:

  • How prominent the label should be: considering its impact on the implied authenticity effect (when some content is labeled as AI-generated, people may believe unlabeled content must be authentic) and the liar’s dividend (the ability of bad actors to call authentic content into question due to the prevalence of synthetic content)
  • What additional information is needed: including an entry point for users to learn more about content, such as Google’s “About this image”
  • How to provide users with enough understanding to avoid misinterpretation of direct disclosure

These takeaways helped Google develop disclosure solutions to implement across three of its surfaces: YouTube, Search, and Google Ads. They noted that disclosures must feature context beyond “AI or not” in order to support audience understanding of content. An AI disclosure provides only one data point that can help users determine the trustworthiness of content, alongside other signals: What is the source of this content? How old is it? Where else might this content appear?

Disclosures must feature context beyond “AI or not” in order to support audience understanding of content.

In addition, Google recommends further research to better understand user needs, media literacy levels, and disclosure comprehension and impact. By better understanding how users interpret direct disclosures and use them to make decisions about content, platforms can implement scalable and effective disclosure mechanisms that support synthetic content transparency and serve audience understanding.

These recommendations align with how direct disclosure is defined in the Framework – “viewer- or listener-facing and includes, but is not limited to, content labels, context notes, watermarking, and disclaimers.” They are also consistent with the Framework’s three key principles of transparency, consent, and disclosure.
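To make the idea of signals beyond “AI or not” concrete, here is a minimal sketch that bundles a synthetic-media flag with the provenance signals Google describes; the field names and structure are hypothetical illustrations, not Google’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DisclosureSignals:
    """Illustrative bundle of trust signals attached to one piece of content."""
    ai_generated: bool                 # the bare "AI or not" flag
    source: str                        # where the content comes from
    created_on: date                   # how old it is
    also_appears_on: list[str] = field(default_factory=list)  # other surfaces
    learn_more_url: str | None = None  # entry point, e.g. an "About this image" panel


signals = DisclosureSignals(
    ai_generated=True,
    source="example-news.example",
    created_on=date(2024, 5, 10),
    also_appears_on=["search", "social"],
    learn_more_url="https://example.org/about-this-content",
)

# Render a label that carries context beyond "AI or not", so a viewer can
# weigh provenance and age alongside the synthetic-media flag.
if signals.ai_generated:
    print(
        f"Label: AI-generated | source: {signals.source} | "
        f"published: {signals.created_on} | learn more: {signals.learn_more_url}"
    )
```

Keeping these signals together means a surface never has to render a bare “AI” tag without the context users need to interpret it.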

Read the case study

Meedan Identifies Harmful Synthetic Content in South Asia

Check is an open-source platform created by Meedan that can help users connect with journalists, civil society organizations, and intergovernmental groups on closed-messaging platforms, such as WhatsApp. Via Check, users can help identify and debunk malicious synthetic content. By using Check and working with local partners on a research project, Meedan was able to identify that misleading, gendered content in South Asia contained synthetic components.
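As a rough illustration of how a tipline like this can triage incoming reports, the sketch below matches a user’s message against already-debunked claims and escalates anything unrecognized to human reviewers; the matching logic, names, and data are hypothetical illustrations, not Check’s actual implementation:

```python
from difflib import SequenceMatcher

# Hypothetical store of claims that local partners have already fact-checked.
FACT_CHECKS = {
    "video shows the national flag burning at a rally": "synthetic video; debunked by local partners",
    "leaked audio of candidate endorsing a rival": "AI-generated audio; debunked",
}


def route_report(message: str, threshold: float = 0.6) -> str:
    """Match an incoming tipline message against known fact-checks.

    Returns the stored verdict on a close textual match; otherwise the
    report is escalated to human reviewers who know the local context.
    """
    best_claim, best_score = None, 0.0
    for claim in FACT_CHECKS:
        score = SequenceMatcher(None, message.lower(), claim).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    if best_claim is not None and best_score >= threshold:
        return FACT_CHECKS[best_claim]
    return "no close match: escalate to local reviewers"


print(route_report("does this video show the national flag burning at a rally?"))
```

The escalation path matters as much as the matching: routing unmatched reports to local reviewers is what lets diverse regional contexts inform the verdict.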

In its case study, Meedan recommends that platforms improve content monitoring and screening, as well as create localized escalation channels that can take into account diverse contexts and regions. Once implemented, these methods can help platforms mitigate the spread of malicious content being shared among “Larger World” communities (Meedan’s preferred term for the Global South) and better support local efforts to combat it.

The use of direct disclosure could have helped researchers identify synthetic content sooner.

In the Framework, we recommend that Creators directly disclose synthetic content “especially when failure to know about synthesis changes the way the content is perceived.” The use of direct disclosure in this instance could have helped researchers identify synthetic content sooner. This case study not only highlighted the need for direct disclosure, but also shed light on the importance of considering localized contexts when seeking to mitigate harm – an important aspect of regulating synthetic content at a global scale.

Read the case study

What’s Next

In order to develop comprehensive global regulation and best practices, we need the support of organizations across industry, academia, civil society, and government. The iterative case reporting process between PAI and supporter organizations demonstrates the real-world change that collaboration across these fields can accomplish.

The transparency and willingness of these organizations to provide insights into their efforts to govern synthetic media responsibly is a step in the right direction. In our March 2024 analysis, we recognized the importance of voluntary frameworks for AI governance. We hope these case studies reveal further insights into how policy and technology decisions can be made, provide a body of evidence about real-world AI policy implementation, and build further consensus on best practices for evolving synthetic media policy.

These case studies span a range of impact areas and explore various mitigation strategies. This work from our supporters contributes to the refinement of the Framework, the pursuit of future synthetic media governance, and the search for the best ways to ensure sound guidance is implemented by Builders, Creators, and Distributors.

In the coming months, we will incorporate lessons learned from these case studies into the Framework to ensure our guidance remains responsive to shifts in the AI field. In addition, we will publish an analysis of key takeaways, open questions, and future directions for the field, accompanied by public programming addressing some of these themes. To stay up to date on where this work leads next, sign up for our newsletter.


