Advanced AI News
Partnership on AI

How Can AI Fortify Informed and Connected Communities?

By Advanced AI Editor | August 28, 2025 | 7 Mins Read


When Partnership on AI began its programming in 2018, AI could generate videos, recommend content, and simulate conversations. Deepfakes were not yet widespread or perfectly photorealistic, but PAI was working across sectors to anticipate what lay ahead for our information ecosystem. Even as AI capabilities were evolving, we collaborated with partners across industries to prepare for its potential impacts. Newsrooms like the BBC were grappling with how their existing journalistic standards could address novel AI risks. Social media platforms like Meta (then Facebook) hoped to better support audiences encountering AI-generated content and to identify associated harms. Dating apps like Bumble were asking how to authenticate profiles as real and prepare for an influx of AI profiles.

As we predicted in 2019, “AI systems promise to augment human perception, cognition, and problem-solving abilities. [But] they also pose risks of manipulation, abuse, and other negative consequences, both foreseen and unintended.” Over the years that followed, many of those risks and opportunities materialized. AI was clearly ushering in an unprecedented era of knowledge sharing and connection online, but the pace of change was about to accelerate dramatically.

OpenAI’s release of DALL-E in 2021 brought generative AI, and specifically synthetic media, to the public. ChatGPT’s launch in 2022 accelerated this transformation. Since then, we’ve witnessed profound improvements in the technology’s realism and accessibility, fundamentally impacting trust and truth online.

When a deepfake image of the Pentagon on fire moved financial markets in 2023, PAI’s early decision to focus on AI and media integrity proved prescient. PAI’s Synthetic Media Framework provided builders, creators, and distributors of synthetic media with responsible use guidelines and transparency measures to empower people in the AI age. Eighteen diverse institutions — from OpenAI to Code for Africa to the CBC to TikTok — signed on to our guidance, and all of them wrote long-form case studies examining adoption of the recommendations in real-world scenarios.

“AI transcends its purely technological status, simultaneously affecting how people socialize and consume knowledge.”

Yet, even as we addressed synthetic media’s challenges of impersonation and misrepresentation, AI has continued evolving in new directions. Now, in 2025, we confront a changed AI landscape, one where interactive and increasingly capable, “personlike” AI systems, such as AI agents and chatbots, affect how people understand each other and the world around them.

Today people not only develop relationships through AI, but also with AI — for romantic, therapeutic, or social purposes. Information and knowledge are increasingly synthesized and delivered through chatbots and conversational interfaces.

AI has become central to social connection and public knowledge, vital precursors to healthy epistemic communities, vibrant democracies, and overall human flourishing. According to a Pew Research Study, 57% of Americans surveyed report using AI at least once a day.

While the foundations for this transformation were laid years ago, AI’s capabilities — and consumer packaging, public integration, and use — have transformed. Today’s tools are more dynamic, emotionally evocative, sycophantic, personalized, persuasive, and interactive, making them seem genuinely “personlike” to users. To the teenager chatting daily with Character.AI’s virtual companions or the elderly person asking Amazon’s Alexa questions throughout the day, AI transcends its purely technological status, simultaneously affecting how people socialize and consume knowledge.

Meeting this moment in AI requires the entire ecosystem — not just technology companies, but also civil society, government, philanthropy, academia, media, and the public — to bring both attention and intention to how we all shape AI’s trajectory. Stakeholders must grapple with how our informational and social lives intertwine: how misleading ideas spread through social networks, how chatbots become trusted advisors, and how the quality of our social lives affects not only our emotional well-being but also our participation in public discourse and civic life.

Trust in information is fundamentally trust in sources and people. AI systems that cannot navigate both trustworthy communication and authentic human connection will fail at their most critical moments.

Partnership on AI’s Newest Area of Work: AI and Human Connection

To meet these interconnected challenges, PAI is launching a new area of work: AI and Human Connection. It will build upon PAI’s established leadership in AI and Media Integrity, and knowledge base from its Collaborations Between People and AI Systems projects, ultimately responding to the pressing question: How can AI strengthen and sustain informed and connected communities?

As researchers at Google DeepMind recently emphasized, we “must anticipate, monitor and mitigate against risks introduced by anthropomorphic AI design.” PAI’s AI and Human Connection program answers this call.

Some AI systems are built to give us information, but they’re starting to feel like friends or companions. Other AI systems are made for socializing, but they end up teaching us and shaping what we believe. To handle these changes properly, we need to design AI systems that tackle both information-sharing and connection.

“Trust in information is fundamentally trust in sources and people.”

Our work on AI and Human Connection will cultivate the interdisciplinary expertise needed to create AI that fortifies human epistemic and social communities in an age of unprecedented informational and relational complexity. Ultimately, it will address the ways that AI is changing how we connect with each other and how we learn about the world.

This effort will expand on seven years of previous work promoting AI that supports knowledge and connection. Since 2018, PAI has created and driven adoption of practical guidance that ensures AI positively impacts the trustworthiness of media and information. In particular, PAI’s Synthetic Media Framework continues to support AI practitioners and policymakers. Through long-form case studies, we also provide a venue for reflection on synthetic media developments and transparent documentation of how practitioners adopt our guidance.

These insights can support responsible development of anthropomorphic AI, too. PAI provides recommendations on fairness, documentation, disclosure, transparency, consent, and responsible and harmful uses that can be adapted to the increasingly capable AI systems of today.

What’s Next?

New Steering Committee. PAI’s AI and Media Integrity Steering Committee was integral to the creation and adoption of PAI’s Synthetic Media Framework. We build on this success through the formation of an AI and Human Connection Steering Committee, focused on advancing adoption of PAI’s Synthetic Media Framework with new technologies and on shaping the field around knowledge- and connection-affirming AI. The Steering Committee will include experts from Thorn, the ACLU, the Knight First Amendment Institute, and the CBC; the full group will be announced in late 2025.

Workshop on AI and Human Connection. In the next year, PAI will convene its first workshop on this new topic — focusing on how we can develop a comprehensive roadmap for research, policy, and technology development to ensure interactive AI systems positively affect information, communication, and human connection. The roadmap will support a framework following a similar adoption model to our Synthetic Media Framework: practitioners will implement it, civil society will use it for advocacy, and policymakers crafting norms, standards, and regulations on related topics (like those we’ve recently seen in California and New York) will reference it.

Synthetic Media Framework. The Synthetic Media Framework is foundational to our future work. We will continue to work with organizations to promote its adoption and integration into new sectors and to share its insights with policymakers around the world.

The next year is critical. Society needs a new generation of AI practitioners, researchers, and leaders who understand that media and information integrity and social trust aren’t separate problems — they’re two manifestations of the same challenge: building AI systems that help humans discern and embrace what’s real, true, authentic, and human. PAI’s AI and Human Connection Program will broaden and deepen our Partner community’s impact.

To stay up to date on PAI’s AI and Human Connection Program as we tackle these defining challenges, sign up for our newsletter. Connect with our growing community of Partners, contribute your expertise, and help us forge the path forward, together.

Let’s build a world where AI connects communities rather than fragments them, where information systems inform rather than confuse, and where digital interactions enhance rather than degrade human dignity.


