Advanced AI News

Summit With OpenAI, Google DeepMind Reaches Bleak Agreement

By Advanced AI Editor · August 29, 2025 · 6 Mins Read


Welcome back to In the Loop, TIME’s twice-weekly newsletter about the world of AI. If you’re reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.


What to Know: The AI social contract

At a lakefront venue in Sweden earlier this month, 18 individuals from OpenAI, Google DeepMind, the U.K. AI Security Institute, the OECD, and other groups gathered for an invite-only summit. On the agenda: arriving at a consensus on the likely ways that advanced AI will impact the “social contract” between working people, governments, and corporations.

Top AI CEOs like DeepMind’s Demis Hassabis and OpenAI’s Sam Altman have recently been urging academics and governments to grapple with this issue more deeply, to better prepare the world for what they expect will be a highly disruptive economic shock. So, every day for a week—in breakout rooms and in a nightly communal sauna—these 18 experts hashed out a picture of what economic shocks might be coming down the track… and what to do about them.

Bad news — One outcome of the so-called “AGI social contract summit” was a list of four consensus statements, according to the summit’s organizers. These statements have not previously been reported. They paint a grim picture of where the world could be headed, absent significant interventions by governments and societies. “AI is likely to exacerbate increasing wealth and income inequality within countries, worsening economic conditions for many working and middle-class people and families,” the first reads. “AI will increase inequality between countries that have access to AI infrastructure and those that don’t—both in terms of access to benefits as well as ability to respond to shocks,” says the second. “Without intervention, AI-enabled inequalities may lead to the political dominance of wealthy individuals and corporations, eroding democratic institutions and increasing levels of political dissatisfaction,” the third says. And the fourth: “The encroachment of AI systems and the erosion of the value of labor could lead to the increasing disempowerment of most humans, causing a degradation in individual well-being and purpose.”

Human disempowerment — Attendees at the summit agreed that the existing social contract—in which people receive security and a stake in society in return for their labor—is in trouble due to AI, says Deric Cheng, the event’s organizer, who serves as Director of Research at the Windfall Trust, a non-profit founded this year to grapple with these issues. “We’re essentially worried that labor will be disempowered relative to corporations, and also to some degree that governments might be disempowered relative to corporations,” Cheng says. “The obvious result of lower labor power is decreased real wages.” This view holds that people in wealthy democracies enjoy a high standard of living not due to their rights enshrined on paper—but due to their ability to withhold their labor. Remove labor from that equation, and standards of living are vulnerable to going down, even if overall GDP or productivity statistics rise.

Ways forward — Without intervention by governments, attendees agreed, the default path of advanced AI would likely result in bad economic outcomes for the average person. But fortunately, they also identified several possible actions that governments could take to push things in a better direction, Cheng says. For example: developing new institutions, in the vein of the IMF, to ensure that wealth derived from AI is distributed globally, rather than within the one or two powerful countries where AI companies are located. States could also run pilots today, Cheng says, for policies like basic income and reduced working weeks, to gather evidence about what kinds of safety nets are effective.

Google DeepMind declined to comment on the consensus statements that arose from the summit. OpenAI did not respond to a request for comment.


Who to Know: U.S. District Judge Amit Mehta

Last year, Federal District Judge Amit Mehta ruled that Google had illegally maintained a monopoly over online search and ads. This week, he is expected to announce the court’s decision on what to do about it—a ruling that could range from making Google share data with rivals, to forcing a breakup of the search giant itself.

Payments to rivals — The U.S. Department of Justice’s case against Google revolved around the multibillion dollar yearly payments that Google made to Apple in order to secure Google as the default search engine on iPhones. Observers expect the court to, at a minimum, place limits on these kinds of payments, which Mehta ruled were anticompetitive.

Spinning off Chrome — Another possibility is that Mehta could order Google to sell Chrome, the most popular browser in the world, with a 67% market share. Chrome allows Google to collect intricate data about users’ browsing patterns that shore up its dominance of the search and ad space. Any of Google’s competitors would no doubt jump at the chance to buy the world’s top browser, given the opportunity it affords to point users toward their LLM of choice.

Sharing user data — The data that Google collects on its users is part of the secret sauce of its search engine. Mehta could rule that Google must share this data with competitors—perhaps in an anonymized form, to ward off accusations of privacy violations.

AI in Action

The public trusts AI chatbots more than companies or community leaders, according to polling of users in 68 countries carried out by the Collective Intelligence Project.

More than half (56.6%) of people trust AI chatbots, the polling found. That’s higher than the AI companies that make them (34.6%) or even faith and community leaders (44.2%).

More than one in 10 people (14.9%) are using AI for emotional support on a daily basis, the survey found. And 30% of people have “at some point thought their AI chatbot might be self-aware.”

And 56% of people polled said that the proliferation of AI across society was likely to worsen access to good jobs.

As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at: intheloop@time.com

What We’re Reading

The Race for Artificial General Intelligence Poses New Risks to an Unstable World, by Billy Perrigo in TIME

A shameless plug for my own story here. Earlier this year I traveled to Paris to sit in on a fascinating exercise: a simulated war-game, where four teams played out the impact of advanced AI on geopolitics. It was sort of like watching a game of Dungeons and Dragons, except the players were former government officials and AI researchers—and the game board was planet Earth. I use the war-game as a jumping-off point in the story to explore how Artificial General Intelligence has become an increasingly salient dimension of great power competition between the U.S. and China. I hope you’ll give it a read, and let me know what you think!


