
The ‘father of the internet’ and hundreds of tech experts worry we’ll rely on AI too much

By Advanced AI Bot | April 2, 2025 | 6 min read


While the top minds in artificial intelligence are racing to make the technology think more like humans, researchers at Elon University have asked the opposite question: How will AI change the way humans think?

The answer comes with a grim warning: Many tech experts worry that AI will make people worse at skills core to being human, such as empathy and deep thinking.

“I fear — for the time being — that while there will be a growing minority benefitting ever more significantly with these tools, most people will continue to give up agency, creativity, decision-making and other vital skills to these still-primitive AIs,” futurist John Smart wrote in an essay submitted for the university’s nearly 300-page report, titled “The Future of Being Human,” which was provided exclusively to CNN ahead of its publication Wednesday.

The concerns come amid an ongoing race to accelerate AI development and adoption that has attracted billions of dollars in investment, along with both skepticism and support from governments around the world. Tech giants are staking their businesses on the belief that AI will change how we do everything — working, communicating, searching for information — and companies like Google, Microsoft and Meta are racing to build “AI agents” that can perform tasks on a person’s behalf. But experts warn in the report that such advancements could make people too reliant on AI in the future.

Already, the proliferation of AI has raised big questions about how humans will adapt to this latest technology wave, including whether it could lead to job losses or generate dangerous misinformation. The Elon University report further calls into question promises from tech giants that the value of AI will be in automating rote, menial tasks so that humans can spend more time on complex, creative pursuits.

Wednesday’s report also follows research published this year by Microsoft and Carnegie Mellon University that suggested using generative AI tools could negatively impact critical thinking skills.

‘Fundamental, revolutionary change’

Elon University researchers surveyed 301 tech leaders, analysts and academics, including Vint Cerf, one of the “fathers of the internet” and now a Google vice president; Jonathan Grudin, University of Washington Information School professor and former longtime Microsoft researcher and project manager; former Aspen Institute executive vice president Charlie Firestone; and tech futurist and Futuremade CEO Tracey Follows. Nearly 200 of the respondents wrote full-length essay responses for the report.

More than 60% of the respondents said they expect AI will change human capabilities in a “deep and meaningful” or “fundamental, revolutionary” way over the next 10 years. Half said they expect AI will create changes to humanity for the better and the worse in equal measure, while 23% said the changes will be mostly for the worse. Just 16% said changes will be mostly for the better (the remainder said they didn’t know or expected little change overall).

The respondents also predicted that AI will cause “mostly negative” changes to 12 human traits by 2035, including social and emotional intelligence, the capacity and willingness to think deeply, empathy, the application of moral judgment, and mental well-being.

Human capacity in those areas could worsen if people increasingly turn to AI for help with tasks such as research and relationship-building for convenience’s sake, the report claims. And a decline in those and other key skills could have troubling implications for human society, such as “widening polarization, broadening inequities and diminishing human agency,” the researchers wrote.

The report’s contributors expect just three areas to see mostly positive change: curiosity and capacity to learn, decision-making and problem-solving, and innovative thinking and creativity. Even among the tools available today, programs that generate artwork and solve coding problems are among the most popular. And many experts believe that while AI could replace some human jobs, it could also create new categories of work that don’t yet exist.

The evolution of AI

Many of the concerns detailed in the report relate to how tech leaders predict people will incorporate AI into their daily lives by 2035.

Cerf said he expects humans will soon rely on AI agents, which are digital helpers that could independently do everything from taking notes during a meeting to making dinner reservations, negotiating complex business contracts or writing code. Tech companies are already rolling out early AI agent offerings — Amazon says its revamped Alexa voice assistant can order your groceries, and Meta is letting businesses create AI customer service agents to answer questions on its social media platforms.

Such tools could save people time and energy on everyday tasks while aiding fields like medical research. But Cerf also worries about humans becoming “increasingly technologically dependent” on systems that can fail or get things wrong.

“You can also anticipate some fragility in all of this. For example, none of this stuff works without electricity, right?” Cerf said in an interview with CNN. “These heavy dependencies are wonderful when they work, and when they don’t work, they can be potentially quite hazardous.”

Cerf stressed the importance of tools that help differentiate humans from AI bots online, and of transparency around the effectiveness of highly autonomous AI tools. He urged companies that build AI models to keep “audit trails” that would let them interrogate when and why their tools get things wrong.

Futuremade’s Follows told CNN that she expects humans’ interactions with AI to move beyond the screens where people generally talk to AI chatbots today. Instead, AI technology will be integrated into devices such as wearables, as well as into buildings and homes, where humans can simply ask questions out loud.

But with that ease of access, humans may begin outsourcing empathy to AI agents.

“AI may take over acts of kindness, emotional support, caregiving and charity fundraising,” Follows wrote in her essay. She added that “humans may form emotional attachments to AI personas and influencers,” raising “concerns about whether authentic, reciprocal relationships will be sidelined in favor of more predictable, controllable digital connection.”

Humans have already begun to form relationships with AI chatbots, to mixed effect. Some people have, for example, created AI replicas of deceased loved ones to seek closure, but parents of young people have also taken legal action after they say their children were harmed by relationships with AI chatbots.

Still, experts say people have time to curb some of the worst potential outcomes of AI through regulation, digital literacy training and simply prioritizing human relationships.

Richard Reisman, nonresident senior fellow at the Foundation for American Innovation, said in the report that the next decade marks a tipping point in whether AI “augments humanity or de-augments it.”

“We are now being driven in the wrong direction by the dominating power of the ‘tech-industrial complex,’ but we still have a chance to right that,” Reisman wrote.
