The ethics layer | IBM

By Advanced AI Editor | July 1, 2025

Most of the attention in AI today is focused on output—what a model generates, how accurate or convincing it is, how well it performs against benchmarks. But for Hagerty, the real ethical tension begins earlier, at the foundation model level. This is the raw infrastructure of modern AI, the base layer of machine learning trained on vast datasets scraped from the web. It is what fuels large language models (LLMs) like ChatGPT and Claude.

“The foundation is where it happens,” Hagerty told me. “That is the first thing the system learns, and if it is full of junk, that junk does not go away.”

These base models are designed to be general-purpose. That is what makes them both powerful and dangerous, Hagerty said. Because they are not built with specific tasks or constraints in mind, they tend to absorb everything, from valuable semantic structures to toxic internet sludge. And once trained, the models are hard to audit. Even their creators often cannot say for sure what a model knows or how it will respond to a given prompt.

Hagerty compared this to pouring a flawed concrete base for a skyscraper. If the mix is wrong from the start, you might not see cracks immediately. But over time, the structure becomes unstable. In AI, the equivalent is brittle behavior, unintended bias or catastrophic misuse once a system is deployed. Without careful shaping early on, a model carries the risks it absorbed during training into every downstream application.

He is not alone in this concern. Researchers from Stanford’s Center for Research on Foundation Models (CRFM) have repeatedly warned about the emergent risks of large-scale training, including bias propagation, knowledge hallucination, data contamination and the difficulty of pinpointing failures. These problems can be mitigated but not eliminated, which makes early design choices, such as data curation, filtering and governance, all the more critical.
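To make “filtering” concrete, here is a minimal sketch of the kind of pre-training data curation step such design choices imply. The field names, blocklist, and thresholds are illustrative assumptions, not any lab’s actual pipeline.

```python
BLOCKLIST = {"spam-domain.example", "content-farm.example"}  # hypothetical low-quality sources

def keep_document(doc: dict) -> bool:
    """Return True if a scraped document passes basic curation checks."""
    text = doc.get("text", "")
    # Drop documents from sources already flagged as low quality.
    if doc.get("source_domain") in BLOCKLIST:
        return False
    # Drop near-empty pages (crude length heuristic).
    if len(text.split()) < 50:
        return False
    # Drop documents that a (hypothetical) upstream toxicity classifier scored too high.
    if doc.get("toxicity_score", 0.0) > 0.8:
        return False
    return True

def curate(corpus):
    """Yield only the documents that survive every filter."""
    for doc in corpus:
        if keep_document(doc):
            yield doc
```

Filters like these reduce, but do not eliminate, the “junk” Hagerty describes, which is why he frames them as an ethics question rather than a purely technical one.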

As Hagerty sees it, one of the biggest ethical barriers to meaningful progress is the sheer vagueness of what companies mean when they say "AI." Ask five product teams what they mean by "AI-powered," and you will likely get five different answers. Hagerty views this definitional slipperiness as one of the core ethical failures of the current era.

“Most of the time, when people say AI, they mean automation. Or a decision tree. Or an if/else statement,” he said.

The lack of clarity around terms is not an academic quibble. When companies present deterministic software as intelligent reasoning, users tend to trust it. When startups pitch basic search and filter tools as generative models, investors throw money at mirages. Hagerty refers to this as “hype leakage” and sees it as a growing source of confusion and reputational damage.
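To illustrate the gap, here is a hypothetical sketch of the kind of deterministic logic that sometimes ships under an "AI-powered" label. The domain, names, and thresholds are invented for illustration; nothing in it is learned from data.

```python
def risk_score(income: float, missed_payments: int) -> str:
    """A plain if/else 'decision engine' of the kind often marketed as AI."""
    if missed_payments > 2:
        return "high"
    if income < 30_000:
        return "medium"
    return "low"

# A user told "our AI evaluates your application" cannot tell this apart
# from a learned model, yet it only encodes its author's hand-written rules.
print(risk_score(income=25_000, missed_payments=0))  # -> medium
```

The point is not that such rules are bad engineering, but that calling them intelligent reasoning changes how much trust and authority users grant them.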

In regulated industries like finance or healthcare, the consequences can be more severe. If a user is misled into thinking a system has a more profound awareness than it does, they may delegate decisions that should have remained human. The line between tool and agent becomes blurred, and with it, accountability.

This problem also leads to wasted effort. Hagerty cited recent research on the misuse of LLMs for time-series forecasting, the task of predicting future values from historical data, where classical statistical methods remain more accurate and efficient. Yet some companies reach for LLMs anyway, chasing novelty or signaling innovation.
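For comparison, here is a minimal sketch of one such classical baseline, a seasonal-naive forecast that simply repeats the last observed seasonal cycle. The synthetic series and parameters are assumptions for illustration, not the research Hagerty cites.

```python
import numpy as np

rng = np.random.default_rng(0)
period = 12
# Synthetic monthly series: a seasonal pattern plus noise.
history = 10 * np.sin(2 * np.pi * np.arange(120) / period) + rng.normal(0, 1, 120)

def seasonal_naive(series: np.ndarray, horizon: int, period: int) -> np.ndarray:
    """Forecast by repeating the most recent full seasonal cycle."""
    last_cycle = series[-period:]
    reps = int(np.ceil(horizon / period))
    return np.tile(last_cycle, reps)[:horizon]

print(seasonal_naive(history, horizon=12, period=period).round(2))
```

A baseline like this runs in a fraction of a second on a CPU, which is the contrast behind Hagerty's point about wasted compute.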

“You are burning GPUs to get bad answers,” he said. “And worse, you are calling it progress.”

The ethical issue is not just inefficiency. It is misrepresentation. Teams build products around technology they barely understand, wrap them in marketing that overstates their capabilities, and deploy them to users who have no way to evaluate what they are using.


