
The ghost in the machine

By Advanced AI Editor | October 6, 2025


A representational image of a robot hand showing a DNA sample. — Unsplash/File

It began, as many lessons do, with an ache that would not go away. Last summer, my father grimaced over a tooth that had already been through the wars: a root canal, a post, a crown. The local dentist glanced at the periapical radiograph, did not blink and prescribed a definitive remedy: extraction and an implant. There was a tidy logic to it, out with the old and in with the titanium, but something about the speed of that certainty unsettled us.

“Send me the X-ray”, my brother said. He works at Google DeepMind, a job description that has turned the family group chat into a peculiar salon where philosophy meets machine learning. He ran the image through an AI diagnostic model and, within minutes, returned a report that was startling not for its brilliance but for its clarity. It highlighted a likely apical radiolucency, an infection at the root tip, and set out three options in plain English. Under each pathway, it listed pros and cons, recovery expectations and the ordinary trade-offs that make medicine as much art as science.

Armed with that printout, my father sought a second opinion from an endodontist. The specialist chose the conservative route, a retreatment through the crown. The tooth was saved. The first dentist had not mentioned that possibility; the machine had.

That episode did not prove that AI is better than doctors. It did something subtler and, to me, more profound. It showed the contours of good medicine: the collation of knowledge, the tempering effect of experience, a disciplined attention to uncertainty and the moral instinct to preserve what can be preserved. In other words, what makes a great doctor is the marriage of knowledge with judgment. At its best, AI is an amplifier of knowledge, the distilled experience of thousands of clinicians held in a model’s weights, offered humbly as a second pair of eyes.

This realisation became the spine of my Extended Project Qualification, a standalone, five-thousand-word independent research project that students complete alongside their A levels in the UK. I set out to ask not only whether AI could match or surpass clinicians in discrete diagnostic tasks, but whether it could help us predict and prevent disease earlier, and whether that future would feel like progress or peril.

My research began with unbridled optimism. The evidence of AI’s potential is difficult to dismiss. Algorithms developed by teams at Google DeepMind and elsewhere can detect diabetic retinopathy, a leading cause of blindness, with accuracy that matches or exceeds trained ophthalmologists. In oncology, models can analyse histopathology slides and identify cancerous cells with a speed and consistency no human pathologist could hope to maintain. The power of these tools lies in their ability to see beyond the limits of human perception.

This is the seductive promise of AI in medicine. It offers to augment the physician, to strip away the cognitive drudgery of routine diagnostics, and to let doctors focus on what truly matters, the patient. An X-ray sees the body; medicine must still see the person. In that vision, technology makes healthcare more human, not less.

As my research progressed, doubt crept in. A question intruded on the triumphalism. If we hand over the responsibility of diagnosis, the cornerstone of medical practice, to an algorithm, what becomes of our own skills? As we grow more reliant on the machine’s flawless memory and inhuman pattern recognition, what happens to the physician’s cognitive muscles?

Here lies a paradox of our age. The tools we build to enhance our capabilities can, over time, erode them. Biology offers its reminders. Organs and abilities that no longer serve a purpose atrophy and disappear. Our bodies carry relics of lost functions, an appendix that speaks to a diet we no longer have, a tailbone that nods to a primate ancestry. Culture tells the same story. Satellite navigation chips away at our mental maps. Calculators turn long division into a vague recollection. Spellcheck pushes orthography into the background. A tool begins as an aid, becomes an exoskeleton and can harden into a cage.

Medicine is uniquely vulnerable because its competence is embodied. Pattern recognition, the subtle shading that separates scar tissue from malignancy, the cadence of a patient’s breathing that speaks of trouble long before a blood gas does, is not just propositional knowledge. It is craft memory. It is learned by doing, reinforced by consequences and kept honest by the felt weight of responsibility. If AI systems take over the routine triage, we may narrow the apprenticeship. The chance to make a thousand small calls and to learn from the hundred we get wrong could wither. One generation later, the baseline of human skill is lower. Because the baseline is lower, the tool seems more indispensable. The spiral is tidy until it is not.

This is not an argument for rejecting AI. My father’s X-ray taught me that the right machine, at the right moment, can expand options and restore the primacy of judgment. The model did not decide. It organised possibilities and made us better questioners. It gave the endodontist a clean map and left the driving to her.

The task, then, is to design a future in which human skill and machine competence grow together without the human side withering. That requires a different way of thinking about trust and training.

Trust should begin with accountable augmentation. The clinician remains the locus of responsibility. The AI is a tool with documented performance, calibrated probabilities and intelligible uncertainty. When the model is unsure, it should say so. When a recommendation hinges on a threshold, it should expose that threshold and the consequences of moving it.

Training should guard against deskilling. Juniors need periods of AI-free practice, reading images and seeing patients without automated prompts, followed by structured comparison with the model. Teams should study cases where the model erred or where automation bias misled a clinician, so that everyone rehearses the central question: when should I distrust the tool? Models should earn graduated autonomy. Early on, they can highlight low-salience findings and remain quiet on the obvious. As clinicians grow, the model’s commentary can widen, always with the volume under human control. Periodic skill drills, the medical equivalent of the pilot’s simulator, can keep muscles warm. Clinicians commit to decisions with the AI masked, receive feedback, then repeat.

Some will argue that such rituals slow us down in an overstretched system, that we cannot afford manual mode. I would argue the opposite. The rare crisis, when the distribution shifts, when a scanner update degrades calibration, when a new presentation breaks learned patterns, will be paid for in missed cancers and late recognitions. Teams that keep their muscles warm can smell smoke before the alarm notices heat.

My journey started with an X-ray, with the unsettling comparison between a human mind and an artificial one. It has led me to a place of profound ambivalence. I am awed by the potential of artificial intelligence to revolutionise medicine and alleviate human suffering. But I am also deeply concerned that in our rush to embrace this future, we may inadvertently de-skill ourselves, losing a part of what makes a doctor a healer, and not just a technician. The ghost in the machine is not the AI itself, but the spectre of our own diminishing expertise. The task ahead is to ensure that as the machines get smarter, they make us smarter too, not just more reliant.

We risk creating a generation of medical practitioners who are brilliant operators of technology but have lost the fundamental craft of medicine. The challenge is not to reject this powerful new technology. That would be as foolish as rejecting the stethoscope or the microscope. The future is not a battle of Man versus Machine. Rather, we must fundamentally redesign our approach to medical education and practice to integrate AI as a partner, not a replacement. We must cultivate a healthy scepticism, teaching doctors to use AI as a brilliant but fallible consultant. We must treat our own cognitive skills as the precious resources they are, continuing to exercise them even when a machine offers an easier path.

My father smiles when he chews now. To him, it was a simple mercy that his own tooth could be saved. To me, it was a curriculum. Medicine at its best is a conversation, between past and present knowledge, between probabilities and persons, between what we can do and what we should do. AI deserves a seat at that table. But it must not be allowed to take the chair.

The writer is a freelance contributor. He can be reached at: ansaarreshi@icloud.com


