Imaging tech promises deepest looks yet into living brain tissue at single-cell resolution

By Advanced AI Editor | August 7, 2025

NADH photoacoustic image of cerebral organoids and brain slice. Credit: Light: Science & Applications (2025). DOI: 10.1038/s41377-025-01895-x

For both research and medical purposes, researchers have spent decades pushing the limits of microscopy to produce ever deeper and sharper images of brain activity, not only in the cortex but also in regions underneath, such as the hippocampus. In a new study, a team of MIT scientists and engineers demonstrates a new microscope system capable of peering exceptionally deep into brain tissues to detect the molecular activity of individual cells by using sound.

“The major advance here is to enable us to image deeper at single-cell resolution,” said neuroscientist Mriganka Sur, a corresponding author along with mechanical engineering Professor Peter So and principal research scientist Brian Anthony. Sur is the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.

In the journal Light: Science & Applications, the team demonstrates that it could detect NAD(P)H, a molecule tightly associated with cell metabolism in general, and with electrical activity in neurons in particular, all the way through samples such as a 1.1 mm thick “cerebral organoid,” a 3D mini-brain-like tissue grown from human stem cells, and a 0.7 mm thick slice of mouse brain tissue.

In fact, co-lead author and mechanical engineering postdoc W. David Lee, who conceived the microscope’s innovative design, said the system could have peered far deeper, but the test samples weren’t big enough to demonstrate that.

“That’s when we hit the glass on the other side,” he said. “I think we’re pretty confident about going deeper.”

Still, a depth of 1.1 mm is more than five times deeper than other microscope technologies can resolve NAD(P)H within dense brain tissue. The new system achieved that depth and sharpness by combining several advanced technologies to precisely and efficiently excite the molecule and then detect the resulting energy, all without adding any external labels, whether via added chemicals or genetically engineered fluorescence.

A picture of MIT’s new multiphoton, photoacoustic, label-free microscope system. Credit: Tatsuya Osaki/MIT Picower Institute

Rather than focusing the required NAD(P)H excitation energy on a neuron with near-ultraviolet light at the molecule’s normal peak absorption, the scope accomplishes the excitation by focusing an intense, extremely short burst of light (a quadrillionth of a second long) at three times the normal absorption wavelength. Such “three-photon” excitation penetrates deep into tissue with less scattering because of the longer wavelength of the light (“like fog lamps,” Sur said).
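
To make the energetics concrete, here is a rough back-of-envelope sketch in Python. It is not from the article; it assumes the commonly cited ~340 nm one-photon absorption peak of NAD(P)H, which puts three-photon excitation near 1,020 nm:

# Back-of-envelope check of three-photon excitation energetics.
# Assumption: NAD(P)H one-photon absorption peaks near 340 nm (a common
# literature value, not stated in the article above).

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of one photon at the given wavelength, in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

lam_one_photon = 340.0                 # assumed near-UV absorption peak, nm
lam_three_photon = 3 * lam_one_photon  # 1020 nm, in the infrared

print(photon_energy_ev(lam_one_photon))        # ~3.65 eV: one UV photon
print(photon_energy_ev(lam_three_photon))      # ~1.22 eV: one IR photon
print(3 * photon_energy_ev(lam_three_photon))  # ~3.65 eV: three IR photons
# Three photons at triple the wavelength deliver the same total energy as
# one near-UV photon, but the longer-wavelength light scatters far less
# in brain tissue, which is what lets the excitation reach deeper.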

Meanwhile, though the excitation produces a weak fluorescent signal from NAD(P)H, most of the absorbed energy produces a localized (~10 micron) thermal expansion within the cell, which generates sound waves that travel through tissue relatively easily compared to the fluorescence emission. A sensitive ultrasound microphone in the microscope detects those waves and, with enough sound data, software turns them into high-resolution images (much as a sonogram does). Imaging performed this way is called “three-photon photoacoustic imaging.”
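
The article doesn’t specify the team’s reconstruction software, but delay-and-sum beamforming is a standard way photoacoustic systems turn recorded pressure traces into an image: each pixel accumulates the sensor samples that arrive at the acoustic time of flight from that pixel. Below is a minimal sketch under simplified assumptions (point detectors, uniform speed of sound, hypothetical array shapes), not the authors’ actual pipeline:

# Toy delay-and-sum photoacoustic reconstruction (illustrative only; the
# authors' actual signal-processing pipeline is not described above).
import numpy as np

def delay_and_sum(signals, sensor_xy, pixel_xy, fs, c=1500.0):
    """
    signals   : (n_sensors, n_samples) recorded pressure traces
    sensor_xy : (n_sensors, 2) sensor positions, meters
    pixel_xy  : (n_pixels, 2) image-grid positions, meters
    fs        : sampling rate, Hz
    c         : speed of sound in soft tissue, ~1500 m/s
    Returns a (n_pixels,) array of reconstructed amplitudes.
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(pixel_xy))
    for s in range(n_sensors):
        # Acoustic time of flight from every pixel to this sensor,
        # converted to a sample index in the recorded trace.
        dist = np.linalg.norm(pixel_xy - sensor_xy[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)
        ok = idx < n_samples
        image[ok] += signals[s, idx[ok]]
    return image / n_sensors

In practice the raw traces would typically also be bandpass-filtered and envelope-detected before summation; this sketch shows only the geometric core of the idea.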

“We merged all these techniques—three-photon, label-free, photoacoustic detection,” said co-lead author Tatsuya Osaki, a research scientist in The Picower Institute in Sur’s lab. “We integrated all these cutting-edge techniques into one process to establish this ‘Multiphoton-In and Acoustic-Out’ platform.”

Lee and Osaki, together with research scientist Xiang Zhang and postdoc Rebecca Zubajlo, led the study, in which the team demonstrated reliable detection of the sound signal through the samples. So far, the team has produced visual images from the sound at various depths as it refines its signal processing.

In the study, the team also shows simultaneous “third-harmonic generation” imaging, which comes from the three-photon stimulation and finely renders cellular structures, alongside the photoacoustic imaging, which detects NAD(P)H. The researchers also note that their photoacoustic method could detect other molecules, such as the genetically encoded calcium indicator GCaMP, which neuroscientists use to report neural electrical activity.

In this edited version of a figure from the research, NAD(P)H molecules within cells in a cerebral organoid are detected photoacoustically (in blue, left) and optically (black and white, right). Image depth: 0.2 mm. Credit: MIT

Alzheimer’s and other applications

With the concept of label-free multiphoton photoacoustic microscopy (LF-MP-PAM) established in the paper, the team is now looking ahead to neuroscience and clinical applications.

Lee has already established that NAD(P)H imaging can inform wound care. In the brain, levels of the molecule are known to vary in conditions such as Alzheimer’s disease, Rett syndrome, and seizures, making it a potentially valuable biomarker. Because the new system is label-free (i.e., no added chemicals or altered genes), it could be used in humans, for instance during brain surgeries.

The next step for the team is to demonstrate the technique in a living animal, rather than only in in vitro and ex vivo tissues. The technical challenge is that the microphone can no longer sit on the opposite side of the sample from the light source, as it did in the current study; it has to be on top, just like the light source.

Lee said he expects that full imaging at depths of 2 mm in live brains is entirely feasible given the results in the new study.

“In principle it should work,” he said.

Mercedes Balcells and Elazer Edelman are also authors of the paper.

More information:
Tatsuya Osaki et al., Multi-photon, label-free photoacoustic and optical imaging of NADH in brain cells, Light: Science & Applications (2025). DOI: 10.1038/s41377-025-01895-x

Provided by
Massachusetts Institute of Technology

Citation:
Imaging tech promises deepest looks yet into living brain tissue at single-cell resolution (2025, August 7)
retrieved 7 August 2025
from https://medicalxpress.com/news/2025-08-imaging-tech-deepest-brain-tissue.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


