
(Shutterstock AI Generator)
Space travel has always rewarded improvisation. Astronauts often rely on quick and practical fixes with whatever is on board. A strap becomes a sling. A checklist gets rewritten on the fly. Now there is a new partner on board.
NASA and Google are testing an AI medical assistant called the Crew Medical Officer Digital Assistant (CMO-DA), built to help astronauts handle medical problems when a doctor is not on the spacecraft and talking to Earth takes too long. It is meant to be a calm and informed guide that understands spaceflight medicine and can walk a crew member from symptom to next step without waiting for Mission Control.
The aim is to make medical care in space more Earth-independent. When the radio delay stretches into minutes or contact drops out, the assistant steps in with onboard triage, step-by-step guidance, and treatment checklists. That kind of on-the-spot autonomy means safer, steadier care on the Moon, on the long ride to Mars, and during every deep-space stretch in between.
The AI assistant is multimodal, so it can take voice, text, and images and fuse them into a single picture of what is happening. Describe the pain, show a photo of a swollen ankle, ask what to do next, and it responds with structured guidance tied to known procedures. It does not replace a physician. It keeps the human in charge and offers a clear, explainable plan when the nearest clinic is a planet away.
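To make that interaction pattern concrete, here is a minimal sketch of what a multimodal query like the one described could look like using Google's Vertex AI Python SDK. The model name, project ID, image file, and prompt wording are illustrative assumptions, not details of CMO-DA itself, whose interface is not public.

```python
# Hypothetical sketch of a multimodal triage query via the Vertex AI SDK.
# Model name, project, and prompt are assumptions, not CMO-DA internals.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")  # placeholder project
model = GenerativeModel("gemini-1.5-pro")  # assumed model choice

# Fuse an image with a text description into a single request.
with open("ankle.jpg", "rb") as f:
    photo = Part.from_data(f.read(), mime_type="image/jpeg")

response = model.generate_content([
    photo,
    "Crew member reports lateral ankle pain after exercise; swelling is "
    "visible in the photo. Suggest next diagnostic steps per standard triage.",
])
print(response.text)
```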
Jim Kelly, vice president of federal sales at Google Public Sector, wrote in a blog post that Google and NASA co-developed the Crew Medical Officer Digital Assistant to aid astronauts as NASA missions venture deeper into space.
“This innovative system isn’t just about supporting space exploration; it’s about pushing the boundaries of what’s possible with AI to provide essential care in the most remote and demanding environments,” Kelly wrote. “This tool represents an important milestone for AI assisted medical care and our continued exploration of the cosmos. It holds potential for advancing space missions and could also benefit people here on Earth by providing early access to quality medical care in remote areas.”
Under the hood, the assistant runs on Google’s Vertex AI platform. NASA keeps control of the application’s code, while Google supplies the platform and tools. It is built to work with low bandwidth or no link at all, using onboard computing and cached references. Each recommendation is saved with a timestamp so crews and doctors can review what happened later.
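A minimal sketch of that offline-first, audited pattern follows, assuming nothing about NASA's actual code: the function names, cached procedures, and log format are hypothetical stand-ins.

```python
# Illustrative pattern only: fall back to onboard references when the link is
# down, and record every recommendation with a UTC timestamp for later review.
import json
from datetime import datetime, timezone

CACHED_PROCEDURES = {  # hypothetical onboard reference cache
    "ankle_injury": ["Immobilize joint", "Apply cold pack", "Reassess in 20 min"],
}

def query_remote_model(symptom_key: str) -> list[str]:
    # Stand-in for a live model call when the comm link is available.
    return CACHED_PROCEDURES[symptom_key]

def recommend(symptom_key: str, link_up: bool) -> list[str]:
    if link_up:
        steps = query_remote_model(symptom_key)
    else:
        steps = CACHED_PROCEDURES[symptom_key]  # onboard fallback, no link needed
    # Append a timestamped record so crews and doctors can audit it later.
    with open("cmo_audit.log", "a") as log:
        log.write(json.dumps({
            "utc": datetime.now(timezone.utc).isoformat(),
            "symptom": symptom_key,
            "steps": steps,
            "source": "remote" if link_up else "onboard_cache",
        }) + "\n")
    return steps

print(recommend("ankle_injury", link_up=False))
```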
Initial trials covered a wide range of medical scenarios. Outputs were measured with the Objective Structured Clinical Examination (OSCE) framework, a tool used to evaluate the clinical skills of medical students and working healthcare professionals. Doctors observed how the system gathered a history, reasoned through symptoms, and suggested care, then scored its performance. In three benchmark cases, diagnostic accuracy was 88% for an ankle injury, 80% for ear pain, and 74% for flank pain. These early results are promising, but more testing and refinement are required before the system is ready for operational use.
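For context, the arithmetic behind those figures is simple: an unweighted mean of the three reported per-case scores works out to roughly 80.7%, as the short calculation below shows. (The averaging itself is our illustration; the source reports only the three individual scores.)

```python
# Reproducing the simple arithmetic behind the reported benchmark figures.
scores = {"ankle injury": 0.88, "ear pain": 0.80, "flank pain": 0.74}
mean = sum(scores.values()) / len(scores)
print(f"mean diagnostic accuracy: {mean:.1%}")  # -> mean diagnostic accuracy: 80.7%
```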
Google and NASA are now collaborating with medical doctors to test and refine the model. The aim is to harden it for real missions by validating it across more crews and conditions and by improving performance when bandwidth is limited or contact is lost. Next steps include expanding the library of cases, bringing in data from approved onboard devices, and running high-fidelity simulations that mimic microgravity and long communication delays.

(NorthSky Films/Shutterstock)
If results in space hold up, the next hurdle on Earth is regulatory approval before any clinical rollout. Expect multi-site trials, post-deployment monitoring plans, and tight patient-safety guardrails.
The AI medical assistant could also find uses on the ground. It could support rural clinics when connectivity is weak, offering triage and step-by-step guidance that works offline and syncs later, and it could help first responders in disaster zones by standardizing checks, flagging warning signs, and pointing to safe next steps when hospitals are far away.
AI is becoming a common thread across NASA programs. Just last month, NASA demonstrated an Earth-observing satellite that uses AI to look ahead along its flight path; NASA says it can analyze images on board and repoint an instrument in under 90 seconds. By skipping cloud-covered scenes and locking onto fast events like fires, eruptions, and severe storms, it sends down more useful data with less waste. That kind of autonomy delivers quicker science decisions, smarter use of bandwidth, and missions that learn and react while they fly.
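Conceptually, that onboard decision loop might be sketched as follows. The cloud threshold, event classes, and stub functions are assumptions for illustration, not NASA's flight software.

```python
# Conceptual sketch of an onboard look-ahead loop: skip cloudy scenes,
# repoint on fast events. Thresholds and function names are hypothetical.
import time

def classify_onboard(scene: dict) -> dict:
    # Stand-in for the onboard image-analysis model.
    return {"cloud_fraction": scene.get("clouds", 0.0), "event": scene.get("event")}

def repoint_instrument(coords: tuple) -> None:
    print(f"repointing instrument to {coords}")  # placeholder pointing command

def acquisition_loop(scenes: list[dict]) -> None:
    for scene in scenes:  # scenes along the upcoming ground track
        t0 = time.monotonic()
        analysis = classify_onboard(scene)
        if analysis["cloud_fraction"] > 0.8:
            continue  # skip cloud-covered scenes to save downlink bandwidth
        if analysis["event"] in ("fire", "eruption", "severe_storm"):
            repoint_instrument(scene["coords"])  # lock onto the fast event
        # NASA's claim: analysis plus repointing completes in under 90 seconds.
        assert time.monotonic() - t0 < 90

acquisition_loop([
    {"clouds": 0.9},                                             # skipped: too cloudy
    {"clouds": 0.1, "event": "fire", "coords": (34.1, -118.3)},  # triggers a repoint
])
```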