
Artificial intelligence is steadily transforming healthcare. Chatbots now help triage symptoms, and algorithms are getting better at spotting anomalies in scans. Each new system nudges the boundaries of what machines can do. But for all that progress, one tool has barely changed in over 200 years: the stethoscope. It’s still the first thing a doctor reaches for to listen to a patient’s heart and lungs. Now, with AI in the mix, the stethoscope is getting a lot smarter.
At least, that’s what’s being claimed by researchers at Imperial College London and Imperial College Healthcare NHS Trust, who are leading the TRICORDER study. They are testing the Eko DUO, an AI-powered stethoscope that promises to detect heart failure, atrial fibrillation, and valve disease in just 15 seconds.
As the clinician listens, the stethoscope records heart sounds and a single-lead ECG. That data is sent to the cloud, where machine learning models analyze it for signs of serious heart conditions. These models are trained to recognize subtle patterns that may be missed during a routine physical exam.
A few seconds later, the results are returned to the clinician’s phone or computer. If something appears abnormal, it is flagged for follow-up. It’s worth noting that the device is not designed to make clinical decisions on its own. It is there to support the doctor by adding another layer of valuable data.
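For readers who want a concrete picture of that loop, here is a minimal sketch in Python of how a record-and-flag workflow like this could be wired up. Every name in it (the recorder object, the endpoint URL, the response fields, the 0.5 threshold) is a hypothetical illustration; neither Eko nor the study has published the actual device API.

```python
import requests

# Hypothetical cloud endpoint; the real service's URL and schema are not public.
ANALYZE_URL = "https://cardiac-ai.example.com/v1/analyze"

def screen_patient(recorder, patient_id: str, api_key: str) -> dict:
    """Record ~15 seconds of heart sounds plus a single-lead ECG,
    send both to a cloud model, and return any flags for follow-up."""
    # Capture the two signals gathered during a routine auscultation.
    audio = recorder.record_heart_sounds(seconds=15)   # hypothetical: list of samples
    ecg = recorder.record_single_lead_ecg(seconds=15)  # hypothetical: list of samples

    # Ship the recording to the cloud, where trained models score it.
    response = requests.post(
        ANALYZE_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"patient_id": patient_id, "audio": audio, "ecg": ecg},
        timeout=30,
    )
    response.raise_for_status()
    scores = response.json()["risk_scores"]
    # e.g. {"heart_failure": 0.82, "atrial_fibrillation": 0.10, "valve_disease": 0.05}

    # The system only flags possible problems for the clinician; it does not diagnose.
    flags = {condition: p for condition, p in scores.items() if p >= 0.5}
    return {"needs_follow_up": bool(flags), "flags": flags}
```

The 0.5 cutoff is arbitrary here; in a deployed screening tool the operating point would be tuned to trade sensitivity against the cost of false referrals.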

The device itself is built on the Eko DUO, a digital stethoscope created by Eko Health, a medical technology company based in California. The team at Imperial, however, is running the study independently to find out whether the technology actually improves care; Eko is not involved in the research. The study is focused on whether a tool like this can help doctors catch heart problems earlier and more reliably during routine appointments.
If this tool delivers on its promise, it could genuinely change how heart disease is caught and treated, which is why it is being hailed as a potential game-changer for outpatient care.
Right now, many people only find out something is wrong when it is already serious. Heart failure and valve disease often go undiagnosed until symptoms can no longer be ignored. By that stage, treatment becomes more complicated and the risk of long-term damage is much higher.
A quick listen during a routine visit could offer something different. An early warning enables faster decisions, and that can make a huge difference in patient care. For doctors, it could deliver crucial insights without adding time or burden to already packed appointments.
“The study is designed to address the unacceptable reality that cardiovascular disease, heart failure particularly, is most frequently detected at a late stage, after disease progression precipitates a hospital admission,” wrote the authors of the study.
What makes this system stand out is how the AI behind it has been trained. The models are not working from textbook definitions alone. They have been trained on thousands of real patient cases, including examples with complex, overlapping conditions. That helps the stethoscope recognize patterns that a doctor might miss in a fast-moving appointment. Rather than offering a final diagnosis, the system highlights areas of concern that may deserve a closer look. It acts like a second pair of ears, but one that has been trained on massive amounts of cardiac data.
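In machine learning terms, that second pair of ears is a supervised classifier fitted to labeled recordings. The sketch below shows the general shape of such a model in PyTorch, with a spectrogram of heart sounds going in and per-condition scores coming out. It is an illustration of the technique only; the architecture, features, and labels of the actual models have not been disclosed.

```python
import torch
import torch.nn as nn

class HeartSoundClassifier(nn.Module):
    """Toy multi-label model: spectrogram in, one logit per condition out."""
    def __init__(self, n_conditions: int = 3):  # heart failure, AF, valve disease
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool over time and frequency
        )
        self.head = nn.Linear(32, n_conditions)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(spectrogram).flatten(1))

# Multi-label training step: a single patient can have overlapping conditions,
# so each condition gets its own sigmoid output rather than one shared softmax.
model = HeartSoundClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

spectrograms = torch.randn(8, 1, 64, 128)     # stand-in batch of recordings
labels = torch.randint(0, 2, (8, 3)).float()  # stand-in per-condition labels

optimizer.zero_grad()
loss = loss_fn(model(spectrograms), labels)
loss.backward()
optimizer.step()
```

The multi-label setup is the one detail motivated directly by the article: because the training cases include complex, overlapping conditions, each condition is scored independently instead of forcing a single diagnosis.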

Early results from the TRICORDER study suggest the AI stethoscope could reshape frontline heart care. Among more than 12,000 patients tested across hundreds of GP practices, those examined with the tool were more than twice as likely to be diagnosed with heart failure as comparable patients seen without it, nearly twice as likely to be diagnosed with valve disease, and 3.5 times more likely to be diagnosed with atrial fibrillation.
The results are impressive; however, the researchers acknowledged some key limitations. Since the AI stethoscope was introduced across a broad mix of clinics with no fixed rules for how often it had to be used, the level of adoption varied. Some doctors used it regularly, others barely at all. That inconsistency made it harder to draw direct lines between the tool and the diagnoses. Instead of collecting fresh clinical data, the team relied on existing patient records, which helped with scale but made it tougher to evaluate subtle or complex cases.
There were also limitations tied to the AI itself. The researchers could not directly measure how well the algorithms performed in edge cases or in patients with overlapping conditions. Because of the way the data was collected, they could not always see whether doctors trusted the AI’s suggestions or simply ignored them.
Since most clinical records lack detailed labeling, the system’s ability to distinguish between different types of heart failure, for example, remains unclear. These gaps will require deeper follow-up studies to understand where the AI tool truly shines and where it still falls short.