Leveraging AI for Safer and More Efficient Healthcare Delivery
"We can use technology to empower our providers. But if people do not adopt it and use it, we are never going to achieve what we are looking for."
Building AI where the stakes are life and death
When Ashley Beecy (MD, FACC, Medical Director, AI Operations, New York Presbyterian Hospital) stepped onto the stage, she did not open with futuristic sci-fi or glossy product demos. Instead, she started with an uncomfortable truth. In healthcare, there is an explosion of machine learning research but only a trickle of real, approved products reaching patients.
The numbers tell the story. By 2024, more than forty thousand machine learning papers had been published in healthcare. Yet only around a thousand AI-based models had been cleared by the US Food and Drug Administration. That gap between ideas and impact is where Ashley lives her professional life, as Medical Director of AI Operations at New York Presbyterian Hospital.
Her talk was not about hype. It was about the hard work of turning AI from a promising paper into something a tired clinician will actually trust and use at two in the morning.
Asking better questions, not just building better models
Ashley starts with a deceptively simple question. Are we solving the right problems for the people who actually deliver care?
Healthcare systems are drowning in priorities, but some are impossible to ignore. Clinicians spend hours of “pajama time” at night, finishing documentation in electronic health records. Chronic diseases like heart failure consume staggering amounts of time, money, and emotional energy.
If an AI product is going to matter, it has to meet these realities head-on. That means:

- Understanding the current workflow. Are you actually improving the standard of care, or adding another screen and another click to an already overloaded day?
- Identifying the true stakeholders. There are the people who want the product, such as quality leaders or executives, and the people who have to use it, such as doctors, nurses, and physician assistants. If both groups are not involved in the design, the product will stall.
- Testing whether AI is even needed. Sometimes a rule-based system or a simple risk score does the job just fine, as the sketch below illustrates.

Ashley is very clear on this. In a hospital, the goal is not to use AI for its own sake. The goal is to improve care safely and efficiently.
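To make that last point concrete, here is a toy example of the kind of rule-based baseline Ashley has in mind. Every threshold and weight below is invented for illustration; a real score would come from clinical guidelines or published evidence, not from this sketch.

```python
# Minimal sketch of the "do we even need AI?" baseline: a transparent,
# additive risk score. All thresholds and weights are hypothetical.
def simple_hf_risk_score(age: int, ef_percent: float, prior_admissions: int) -> int:
    """Toy additive score for heart-failure risk stratification."""
    score = 0
    if age >= 75:
        score += 2
    if ef_percent < 40:                # reduced ejection fraction
        score += 3
    score += min(prior_admissions, 3)  # cap the contribution at 3 points
    return score

# If a baseline like this already triages patients well enough, a machine
# learning model may not be worth its added complexity and risk.
print(simple_hf_risk_score(age=81, ef_percent=35, prior_admissions=2))  # 7
```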
Heart failure as a proving ground
To make this real, Ashley turns to the cardiovascular program at New York Presbyterian. Heart failure is a huge and growing problem. By 2030, more than eight million people in the United States alone are expected to live with it, and one in five adults over 40 will be at risk.
Ashley and her colleagues decided to treat heart failure as a test bed for what responsible AI in healthcare could look like. The hospital invested around ten million dollars to partner with Cornell Tech, building joint teams across clinical and technical disciplines. That first meeting, Ashley admits, felt like crickets. Clinicians had little sense of how machine learning really worked. Data scientists had never wrestled with the messy reality of electronic health records, where a single medication can have different names, formulations, and doses. So they slowed down. For three months, they did not race toward coding models. Instead, they worked out what questions actually mattered and where AI could do real good with acceptable risk.
In healthcare, AI can support prevention, diagnosis, prognosis, and monitoring, as well as treatment decisions. Right now, Ashley believes the most feasible and lower-risk opportunities are diagnosis, prognosis, and risk stratification. In that world, one concept becomes especially powerful. Opportunistic screening.
Opportunistic screening and quiet revolutions
Traditional screening is simple. You reach a certain age, and you are called for a mammogram or colonoscopy. Opportunistic screening flips that. It asks what hidden insight could be extracted from the tests and data a patient is already generating.
A patient comes in with a chest infection and has a computed tomography scan. Could that scan also reveal an undiagnosed weak heart? Someone twists an ankle and gets an electrocardiogram as part of routine care. Could that ECG quietly flag the possibility of cardiac amyloidosis years before symptoms become obvious?
Ashley’s team is building models that do exactly this, using data the hospital already owns. They have upgraded hardware so that more than ten million electrocardiograms can flow into a cloud-based platform and be used for model training. They use echocardiograms, CT scans, and ECG dynamics to predict advanced heart failure risk, reduced ejection fraction, and even measurements previously only obtainable through invasive procedures. In early studies, models predicting peak oxygen consumption from echocardiograms and reduced heart function from CT scans are performing well, with areas under the curve around 0.8. If these results hold through rigorous trials and real-world testing, the impact could be enormous. Fewer missed diagnoses. Earlier referrals for life-altering therapies. Better use of finite specialist capacity.
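To give a feel for what that figure measures, here is a minimal sketch of how a screening model's discrimination is scored with ROC AUC. The labels and scores are synthetic stand-ins, tuned so the result lands near 0.8; they are not data from the hospital's models.

```python
# Minimal sketch: scoring a screening model's discrimination with ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical hold-out set: 1 = reduced ejection fraction, 0 = normal.
y_true = rng.integers(0, 2, size=1000)

# Hypothetical model scores: positives score higher on average.
y_score = y_true * 0.3 + rng.normal(0.4, 0.25, size=1000)

auc = roc_auc_score(y_true, y_score)
# An AUC of 0.8 means the model ranks a randomly chosen positive above a
# randomly chosen negative about 80% of the time.
print(f"AUC = {auc:.2f}")
```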
Perhaps most compelling is the equity angle. If an echocardiogram in a smaller hospital can reliably approximate an expensive cardiopulmonary exercise test, suddenly advanced decision-making becomes possible in places that do not have those tests at all. AI, in Ashley’s framing, can narrow gaps in access rather than widen them.
The work no one sees
If you ask a room of technologists which part of the machine learning lifecycle is most resource-intensive, many will pick model training or deployment. Ashley has a different view. In healthcare, data curation and integration into workflow are the silent giants.
She shows a picture of Lego bricks to describe how data arrives from the electronic health record. Medications with multiple brand names. Procedures coded differently by different clinicians. Vital signs recorded in inconsistent ways. Before a single model can be trained, teams have to upgrade infrastructure, standardise data, build registries, link research repositories, and ensure that sensitive information is handled correctly. None of this is glamorous. All of it is essential if a model is going to be safe, reproducible, and auditable.
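To make one of those curation steps concrete, here is a minimal sketch of medication normalisation. The mapping and records are hypothetical; real pipelines typically lean on terminologies such as RxNorm rather than hand-built dictionaries.

```python
# Minimal sketch of one curation step: collapsing brand names and salt forms
# to a single canonical ingredient before model training. All entries below
# are illustrative, not a clinical mapping.
CANONICAL = {
    "lasix": "furosemide",
    "furosemide": "furosemide",
    "toprol xl": "metoprolol",
    "metoprolol succinate": "metoprolol",
    "metoprolol tartrate": "metoprolol",
}

def normalise_medication(raw_name: str) -> str:
    """Map a free-text medication entry to its canonical ingredient."""
    key = raw_name.strip().lower()
    return CANONICAL.get(key, "UNMAPPED:" + key)

records = ["Lasix", "metoprolol tartrate", "Toprol XL", "dofetilide"]
print([normalise_medication(r) for r in records])
# ['furosemide', 'metoprolol', 'metoprolol', 'UNMAPPED:dofetilide']
```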
And after the model is ready, the UX challenge begins.
Why UX inside the electronic health record matters so much
For Ashley, integrating AI into the electronic health record is not a nice-to-have. It is survival. As a practising physician, every time she is forced to click into a separate system, her frustration spikes. She knows her colleagues feel the same way.
There are several options for showing model output inside systems such as Epic. Pop-up alerts that interrupt the workflow. Risk scores at the top of the chart. Dashboards and lists that surface high-risk patients. Graphs that combine model predictions with familiar clinical values. Each choice has trade-offs. Alerts are powerful but can quickly lead to fatigue. Ashley describes how her hospital had to audit and cut back many Best Practice Alerts because clinicians were simply dismissing them. Static dashboards are safer but can be ignored in a busy clinic.

Explainability is another hot topic. Many people argue that AI in healthcare must always be explainable. Ashley gently complicates that story. Clinicians already use medications where the precise mechanism of action is not fully understood. What matters most is high-quality evidence that a treatment works, clarity about which patients were studied, and transparent limits on when it should not be used.
She believes AI should be held to a similar standard: rigorous trials, clear reporting of test characteristics such as prevalence and positive predictive value, and focused efforts to reduce false positives in opportunistic screening to avoid a wave of unnecessary testing and anxiety.
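That arithmetic is worth seeing once, because prevalence dominates everything else. A minimal sketch of Bayes' rule applied to screening, using illustrative sensitivity and specificity figures rather than numbers from the talk:

```python
# Minimal sketch of why prevalence dominates opportunistic screening.
# The sensitivity/specificity figures are illustrative, not from the talk.
def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """Bayes' rule: the fraction of positive flags that are true disease."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A model with 90% sensitivity and 90% specificity looks excellent,
# but at 1% prevalence most of its alerts are false alarms.
for prev in (0.20, 0.05, 0.01):
    ppv = positive_predictive_value(0.90, 0.90, prev)
    print(f"prevalence {prev:>5.0%}  ->  PPV {ppv:.0%}")
# prevalence   20%  ->  PPV 69%
# prevalence    5%  ->  PPV 32%
# prevalence    1%  ->  PPV 8%
```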
Governance, bias, and the human lens
One of the most striking moments in Ashley’s talk is also one of the most personal. She describes repeatedly asking an image generation model to create a picture of a medical director of AI operations giving a presentation. The images improved in quality over time, but there was a constant. She never once appeared as a woman.
At first, she barely noticed. By the fortieth time, it was impossible to ignore. In healthcare, bias like that is not just offensive. It is dangerous. If models are trained mostly on data from men, or from a single ethnicity, or from a narrow geographic region, then the predictions may simply not apply to the wider population.

Ashley’s teams build checks for that into their development and governance processes. They ask whether the data used to train a model truly reflects the population where it will run. They question vendors whose products were developed for radically different patient groups. They also explore how AI can help recruit more diverse participants into clinical trials, so future evidence is more representative.
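One of those checks is simple enough to sketch. Assuming a team can tabulate the training cohort against the population the model will serve, a basic representativeness test might look like this; all counts below are invented for illustration.

```python
# Minimal sketch of a representativeness check: compare the demographic mix
# of a model's training cohort with the population it will serve.
from scipy.stats import chisquare

groups = ["female", "male"]
training_counts = [3200, 6800]   # hypothetical training cohort
deployment_mix = [0.52, 0.48]    # hypothetical service population proportions

# Expected counts if the training cohort matched the deployment population.
expected = [p * sum(training_counts) for p in deployment_mix]
stat, p_value = chisquare(training_counts, f_exp=expected)

print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("Training cohort does not match the deployment population; "
          "investigate before go-live.")
```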
To manage all of this, New York Presbyterian has created an AI governance structure that focuses on three kinds of risk. Patient privacy. Quality and safety of care. Regulatory exposure. Around the table sit clinicians, informaticians, legal and regulatory experts, and data scientists. The goal is not to slow innovation for its own sake. It is to make sure innovation stays trustworthy when lives are at stake.
From pilots to powered-up practice
Ashley closes with a challenge to everyone building healthcare products. Spend more time on the idea. Include clinical partners from the first conversation. Make sure the value proposition is real, not theoretical. Run pilots, but do not get stuck in permanent pilot mode. If something works, move thoughtfully toward scale.
Most of all, do not confuse model performance on retrospective data with success in real clinical environments. Solution performance in the messy, time-pressed world of actual care is what matters. That means feedback loops when AI gets it wrong, adaptive monitoring for data drift, and a willingness to refine and retrain.

Ashley is optimistic. Doctors and nurses are not scared of technology, she argues. They are excited, as long as they can trust it, see the evidence, and have a hand in shaping it. If that trust can be earned, the payoff is huge. Less administrative burden. Earlier detection of silent disease. More equitable access to advanced care. Technology that truly empowers providers rather than overwhelming them.
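For readers who want a concrete flavour of that monitoring loop, here is a minimal drift check on a single input feature. The feature, distributions, and thresholds are all hypothetical placeholders, not the hospital's monitoring stack.

```python
# Minimal sketch of a data-drift check, assuming a feature is logged for
# both the training set and live inference.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Hypothetical feature: heart rate at the time of the ECG.
training_hr = rng.normal(72, 12, size=5000)  # distribution at training time
live_hr = rng.normal(80, 15, size=2000)      # distribution in production

stat, p_value = ks_2samp(training_hr, live_hr)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")

# A tiny p-value with a non-trivial KS statistic means live inputs no longer
# look like the training data: time to investigate and perhaps retrain.
if p_value < 0.01 and stat > 0.1:
    print("Drift detected on heart rate; flag the model for review.")
```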
In that future, AI in healthcare will not be a buzzword. It will be part of how good medicine quietly gets done.
Want to watch the full talk?
You can find it here on UXDX: https://uxdx.com/session/leveraging-ai-for-safer-and-more-efficient-healthcare-delivery1/
Or explore all the insights in the UXDX USA 2025 Post Show Report: https://uxdx.com/post-show-report