
AI-Based Clinical Decision Support



Theodore Zanos

AI-Based Clinical Decision Support at Scale

At New York Technology Innovation, Theodore Zanos, Associate Professor and Head of the Division of Health AI at Northwell Health, presents a clear view of how AI-based clinical decision support delivers value when it is built for measurable performance inside real clinical workflows. He sets a precise boundary early, noting that he is not talking about chatbots, and uses that constraint to keep attention on decision support systems that operate on well-defined clinical tasks with verifiable outcomes. Clinical AI earns trust when it reduces uncertainty for clinicians and improves execution for health systems, Zanos argues, and that trust is built through careful validation, disciplined deployment, and ongoing monitoring.

Clinical AI has a longer arc than the current cycle

Clinical AI rests on decades of precedent and iteration, Zanos notes. ELIZA, introduced in 1964, simulates conversation with a human therapist, and INTERNIST-1, from 1971, diagnoses patients based on their symptoms. Neural networks and machine learning models have been applied across medicine from the 1980s to the present, which places current decision support work in a long line of task-specific systems. Regulatory adoption reinforces this pattern: the FDA has cleared more than 1,200 AI algorithms, and none of them rely on large language models, Zanos states.

Northwell’s environment turns AI into an operating capability

Clinical decision support improves when models are developed and evaluated in settings that reflect the complexity of care delivery. Northwell Health operates at a scale that supports that requirement, Zanos explains. Northwell is the largest health system in the Northeast United States, with 28 hospitals and more than 1,000 outpatient clinics. More than two million patients are treated annually across more than 5.5 million patient encounters. The system serves one of the most ethnically, culturally, genetically, socially, and economically diverse populations in the United States, and likely the world, Zanos notes, which makes model consistency across demographics and hospitals a central operating test.

Inpatient deterioration demands sharper prioritization

In-hospital deterioration remains a persistent risk because clinical teams make fast judgments under load. Both undertriage and overtriage occur in the emergency setting, Zanos notes: nurses undertriage 17% and overtriage 12% of cases, while doctors undertriage 14% and overtriage 14% of cases. Deterioration is also a ward reality, with 5% to 15% of patients in medical-surgical wards deteriorating and requiring an elevation of care, Zanos states. Decision support draws strength from breadth when it integrates signals from labs, vitals, radiographs, demographics, comorbidities, and medical notes, he adds.

Stop the unnecessary to protect attention and recovery

Better decision support starts by reducing work that does not improve care. "Let Sleeping Patients Lie" becomes a concrete example of that discipline, Zanos explains. More than 50% of overnight sleep interruptions are avoided, he states, citing Toth et al. 2020 in npj Digital Medicine. Clinical attention becomes more available for the patients who need it when unnecessary interruptions are removed, and the operating environment becomes easier to manage.

Prioritize the patients who need escalation

Risk stratification creates value when it concentrates clinical focus on the patients most likely to deteriorate. A model trained on more than 1.5 million hospitalizations predicts a combined outcome of death, ICU transfer, and rapid response team calls, Zanos explains. Eighteen EHR features generate a seven-day risk probability. The model achieves accuracy above 90%, outperforms the Epic Deterioration Model by 25%, and runs silently in seven hospitals, Zanos reports. Performance remains consistent across demographics and hospitals, he adds, citing Patel et al., in preparation.

Monitor and predict through wearable sensing

Continuous monitoring expands the signal window for clinical decision support. Wearables used in this work include the VitalConnect VitalPatch, Biobeat Chest Monitor, and Isansys Lifetemp, Zanos notes. A four-year, $3.2 million grant supports the effort across the three devices. More than 8,000 patients and more than 18,000 days of data have already been collected, with a goal of 10,000 patients and more than 25,000 patient days within one year, Zanos reports.

Deep learning with wearables shifts decision support toward earlier warning signals. Models trained on more than 1,300 clinical patches achieve more than 94% sensitivity, Zanos states. The algorithm remains agnostic to the wearable device, and clinical alerts arrive an average of 17 hours ahead of outcomes, he reports, referencing Scheid et al. in Nature Communications 2025.

Apply prediction to staffing timing, not only clinical risk

Clinical AI also changes operations when it forecasts conditions that drive staffing pressure. In the current state, Zanos explains, a nurse's separation triggers the hiring process, and flex staffing and overtime are required to maintain staffing levels until the gap closes. In the future state, a predictive model starts the hiring process earlier, with flex and overtime positioned alongside that earlier start rather than waiting for the separation to occur, he adds. Ettehadi et al. 2025, in the Engineering in Medicine and Biology Conference Proceedings, supports this direction.

Treat drift as a standard management problem

Decision support models change performance over time because the clinical environment changes. Performance drifts as patient case mix and volumes shift, new treatments emerge, and new variants of viral diseases appear, Zanos explains. Emerging diseases and comorbidities also affect outcome predictions. Drift becomes especially salient when models developed in one geography and time period are used elsewhere or later, and when models predict outcomes of dynamic diseases and conditions, Zanos notes. Self-monitoring and auto-updating models address this reality, he adds, referencing Levy et al. 2022 in Nature Communications.
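A minimal version of the self-monitoring idea can be sketched as a rolling check on a deployed model's alert rate against the rate observed at validation. This is an illustration of the general technique, not the published method: the window size and tolerance below are arbitrary choices, and a production system would also track calibration and outcome-confirmed error rates.

```python
from collections import deque

class DriftMonitor:
    """Track a model's rolling positive-alert rate and flag drift when it
    moves beyond a tolerance band around the baseline rate observed at
    deployment. Window and tolerance values are illustrative."""

    def __init__(self, baseline_rate: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_rate
        self.window = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, alerted: bool) -> bool:
        """Record one prediction; return True once a full window shows the
        alert rate drifting outside the tolerance band."""
        self.window.append(1 if alerted else 0)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data to judge
        current = sum(self.window) / len(self.window)
        return abs(current - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.10, window=10, tolerance=0.05)
```

A drift flag from a monitor like this would trigger review or retraining, which is the auto-updating half of the loop Zanos describes.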

Operating lessons for clinical AI programs

AI becomes valuable when it operates as a tool for clinicians and health systems at the point of care and in operations, Zanos states. Multimodal AI that combines wearables, EHRs, and imaging creates opportunities for a more holistic understanding, he notes. Real problems deserve priority even when they appear trivial or boring, Zanos adds, because practical value concentrates where waste and avoidable risk accumulate. Careful validation supports reliable deployment, and performance is sustained when models are monitored and updated when needed.

Disclaimer: The information contained in this article is provided for general informational purposes only and does not constitute legal, regulatory, tax, investment, financial, or other professional advice, nor should it be relied upon as such. Readers should seek independent advice from qualified professionals in the relevant jurisdiction(s) before making any decisions or taking any actions based on the content of this article. While reasonable efforts are made to ensure accuracy and timeliness, 1BusinessWorld makes no representations or warranties, express or implied, regarding the completeness, accuracy, reliability, or suitability of the information provided. To the fullest extent permitted by applicable law, neither 1BusinessWorld nor the author accepts any liability for any loss or damage arising from the use of, or reliance upon, this article. The views expressed are those of the author and do not necessarily reflect the views of 1BusinessWorld or its affiliates.