From Interview #96
With Emily Lewis
Emily Lewis, an AI thought leader, offers a pragmatic look at the evolving regulatory landscape around AI in healthcare. In this short but powerful segment, she explains how responsible AI hinges on clear, geographically sensitive oversight. Comparing approaches like the FDA in the U.S. and the NHS in the U.K., Lewis highlights emerging precedents that could ripple across global standards. She emphasizes the challenge of balancing innovation with patient safety and privacy—underscoring the need for foresight, harmonization, and continual learning. Her insights align with current concerns about how fast generative models are evolving and the urgency to build adaptable regulatory guardrails. This clip is particularly useful for professionals tracking the regulation of AI in healthcare and looking to stay ahead of global compliance risks.
From Interview #92
With Dr. Colleen Lyons
Dr. Colleen Lyons, Trust and Change Ambassador at the FDA, discusses the need for thoughtful regulation of AI in healthcare. She warns that overly deregulated markets create hype cycles that end in collapse, while overregulation can stifle innovation and entrench incumbents. Drawing parallels to the dotcom boom and bust, she highlights the dangers of unchecked growth followed by heavy-handed compliance regimes like Sarbanes-Oxley. Lyons stresses that regulation alone is insufficient—organizations must embed ethics and values into their culture to complement legal frameworks. She introduces her concept of 'sturdy leadership,' emphasizing the importance of democratizing values, encouraging employees to speak up, and treating AI as both a technology and a change management challenge. Ultimately, she argues that sustainable AI governance requires balancing innovation with ethical responsibility.
From Interview #92
With Dr. Colleen Lyons
Dr. Colleen Lyons, Trust and Change Ambassador at the FDA, explores the unresolved issue of AI liability in healthcare. She explains that while clinicians remain legally responsible for patient care, questions arise when AI tools influence decisions. Liability could extend to manufacturers, healthcare institutions, or even insurers depending on who vetted, deployed, or profited from the tool. Lyons compares the complexity to global supply chains, where corporations are accountable for ethical behavior across networks of suppliers. She argues that healthcare leaders must embed patient-centric values, train staff to recognize AI limitations, and establish 'sturdy leadership' frameworks. Ultimately, she emphasizes that organizations must prepare for both accountability and intelligent failure, ensuring clinicians have support when AI tools misfire.
From Interview #93
With Rajeev Ronanki
Rajeev Ronanki, CEO of Lyric, explores how to build trust in healthcare AI by addressing bias, ethics, and safety. He contrasts the unchecked rise of social media with today’s AI development, where more emphasis is placed on safeguards. Ronanki explains that biases in AI reflect human subjectivity embedded in training data, but solutions exist: testing for bias, establishing ethical guardrails, and ensuring AI models adhere to a 'do no harm' principle akin to a Hippocratic Oath. He envisions AI as a partner in care, capable of questioning unsupported treatment plans and fostering a two-way learning process with clinicians. However, this requires proactive work upfront—eliminating hallucinations, minimizing bias, and improving data quality—so AI becomes an enabler of trust rather than a risk to it.
From Interview #93
With Rajeev Ronanki
Rajeev Ronanki, CEO of Lyric, unpacks the FDA’s bold step with ELSA, an agency-wide AI tool designed to harness the mountains of data it already holds. He explains that ELSA could transform the FDA from a reactive regulator into a proactive shaper of therapeutic pathways, accelerating drug development and safety monitoring. By simulating drug efficacy, side effects, and applicability across populations, ELSA could shorten approval timelines and improve innovation. Ronanki acknowledges early challenges and skepticism but stresses the need to let the system learn over time, much like autonomous driving technology. If implemented correctly, ELSA could reduce bureaucracy, minimize bias, and build public trust by grounding FDA processes in data-driven science.
From Interview #93
With Rajeev Ronanki
Rajeev Ronanki, CEO of Lyric, explores the complex issue of data sharing in healthcare and how patients might be rewarded for contributing their information. He highlights the gap between resource-rich academic medical centers and community practices that lack infrastructure for data collection. Ronanki envisions a future where AI tools become affordable and ubiquitous, allowing every patient to have access to digital twins of their physicians embedded in mobile apps. These AI agents could provide 24/7 support, answer side-effect questions, and personalize care. He stresses the need for reimbursement models—such as royalties or shared savings—to fairly compensate both physicians and patients who share data, ensuring that community practices can benefit alongside large institutions.
From Interview #84
With Dr. Alister Martin
Dr. Alister Martin, CEO of A Healthier Democracy and an emergency physician at Harvard, explains what AI is used for in healthcare today: solving real, upstream pain points. Through Link Health, his team uses large language models to connect Medicaid patients to more than a billion dollars in unspent federal and state aid, reducing avoidable ER visits and addressing social emergencies like food and housing insecurity. The conversation explores how AI in nonprofits can streamline complex benefit navigation and support systems-level improvements aligned with AI in healthcare policy. For health leaders, this is a practical path to lower costs and improve outcomes by meeting patient needs before they become medical crises.
From Interview #78
With Bob Battista
Healthcare leaders ask why proven medicines still aren’t widely reused. In this short conversation, Bob Battista explains that the core barrier to drug repurposing isn’t technology—it’s policy and incentives. While AI drug repurposing and real-world data can surface new indications, the most valuable knowledge remains locked inside pharmaceutical organizations and constrained by regulatory risk and reimbursement dynamics. Battista outlines how safe-harbor data sharing and new financial instruments could let companies support niche indications without eroding primary markets, accelerating access for patients and clinicians. He also highlights the untapped insights from physicians’ off-label use and patient experience—critical signals the healthcare system rarely aggregates. If you work in market access, clinical operations, or digital health, this is a clear roadmap to move the drug repurposing market from potential to practice.
From Interview #76
With Dr. Debra Patt
In this insightful discussion, Dr. Debra Patt explores the nuanced balance between patient privacy, data monetization, and the transformative role of AI in healthcare. She highlights that while individual patient records hold limited value, aggregated, de-identified data can drive significant medical advancements, such as expanding drug indications through real-world evidence. Dr. Patt also addresses the challenges posed by electronic health record (EHR) systems, noting their limitations as billing-focused tools that often fail to capture accurate clinical data in real time. For AI to truly revolutionize healthcare data use, she argues, both technology and clinical workflows must evolve together.
From Interview #78
With Bob Battista
Bob Battista draws parallels between liability in self-driving cars and AI in healthcare, suggesting that true transformation will come when patients can self-drive their own care. He argues that by giving patients access to their health data and enabling AI to process it, individuals can make informed decisions about treatment options without overburdening clinicians. This shift could reduce liability concerns while also accelerating the sharing of patient knowledge, allowing newly diagnosed individuals to start their care journey armed with the best available insights.
From Interview #83
With Pelu Tran
Pelu Tran, CEO of Ferrum Health, outlines why AI adoption in hospitals remains slow despite the technology’s readiness. The barriers lie in integrating modern, cloud-based AI into legacy systems, navigating multimillion-dollar onboarding processes, and addressing strict patient data governance. Tran warns that most AI tools underperform in real-world conditions and require continuous monitoring for bias, drift, and workflow impact. He advises hospitals to view AI as a lifecycle rather than a point solution, building governance frameworks to manage performance, safety, and cost over time.
From Interview #79
With Dr. Nigam Shah
Dr. Nigam Shah, Co-founder of Atropos Health and Chief Data Scientist at Stanford Health Care, examines the sustainability challenges in AI development for healthcare. He explains that while AI in medicine has existed for decades, current academic-centric development practices are not suited for scaling from research to real-world applications. The conversation highlights the cost, time, and regulatory complexities, as well as the need for localized and continuous model validation to maintain performance.
From Interview #83
With Pelu Tran
Pelu Tran, CEO of Ferrum Health, addresses the pressing issue of patient data security in the era of AI. He explains why hospitals overwhelmingly prefer AI systems to run within their own controlled environments—either on-premises or in their own cloud—rather than in vendor-controlled clouds. Pelu outlines the risks of vendor environments, including breaches, unauthorized model retraining, and misuse of aggregated data. He also discusses the regulatory barriers, such as FDA requirements that limit adaptive model updates, and how new open-source models could reshape secure AI deployment in healthcare.
From Interview #84
With Dr. Alister Martin
In this clip, Dr. Alister Martin outlines how both AI and healthcare policy can reduce the cost of care. While his organization, A Healthier Democracy, remains people-first in its approach, Dr. Martin strongly advocates for AI upskilling as essential in the modern workforce. He warns that it's not AI that will replace workers, but workers who use AI who will replace those who don't. On the policy side, Dr. Martin makes a compelling case for maintaining reimbursement pathways through Medicare and Medicaid to sustain initiatives that demonstrably lower emergency room visits and hospitalizations—highlighting the cost-effectiveness of AI in healthcare. His remarks provide actionable direction for organizations pursuing AI-driven healthcare cost-saving strategies.
From Interview #92
With Dr. Colleen Lyons
In this in-depth conversation, Dr. Colleen Lyons dives into the nuanced intersection of ethics and AI in healthcare, spotlighting overlooked risks and opportunities in today’s clinical AI tools. Drawing from her background at the FDA and in academia, she critiques the hollow nature of many ethical AI frameworks and calls for transparency and value-based governance over check-the-box compliance. The conversation covers the Belmont Report’s enduring relevance, AI's impact on autonomy and informed consent, and the systemic bias embedded in data. With regulatory landscapes still fragmented, Lyons argues for proactive, values-driven leadership to ensure AI adoption benefits patient care without unintended harms—particularly in vulnerable populations. If you're working at the frontline of AI implementation in provider systems or clinical operations, this episode is essential listening.