From Interview #99
With Stephen Speicher, MD, MS
Dr. Stephen Speicher, Head of Clinical Oncology and Safety at Flatiron Health, offers a pragmatic and optimistic perspective on healthcare AI safety and regulation. In this far-reaching discussion, he explains why nuanced governance is crucial to avoid overgeneralized policies that risk stalling innovation. He highlights the need for stratified oversight based on use case—from AI-driven diagnostic tools to administrative automation—and urges that responsibility for AI safety be shared across developers, deployers, clinicians, and health systems. Speicher also discusses the role of informed consent, patient data privacy, and the potential for AI to exacerbate or reduce health inequities. His thoughtful analysis resonates with IT, regulatory, and clinical leaders looking to safely scale AI in real-world settings.
From Interview #98
With Jordan Johnson, MSHA
In this illuminating interview, Jordan Johnson, MSHA, Founder and Principal of Bridge Oncology, unpacks the complexities behind healthcare data interoperability. Speaking with Dr. Sanjay Juneja, Johnson offers a deep dive into how interoperability—often oversimplified—functions in clinical, administrative, and technological workflows. Drawing from his experience as a legal and operational expert, Johnson discusses the downstream consequences of data misalignment and lack of standardization, especially in oncology and radiotherapy. With a strong stance on the need for regulatory frameworks and AI-powered infrastructure, Johnson highlights how true interoperability could reduce healthcare disparities, boost clinical efficiencies, and drive value-based care transformation. For any healthcare professional working with EHRs, payer systems, or health data, this conversation is essential.
From Interview #97
With Dr. Ben Rosner
Dr. Ben Rosner, a hospitalist and digital health researcher at UCSF, explores the promise and pitfalls of AI-generated discharge summaries. In this wide-ranging discussion, Dr. Rosner explains how LLMs can reduce administrative burden, improve communication at discharge, and potentially enhance patient safety—if implemented thoughtfully. He shares findings from his JAMA-published study evaluating LLM-drafted summaries and outlines how these tools perform against physician-written counterparts. The conversation expands into the risks of de-skilling, challenges of AI trust, and the need for systems like "LLMs as juries" to monitor AI-generated clinical documentation. Rosner also reflects on AI's broader impact on medical education and the emergence of roles such as the Chief Health AI Officer.
From Interview #96
With Emily Lewis
In this information-packed interview, Emily Lewis shares a compelling vision for the future of AI in patient care. Drawing from her work in machine learning and generative AI, Lewis highlights how these tools are not just enhancing clinician efficiency but reshaping how patients engage with their own health. She explains how multimodal AI applications, from avatars to audio interfaces, can personalize communication based on learning preferences. Lewis emphasizes AI’s potential to foster equitable partnerships between patients and clinicians. The conversation also explores patient education, self-care, and the structural hurdles of deploying AI across institutions. With attention to AI for patient engagement and AI-driven personalized care, Lewis offers a deeply insightful look into the systems and safeguards necessary for responsible AI implementation.