Balancing Innovation and Regulation in Healthcare AI with Stephen Speicher, MD, MS

Dr. Stephen Speicher, Head of Clinical Oncology and Safety at Flatiron Health, offers a pragmatic and optimistic perspective on healthcare AI safety and regulation. In this far-reaching discussion, he explains why nuanced governance is crucial to avoid overgeneralized policies that risk stalling innovation. He highlights the need for stratified oversight based on use case—from AI-driven diagnostic tools to administrative automation—and urges that responsibility for AI safety be shared across developers, deployers, clinicians, and health systems. Speicher also discusses the role of informed consent, patient data privacy, and the potential for AI to exacerbate or reduce health inequities. His thoughtful analysis resonates with IT, regulatory, and clinical leaders looking to safely scale AI in real-world settings.

About the Guest

Dr. Stephen Speicher is a pediatric hematologist-oncologist and Head of Clinical Oncology and Safety at Flatiron Health. He specializes in health IT innovation and healthcare systems engineering. LinkedIn: https://www.linkedin.com/in/stephen-speicher-m-d-m-s-07395385/

Notable Quote

"AI can be the great equalizer, but only if it’s accessible across settings."

Key Takeaways

  • Regulation must reflect clinical impact, not just technology type
  • Health systems need to vet AI for workflow-specific risk
  • Equitable access to AI is key to avoiding a new digital divide

Transcript Summary


What does responsible regulation of healthcare AI look like?

A one-size-fits-all approach won’t work. Regulation must consider how AI is used: diagnostics and treatment decisions warrant more oversight than administrative tasks. Healthcare deserves a carve-out in broader AI policy.

How should responsibility for AI safety be distributed?

Dr. Speicher champions a shared responsibility model—developers, deployers, clinicians, and health systems all have roles. He emphasizes the need to educate clinicians to understand, vet, and effectively use AI tools.

What about informed consent and patient trust in AI?

Informed consent is advisable but complex. While AI isn't yet the standard of care, transparency is key. As AI becomes embedded in everyday workflows, carving it out for separate consent may no longer be feasible.

How can we ensure AI doesn't worsen health inequities?

Dr. Speicher warns that AI could widen the digital divide if access is limited to well-resourced systems. Community practices, though eager adopters, may lack integration capacity. Cost and deployment barriers must be addressed.

About the Series

AI and Healthcare—with Mika Newton and Dr. Sanjay Juneja is an engaging interview series featuring world-renowned leaders shaping the intersection of artificial intelligence and medicine.

Dr. Sanjay Juneja, a hematologist and medical oncologist widely recognized as “TheOncDoc,” is a trailblazer in healthcare innovation and a rising authority on the transformative role of AI in medicine.

Mika Newton is an expert in healthcare data management, with a focus on data completeness and universality. Mika is on the editorial board of AI in Precision Oncology and is no stranger to bringing transformative technologies to market and fostering innovation.
