How AI Ethics Must Evolve in Modern Healthcare with Dr. Colleen Lyons
In this in-depth conversation, Dr. Colleen Lyons dives into the nuanced intersection of ethics and AI in healthcare, spotlighting overlooked risks and opportunities in today’s clinical AI tools. Drawing from her background at the FDA and in academia, she critiques the hollowness of many ethical AI frameworks and calls for transparency and values-based governance over check-the-box compliance. The conversation covers the Belmont Report’s enduring relevance, AI's impact on autonomy and informed consent, and the systemic bias embedded in training data. With the regulatory landscape still fragmented, Lyons argues for proactive, values-driven leadership to ensure AI adoption benefits patient care without unintended harms—particularly in vulnerable populations. If you're working on the front lines of AI implementation in provider systems or clinical operations, this episode is essential listening.
Episode Contents:
About the Guest
Dr. Colleen Lyons is a clinical research ethicist and former FDA professional, now affiliated with Champlain College’s AI Commons. She specializes in operationalizing values-driven frameworks for AI deployment in regulated environments.
Key Takeaways
- Ethical AI must go beyond frameworks—values must be lived within organizations
- Clinical AI adoption raises issues of informed consent and hidden bias
- Transparency about AI use is more critical than full explainability
- Regulatory clarity is lacking; leaders must adopt proactive, values-driven governance
- Organizations that ignore values in AI deployment risk reputational and legal consequences
Transcript Summary
Q: What Makes AI Ethics in Healthcare So Urgent Today?
A: We’re in a frothy phase where everyone wants in on ‘ethical AI,’ but most efforts are superficial. We need systems that endure—not just look good in PowerPoints.
Q: How Does the Belmont Report Apply to AI?
A: Autonomy, beneficence, and justice remain critical. But AI creates asymmetrical power—we must ask how consent and transparency work when a black-box model helps make your diagnosis.
Q: What’s the Real Risk of Bias in AI Tools?
A: Most medical datasets are not representative. If the training data is skewed, the AI is more likely to misdiagnose patients outside the majority group—underrepresented populations bear the brunt of that error.
Q: Transparency vs. Explainability—What Really Matters?
A: Patients don’t need to know every algorithmic detail. But they deserve to know an AI was involved and understand its limits. That’s transparency, not explainability.
Q: Should AI Be Used in Diagnoses Without Patient Knowledge?
A: That's the crux—if AI influences treatment, clinicians have a duty to disclose that. Otherwise, you strip patients of autonomy and erode their trust.
Q: Where Does Accountability Lie When AI Goes Wrong?
A: Like a supply chain, everyone from developers to CEOs shares responsibility. You can't just blame the tool. Leadership must prepare for intelligent failure.
More Topics
- Healthcare Ethics and Policy
- AI in Patient Care
- AI in the Healthcare Industry
- AI and Medical Innovation
About the Series
AI and Healthcare—with Mika Newton and Dr. Sanjay Juneja is an engaging interview series featuring world-renowned leaders shaping the intersection of artificial intelligence and medicine.
Dr. Sanjay Juneja, a hematologist and medical oncologist widely recognized as “TheOncDoc,” is a trailblazer in healthcare innovation and a rising authority on the transformative role of AI in medicine.
Mika Newton is an expert in healthcare data management, with a focus on data completeness and universality. Mika is on the editorial board of AI in Precision Oncology and is no stranger to bringing transformative technologies to market and fostering innovation.