
From left, moderator Rob Smith, Julie Murchinson, Dr. Candice Balluru and Rashmi Raghavendra.
In early October, Formidable brought together a group of AI innovators, investors, critics and thought leaders to discuss the implications of this fast-moving new technology. During a series of wide-ranging panel discussions, we delved into trust, bias, climate change, health care and many more topics. In this four-part series, we'll feature takeaways from these discussions. To view the full panels, please visit our YouTube page.
Today we’re covering the panel “Dr. AI will see you now.” The panelists:
• Dr. Candice Balluru, head of AI safety and mental health implementation at Mpathic, a Seattle tech startup analyzing conversational data to improve patient safety.
• Rashmi Raghavendra, a Formidable member and the managing partner of rcubed ventures, which invests in health tech companies.
• Julie Murchinson, a partner with Transformational Capital, a venture capital and private equity firm that invests in health tech companies.
What you probably already know: AI is set to revolutionize how we detect disease, predict risk and extend care. But because AI is trained on human data sets, it also has the potential to amplify bias, expose individuals’ most sensitive details and mislead when judgment is needed most. Its blind spots pose a particular threat to women, whose health has long been subject to inequity in medical data and practice. AI is already seeping into health care, from diagnostics and personalized medicine to the growing number of people turning to chatbots like ChatGPT for health advice, symptom checks, nutrition plans and even therapy. “AI is a beacon of hope in health care,” Raghavendra said, though she warned that health care systems adopting AI without adapting their organizations are “going to pay the price.”
Why it matters: Biased medical data sets lead to inaccurate results for women, people of color and other groups. The same health care inequities that have historically put women at risk — like how heart attacks present differently in women than in men — are reflected in the data AI learns from. There is also growing concern about how health data is collected and used. “If you put your information into ChatGPT, that’s out there,” Raghavendra said, adding that users should remain logged out to protect their data. “(It’s) not in a medical setting and not covered under HIPAA laws.” The recent 23andMe data breach is a cautionary example of what can happen when sensitive information isn’t properly safeguarded.
What it means: No algorithm can replace the contextual understanding or emotional intelligence required in patient care. Murchinson highlighted the Mayo Clinic’s use of AI to organize and analyze data to improve patient outcomes. “There’s open evidence,” she said. “Journals and AI can bring information directly to doctors.” However, she and other panelists warned that introducing advanced technology without adapting the broader system can be detrimental. For patients and providers alike, the value of AI lies in its ability to augment, not replace, human judgment. The technology can help monitor conditions, analyze trends and provide education, but it remains up to clinicians and patients to interpret and act on the data.
What happens now? We are in the “wild west” of AI, a formative period when rules and norms are still taking shape. What happens next depends on how responsibly industries establish ethical frameworks and oversight. “I don’t want to leave you with doom and gloom, because as a clinician, I don’t feel that way,” Balluru said. Still, “we need to guide models and create benchmarks and rubrics for large language models.” Women in leadership have a unique opportunity to influence how inclusivity, transparency and fairness are embedded in these systems. When diverse leaders shape the future of AI, the resulting models are more likely to reflect a broader spectrum of human experience and need.