AI in Healthcare: A Powerful Tool, Not a Regulatory Hall Pass
- Katie Barone
- May 27
Originally published by Joseph Schneier, CEO of Circle
You all, it is happening. The convergence of advances in AI and this administration's embrace of it means that AI is now an integral part of the healthcare industry. But even though compliance may look like it is being overlooked, it isn't, and not in some hypothetical way: lawsuits are already happening. This is an examination of what's changing within CMS, a few of the existing rules that affect AI, and what recent lawsuits are trying to teach us. I've seen too many horror stories in this industry of well-meaning (sometimes) organizations that move forward with technology, only to find themselves in tight spots. I think we can avoid that if we remember that the regulations have not disappeared.
When my cofounder and I were part of the original IBM Watson Health cohort in 2015, we were told that AI was about to "disrupt" healthcare. IBM had set up a beautiful exhibit, and I remember a demonstration of how AI could be used for diagnostics. It felt like magic, and honestly, it kind of was. But the models weren't close to ready. The systems weren't connected. The promises were well-intentioned, but they were far ahead of reality.
Fast-forward to today, and I hear a new use case for AI every day: promising results using generative AI to communicate with patients more clearly, for example. For those of us who have been in technology a long time, and in industries like healthcare, it feels like someone gave us the keys to Willy Wonka's chocolate factory. At my primary company, Circle, like most of you, we use AI every day for our own work, but we are also integrating it into our products everywhere it makes sense. From reducing manual tasks to communicating in ways that consumers actually understand, it just feels, well, like science has caught up to the magic.
However, as I tend to do, I am taking a broader view of the landscape, and I want to strongly encourage health plans, providers, startups, and other stakeholders to move forward with optimism, but not delusion. Just because CMS is talking about innovation doesn't mean the rules no longer apply. AI is a tool, not an invisibility cloak; it is a tool that can get you into very hot water if you don't keep the foundational regulations in mind.
From Guardrails to a Green Light: A Real Shift at CMS
Under the Biden administration, CMS and ONC laid the groundwork for AI guardrails:
Explainability
Transparent data-sharing under FHIR (a minimal sketch follows this list)
Integration of equity through things like the Health Equity Index
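To make the FHIR point concrete: "transparent data-sharing" in practice means exposing records through a standard REST API instead of proprietary formats. Here is a minimal sketch of reading a Patient resource over FHIR, assuming a hypothetical server URL and patient ID; it is illustrative, not a reference implementation.

```python
import requests

# Hypothetical FHIR server and patient ID, for illustration only.
FHIR_BASE = "https://fhir.example.com/r4"
PATIENT_ID = "12345"

def fetch_patient(base_url: str, patient_id: str) -> dict:
    """Read a Patient resource via the standard FHIR REST API."""
    resp = requests.get(
        f"{base_url}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

patient = fetch_patient(FHIR_BASE, PATIENT_ID)
# FHIR resources are plain JSON with a declared resourceType.
print(patient["resourceType"], patient.get("id"))
```

The design point is that any authorized system can make this same call, which is exactly the kind of transparency the guardrails were after.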
Now, with the CMS Innovation Center's "Make America Healthy Again" (MAHA) campaign, the focus has shifted. Dr. Mehmet Oz and Abe Sutton are talking about consumer choice, competition, and complementary medicine. It sounds appealing, but there is a bit of hand-waving with regard to enforcement. Yes, we all know the term DEI is out; that doesn't mean nondiscrimination will no longer be enforced.
The rules regulating healthcare and insurance remain in effect, and if I know one thing, it is that the rules eventually catch up to you.
AI Might Be New—But the Rules Aren’t
Even without AI-specific regulation, the system is governed by rules that have existed for decades. Just because your model is sophisticated doesn't mean you're exempt from, for example:
Section 1557 of the ACA: prohibits discrimination, including algorithmic bias.
HIPAA: governs how patient data can be used, yes, even in model training (see the de-identification sketch after this list).
MLR (medical loss ratio) rules: still determine what qualifies as a legitimate spend.
Risk adjustment: built on fairness, not optimization games.
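On the HIPAA point, here is a minimal sketch of what stripping direct identifiers before model training can look like. The field names are hypothetical, and this covers only a handful of the 18 Safe Harbor identifier categories; a real pipeline needs the full list and, often, expert determination.

```python
import copy

# A few illustrative direct identifiers; HIPAA Safe Harbor lists 18
# categories, and these hypothetical field names cover only a handful.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "birth_date",
}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of a patient record with direct identifiers removed."""
    clean = copy.deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        clean.pop(field, None)
    return clean

raw = {"name": "Jane Doe", "ssn": "000-00-0000", "dx_code": "E11.9"}
print(strip_identifiers(raw))  # {'dx_code': 'E11.9'}
```

Even then, as the Google case below shows, "de-identified" is a legal determination, not just a technical one.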
And let’s not forget the enforcement mechanisms:
Anti-Kickback Statute: steering referrals through AI is still steering.
False Claims Act: using AI to deny care or inflate claims? Still illegal.
Medicare Advantage (MA) Marketing Rules: your AI-generated messages count.
Prior Authorization Rules: AI doesn't get to override a clinician's judgment (a human-in-the-loop gate is sketched below).
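On that last point, the structural fix is to never let the model finalize a denial. Here is a minimal human-in-the-loop sketch; the threshold, names, and review step are hypothetical, not drawn from any vendor's system.

```python
from dataclasses import dataclass

APPROVE_THRESHOLD = 0.90  # hypothetical confidence cutoff

@dataclass
class Determination:
    request_id: str
    decision: str       # "approved" or "pending_clinician_review"
    model_score: float

def triage(request_id: str, model_score: float) -> Determination:
    """The model may fast-track approvals; it never finalizes a denial."""
    if model_score >= APPROVE_THRESHOLD:
        return Determination(request_id, "approved", model_score)
    # Anything the model would deny is routed to a clinician instead.
    return Determination(request_id, "pending_clinician_review", model_score)

print(triage("PA-001", 0.97))  # auto-approved
print(triage("PA-002", 0.41))  # routed to clinician review
```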
So how is all of this being applied in real life? Here are a few examples:
UnitedHealth Group & NaviHealth: facing a class action over the nH Predict algorithm, which allegedly cut off rehab coverage too early. That could violate Medicare Advantage rules.
Cigna: accused of denying hundreds of thousands of claims through automation—possibly without review. That’s the kind of thing that triggers False Claims Act scrutiny.
Humana: using the same nH Predict tool. If it overrides clinical standards, that’s not just a glitch—it’s exposure.
Google & University of Chicago: sued over the use of supposedly de-identified patient records in AI model development. Even when data is "de-identified," HIPAA still matters.
These aren’t outliers. They’re warnings.
You Can Scale AI—But Only If You Anchor It in Compliance
Whether you’re building a care tool, selling a plan, or automating decisions, you’re already in a regulated space. Your AI lives there too.
And yes, you might get away with bending the rules for a while. Regulators don't move fast, but they do move consistently.
The Takeaway: The Promise of AI Doesn’t Excuse the Peril of Ignoring the Law
We believe in AI. We're building with it. But we’ve also seen what happens when the ambition is there and the accountability isn’t.
Now that AI is here, we need to treat it seriously. If your model can't hold up under scrutiny, it’s not a breakthrough—it’s a liability.
The future of healthcare will involve AI. But if we want it to help, not harm, we need to treat compliance as the foundation, not the paperwork.
Regulators may be behind the curve, but they always catch up.