February 2026 | By Carrie Jobe
Key takeaways for healthcare and health plan leaders:
- Agentic AI is not “robots making decisions.” It is supervised intelligence that helps organize work, surface risk, and accelerate resolution without replacing human judgment.
- Responsible adoption starts with guardrails first: observability, traceability, escalation paths, and human-in-the-loop oversight for high-impact outcomes.
- The fastest path to trust is using agentic approaches to reduce operational friction in the background before applying them to any sensitive member-facing workflows.
Agentic Artificial Intelligence (AI) is everywhere right now, and so is the noise around it. In healthcare, I hear two extremes almost every day. On one end, AI is positioned as a silver bullet that will instantly solve access, cost, and staffing challenges. On the other, it’s framed as moving too fast to be trusted, with risk that outweighs any real benefit.
This session was intentionally different. I wanted to move past the hype and the fear and have a practical conversation about where agentic AI fits today, where it clearly does not, and what responsible adoption looks like in regulated healthcare environments.
I was joined by three leaders at Softheon who each bring a different lens to this topic:
- Rob Miller, GM and SVP of Government Cloud at CITIZ3N
- Erik Driscoll, VP of Engineering
- Akshay Mathur, Group Product Manager
Together, we focused on real operational use cases, governance and guardrails, and how health plans can think about agentic AI as a tool for reducing friction and improving outcomes without giving up human accountability.
Watch the full conversation here.
What Is Agentic AI in Healthcare?
Q: When we say agentic AI, what do we actually mean, and how is it different from rules-based automation?
Rob opened with a useful reset: Not everything labeled AI is the same thing, and the term agentic can sound more dramatic than it needs to. Most regulated organizations are not aiming for fully autonomous decisioning. Instead, they are building supervised systems that accelerate work without replacing accountability.
“Agentic AI is sort of one of those terms that can sound maybe bigger or scarier than it actually is.” – Rob Miller
- Level 1 is deterministic automation: systems that do exactly what they are told, every time. That is the foundation many healthcare systems run on today, because repeatability and reliability are not optional.
- Level 2 introduces pattern detection and recommendations, while still keeping a human decision-maker in control.
- Level 3 begins to show early agentic behavior. Systems can plan steps, adapt, and propose actions across workflows, but still require supervision in regulated environments.
Healthcare is not starting from zero, and most organizations are not trying to jump straight into high-autonomy systems. They are trying to move from basic automation to supervised intelligence that reduces friction and improves outcomes.
How Agentic AI Connects Insights to Action
Q: What is new about agentic systems compared to recommendation engines and task orchestration?
Many healthcare organizations already have recommendation engines that surface insights, and orchestration systems that move tasks. The gap is that insights often stop at the suggestion.
“What’s new with these agentic systems is having something that connects the two and stays accountable for the outcome.” – Akshay Mathur
Akshay described it as a bridge. Agentic systems consume recommendations as signals, decide whether to act, wait, retry, or escalate, then trigger orchestration only when needed. They do not just launch activity. They verify outcomes.
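That bridge pattern can be made concrete with a minimal sketch. Everything here is illustrative rather than Softheon's implementation: the agent consumes a recommendation as a signal, decides whether to act, wait, retry, or escalate, triggers orchestration only when needed, and verifies the outcome before closing the loop.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    ACT = auto()
    WAIT = auto()
    RETRY = auto()
    ESCALATE = auto()

@dataclass
class Signal:
    """A recommendation surfaced by an upstream engine (illustrative fields)."""
    issue: str
    confidence: float   # 0.0 to 1.0
    high_impact: bool   # eligibility, clinical, or financial outcomes

def decide(signal: Signal, attempts: int, max_retries: int = 2) -> Decision:
    """Decide whether action is even needed; unnecessary action
    can be as harmful as inaction."""
    if signal.high_impact:
        return Decision.ESCALATE       # humans keep final authority
    if signal.confidence < 0.5:
        return Decision.WAIT           # not enough evidence to act yet
    if attempts >= max_retries:
        return Decision.ESCALATE       # stop looping, hand to a human
    return Decision.ACT

def run(signal: Signal, orchestrate, verify) -> Decision:
    """Trigger orchestration only when needed, then verify the outcome."""
    attempts = 0
    while True:
        decision = decide(signal, attempts)
        if decision is not Decision.ACT:
            return decision
        orchestrate(signal)            # hand off to existing task orchestration
        if verify(signal):             # stay accountable for the result
            return Decision.ACT
        attempts += 1                  # failed verification: retry, then escalate
```

The point of the sketch is the ordering: judgment about whether to act comes first, orchestration is downstream, and verification closes the loop.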
This is more than a technical distinction. Action has cost, compliance implications, and potential member impact. Overreacting can be as harmful as underreacting.
“Agentic AI is what decides whether action is even needed… because in healthcare that distinction is really critical because unnecessary action is often as harmful as inaction.” – Akshay Mathur
For health plan leaders, this is the value proposition worth paying attention to. Not more automation, but better judgment about what work should happen at all, plus accountability for closing the loop.
Why Human Oversight Is Critical for Agentic AI in Healthcare
Q: Where does human oversight fit when systems can plan and act independently?
If the earlier conversation was about capability, this was about accountability.
Rob was direct about what regulated healthcare and government environments demand: AI can help organize, recommend, and accelerate, but it should not make final decisions involving member data, eligibility, compliance, or financial outcomes.
“Human oversight is not optional. It’s a foundation,” shared Rob. If a system takes action and nobody can explain why, that is not innovation. That is risk.
This is where many organizations slow down, and they should. Healthcare environments are not tolerant of “mostly right.” Even rare failures can be unacceptable when the outcome affects real people.
“Any high-impact events such as clinical outcomes, member eligibility, or financial liability should not be fully autonomous and should remain under human oversight.” – Akshay Mathur
Agentic systems can prepare decisions, recommend actions, simulate outcomes, and resolve low-risk, pre-approved scenarios. But for high-impact outcomes, final authority and accountability should remain with humans.
Real-World Agentic AI Use Cases in Healthcare Operations
Q: Where has Softheon already seen value from agentic approaches inside operations?
Erik shared an example that makes agentic AI feel real: Softheon’s Hydra automated testing framework generates synthetic data and realistic enrollments to test complex systems. Softheon built an AI agent that watches Hydra test results, identifies likely causes of failure, and proposes fixes. In one example, the agent detects a service that is not fully running, issues a request to turn it on, and reruns the test after approval.
Healthcare systems generate huge operational noise, including alerts, tickets, and recurring issues. Agentic workflows can reduce cognitive load by detecting recurring patterns, grouping related issues into a root cause, and routing them to the right human.
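That noise-reduction workflow, grouping related alerts under one likely root cause before routing them to a single owner, can be sketched roughly as follows. The alert fields, fingerprint logic, and routing table are all hypothetical:

```python
from collections import defaultdict

# Hypothetical routing table: root-cause category -> owning team
ROUTES = {
    "database": "data-platform-oncall",
    "network": "infra-oncall",
    "enrollment": "ops-team",
}

def fingerprint(alert: dict) -> str:
    """Collapse recurring alerts into one root-cause key (illustrative:
    real systems often fingerprint on service plus error class)."""
    return f"{alert['service']}:{alert['error_class']}"

def triage(alerts: list[dict]) -> dict:
    """Group related alerts by fingerprint and route each group
    to the right human, instead of paging on every single event."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[fingerprint(alert)].append(alert)
    tickets = {}
    for key, related in groups.items():
        category = related[0]["category"]
        tickets[key] = {
            "owner": ROUTES.get(category, "triage-queue"),
            "count": len(related),     # one ticket, not N separate pages
        }
    return tickets
```

The design choice worth noting is that the agent's output is fewer, better-addressed work items, which is where the cognitive-load reduction actually comes from.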
The fastest route to sustainable adoption is not member-facing agentic decisions. It is internal operational assistance that is auditable, reversible, and governed, where mistakes are recoverable.
You cannot bolt trust on later; build these guardrails into your AI-powered processes:
- Observability and traceability
AI actions must be logged with context. Teams need an audit trail that supports review, accountability, and continuous improvement.
- Escalation paths and thresholds
Systems need clear rules for when to hand off to a human. Confidence scoring, boundary detection, and explicit out-of-scope logic are building blocks.
- Human intervention
For high-impact outcomes, humans must be able to approve, intervene, and reverse. Oversight is part of the architecture, not a policy in a document.
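The three guardrails above compose naturally in code. As a rough illustration only (the confidence threshold and approval hook are assumptions, not a prescribed design): every action is logged with context before anything happens, low-confidence work escalates to a human, and high-impact actions require explicit approval.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

CONFIDENCE_THRESHOLD = 0.8   # assumed cutoff; tune per workflow

def guarded_action(action: str, confidence: float, high_impact: bool,
                   context: dict, approve, execute) -> str:
    """Run an agent action behind observability, escalation, and
    human-intervention guardrails."""
    record = {
        "action": action,
        "confidence": confidence,
        "high_impact": high_impact,
        "context": context,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Guardrail 1: observability and traceability -- log before acting,
    # so every action has an audit trail with context.
    audit_log.info(json.dumps(record))

    # Guardrail 2: escalation threshold -- below confidence, hand off.
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalated:low_confidence"

    # Guardrail 3: human intervention -- high-impact outcomes need
    # explicit approval before the agent may proceed.
    if high_impact and not approve(record):
        return "blocked:awaiting_human_approval"

    execute(record)
    return "executed"
```

Because the audit record is written before the decision branches, even blocked and escalated actions leave a reviewable trace, which is what makes the oversight part of the architecture rather than a policy document.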
“If you can’t explain what an agent did after it did… it shouldn’t be in your system, period.” – Erik Driscoll
Governance Guardrails for Responsible Agentic AI Adoption
If you are a healthcare leader who wants to move responsibly without waiting too long, start narrow, start internal, and treat agents like roles.
Rob recommended thinking of agents as teammates you onboard. Define their responsibilities, boundaries, and supervision model. Do not build “one agent to do everything,” because that is not how organizations work and it is not how trust is built.
Akshay framed it as focusing on decision-heavy workflows where humans are overwhelmed by volume and complexity. Do not just chase tasks that can be automated. Look for places where judgment, prioritization, and outcome verification are the bottlenecks.
If your organization is serious about adoption, the work starts with governance, trust, and operational discipline. That is where agentic approaches can deliver value now, while keeping sensitive decisions under human oversight.