By Andy Flanagan, CEO, Iris Telehealth
LinkedIn: Andrew Flanagan
LinkedIn: Iris Telehealth
Behavioral health providers are treating more patients than ever while operating in three overlapping zones of regulatory uncertainty. DEA prescribing flexibilities expire and get extended at the last minute, again. Telehealth reimbursement parity loses enforcement teeth even as virtual care becomes standard practice. And organizations deploy AI tools for patient triage and risk detection without clear answers about who’s liable when something goes wrong.
The DEA’s temporary e-prescribing exceptions have been renewed four times since the COVID-19 public health emergency ended, leaving providers and patients in repeated cycles of uncertainty every November and December. Telehealth reimbursement policies remain in flux, states introduced more than 250 health AI bills in 2024, and federal guidance remains voluntary at best.
The timing matters because 2026 will bring major telehealth policy decisions that will determine whether virtual behavioral health remains financially sustainable. At the same time, health systems like Providence are already integrating virtual nursing, AI documentation tools, and new triage models in their emergency departments, proving that technology adoption won’t wait for policy clarity.
Behavioral health organizations can’t afford to treat these as separate challenges. They need frameworks that connect telehealth delivery models, prescribing compliance, reimbursement strategy, and AI governance, with explicit clinical accountability and humans making the final call on patient care.
The AI Accountability Gap: Legal Risk Without Clear Standards
AI adoption in healthcare climbed from roughly 5% to 8.3% between 2023 and 2025, but legal frameworks haven’t caught up. As AMA President Bruce A. Scott, MD, said, “Voluntary standards are not going to be enough, we need to make sure that the principles of AI implementation are regulated.”
Medical groups are legally accountable for harm caused by any tool they deploy, including AI. Yet adoption conversations focus on efficiency gains, not malpractice exposure. They don’t address the duty to act when AI flags a patient at risk, or who answers when that patient isn’t reached and ends up in crisis.
Purchasing decisions often get made for the wrong reasons: competitors are doing it, or vendors promise time savings that don’t account for implementation costs or workflow disruption. The result is renewal problems and tech stacks that don’t integrate.
The organizations seeing actual returns focus on what AI does well. Sharp HealthCare reported work RVU increases of 3.5% to 6% per encounter using ambient AI for documentation. As Chief Medical Informatics Officer Dr. Brian Lichtenstein noted, “If you didn’t write it down, it didn’t happen.” The technology worked because it supported clinical documentation rather than replacing physician judgment.
This points to where AI belongs in behavioral health today. Administrative and operational support, including scheduling, resource allocation, and documentation, delivers measurable value without crossing into clinical territory where AI isn’t ready. When AI moves into clinical decision-making or direct crisis response, both the technology and the regulatory frameworks are too immature to ensure patient safety. Our AI sentiment survey found that 49% of Americans would use AI to monitor their mental health, but 73% want a human provider making final care decisions.
The technology may evolve, but for now, AI’s value in behavioral health is operational, not clinical.
Building Integrated Frameworks for Responsible Scale
Behavioral health organizations must treat telehealth delivery, prescribing compliance, reimbursement strategy, and AI governance as connected projects. Reimbursement pressure drives AI adoption, which generates clinical insights that require documentation, which then feeds into DEA compliance for controlled substance prescribing. Each decision ripples through the others.
Organizations need frameworks that acknowledge these connections and establish clear accountability from the start. When something goes wrong, and eventually something will, you’ll need proof of deliberate structure, not just stated intentions. Build that structure now by establishing these core elements:
- Keep AI focused on administrative functions where it delivers proven value without clinical risk.
- Design systems that explain their conclusions. Over half (56%) of people in our survey said explainability is “extremely important.”
- Create escalation protocols that satisfy both DEA documentation and AI liability standards at the same time, so compliance efforts reinforce rather than duplicate each other.
- Plan for continued policy uncertainty rather than betting on extensions that may not materialize.
Waiting for regulatory certainty means waiting indefinitely. The alternative is building internal frameworks robust enough to handle policy shifts without grinding operations to a halt. Organizations that do this work now can scale access responsibly while others are still figuring out which rules apply.