Human-in-the-Loop Laws: What’s Happened in 2026

How much do humans have to be involved in clinical AI systems? As of 2026, regulators across the U.S. have clarified that AI can assist, but a licensed human must make the final call on any consequential medical decision. If your product doesn’t formalize that human checkpoint, you may want to take another look to help avoid state penalties, federal liability, and delayed deals.

State Laws Are Driving the Shift

Illinois set the stage with HB 1806, which prohibits AI from generating therapeutic decisions or treatment plans without a clinician’s review. In practice, this means any product that provides care recommendations must include human review.

California’s AB 489 provides that AI cannot “misrepresent” itself as a clinician. Your UI must clearly disclose when users are interacting with AI, and your system cannot diagnose or prescribe without a verifiable human sign-off.

Together with similar statutes in multiple other states, these laws create a de facto standard: complying with it helps defend against an allegation that your product is practicing medicine without a license.

Federal Activity

The Trump America AI Act framework and the proposed Healthy Technology Act share the goal of harmonizing state rules while shifting liability upstream to developers and deployers. And while Congress has not enacted a federal duty-of-care statute, the concept is gaining traction through proposals like the bipartisan AI LEAD Act and through federal agency frameworks that increasingly expect developers to prevent foreseeable harm.

Operationalizing Human-in-the-Loop Requirements

Regulators expect meaningful oversight. That means:

  • Clinicians must see the underlying “source data” behind the AI output (per ONC HTI-1).
  • They must be able to override or modify the recommendation.
  • The system must log the human review step.
  • A “rubber-stamp” workflow is treated as a de facto autonomous system—triggering stricter audits and bias testing in states like Colorado and Virginia.

If your AI advances the workflow without human confirmation, regulators will treat it as autonomous, regardless of marketing language to the contrary.
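The oversight requirements above can be sketched in code. The following is a minimal, illustrative Python example, not a reference to any statute's text or any real product: the class names, fields, and function signatures are all hypothetical. It shows the three properties regulators look for: the clinician is shown the underlying source data, can approve, modify, or reject the recommendation, and the review step is logged before the workflow is allowed to advance.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of a human-review checkpoint.
# All names and fields are illustrative assumptions, not a standard schema.

@dataclass
class ReviewRecord:
    reviewer_id: str         # licensed clinician who signed off
    action: str              # "approved", "modified", or "rejected"
    reviewed_at: str         # timestamp logged for audit purposes
    source_data_shown: bool  # clinician saw the underlying source data

@dataclass
class AIRecommendation:
    patient_id: str
    recommendation: str
    source_data: dict                       # data the AI output is based on
    review: Optional[ReviewRecord] = None   # absent until a human reviews

def submit_review(rec: AIRecommendation, reviewer_id: str,
                  action: str, saw_source_data: bool) -> AIRecommendation:
    """Log the human review step; reject reviews that skip the source data."""
    if action not in {"approved", "modified", "rejected"}:
        raise ValueError(f"unknown review action: {action}")
    if not saw_source_data:
        raise ValueError("reviewer must be shown the underlying source data")
    rec.review = ReviewRecord(
        reviewer_id=reviewer_id,
        action=action,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
        source_data_shown=saw_source_data,
    )
    return rec

def may_advance(rec: AIRecommendation) -> bool:
    """The workflow advances only after a logged human sign-off."""
    return rec.review is not None and rec.review.action in {"approved", "modified"}
```

The key design choice is that `may_advance` gates on the logged `ReviewRecord` itself, so there is no code path in which the recommendation moves forward without a recorded human decision, which is the distinction between a real checkpoint and a rubber stamp.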

Enhancing Trust and Minimizing Risk

You may want to consider:

  1. Adopting the NIST AI Risk Management Framework.
  2. Documenting human-oversight protocols.
  3. Positioning AI as a co-pilot, not an autopilot.

These techniques not only help keep you compliant; they can also give health systems and insurers the safety assurances they need before deploying any clinical AI at scale.