How much do humans have to be involved in clinical AI systems? As of 2026, regulators across the U.S. have made the answer clear: AI can assist, but a licensed human must make the final call on any consequential medical decision. If your product doesn’t formalize that human checkpoint, you may want to take another look now to avoid state penalties, federal liability, and delayed deals.
Illinois set the stage with HB 1806, which prohibits AI from making therapeutic decisions or generating treatment plans without a clinician’s review. In practice, this means any product that provides care recommendations must build human review into the workflow.
California’s AB 489 provides that AI cannot “misrepresent” itself as a clinician. Your UI must clearly disclose when users are interacting with AI, and your system cannot diagnose or prescribe without a verifiable human sign-off.
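For illustration, a disclosure requirement like this can be enforced at the point where AI output leaves your backend, so no AI-generated message reaches the UI without the label. The interface names and disclosure wording in the sketch below are assumptions for illustration, not language drawn from AB 489.

```typescript
// Sketch: every AI-generated message carries an explicit non-clinician
// disclosure before the UI renders it. Names and wording are illustrative
// assumptions, not statutory text.
interface AiMessage {
  body: string;          // model-generated content shown to the user
  generatedByAi: true;   // literal flag so downstream code cannot drop the label
  disclosure: string;    // rendered alongside the message
}

function withDisclosure(body: string): AiMessage {
  return {
    body,
    generatedByAi: true,
    disclosure:
      "This response was generated by an AI assistant, not a licensed clinician. " +
      "Any diagnosis or prescription requires review by a licensed clinician.",
  };
}

// The UI renders message.disclosure next to message.body, so users always
// know they are not interacting with a clinician.
const message = withDisclosure("Consider discussing these symptoms with your care team.");
console.log(`${message.disclosure}\n\n${message.body}`);
```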
Together with similar laws in other states, these statutes establish a common standard: a product that complies with it is far better positioned to rebut an allegation of practicing medicine without a license.
The Trump America AI Act framework and the proposed Healthy Technology Act both aim to harmonize state rules while shifting liability upstream to developers and deployers. And while Congress has not enacted a federal duty-of-care statute, the concept is gaining traction through proposals such as the bipartisan AI LEAD Act and through federal agency frameworks that increasingly expect developers to prevent foreseeable harm.
Regulators expect meaningful oversight, and they judge it by behavior, not labels: if your AI advances the workflow without human confirmation, they will treat it as autonomous, regardless of marketing language to the contrary.
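One way to formalize that checkpoint is to make the workflow structurally incapable of advancing without a recorded clinician decision. The sketch below assumes a simple draft/approved model with invented type names; it shows the shape of the gate, not a prescribed implementation.

```typescript
// Sketch of a human-in-the-loop gate: an AI recommendation cannot leave
// "draft" status without a recorded clinician sign-off. Type and field
// names are illustrative assumptions.
interface ClinicianSignOff {
  clinicianId: string;                            // identity of the licensed reviewer
  licenseNumber: string;                          // verifiable credential reference
  decidedAt: Date;
  decision: "approved" | "modified" | "rejected";
}

interface Recommendation {
  id: string;
  aiDraft: string;                                // what the model proposed
  status: "draft" | "approved" | "rejected";
  signOff?: ClinicianSignOff;                     // absent until a human reviews
}

// The only path out of "draft" runs through a clinician's decision.
function finalize(rec: Recommendation, signOff: ClinicianSignOff): Recommendation {
  const approved = signOff.decision === "approved" || signOff.decision === "modified";
  return { ...rec, status: approved ? "approved" : "rejected", signOff };
}

// Anything that acts on a recommendation (ordering, charting, messaging)
// checks the gate first, so an unreviewed draft can never advance the workflow.
function canExecute(rec: Recommendation): boolean {
  return rec.status === "approved" && rec.signOff !== undefined;
}
```

The design choice that matters is that the sign-off is data attached to the recommendation rather than a UI event, so the same gate can be enforced in the API, the order pipeline, and any later audit.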
You may want to consider concrete mechanisms such as a required clinician sign-off before any recommendation becomes actionable, explicit in-product disclosure that the user is interacting with AI, and an auditable record of every human review. These techniques may not only keep you compliant; they can give health systems and insurers the safety assurances they need before deploying any clinical AI at scale.
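Of the mechanisms above, the auditable record is the one a health system or insurer can actually inspect. Below is a hypothetical record shape under assumed field names; none of the statutes discussed prescribe a particular schema.

```typescript
// Sketch of an oversight audit trail: one append-only entry per human review,
// so a deployer can demonstrate that a licensed clinician made the final call.
// Field names are illustrative assumptions.
interface OversightRecord {
  recommendationId: string;
  clinicianId: string;
  decision: "approved" | "modified" | "rejected";
  changesMade: string | null;   // what the clinician altered, if anything
  reviewedAt: string;           // ISO-8601 timestamp
}

const auditLog: OversightRecord[] = [];

function recordReview(entry: OversightRecord): void {
  // In production this would go to durable, tamper-evident storage.
  auditLog.push(entry);
}

recordReview({
  recommendationId: "rec-001",
  clinicianId: "dr-smith",
  decision: "modified",
  changesMade: "Reduced dosage based on renal function",
  reviewedAt: new Date().toISOString(),
});
```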