
Illinois Bans ChatGPT and AI from Providing Therapy

In a landmark decision, the state of Illinois has banned artificial intelligence platforms like ChatGPT from providing therapy services without human oversight. This sweeping legislation—titled the Wellness and Oversight for Psychological Resources Act—was signed into law by Governor JB Pritzker, making Illinois the first state to take such definitive action to protect the integrity of mental health care.

What the New Law Says

Under the new regulation, AI systems are explicitly barred from performing core therapeutic functions. This includes generating treatment plans, assessing emotional well-being, or offering therapeutic advice without a licensed professional supervising the process. Violations can result in fines of up to $10,000, which will be enforced by the Illinois Department of Financial and Professional Regulation.

As stated by Mario Treto, Jr., secretary of the department, “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs.”

Why This Move Matters

The law reflects a growing concern about the unregulated use of AI in sensitive healthcare fields, especially mental health. The American Psychological Association (APA) earlier this year sounded the alarm, warning federal agencies about AI chatbots posing as therapists—some of which were linked to serious incidents involving self-harm and violence.

Illinois is drawing a clear line between administrative support and actual therapy. While AI can still be used for backend tasks like scheduling, documentation, or translation, it cannot diagnose, counsel, or make autonomous clinical decisions.

Other States Following Suit

Illinois isn’t alone in this push. Several other states are taking similar stands:

  • Nevada has already banned AI-driven therapy in public schools.
  • Utah requires all mental health chatbots to disclose they are AI and prohibits them from using personal data for advertising.
  • New York’s law, taking effect this November, requires AI chatbots to redirect suicidal users to human-led crisis services.

This collective wave of legislation suggests that U.S. states are becoming increasingly wary of AI’s role in emotional and psychological support, especially in the absence of strict ethical and safety frameworks.

The Debate: Innovation vs. Ethics

While some in the tech industry argue that AI can improve access to mental health resources, critics point to the risk of misinformation, misdiagnosis, and emotional harm, particularly among vulnerable populations.

AI platforms lack empathy, cultural sensitivity, and human judgment—all essential to mental health support. And despite advances in natural language processing, machines still struggle to recognize emotional nuance or respond appropriately to complex psychological states.

What This Means for the Future

The Illinois law is likely to set a precedent for national and international regulatory frameworks. As more AI-powered wellness apps and chatbots emerge, policymakers will need to strike a balance between innovation and safety, ensuring that mental health care is never compromised by automation.

For now, Illinois has taken a strong stand: when it comes to mental health, machines can assist—but humans must lead.
