Illinois Bans ChatGPT and AI from Providing Therapy
In a landmark decision, the state of Illinois has banned artificial intelligence platforms like ChatGPT from providing therapy services without human oversight. This sweeping legislation—titled the Wellness and Oversight for Psychological Resources Act—was signed into law by Governor JB Pritzker, making Illinois the first state to take such definitive action to protect the integrity of mental health care.
Under the new regulation, AI systems are explicitly barred from performing core therapeutic functions. This includes generating treatment plans, assessing emotional well-being, or offering therapeutic advice without a licensed professional supervising the process. Violations can result in fines of up to $10,000, with enforcement handled by the Illinois Department of Financial and Professional Regulation.
As stated by Mario Treto, Jr., secretary of the department, “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs.”
The law reflects a growing concern about the unregulated use of AI in sensitive healthcare fields, especially mental health. The American Psychological Association (APA) earlier this year sounded the alarm, warning federal agencies about AI chatbots posing as therapists—some of which were linked to serious incidents involving self-harm and violence.
Illinois is drawing a clear line between administrative support and actual therapy. While AI can still be used for backend tasks like scheduling, documentation, or translation, it cannot diagnose, counsel, or make autonomous clinical decisions.
Illinois isn’t alone in this push; several other states are now taking a stand as well.
This collective wave of legislation suggests that U.S. states are becoming increasingly wary of AI’s role in emotional and psychological support, especially in the absence of strict ethical and safety frameworks.
While some in the tech industry argue that AI can improve access to mental health resources, critics point to the risk of misinformation, misdiagnosis, and emotional harm, particularly among vulnerable populations.
AI platforms lack empathy, cultural sensitivity, and human judgment—all essential to mental health support. And despite advances in natural language processing, machines still struggle to recognize emotional nuance or respond appropriately to complex psychological states.
The Illinois law is likely to set a precedent for national and international regulatory frameworks. As more AI-powered wellness apps and chatbots emerge, policymakers will need to strike a balance between innovation and safety, ensuring that mental health care is never compromised by automation.
For now, Illinois has taken a strong stand: when it comes to mental health, machines can assist—but humans must lead.