Illinois Bans ChatGPT and AI from Providing Therapy
In a landmark decision, the state of Illinois has banned artificial intelligence platforms like ChatGPT from providing therapy services without human oversight. This sweeping legislation—titled the Wellness and Oversight for Psychological Resources Act—was signed into law by Governor JB Pritzker, making Illinois the first state to take such definitive action to protect the integrity of mental health care.
Under the new regulation, AI systems are explicitly barred from performing core therapeutic functions. This includes generating treatment plans, assessing emotional well-being, or offering therapeutic advice without a licensed professional supervising the process. Violations carry fines of up to $10,000, enforced by the Illinois Department of Financial and Professional Regulation.
As stated by Mario Treto, Jr., secretary of the department, “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs.”
The law reflects a growing concern about the unregulated use of AI in sensitive healthcare fields, especially mental health. The American Psychological Association (APA) earlier this year sounded the alarm, warning federal agencies about AI chatbots posing as therapists—some of which were linked to serious incidents involving self-harm and violence.
Illinois is drawing a clear line between administrative support and actual therapy. While AI can still be used for backend tasks like scheduling, documentation, or translation, it cannot diagnose, counsel, or make autonomous clinical decisions.
Illinois isn’t alone in this push. Several other states are now taking a stand with similar restrictions on AI in mental health care.
This collective wave of legislation suggests that U.S. states are becoming increasingly wary of AI’s role in emotional and psychological support, especially in the absence of strict ethical and safety frameworks.
While some in the tech industry argue that AI can improve access to mental health resources, critics point to the risk of misinformation, misdiagnosis, and emotional harm, particularly among vulnerable populations.
AI platforms lack empathy, cultural sensitivity, and human judgment—all essential to mental health support. And despite advances in natural language processing, machines still struggle to recognize emotional nuance or respond appropriately to complex psychological states.
The Illinois law is likely to set a precedent for national and international regulatory frameworks. As more AI-powered wellness apps and chatbots emerge, policymakers will need to strike a balance between innovation and safety, ensuring that mental health care is never compromised by automation.
For now, Illinois has taken a strong stand: when it comes to mental health, machines can assist—but humans must lead.