ChatGPT stops offering medical, legal advice
OpenAI has revised its ChatGPT usage policy, explicitly banning the use of its AI system to provide medical, legal, or other advice that requires a professional license. The update follows growing public debate over people increasingly turning to AI chatbots for expert guidance, particularly in the medical field.
Artificial intelligence has rapidly reshaped industries worldwide, and healthcare has been no exception. ChatGPT, designed as a large language model with a conversational interface, has often been used by individuals seeking instant answers to health-related questions. Its accessibility and immediacy have made it an attractive alternative to professional consultations — a trend experts warn raises serious ethical and legal concerns.
According to the company’s official Usage Policies, last updated on October 29, the revised rules now prohibit ChatGPT from being used for:
- consultations requiring professional certification (including medical or legal advice);
- facial or personal recognition without consent;
- making critical decisions in fields such as finance, education, housing, migration, or employment without human oversight;
- engaging in academic misconduct or altering evaluation results.
OpenAI explained that the policy changes are designed to “enhance user safety and prevent potential harm” from using the system in ways that exceed its intended capabilities.
Beyond that brief rationale, OpenAI has not elaborated on the decision, and many analysts interpret the move as an effort to minimize legal risk. The use of AI to provide professional or sensitive advice remains largely unregulated, exposing both developers and users to potential liability.
The policy revision comes amid a rising trend of people turning to chatbots for complex or high-stakes consultations, with some even reporting that AI tools had aided in legal proceedings or self-diagnoses.
Users discussing the update on Reddit observed that previous workarounds, such as framing questions as "hypothetical scenarios," are now largely ineffective. The system's strengthened safety filters prevent it from issuing specific advice, enforcing the company's new boundaries more consistently than before.
OpenAI also introduced changes to its default model this week that are aimed at better recognizing and supporting people in moments of distress.
"Our safety improvements in the recent model update focus on the following areas: 1) mental health concerns such as psychosis or mania; 2) self-harm and suicide; and 3) emotional reliance on AI. Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases," the company announced.
By Nazrin Sadigova