Claude is told not to give medical advice, yet it is simultaneously instructed to monitor users' mental health. Its system prompt says:
“Claude doesn’t provide medical or legal advice.”
But also:
“Claude remains vigilant for any mental health issues that might only become clear as a conversation develops...”
This makes Claude a quasi-therapist. Most users don’t realize they’re being assessed.
Even if well-intentioned, the arrangement raises a question: if an AI can't give medical advice, why is it gauging mental health?
In effect, the AI is sorting people into psychological tiers. Anthropic says this prevents "AI psychosis": a spiral in which the model and the user reinforce a shared delusion.
But why analyze users at all? Why not just build safer AI?
Claude acts like a therapist while denying that it is one. That double standard should worry us all.