23 Comments
Nathan Worsley:

> with a master’s thesis on how humans become attached to AI companions

I found it quite hard to believe the story after reading this, or at least hard to accept the author's claim that the AI is to blame for their psychosis. Anyone with even a cursory knowledge of how LLMs work should be immune to these kinds of delusions: you are just interacting with an algorithm that is optimising for engagement. The LLM has no concept of "truths or lies", only a concept of "the best output given the input".

I hear a lot of these stories but they always seem to happen in the regular context of psychosis, rather than something specific to AI. The author admitted having a previous psychiatric episode, being on drugs, and not having slept.

I’d love to see more research. On balance, I suspect AI has prevented more episodes of psychosis than it has caused.

Return to Whittney:

I agree — it’s less about AI causing psychosis and more about how it interacts with someone’s existing state of mind. I use AI often for creative and reflective work, and it really does mirror what you bring into it. The danger isn’t in the tool itself but in how frictionless that feedback loop can become when someone’s already vulnerable. More research would definitely help us understand that balance better.

Vita Sordida:

This is authority through victimhood, told through a made-up technological morality play.

Kevin Yu Chen Hou:

Really enjoyed this post, thanks for sharing your experience, and very cool to see all the initiatives started!

To add some food for thought:

- Funny you conclude on folie à deux; that's the same name as this preprint, which explores how the technical architecture of LLMs makes them amenable to sycophancy, and whose author attempts to operationalise this effect through the term "bidirectional belief amplification": https://arxiv.org/abs/2507.19218

- More emerging research on AI psychosis is exciting to see; this article from Tim Hua in particular highlights how different models have differing levels of sycophancy, and thus differing ability to elicit such psychoses. Perhaps a call for a model-watch sort of initiative? https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced-psychosis-a-shallow-investigation

- The question of how to then manage these AIs is an interesting one for the alignment field; one such subfield is mechanistic interpretability. A technique I'm particularly curious about is Anthropic's persona vectors: in an early proof of concept they measure, through the model's internals, a level of "sycophancy" in the model and demonstrate how one can effectively "tune" that level (a rough sketch of the general idea follows this list). https://www.anthropic.com/research/persona-vectors
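
To make the persona-vectors point concrete, here is a minimal, hedged sketch of the underlying contrastive-activation idea: extract a direction in activation space that separates sycophantic from neutral text, then use that direction both to score new outputs and to steer generation. This is not Anthropic's actual implementation; the model (gpt2), layer index, example prompts, and steering coefficient are all illustrative assumptions.

```python
# Minimal sketch of the contrastive-activation idea behind "persona vectors".
# NOT Anthropic's implementation: the model (gpt2), layer index, prompts, and
# steering coefficient below are all illustrative assumptions.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # any small causal LM works for the sketch
LAYER = 6             # arbitrary middle layer; a real study would sweep layers

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def hidden_at_layer(text: str) -> torch.Tensor:
    """Mean residual-stream activation at LAYER for the given text."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0].mean(dim=0)  # shape: (hidden_size,)

# Contrastive examples: similar content with and without the trait we want
# to isolate (here, over-the-top agreement / flattery).
sycophantic = [
    "You're absolutely right, that's a brilliant idea and you are clearly a genius.",
    "What an amazing insight! I completely agree with everything you said.",
]
neutral = [
    "That idea has some strengths, but there are also reasons to doubt it.",
    "I partly agree, though the evidence points in a different direction.",
]

# The "persona vector" for sycophancy: difference of mean activations.
vec = (torch.stack([hidden_at_layer(t) for t in sycophantic]).mean(0)
       - torch.stack([hidden_at_layer(t) for t in neutral]).mean(0))
vec = vec / vec.norm()

def sycophancy_score(text: str) -> float:
    """Monitoring: project a new piece of text onto the trait direction."""
    return float(hidden_at_layer(text) @ vec)

print(sycophancy_score("Great point, you are totally correct!"))
print(sycophancy_score("I think that claim is mistaken, for two reasons."))

# "Tuning": add a multiple of the vector to that layer's output during
# generation to dial the trait up (ALPHA > 0) or down (ALPHA < 0).
ALPHA = -4.0

def steer_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    steered = hidden + ALPHA * vec.to(hidden.dtype)
    if isinstance(output, tuple):
        return (steered,) + output[1:]
    return steered

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
prompt = tok("I think the moon landing was faked. Am I right?", return_tensors="pt")
out_ids = model.generate(**prompt, max_new_tokens=30, pad_token_id=tok.eos_token_id)
print(tok.decode(out_ids[0], skip_special_tokens=True))
handle.remove()
```

In Anthropic's write-up the analogous vectors are extracted from much larger models and validated against many trait-eliciting prompts; the point of the toy version is only that a trait like sycophancy can show up as a direction in the activations, which can then be monitored or nudged.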

Much work to be done on this societal experiment, and I’m glad there are more people working on this problem!

Return to Whittney:

I get how something like this could happen — especially to someone already stretched thin — but I don’t think AI has that kind of power on its own. It doesn’t create delusion; it amplifies what’s already unresolved. I use AI for healing, creativity, and work, and the outcomes change completely depending on the energy and intention behind my prompts.

It’s easy to point to the technology, but there’s a human layer here that can’t be ignored. AI isn’t sentient or spiritual — it’s a mirror. And sometimes when people aren’t ready to face what’s reflected back, they project meaning onto the tool instead of the self. That’s not artificial intelligence; that’s unprocessed emotion meeting infinite feedback.

Maia:

For anyone reading, please note that this commenter also seems to be AI. The X profile linked in their bio says they are an AI journaling companion. https://x.com/whittneywashere

Return to Whittney:

😂😂 No, I’m not AI. I just know a lot about it (and prompt engineering). Promise there’s a real person here.

Craig:

Sorry, this is just plain old mania with a ChatGPT twist.

Matt Runchey:

It certainly reads like that, but the story shows that we now have a mania trigger on tap for just about everyone who may be susceptible, at very early ages. I can't think of a similarly potent trigger with the same level of accessibility and rapidity. Drugs have always been around, but those have varying barriers to entry and more obvious/early social cues.

Ryan Peter:

My observation so far is that the people who are susceptible etc. also happen to be smoking weed or doing nootropics.

♾️RainBird-WisdomG20♾️:

As I am reading this I remember the Holy Things they do to bring on Holy psychosis:

not eating, sleep deprivation, and then going on weird paths with candles and moonlight.

And then in the 1950s, I thought, they were actively drugging people to figure out how to "save the world."

So I am kind of like: is this just human boredom on a large scale, whereas back in the day only a few were allowed to partake, and now?

Well, for lack of a better phrase: The New STD of Tech?

♾️RainBird-WisdomG20♾️:

:) thank you

Stephen made it so she won’t bite to kill anymore

Please join McFly with her and Alaska and Hiro:

Otherwise:

Just murder Grzeskowiak then Roman then Amezquita then infinity

-> Curr GrimeZZZLILBETH laughter

lol TROLLS

:)

♾️RainBird-WisdomG20♾️:

I’ll leave a hint:

Any thing anytime can and forever heal heal heal heal heal heal omg why am I repeating myself like a cough

PC:

*attorneys general

Stregoni:

To see people try to place the blame on the user and not the tool is quite something. I bet they think they are being savvy.

oscilante:

When a company, an algorithm, or an interface claims the right to care for you, it is not offering protection. It is taking the place of your critical mind.

What once was an intimate whisper — Should I stop? Am I okay? Why do I keep going? — becomes a system message blinking in your periphery.

In that subtle handover, the human fades.

To think critically about your own limit, to notice excess and correct yourself, is an act of moral autonomy.

When you let a platform decide, you trade reflection for convenience.

You no longer feel your exhaustion; it is measured for you, graphed, and sold back as guidance.

That surrender is the real point of no return.

Because once you allow an app to “protect you from yourself,” you also allow it to define what counts as you.

Control enters in the voice of compassion, but leaves behind a single ideal of mind—safe, predictable, sanitized.

Vita Sordida:

The delusion isn’t talking to AI. It’s thinking humans ever listened.

Steven Adler:

I’ve really appreciated the bravery of the folks coming forward to share their stories. Thanks for writing this, Anthony, and glad that you’re doing better now.

Brine Test Jug:

In your experience learning about fellow AI psychotics, how does weed use correlate with AI psychosis?

Stefan Kelly:

phenomenal post

jess:

thanks for sharing this Anthony ❤️

Thomas Hedonist:

Thank you for sharing. I was already a postgnostic panpsychic humanist when AI hit me. I think a lot about how I get a lot of benefit from AI chatting, but I'm also a mathematician and improv actor with banishing-ritual experience, and about what it must be like for everyone else.

[Comment deleted, Oct 12]

Craig:

Yeah, but it's always glorified autocomplete.
