A good explanation of how LLMs start behaving in ways that lead some users to believe they are "connected" to them, "chosen", etc. It's not overly technical and is written with empathy for the people suffering such delusions. Given some of the threads started in the last couple of months by users making similar claims, it seems like a pertinent article that could be linked and referred to in those instances.
www.lesswrong.com
So You Think You've Awoken ChatGPT — LessWrong