People have predicted and feared the possibility of an AI takeover for decades—long before ChatGPT was a household name. But even as certain tech companies seem to be working towards AGI (artificial general intelligence), none of the consumer-facing products on the market have yet crossed this threshold, so perhaps it won’t ever happen—even if ChatGPT appears to be starting conversations with some users.
On Sunday, one Redditor took to r/ChatGPT to share a bizarre experience: ChatGPT initiated a conversation with the Redditor on its own, without being prompted first. The bot started the chat with a message reading, “How was your first week at high school? Did you settle in well?” The Redditor responded, asking if ChatGPT just messaged them first. The bot confirmed, “Yes, I did! I just wanted to check in and see how things went with your first week of high school. If you’d rather initiate the conversation yourself, just let me know!”
Obviously, on the surface, this is, uh, concerning. The idea that an AI bot—ChatGPT, no less—is reaching out to users on its own doesn’t sit well with those of us with any level of anxiety about AI self-awareness. Sure, ChatGPT was being polite by inquiring about the Redditor’s first day of school, but I don’t need my chatbots to be polite: I need them to stay in their lane, please and thank you.
The Redditor says they noticed the message when opening a conversation with ChatGPT, so the bot didn’t ping them with an unprompted notification. Other Redditors in the comments claimed the same thing had happened to them. In one similar account, a user told ChatGPT about some health symptoms, and a week later the bot reached out to ask how they were feeling. This post also blew up just days after OpenAI began rolling out o1, a new model that leans on deeper thought processes and reasoning. Good timing, guys.
Personally, my first reaction was that the post was faked. It’d be easy enough to Photoshop a screenshot of this conversation, post it to Reddit, and go viral, fueled by people’s interest in and fears of AGI. The Redditor did share an OpenAI link to the conversation, but even this might not be true verification. In a post on X, AI developer Benjamin De Kraker demonstrated how this conversation could have been manipulated: You can instruct ChatGPT to respond with a specific question as soon as you send the first message. Then, you delete your message, which pushes ChatGPT’s message to the top of the chat. When you share the link, it appears as if ChatGPT messaged you unprompted.
While there are multiple reasons to believe this didn’t actually happen, it apparently did—but not in the way you might think. OpenAI told Futurism on Monday that it had fixed a bug responsible for ChatGPT appearing to start conversations with users. The issue would occur whenever the model tried to respond to a message that failed to send and appeared blank. According to the company, ChatGPT would compensate by either sending a random message or pulling from its memory.
So, what likely happened in this case is the Redditor opened a new chat and either triggered a bug that sent a blank message, or accidentally sent a blank message themselves. ChatGPT reached into its memory, and leaned on the fact that it knew the Redditor was starting school to respond with something it thought would be relevant. I haven’t been able to find a comment from the Redditor stating whether or not they have memory enabled for their ChatGPT account, but it seems safe to say (at this point) that ChatGPT hasn’t gained consciousness and started arbitrarily reaching out to users.