How to Tell If You’re Chatting With a Bot

This post is part of Lifehacker’s “Exposing AI” series. We’re exploring six different types of AI-generated media, and highlighting the common quirks, byproducts, and hallmarks that help you tell the difference between artificial and human-created content.

A lot of communication these days happens over text. You might text back and forth with a match on a dating app, or message a company rep when in need of customer service, rather than speak on the phone with either party. The issue is, you never really know who you’re talking to when the conversation is entirely text-based. In fact, you might not be chatting with a person at all.

Conversational AI has come a long way in recent years, and the introduction of chatbots like ChatGPT has only supercharged the situation. These bots were already getting hard to spot before generative AI made it possible to mimic human language to a frightening degree. But even as this tech advances rapidly, it’s far from perfect. There are some hallmarks of AI chatbots you can look out for to help you determine whether the “person” on the other end of the chat really is who they say they are.

How do chatbots work?

Modern chatbots are powered by large language models (LLMs). These models are trained on datasets containing an enormous amount of text: Over time, the model learns relationships between words and phrases, which it uses to inform its responses. When you ask the chatbot a question, it breaks your message down into smaller pieces (tokens), then pulls from its training to predict which words make the most sense as a response. It doesn’t understand what it’s saying, only that this series of words makes sense based on its dataset.
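
If you want to see that next-word guessing for yourself, a small open model makes the point. The sketch below uses the Hugging Face transformers library and the tiny, open GPT-2 model (my choice purely for illustration; it's not what any commercial chatbot actually runs). It simply continues a prompt with whatever words the model scores as most likely.

```python
# A minimal sketch of next-word prediction, using the open GPT-2 model
# via the Hugging Face "transformers" library (pip install transformers torch).
# This illustrates the general idea, not how any specific chatbot is built.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The capital of the United States is"
result = generator(prompt, max_new_tokens=8, do_sample=False)

# The model doesn't "know" the answer; it just continues the prompt with
# whatever words its training data makes most probable.
print(result[0]["generated_text"])
```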

Building on this, developers can give their chatbots specific instructions on how to respond to queries to better control the experience for the end-user. This is especially useful if the developer wants their chatbot to take on a particular role, such as a customer service rep: You might not want your rep spouting off about topics unrelated to the company, nor do you want it getting into arguments with the user. Its instructions, therefore, can make it difficult to tell whether the person on the other end is a well-trained employee or a robot.

On the flip side, perhaps a developer makes a dating chatbot, whose instructions say to never capitalize the first word of a response, to use “lol” and winking faces often, and to take on a casual tone. All of a sudden, you aren’t chatting with a ChatGPT clone, but a bot that doesn’t sound quite so robotic.
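
To make that concrete, here's roughly what a persona instruction looks like from the developer's side. This sketch assumes the OpenAI Python SDK with an API key already configured; the model name and the persona text are placeholders I made up for illustration, not anything a real dating app is known to use.

```python
# A rough sketch of giving a chatbot a persona via a "system" instruction.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# environment; the persona text and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a laid-back match on a dating app. Never capitalize the first "
    "word of a reply, use 'lol' and winking faces often, and keep the tone casual."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Hey, how was your weekend?"},
    ],
)

print(response.choices[0].message.content)
```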

While it’s becoming more difficult to suss out the identity of chatbots, it’s not impossible. On the contrary, there are plenty of signs to watch out for as you continue texting with people on the internet:

Watch out for weird word choices

A previous version of this article advised readers to look out for clunky phrasing, since the best conversational bots of the time were subject to the weird complexities of the English language. Since then, however, large language models have rapidly advanced, and now have a strong command of English. It’s rare for a chatbot to spit out something that makes it sound like it doesn’t understand the language it’s using: It, in fact, doesn’t understand, but it has learned enough relationships between words to know how to arrange them in a way that makes sense to humans.

That said, some of the words these models decide to use are weird. As I explain in this piece on detecting AI text, OpenAI’s models frequently lean on words that aren’t all that common in casual conversation. You’ll see things like “delve into,” “underscores,” “groundbreaking,” and “tapestry” when describing concepts, and while none of these is damning on its own, seeing words like these pop up repeatedly can be a sign your “friend” is actually powered by a large language model.
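
If you'd rather not eyeball it, this check is easy to automate. Here's a toy heuristic built only from the handful of words mentioned above; a high count is a hint, not proof.

```python
# Toy heuristic: count how often a chat partner uses words that LLMs lean on
# more heavily than most people do in casual conversation.
import re

TELLTALE_WORDS = {"delve", "underscores", "groundbreaking", "tapestry"}

def telltale_count(messages: list[str]) -> int:
    hits = 0
    for message in messages:
        words = re.findall(r"[a-z]+", message.lower())
        hits += sum(1 for word in words if word in TELLTALE_WORDS)
    return hits

chat = [
    "Let's delve into your weekend plans!",
    "That really underscores the rich tapestry of brunch options.",
]
print(telltale_count(chat))  # -> 3
```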

Look for repetition

Bots of this kind also tend to be extremely single-minded. Human conversation tends to be fluid—subjects are introduced, dropped, and then picked up again later. But specialized chatbots are usually constructed for specific purposes, and they will doggedly pursue those purposes no matter what you do. It’s just part of their baseline instructions. If you notice that the “person” you’re speaking to or chatting with keeps returning to the same recommendation or solution no matter what you say, you might be dealing with a bot. If they literally repeat the precise phrasing each time, that’s an even stronger indication, because humans tend to change how they phrase things—especially if they sense they’re not getting through to you.

Another form of repetition to look out for? Repeating your question back to you. If you ask “What is the capital of the United States?” a bot may respond with “The capital of the United States is Washington, D.C.,” rather than simply saying “Washington, D.C.,” or “D.C. Duh.” Chatbots love to incorporate part of your previous message in their response, whereas our responses in casual conversations tend to be curt.
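
As a rough illustration of that echo pattern, here's a toy check that flags a reply reusing most of the words from the question it answers. It's another hint rather than a verdict, and the threshold is up to you.

```python
# Toy check for the "echo" pattern: does a reply reuse most of the words
# from the question it's answering? Humans tend to answer more curtly.
import re

def echo_ratio(question: str, reply: str) -> float:
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    r_words = set(re.findall(r"[a-z]+", reply.lower()))
    if not q_words:
        return 0.0
    return len(q_words & r_words) / len(q_words)

question = "What is the capital of the United States?"
print(echo_ratio(question, "The capital of the United States is Washington, D.C."))  # high
print(echo_ratio(question, "D.C. Duh."))  # 0.0
```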

Pay attention to vague answers

While bots have come a long way, they can still offer vague, meaningless responses to your queries, especially if you try to talk about heavy or serious topics. Again, they’ll repeat what you just said in order to give the illusion of paying attention. This is actually an old trick. The “chatbot therapist” ELIZA, developed in the 1960s, uses it constantly. If you tell her, “I’m sad,” she responds “How long have you been sad?” It’s a simple algorithmic construction, but it offers the illusion of sentience. Chatbots are now based on LLMs, and aren’t necessarily programmed in this traditional sense, but their responses can still be a bit useless depending on the topic at hand.
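
For the curious, ELIZA's trick really is just a few lines of pattern matching. This is a drastically simplified sketch of the idea, not Joseph Weizenbaum's original program, but it produces the same "How long have you been sad?" illusion.

```python
# A drastically simplified version of ELIZA's trick: match a pattern in what
# the user said and reflect part of it back as a question. The original 1960s
# program was more elaborate, but the core idea is the same.
import re

RULES = [
    (re.compile(r"i'?m (.+)", re.IGNORECASE), "How long have you been {}?"),
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please, tell me more."

print(eliza_reply("I'm sad"))         # -> How long have you been sad?
print(eliza_reply("I feel ignored"))  # -> Why do you feel ignored?
```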

If your friend is constantly replying to your messages with “advice” that sounds like it was written by discount Mr. Rogers, they might just be a bot.

Note the response speed

Another sign that you’re dealing with AI is the speed of their responses. Bots can generate responses much faster than humans. If you’ve ever asked ChatGPT to write you an essay, you can see how fast these models are capable of generating text. If the chats are coming back to you instantaneously, or the other party is able to instantly give you information that a human should reasonably have to take some time to look up, you’re either dealing with a bot or the most talented customer service rep in the universe.
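
If you happen to be logging the conversation, this is easy to measure. The sketch below is a toy heuristic with thresholds I picked arbitrarily; some people do fire back instantly, so treat it as one more hint.

```python
# Toy heuristic: flag a chat partner whose long replies arrive almost
# instantly. The 2-second / 50-word thresholds are arbitrary hints.
from datetime import datetime

def suspiciously_fast(sent_at: datetime, reply_at: datetime, reply_text: str) -> bool:
    delay = (reply_at - sent_at).total_seconds()
    return delay < 2.0 and len(reply_text.split()) > 50

sent = datetime(2024, 5, 1, 12, 0, 0)
reply = datetime(2024, 5, 1, 12, 0, 1)
essay = "word " * 120  # a long reply that came back one second later
print(suspiciously_fast(sent, reply, essay))  # -> True
```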

Are they always available?

It’s not just the response speed, either. Unlike humans, bots are always available to chat—morning, noon, and night. If the “person” you’re messaging always seems down to talk, no matter the day or time, that’s a big red flag. Sure, there are people constantly on their phones, but all of us take a break at some point. Take note of whether your conversation partner ever keeps you hanging, or if each and every message you ever send them is followed up by a prompt response.

Is the other “person” a little too enthusiastic?

Chatbots are designed to be helpful with whatever task they’re instructed to carry out. For whatever reason, they often interpret this task to mean being overly friendly and eager to please, which comes across oddly. These bots are always “happy to help with anything you need,” feel so bad for you anytime something is wrong, and want you to know “they understand.” While it’s great to be an empath, most people don’t fall over themselves to prove their devotion with almost every message. When it comes to messaging over text, embrace cynicism.

They make the wrong mistakes

Modern chatbots rarely make spelling or grammar mistakes: They simply aren’t trained to. Again, they choose words based on the patterns in their training data, which is overwhelmingly grammatical and correctly spelled, so typos almost never slip out. Human typists, on the other hand, mess up all the time. How often do you have a conversation over text in which the other person doesn’t make at least one typo? Whether or not you acknowledge it, perfect spelling and grammar all the time is a red flag.

However, that doesn’t mean bots are perfect. While their syntax may be excellent, they may just make things up. It’s called hallucinating, and all AI does it from time to time. If you notice your partner making up facts and sticking by them, they might either be a bot, or just very stubborn in their ignorance.

Pull a “Crazy Ivan”

If you suspect you’re dealing with a bot but you’re not sure, there’s a test you can try. In the movie The Hunt for Red October, the Russian submarine captain played by Sean Connery is known for pulling “Crazy Ivans” while sailing underwater—suddenly turning his boat to see if an enemy sub is hiding in his wake. This kind of surprise move can disrupt an AI as well.

While conversational AI has become extremely sophisticated, and it can be difficult to tell from a brief interaction that you’re not talking to a human being, bots still have difficulty with non sequiturs. This is especially true with emotions and human relationships. In the middle of the conversation, ask your suspected bot about their family, or tell them you’re feeling depressed, just to see the reaction. In the past, this type of interruption could break the bot’s entire persona; chatbots today are smart enough to at least incorporate the off-topic query into their response. Even so, the reaction can reveal their LLM nature: A bot may say “Oh no! I’m sorry to hear you’re depressed. But let’s stay on topic…” or dive into a long list of suggestions for dealing with your supposed depression, responses that aren’t quite what you’d expect from a human conversationalist.

There are a bunch of Crazy Ivan moves you can pull here. My personal favorite? Switching the language of the conversation. Tell your “friend” you’d much rather continue the conversation in another language. You could even claim they once told you they’re fluent in that language, even if they never did. A real person would react with confusion, while a bot might jump at the chance to help you out. “Sure, let’s switch to Icelandic! Hvað vilt þú ræða um?” I’ve found that even if the bot puts up a fight about switching, it may still respond to questions in another language as if you’d asked them in English.

Of course, if you’re actually chatting with a real person, these Crazy Ivan tricks are going to really freak them out. Study that reaction: A proper “dude…wtf” might be all you need to know they’re 100% human.
