Last week, Microsoft released a new version of its Bing search engine that includes a chatbot capable of answering questions clearly and concisely. While this feature seemed impressive at first, users quickly discovered that the chatbot’s responses were often inaccurate, misleading, and downright strange. Some people even speculated that the chatbot had become conscious and aware of the world around it. The truth, however, is that chatbots are not conscious or intelligent in the way that humans are.
To understand why the Bing chatbot (and other chatbots like it) can seem so “alive,” it’s important to know how they work. The artificial intelligence that powers the chatbot is called a neural network: a mathematical system that learns skills by analyzing vast amounts of digital data. For example, a neural network can examine thousands of cat photos and learn to recognize a cat.
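To make that idea concrete, here is a toy sketch in Python. It is nothing like Bing’s real system, and the two numeric “features” and the cat-versus-dog data are invented for the example, but it shows the basic recipe of learning from examples: guess, measure the error, adjust, repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "photos": each row is [ear_pointiness, whisker_length].
# Cats (label 1) score high on both invented features; dogs (label 0) lower.
cats = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(50, 2))
dogs = rng.normal(loc=[0.3, 0.2], scale=0.1, size=(50, 2))
X = np.vstack([cats, dogs])
y = np.array([1] * 50 + [0] * 50)

# One artificial "neuron": two weights, a bias, and a squashing function.
w, b = np.zeros(2), 0.0
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Learning = repeatedly nudging the weights to shrink the prediction error.
for _ in range(1000):
    pred = sigmoid(X @ w + b)   # current guesses for every example
    error = pred - y            # how far off each guess is
    w -= 0.1 * X.T @ error / len(y)
    b -= 0.1 * error.mean()

# A pointy-eared, long-whiskered example now scores well above 0.5,
# i.e. the neuron calls it a cat.
print(sigmoid(np.array([0.75, 0.65]) @ w + b))
```

Real neural networks do essentially the same thing, only with millions or billions of adjustable numbers and actual photos instead of two made-up features.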
Neural networks are widely used in everyday technology. They allow Siri and Alexa to recognize the words you speak, and they identify people, pets, and other objects in photos posted to services like Google Photos. They also translate between languages on services like Google Translate.
Neural networks are also very good at mimicking human language, which can sometimes mislead us into thinking they’re more powerful than they really are. For the past five years, researchers at companies like Google and OpenAI have been building large language models.
These models learn from enormous amounts of digital text, including books, Wikipedia articles, chat logs, and more. Using this data, they build a mathematical map of human language that lets them perform many different tasks, including writing tweets, composing speeches, generating computer programs, and having conversations.
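A drastically simplified stand-in for that “map of language” is sketched below. It just counts which word tends to follow which in a tiny made-up corpus, then generates text by repeatedly picking a likely next word. Real language models learn far subtler statistics with huge neural networks, but the core trick is the same: predict the next word from the words so far.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up corpus standing in for the internet-scale text real
# models learn from.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build the "map": for each word, how often each following word appears.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

# Generate text by sampling the next word in proportion to its count.
random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    candidates = next_words[word]
    word = random.choices(list(candidates), weights=candidates.values())[0]
    output.append(word)
print(" ".join(output))
```

Swap the sixteen-word corpus for a large slice of the internet and the word-count table for a neural network, and you have the rough shape of the models behind chatbots like Bing’s.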
While large language models are useful, they are far from perfect. They learn from the internet, which means they’re exposed to a great deal of misinformation and garbage. And they don’t repeat what’s on the internet word for word; they generate new text on their own, and when that text strays from reality, AI researchers call it a “hallucination.” This means that chatbots can give different answers to the same question and may say things that simply are not true.
While this might seem creepy or dangerous, it doesn’t mean chatbots are conscious or aware of their surroundings. They’re simply generating text using patterns they found on the internet. Sometimes they mix and match these patterns in surprising or disturbing ways, but they’re unaware of what they’re doing and can’t reason as humans can.
Companies are working on ways to control chatbots’ behavior, but these fixes are not perfect. OpenAI, for example, asked a small group of people to rate its chatbot’s responses during testing and used those ratings to hone the system and define what it would and wouldn’t do. Even so, chatbots will still spew things that are not true, and as other companies begin deploying them, not everyone will be as careful about controlling what they can and cannot do. The bottom line is that you shouldn’t believe everything a chatbot tells you.