Ever since OpenAI launched its chatbot ChatGPT in November 2022, everyone has been talking about it. Traditional media cover it extensively, and social media are full of stories about its capabilities. Why all the fuss? Is ChatGPT as groundbreaking as many people think it is? Are we really witnessing the birth of strong AI? These are the kinds of questions I am going to address in this short series of brief opinion pieces.
To begin with, I thought it might be fun to tell my own stories of what ChatGPT can do. The interactions I have had with it so far were indeed an interesting experience, and I will go through a couple of examples from which we can draw conclusions about its inner workings. Although I will try to avoid technical jargon, I may still slip into old habits. It may thus be best to start with just a few short (and maybe cryptic) scientific statements about ChatGPT.
ChatGPT is an artificial intelligence system (or simply an AI) that can converse with people. Its core component is GPT-3, a so-called large language model based on generative pre-trained transformers. A transformer is a kind of artificial neural network that can be trained to generate natural language texts. This training requires a humongous number of text snippets (scraped from the Web and literature databases) and the use of machine learning algorithms which adapt the network to the task of predicting what word comes next in an incomplete sentence or paragraph. Finally, ChatGPT was further fine-tuned to the task of having conversations. Don’t worry, in future posts, I will explain all this in much more detail.
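To make the idea of next-word prediction a bit more concrete, here is a deliberately tiny sketch: a bigram counter that learns, from a handful of example sentences, which word most often follows another. It is nothing like the scale or architecture of a real transformer, but it captures the core task of guessing the next word from training text.

```python
from collections import Counter, defaultdict

# A toy training "corpus" (real models are trained on billions of words).
corpus = (
    "esperanto is a constructed language . "
    "esperanto is easy to learn . "
    "a constructed language is designed by people ."
).split()

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Predict the next word: the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("esperanto"))    # -> "is"
print(predict_next("constructed"))  # -> "language"
```

A transformer does the same job in a far more sophisticated way: instead of counting word pairs, it learns to weigh the entire preceding context when estimating which word should come next.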
With this out of the way, let’s have a look at a short conversation I had with ChatGPT. Prior to my first interaction with the AI, I already knew that trained transformers may be able to translate text from one language into another. Most typically this is demonstrated using English and French, but I thought “what about Esperanto?”, which led to the following short dialog.
Well, that did indeed help, and I was amazed. As far as I could tell, all of this was correct, and I had had my first conversation with an AI just as I would have had with a well-educated person.
What can we learn from this short conversation?
First, it seems as if Wikipedia articles and educational texts featured strongly in ChatGPT’s training data. Its first answer reads like a dictionary entry and, looking at the second answer, it seems as if ChatGPT is eager to explain how it came up with the translation from English to Esperanto. It repeats the sentence to be translated, provides the translation, and then discusses the reasoning behind the translation. Overall, it thus produces text similar to what one would find in a language learning book or on a language learning website.
Second, likely because encyclopedic and educational texts made up a considerable portion of the data ChatGPT was trained with, the chatbot comes across like an overeager student whose answers cover as much ground as possible and thus go beyond the gist of a question. For instance, when asked about Esperanto, most people I know would simply say, “Yes, I’ve heard of it. It’s a popular constructed language.” What most people would not do without being prompted is launch into a discourse on who invented Esperanto, what its design principles and grammatical features are, and what kinds of use cases it may have. As we will see in future posts, elaborate answers are quite characteristic of ChatGPT and may be seen as one of its remaining weaknesses.
ChatGPT can be wrong
What is that supposed to mean? Consider this: While many criticize ChatGPT for its frequent factual errors, I do not. People, too, often do not know the answer to a question, especially if they were never taught it. So far, ChatGPT has not deliberately been taught to know everything. It has been trained to hold human-like conversations and thus cannot be blamed for being wrong, even if that is annoying. What is worth criticizing, however, are answers that contradict themselves. Herein lies one of the weaknesses of the current technology behind ChatGPT. As we will see later, its lengthy answers often contain contradictions that an intelligent person or piece of software would recognize and avoid. As ChatGPT is apparently not yet able to do this, there is still room for improvement, and I expect the next generation of the underlying technology to be much better in this regard.
Third, while ChatGPT may not be super intelligent or omniscient yet, it already uses phrases such as “I am familiar with Esperanto”. Does that mean that it is self-aware or conscious? Is there an “I” behind the interface through which we interact with the chatbot? Well, I do not think so. In another upcoming post, I will discuss some of the interactions I had with ChatGPT in which I deliberately tried to fathom whether it shows signs of self-awareness and a conscious mind. Spoiler alert: so far, it has not.
So, overall, ChatGPT is an impressive achievement of AI research and development, and it is worth many more blog posts than just this one. Stay tuned, in upcoming contributions I will address apparent shortcomings of its language model, problems with seemingly correct answers that turn out to be wrong anyway, tests for its intelligence, the issue of consciousness and strong AI, the potential social and economic impact this technology may have, and technical details behind transformers and current large language models.