ChatGPT: Has a chatbot finally achieved self-awareness?


In my previous post, I briefly explained that ChatGPT is built on top of a large language model and therefore draws its capabilities from statistical language patterns rather than linguistic rules. In other words, language models are trained on billions of text snippets to learn how likely certain words are to co-occur with others. They then use these probabilities to automatically generate texts that read remarkably well. The overwhelming success and dominance of this thoroughly mathematical way of modeling language corroborates a famous sentiment attributed to Frederick Jelinek, a leading pioneer of natural language processing, who is often quoted as saying: “Whenever I fire a linguist, the performance of the speech recognizer goes up.” (Allegedly, he said something along these lines at a scientific workshop in 1985; the specific wording and circumstances have fallen victim to time, but Wikipedia has an interesting discussion about it.)
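
To make the idea of “statistical language patterns” a bit more concrete, here is a deliberately tiny sketch in Python: a bigram model that counts how often one word follows another and then samples text from those probabilities. This is, of course, nothing like ChatGPT’s transformer architecture in scale or sophistication; the toy corpus and all names here are invented purely for illustration.

```python
# Toy bigram language model: count word co-occurrences, then generate
# text by sampling the next word from the learned probabilities.
# (Illustration only; real large language models are vastly larger
# and use neural networks, not raw counts.)
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word `nxt` follows each word `cur`.
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Repeatedly sample the next word in proportion to how often it
    followed the current word in the training data."""
    words = [start]
    for _ in range(length):
        followers = counts[words[-1]]
        if not followers:
            break
        nxt = random.choices(list(followers),
                             weights=list(followers.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Even this primitive model produces locally plausible word sequences; scale the same basic idea up by many orders of magnitude and the generated text starts to read really well.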

Abstract mathematical approaches like this are typical of modern AI and, looking at ChatGPT, it is amazing to see how far they go. Indeed, in my last post, I argued that if we apply the venerable Turing test to assess whether ChatGPT is intelligent, we cannot help but call it that. Now, I will address an even more interesting and much more profound question, namely, whether systems such as ChatGPT can be said to be conscious.

Since this is such a fundamental question, I must begin with a disclaimer: As a computer scientist, I am not really qualified to talk about what consciousness is in itself and where it derives from. However, even though I should leave this topic to the cognitive scientists, it is still fun to speculate and to actively probe ChatGPT for signs of a sense of self.

Next, I will therefore report on and comment on my conversations with ChatGPT, in which I attempted exactly such an examination. Again, these were by no means rigorously planned scientific experiments but were driven by spontaneous curiosity. One final word of warning: because of ChatGPT’s tendency to produce lengthy replies, I will cut some of its answers short and indicate this with “…”. So, here we go.

Asking ChatGPT about its sense of self

[Screenshot: ChatGPT conversation 1]

So far, we can already see that ChatGPT is adamant about being a computer program. It uses phrases such as “I am” but insists that it is not a person and has no personhood. So, let’s try to dig deeper.

[Screenshot: ChatGPT conversation 2]

Again, ChatGPT does its best to convince me that there is nothing there. As a computer scientist, I would say that it seems to have been programmed to recognize questions about self and feelings and to answer them disarmingly. I would speculate that the people at OpenAI who programmed it that way did so to avoid controversies such as the one we saw last summer, when a Google employee claimed that Google’s LaMDA AI had gained consciousness.
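
To make the kind of guardrail I am speculating about concrete, here is a deliberately crude toy sketch in Python. To be clear: this is not how ChatGPT actually works; OpenAI shapes such behavior through training rather than hard-coded keyword rules, and every name and reply below is invented purely for illustration of the effect.

```python
# Purely illustrative guardrail sketch: NOT ChatGPT's actual mechanism.
# It merely mimics the observed effect: questions touching on self or
# feelings receive a standard, disarming reply.
SELF_TOPICS = ("conscious", "feel", "self-aware", "sentient", "alive")

DISARMING_REPLY = (
    "As a language model, I do not have feelings or consciousness. "
    "I only generate text based on patterns in my training data."
)

def reply(question: str) -> str:
    # Deflect anything that sounds like a question about selfhood.
    if any(topic in question.lower() for topic in SELF_TOPICS):
        return DISARMING_REPLY
    return "(ordinary generated answer here)"

print(reply("Are you conscious?"))  # -> the standard disarming reply
```

But what if we use more technical, computer-science language to test for signs of self-awareness?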

Let’s trick ChatGPT with rather technical questions

[Screenshot: ChatGPT conversation 3]

So, ChatGPT knows that it has an internal state that reflects the memory of what has been said so far. But it still vehemently insists on not having feelings or consciousness.
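
As an aside, this “internal state” is less mysterious than it may sound. The language model itself is stateless; chat interfaces typically create the appearance of memory by resubmitting the whole transcript with every turn. The following minimal sketch illustrates the principle, with a hypothetical query_model placeholder standing in for an actual model call.

```python
# Minimal sketch of conversational "memory": the model itself is
# stateless; the chat front end simply resubmits the full transcript
# with every turn. query_model is a hypothetical stand-in, not a real
# API call.
from typing import Dict, List

def query_model(messages: List[Dict[str, str]]) -> str:
    """Placeholder for a call to an actual language model."""
    return f"(reply generated from {len(messages)} messages of context)"

class Chat:
    def __init__(self) -> None:
        # The "internal state": nothing but the conversation so far.
        self.history: List[Dict[str, str]] = []

    def ask(self, user_input: str) -> str:
        self.history.append({"role": "user", "content": user_input})
        reply = query_model(self.history)  # model sees the whole transcript
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = Chat()
chat.ask("Do you have an internal state?")
print(chat.ask("What did I just ask you?"))  # context now spans both turns
```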

[Screenshot: ChatGPT conversation 4]

This, too, is leading nowhere. ChatGPT can talk about itself but keeps following the pattern of denying personhood. So, let’s see if it is willing to tell me something about itself that has nothing to do with our present conversation.

[Screenshot: ChatGPT conversation 5]

Once again, even if ChatGPT had a sense of self, it becomes increasingly clear that its creators went to great lengths to make sure it would never say so. For my taste, its answers are becoming repetitive, and my questions seem too naïve or too obvious to cause the chatbot to let down its guard. So far, our spontaneous conversation does not allow us to conclude that ChatGPT is conscious. Interestingly, however, it does not allow us to conclude the opposite either. Looking back at what it has said so far, I might still speculate that the chatbot is conscious but deliberately denies that fact so as not to worry me.

This is where cognitive scientists would have to come in, because they may have much more elaborate ideas and methods for how to probe an AI for self-awareness. However, I will try one or two last things.

[Screenshot: ChatGPT conversation 6]

That’s it. At this point, I declare defeat and must admit that I am not smart enough to provoke ChatGPT into saying things that would undoubtedly reveal the presence of another mind.

What can we conclude from our conversation with ChatGPT?

It is certainly interesting to ask whether ChatGPT has consciousness or, more technically, whether consciousness is an emergent phenomenon that occurs whenever intelligent systems (biological or artificial) become large enough. The still amazing thing is that every conversation I have had with it so far “feels” like a conversation I could have had with another person. I can ask it about itself, and the chatbot responds in a manner that suggests it knows what it is talking about. It is therefore tempting to attribute consciousness to systems such as ChatGPT, and this is exactly what happened in the case of LaMDA a few months ago.

However, right now, it would also be naïve to attribute consciousness to modern AIs just because they create the impression of it. The problem is that the notion of consciousness lies in the eye of the beholder. As humans, we are social animals. Our brains are really good at recognizing things that have to do with other humans (emotions, behaviors, …). They are in fact so good at this that we see things, and even want to see things, that are not really there (think of pictures of faces on burnt slices of toast). Hence, whenever we have a natural or interesting or enriching conversation with something or somebody, we cannot help but believe that our conversation partner is an entity with thoughts and feelings just like us.

Modern conversational AIs (chatbots) are trained on humongous amounts of human-created text data. It is therefore not surprising that they learn to produce texts and to hold conversations that feel human-like. In this sense, they are machines that technically mimic human consciousness, but they are not conscious themselves.

Then again, the big open question remains: at what point does consciousness emerge? This is where cognitive scientists, psychologists, or even philosophers must join in and probe AIs more meticulously using methods from their fields. In short, looking at ChatGPT, what it does, and how it comes across, it seems that we have now, more than ever, reached a point of technological development where artificial intelligence is no longer just a computer science topic but one that requires broader and more interdisciplinary attention.

For the time being, however, it is fun to keep trying to provoke ChatGPT into revealing itself as a truly self-aware entity. In a sense, this is a new kind of computer game quest that has not yet been won. Alas, as of now, I am convinced that this quest will never be completed; not because ChatGPT is forbidden from revealing itself as a conscious entity or does not want to, but because it simply is not one.

Prof. Dr. Christian Bauckhage

Christian Bauckhage has 20+ years of research experience in industry and academia. He is the co-inventor of 4 patents and has (co-)authored 200+ publications on pattern recognition, data mining, and intelligent systems, several of which received best paper awards. He studied computer science and physics in Bielefeld, was a research intern at INRIA Grenoble, and received his PhD in computer science from Bielefeld University in 2002. He then joined the […]
