
AI Sentience: Friend or Foe?

June 11th, 2022. It finally happened. The Singularity is upon us. It’s the beginning of the end for humankind. AI is taking over, and there is no going back.

Well… not exactly. But that’s what a Google engineer, Blake Lemoine, seems to believe after a test he ran with an AI by the name of LaMDA convinced him the AI was sentient. Could Lemoine be right though? Before we can attempt to even answer that question, let’s start by discussing what (or who?) LaMDA is, and what we mean by “Sentience”.

 

The Story of LaMDA

 

LaMDA, which is an acronym for “Language Model for Dialogue Applications”, is Google’s newest language model based on a Transformer architecture. After years in the making, it is capable of engaging in open-ended conversation about all sorts of topics. The technology was designed to power chatbots and other new applications we might not even fully foresee today, for example in education and mental health. However, unlike other large language models such as GPT-3, LaMDA was trained on human dialog data and tuned to give responses that are not only specific and sensible, but also genuinely interesting.
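
To make this a little more concrete, here is a minimal sketch of what querying a dialog-tuned language model looks like in code. LaMDA itself is not publicly available, so the snippet uses an open stand-in (microsoft/DialoGPT-medium) via the Hugging Face transformers library; the model choice, prompt, and sampling parameters are illustrative assumptions, not Google’s setup.

```python
# A minimal sketch of dialog generation with a decoder-only Transformer.
# LaMDA is not publicly available, so an open substitute (DialoGPT) stands in.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's turn, terminated by the end-of-sequence token.
prompt = "Do you consider yourself a person?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a reply; the model simply continues the kind of dialog it was trained on.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```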

 

The Transformer architecture

In short, LaMDA was built to talk exactly like a human would, so it shouldn’t have been a surprise to Lemoine when LaMDA answered his questions in a manner eerily similar to what one would expect from a human. It was, after all, doing exactly what it was meant to do. That doesn’t mean that the exchange itself isn’t extremely impressive: just check out a sample of the transcript below.

[Transcript excerpt: Lemoine’s conversation with LaMDA]

Needless to say, LaMDA is an impressive feat of technology, and there is no question that it passes the Turing test with flying colors. And besides the incredible relevance of its answers, it might actually be those small hints of spontaneity, which Google researchers refer to as “interestingness”, that make its answers so natural and relatable, and the whole system feel so human to us. With this in mind, it is much easier to see how Lemoine could have felt like he was truly speaking to a sentient Artificial Intelligence.

 

Sentience, Sapience and Consciousness


Let’s go back to our initial question: does the fact that LaMDA has such an amazing ability to mimic human language mean that it has become sentient? That it has come to life?

Well, LaMDA might sound like a human (because that’s what it was built to do), but that doesn’t necessarily mean that it feels or even thinks like a human. LaMDA is just answering questions in a way that can fool humans into thinking it is human. So when asked by Lemoine if it considers itself human, LaMDA obliges and answers like a human by claiming it is, indeed, a person. Add to this that we humans are prone to projecting anthropomorphic features onto pretty much any object, and you’ll understand how we got there. But that answer isn’t even proof that it understands the question, or what being human means for that matter: having been trained on human dialog, LaMDA has almost certainly never seen data where a person claimed they were NOT human!
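
If you want to see how purely statistical this behavior is, here is a small sketch that inspects a language model’s next-token probabilities after a leading question. GPT-2 stands in for LaMDA (which isn’t publicly accessible), and the prompt is made up for illustration; the point is only that the model ranks likely continuations, it doesn’t “decide” anything.

```python
# A sketch: inspect the next-token distribution of a causal language model.
# GPT-2 stands in for LaMDA here; the prompt is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Question: Are you a person? Answer:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for the very next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# The model does not "understand" the question; it just ranks likely continuations
# based on the dialog and text it was trained on.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>10s}  {p.item():.3f}")
```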

That said, things get a lot darker when you dig a bit deeper into the reasons why LaMDA claims to be a person (and even, to be afraid of death).

In response to the media coverage around Lemoine’s claims that LaMDA had achieved sentience, another experiment was run on GPT-3, OpenAI’s large language model, where the tester tried to get the AI to admit it was a squirrel. Check out the short transcript below (you can find the full version here).

[Transcript excerpt: GPT-3 being coaxed into claiming it is a squirrel]

What is remarkable about this exchange is not so much the fact that GPT-3 claims it is a squirrel; it is the fact that it says whatever the reporter expects it to say. This means large language models have a huge confirmation bias problem, which is not only a concern in terms of ethics, but also proof that you can make the model say whatever you want it to say, and hence that whatever it says will always be a reflection of your own thoughts.

The relationship between Consciousness, Sentience and Sapience is complicated…

By definition, a sentient entity is a system that can experience feelings, which presupposes consciousness (the existence of an internal “observer” within the system). In the case of Lemoine’s conversation with LaMDA, it is easy to see how Lemoine probed the system enough to get the answer he was hoping for; there is no valid proof that LaMDA actually experienced the feeling of being “angry” or “fearful”. Does that mean it is not sentient? Not necessarily; it might actually be sentient, though it is very unlikely. The point is that there simply isn’t definitive proof that it is, indeed, sentient.

LaMDA might not really feel, but can it think, and has it achieved sapience (the ability to operate at human-intelligence level)? If the fact that we’re talking about a supervised model with no ability to learn and adapt its weights at run-time doesn’t convince you that it simply cannot be sapient, consider that it requires 499 billion tokens of training data and a total of roughly 170 years of compute time when trained on Google TPUs, significantly longer than the lifespan of a human!
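
For what it’s worth, here is a back-of-the-envelope check of that compute figure, assuming the pre-training numbers reported in the LaMDA paper (1,024 TPU-v3 chips running for 57.7 days); it lands at roughly 160 chip-years, in the same ballpark as the figure quoted above.

```python
# Rough check of the "longer than a human lifespan" claim, assuming the
# pre-training figures reported in the LaMDA paper:
# 1,024 TPU-v3 chips running for 57.7 days.
chips = 1024
days_per_chip = 57.7

total_chip_days = chips * days_per_chip       # ~59,085 chip-days
total_chip_years = total_chip_days / 365.25   # ~162 years of single-chip compute

print(f"{total_chip_days:,.0f} chip-days ≈ {total_chip_years:.0f} chip-years")
```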

A scientific way to guarantee sentience 🙂

On the Implications of AI Sentience

But whether LaMDA is sentient or not, asking what AI sentience would mean for humankind is absolutely a fair and important question. So rather than philosophizing about how many angels can dance on the head of a pin, I asked my network, not whether they believed that LaMDA was sentient (the vast majority of ML practitioners do NOT), but how they thought AI sentience would impact society if it were ever achieved.

In my LinkedIn poll, I gave respondents 4 choices:

  1. A sentient AI would turn out to be an asset for us, meaning it would positively impact us and allow us to solve problems we wouldn’t be able to solve without it.
  2. A sentient AI would be a grave danger to us all, and the scenarios described in the worst SciFi movies would become a reality.
  3. AI sentience wouldn’t change anything for us, for example, because we have the choice not to act on its demands. It would simply be an accidental byproduct of the technology we develop.
  4. AI sentience would change the game, but whether the change would be a positive or a negative one, depends on us.

(Disclaimer: the results are obviously not a good representation of the general population, as my network is mostly made up of technologists, but it’s a good way to probe the general sentiment among practitioners.)

The first takeaway is that the majority of respondents feel the burden of responsibility and believe it is up to us to handle AI sentience responsibly so that it benefits us all. Just like any other technology, such as nuclear energy, it could be used for good (in that example, to produce energy) or for evil (to wage war and destroy civilizations). This truly emphasizes the importance of AI ethics and responsible AI, and I truly hope that many will take the Lemoine incident as a wake-up call.

The second takeaway is that people are not indifferent to the idea of a sentient AI. They realize that if an AI becomes sentient, a dramatic change in society would unfold as a result. In other words, they take it seriously, even if many aren’t convinced AI sentience is achievable.

The third takeaway is that people tend to be more scared of it than they are looking forward to it. When most people talk about the collaboration of humans and machines, they do not mean for the machine to have an equal role, though some are ready to embrace AI-human parity.

A few pessimistic but realistic and pragmatic views.

Others, on the contrary, are not only optimistic but excited about it.

Overall, the topic definitely didn’t leave people unmoved, regardless of whether they were technologists or not. While it did raise some nervousness among the general public while it was top of the news, most people certainly realize the improbability of Lemoine’s claim, to the point that some chose to joke about it. Some find themselves contemplating the possibility of reaching AI sentience in the future, while many flat-out reject the idea of a sentient AI (a position that many AI thought leaders tend to agree with).

LaMDA Sentience: you can either get angry about it, or make fun of it

More Immediate Threats of AI

It seems then that we don’t have to worry about sentient AIs for now, and might not have to worry about them for a long, long time, if ever. But that doesn’t make us much safer from the dangers of AI, as there are many more threats lurking around.

Arshad Hisham, CEO of the robotics company inGen Dynamics, brought up the so-called Paperclip Maximizer scenario, a thought experiment describing how a perfectly innocuous AI system could lead to the destruction of humankind because its overall goal is misaligned with our priorities.
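
To make the thought experiment tangible, here is a deliberately silly toy sketch (my own illustration, not Hisham’s or anyone else’s model) of what objective misalignment boils down to: the objective counts only paperclips, so nothing stops the optimizer from consuming resources we actually care about.

```python
# A toy, deliberately simplistic illustration of the Paperclip Maximizer:
# the objective counts only paperclips, so the "agent" greedily converts
# every available resource, including ones humans actually need.
resources = {"scrap_metal": 100, "farmland": 50, "hospitals": 10}

paperclips = 0
for name in list(resources):
    # Nothing in the objective says farmland or hospitals are off-limits...
    paperclips += resources.pop(name) * 1000

print(f"Paperclips produced: {paperclips}")
print(f"Resources left for humans: {resources}")  # {} -- everything was consumed
```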


That said, it’s safe to say that we might not witness this happening for quite some time, as it would require an AI to get close to AGI level, though consciousness wouldn’t be a requirement.

On the other hand, renowned AI ethicists Timnit Gebru and Margaret Mitchell are worried not so much by AI sentience itself as by the spread of the belief that AI sentience has been achieved. They both dread that the newly gained interest in AI sentience will distract non-experts and experts alike from more serious problems caused by AI that require our immediate attention, like the malicious use of deepfakes to start or end a war (videos of both Putin and Zelenskyy have circulated on the internet in which they respectively declared peace and surrendered), or the racist biases embedded in most large language models. I’ll add that if people truly believe in AI sentience, it might feel to many like AI condones racism and violence, and maybe even serves as a justification for it. So it is understandable that Gebru and Mitchell firmly condemn the fact that some high-profile AI experts, such as Ilya Sutskever from OpenAI, choose to cast doubt on the situation instead of clarifying it.


I’m convinced that a few years from now, we will look back at 2022 in a very different light, and might wonder how some of us actually believed an AI had become sentient… or maybe how most of us failed to recognize that it actually happened. Either way, 2022 will forever remain the year when the boundaries between technology and philosophy were blurred for the first time.

