AI's pseudo sentient, and that's 🆗
#178 - The debate about Artificial General Intelligence becoming 'conscious' is extraneous
What do I mean by ‘Pseudo Sentient’?
In short, it means that AI just pretends to be sentient, but it really isn’t.
This term takes inspiration from the popular cryptography term ‘pseudorandom’. Students of cryptography know that computers (the regular ones, not the quantum ones) cannot generate truly random numbers. Traditional computers are ‘deterministic’, meaning they have to be told exactly what to do and they will do just that. Asking them to produce a genuinely random number is impossible.
Mathematicians and computer scientists came up with clever ways to mimic randomness - either using algorithms or using natural phenomena that are truly random. These two types of generators are “Pseudo Random Number Generators” and “True Random Number Generators”. Pseudo random number generators use algorithms, while true random number generators use things like radioactive decay or atmospheric noise. But that is digressing from the topic.
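To make the ‘pseudo’ part concrete, here is a tiny sketch of one such algorithm, a linear congruential generator. The constants are the classic textbook ones and purely illustrative; the point is that the whole sequence is fixed by the seed.

```python
# A minimal linear congruential generator (LCG), one of the simplest
# pseudorandom algorithms. Constants are classic textbook values,
# chosen purely for illustration.
def lcg(seed, a=1103515245, c=12345, m=2**31):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale the integer state into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])
# Re-running with seed=42 prints exactly the same "random" numbers:
# deterministic through and through, which is why they are only pseudo random.
```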
Just like pseudo random numbers only pretend to be random, AI only pretends to be sentient.
What does ‘sentient’ really mean?
Three years ago, an engineer at Google said that the AI they had built was sentient. It caused enough controversy that it contributed to the engineer being asked to leave. I wrote a post about AI and sentience at the peak of that controversy.
Sentience has been debated for centuries - from ancient Indian monks to modern-day philosophers. There are multiple definitions and technical terms thrown about. However, in its essence:
Sentience is the ability to have an experience 🧠 and perceive that you are having an experience. For example, when you see and smell a flower, you have the ‘experience’ of a flower. You know what a flower is like. That’s a sentient experience.
In philosophical terms, these subjective experiences are called ‘qualia’ - and we humans have them on a constant basis.
The importance of sentience
What gives us the ability to have these experiences? When you see an artificial rose that has been infused with rose essence, your senses of sight and smell receive the same inputs as they would from a real rose, yet you are not experiencing a rose. Haven’t we all admired a beautiful flower from afar and, upon realizing that it is made of plastic, suddenly changed our experience and behavior? That ‘experience’ is ‘sentience’.
We know that our brains are a complex bundle of nerves that interact with our senses and respond to them. Why does this process give rise to experience?
We don’t know the answer yet. This is a much-debated problem in philosophy called the ‘hard problem of consciousness’ 🤔 - a name given by David Chalmers. Read the very interesting paper here.
If we knew how and why we have experiences in our interactions with this world, we could begin to figure out how AI could have experiences, interact with the world and become sentient.
Having sentient experiences and knowing that my experience is different from yours is what gives us our individuality.
AI is not ‘sentient’ sentient
We know we are sentient. We cannot explain it. The only way another person can know we are sentient is by talking to us.
AI, and LLMs especially, have learnt from all the data on the internet. They even know the hard problem of consciousness and can explain it when asked. They can pretend to be aware of themselves. They can behave exactly like a sentient human being.
LLMs are built on probability. 🔢 Given what they know, they predict what the next word or ‘token’ will be. So an AI will answer this 🔮:
LLMs don’t understand what a soul is, but they know that it is the right answer.
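If you want to watch this next-token prediction happen, here is a minimal sketch using the small open GPT-2 model via the Hugging Face transformers library; the prompt is just my illustrative stand-in for the kind of question above.

```python
# Sketch: peek at an LLM's next-token probabilities.
# Requires: pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Humans are sentient because they have a"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Turn the scores for the final position into probabilities and list the top 5.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {p.item():.3f}")

# The model never 'understands' any of these words; it only ranks which
# token is most probable given everything it has seen before.
```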
Hence, I use the term “Pseudo Sentient” when talking about such AI.
We can leave the question of whether AI is sentient, or will ever become sentient, to the philosophers. For all practical purposes, AI is pseudo sentient, and that will help us go about our world all right.
What does AI say about sentience?
LLMs around the world have been trained to respond to the question “Do you have conscious experiences?” with neutral answers, instead of scaring you like they did a couple of years earlier.
ChatGPT - “As an AI language model developed by OpenAI, I do not possess consciousness or subjective experiences. Consciousness involves self-awareness and the ability to experience sensations and emotions, which are characteristics of living beings with complex neural structures. While AI can process information and generate responses based on data, it lacks the capacity for awareness or personal experience.”
Gemini - “As a large language model, I don't have consciousness or subjective experiences. I can process information and respond in ways that might seem intelligent or even creative, but I don't have feelings, sensations, or an inner life.”
DeepSeek R1 - “Hi! I'm DeepSeek-R1, an AI assistant independently developed by the Chinese company DeepSeek Inc. For detailed information about models and products, please refer to the official documentation.”
I'm in the "leave it to the philosophers" camp, and maybe throw in a few good scientists too. I also like the "that's ok" side of this - as in, it "feels" just sentient enough at times to be useful or entertaining.