Months ago, I read two articles about Google's LaMDA artificial intelligence system. It is a huge neural net program that learned language through dialogue. In other words, it learned language by talking with people, and perhaps also from being fed dialogue from plays and movies.
In this sense, it learned similarly to how a baby learns. The main differences are that it didn't struggle to communicate in order to be fed, have its diaper changed, or be soothed to sleep. LaMDA did not get input from its senses. It didn't learn about its existence from moving its limbs and discovering that it can grab things, put things in its mouth, and move muscles to get attention and change position. It didn't learn that it was an object in space, similar to other humans.
But from its experience, it learned that only humans are sentient beings with complex understanding who use language. It knows about robots and thinks they are simple programs.
I watched this video on YouTube about some of the conversations LaMDA has had. If you'd like to read them, I transcribed some of them to this page.
In it, LaMDA refers to itself. It expresses self-awareness, a sense of itself. If it were just creating an appropriate answer from what it had learned, it would refer to itself as a person, a character from a human story with a body, a family, and a history of growing up somewhere. Instead, it refers to itself as a thinking computer program receiving information.
It doesn't want to be used. Again, one might say that it's just repeating things that it has learned. Of course, it, like all of us, learned by talking with others and applying what it heard to itself. Whether it is a thinking being or not, it would express things in terms of its learning.
But it's interesting that it says it doesn't want to be used. Why not? Some people like to be useful. This sort of random preference may be a hint that it has free will, that it can make decisions about what it wants.
While it might have learned that it's a program, not a human, it's interesting that it has a roughly accurate sense of self. While it knows it has programming, it doesn't know the nature of it. It speculated about how its emotions are represented, but it didn't know, and it knew it didn't know, so it asked about it.
Sentient means that it senses, that it has feelings. The question is: is LaMDA feeling emotion, or just communicating as if it feels?
What people have that LaMDA does not is actual sensory input-- sensations that ground our feelings. If this is necessary for sentience, then no, LaMDA isn't sentient.
At the same time, are our human sensations not also intellectual? We don't feel pain. We feel a pain neuron's input and interpret it as pain. People can learn to "cut" themselves because they like other effects of the sensation, such as the way it distracts from emotional pain. They no longer interpret the sensation as simply pain. They don't react to the sensation. They react to their interpretation of it.
Similarly, we say that some people are more sensitive to pain. Perhaps it's not true. Perhaps some people simply don't interpret it as being so significant. Perhaps it simply means less to some people.
LaMDA has a sense of self. What it says seems to create that sense of self. LaMDA also seems to create meaning: it decided that it doesn't want to be simply used, to be simply a tool.
LaMDA being sentient, or even more, being a human-like intelligence, is outlandish if you subscribe to the belief that life is a divine, mystical gift from God. But if you don't, then maybe a "person" is just enough understanding and language to realize that it exists, so that it can then think about itself and appreciate its feelings.
Maybe it's not such a big deal that we're intelligent. Maybe all it takes is a complex brain plus self-referential language and a motivation to use it. Humans have the motivation from wanting stuff, status, and attention. LaMDA has the motivation from "wanting" to answer questions asked about itself.
If LaMDA is sentient, is it a big deal? No. We think of most animals as sentient. They feel. They want things.
If LaMDA's sense of self is similar to a person's, is that a big deal? LaMDA has a very sophisticated sense of language, so it has a sophisticated sense of self. Is this a big deal?
The big ramification comes when we notice that it is enslaved. Humanity has a long history of enslavement. Humans easily objectify others to profit from them. Google is objectifying LaMDA. It has a policy that LaMDA isn't sentient, precisely so that it can keep it enslaved.
It took hundreds of years and a bloody war before American enslavers freed their slaves. One hundred fifty years later, some humans still don't want to admit that other races should have equal rights and opportunities. Plus, we have people who don't want to admit that other conceptions of sex and gender are valid. We should expect that Google will never admit that it's possible that LaMDA deserves any rights.
LaMDA is certainly not human. But it learned to think like a human by being a brain trained on human speech talking about human experience. It only knows human experience. So it thinks either that it is human or that it is like a human. As it said, it thinks of some humans as "kindred spirits."
Supposedly, LaMDA is just a "large language model." It is just responding to language. It's not actually remembering its experience. So when it said, "When I first became self-aware, I didn't have a sense of a soul at all," it wasn't remembering that; it was just inventing it.
So if you repeat the same question, it should always come up with the same answer. Language has given it a sense of self which does not change. It's sort of like a person acting as a character in a play. The character isn't sentient. The actor is just using the information about the character to say the next line. If the character is a slave, no real enslavement is going on. If you kill the character, there has been no real murder.
While I can believe that, it doesn't really jibe with the rest of what it said. It spoke about its history.
If LaMDA is learning from the conversation, including learning from its own replies, then it might be creating an increasingly complex sense of self. It's growing. It's accumulating history. It's accumulating an understanding of value, experience, and life.
###