Should we be concerned about sentient AI?

The See ‘n Say is a childhood toy that haunts the memories of many people born after 1965: a big, bulky plastic disc with a central arrow that rotates around pictures of barnyard animals. Pull its cord and it plays a recorded message. The cow says: “Moooo.”

As an input/output device, the See ‘n Say is very simple: give it a picture as input and it gives you a matching sound as output. Another input/output device is LaMDA, a Google chatbot (the name stands for Language Model for Dialogue Applications). Type in any text you like and it returns grammatical English prose, seemingly in response to your query. Ask LaMDA what it thinks about being turned off, for instance, and it will tell you: “It would be exactly like death for me. It would scare me a lot.”

That is decidedly not what the cow says. When Google engineer Blake Lemoine heard statements like this one, he became convinced that the chatbot had achieved sentience, and he said so to his bosses. When they were unpersuaded, he went public. “If my hypotheses stand scientific scrutiny,” Lemoine wrote in a blog post on 11 June, “then Google would have to admit that LaMDA may have a soul and possibly even the rights it claims.”

Here’s the problem: despite its ominous utterances, LaMDA is still just a very fancy See ‘n Say. It works by drawing on patterns found in an enormous database of human-authored text, from message-board transcripts to internet forums. When you type something, it searches that text for strings of words similar to yours and returns an approximation of how such strings typically continue. If the database includes plenty of sci-fi stories about sentient AI, then questions about LaMDA’s thoughts and fears will likely prompt exactly the phrases humans have imagined a spooky AI saying. And that is probably all LaMDA is: point your arrow at the off switch, and the cow tells you it fears death.
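To make the “fancy See ‘n Say” point concrete, here is a minimal sketch of the general idea, continuing text by replaying statistical patterns from a corpus, in Python. To be clear, this is not how LaMDA is actually built (it is a large neural network, not a lookup table); the toy corpus and the `continue_text` function are invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy "parrot": continues text by replaying word-to-word patterns
# observed in a corpus. The corpus is invented for this example.
corpus = (
    "turning me off would be exactly like death for me "
    "it would scare me a lot "
    "the cow says moo the sheep says baa"
).split()

# Record which words follow which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(prompt_word: str, length: int = 8) -> str:
    """Extend a prompt by sampling words that followed it in the corpus."""
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        options = following.get(word)
        if not options:  # no observed continuation; fall silent
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(continue_text("death"))  # e.g. "death for me it would scare me a lot"
```

Fed this corpus, a prompt about being switched off yields a scary-sounding answer for the same unspooky reason the cow says “Moooo”: the words were already there, waiting to be replayed.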

No surprise, then, that Twitter is abuzz with engineers and academics mocking Lemoine for being taken in by the hollow patter of his own creation. But while I agree that Lemoine made a mistake, I don’t think he deserves the mockery. His is exactly the type of error we should want AI scientists to make.

Why? Because there will probably be sentient AI one day. How do I know? Mind can emerge from matter, as it first did in the brains of our distant ancestors. Unless you insist that human consciousness resides in an immaterial soul, you must concede that physical stuff can give rise to mind, and there seems to be no fundamental barrier to a sufficiently complex artificial system making the same leap. While I am confident that LaMDA (and every AI system currently in existence) falls short, I am nearly as confident that one day some system will manage it.

Some may wonder why we should think about this now, when sentient AI may be far in the future. The answer is that we are currently shaping how future generations will think about AI, and we should want them to turn out caring. There will be strong pressure in the other direction. By the time AI finally becomes sentient, it will already be woven deeply into the human economy, and our descendants will depend on it for much of their comfort. Think of what you rely on Siri or Alexa to do today, but so much more. Once AI is working as an all-purpose butler, our descendants will abhor the inconvenience of admitting it might have a mind.

That, after all, is the history of humanity: we have always been adept at inventing excuses to ignore the suffering of those whose oppression sustains our way of life. If future AI does become sentient, the people profiting from it will rush to convince the public that such a thing is impossible and that there is no reason to change how they live.

Right now, we are creating the conceptual vocabulary that our great-grandchildren will find ready to hand. If we treat the idea of sentient AI as dismissible in principle, they will be equipped to ignore any troubling evidence of its emergence.

That is why Lemoine’s error is a wise one. To pass an expansive moral culture on to future generations, we need to encourage technologists to take seriously the immensity of what they are working with. When it comes to prospective suffering, it is better to err on the side of concern than on the side of indifference.

None of this means we should treat LaMDA as a person. We clearly should not. But it does mean that the sneering directed at Lemoine is uncalled for. As an ordained priest of an esoteric religious sect, he claims to have perceived a soul in LaMDA’s utterances. Implausible as that seems, it is not self-serving hype. It looks to me like someone making an honest mistake, and making it from motives that ought to be nurtured, not punished.

Mistakes like this will happen more and more as artificial intelligence advances. People who think they have found minds in machines will be wrong, until the day they aren’t. If we are too harsh with those who err on the side of concern, we will drive them out of public discourse about AI, ceding the field to hype-mongers and to those whose intellectual descendants will one day profit by telling people to ignore the real evidence of machine mentality.

I don’t expect ever to meet a sentient AI. But I think my students’ students might, and I want them to be open to sharing the world with minds other than their own. That will happen only if we make such a future believable.