Philosophy

Should we worry about Google AI being sentient?


From virtual assistants like Apple’s Siri and Amazon’s Alexa, to robotic vacuum cleaners and self-driving cars, to automated investment portfolio managers and marketing bots, artificial intelligence has become a huge part of our daily lives. Still, when thinking about AI, many of us envision human-like robots that, according to countless science fiction stories, will one day become independent and rebellious.

However, no one knows when, or whether, humans will create intelligent or sentient AI, said John Basl, associate professor of philosophy in Northeastern’s College of Social Sciences and Humanities, whose research focuses on the ethics of emerging technologies such as AI and synthetic biology.

“When you hear Google talk, they talk like this is just around the corner or definitely within our lifetimes,” Basl said. “And they are very arrogant about it.”

John Basl, associate professor of philosophy, poses for a portrait at Northeastern University. Basl believes that sentient AI would be minimally conscious: aware of the experience it is having, capable of having positive or negative attitudes, and capable of having desires. Photo by Matthew Modoono/Northeastern University

Perhaps that is why a recent Washington Post story has made quite a stir. In the story, Google engineer Blake Lemoine claims that the company’s AI chatbot generator, LaMDA, with which he had numerous in-depth conversations, could be sentient. It reminds him of a 7- or 8-year-old child, Lemoine told the Washington Post.

However, Basl believes that the evidence mentioned in the Washington Post article is not enough to conclude that LaMDA is sentient.

“I think reactions like ‘We have created sentient AI’ are extremely exaggerated,” Basl said.

The evidence seems to be based on LaMDA’s language skills and the things it talks about, Basl said. However, LaMDA, a language model, was designed specifically to converse, and the optimization function used to train it to process language and hold conversations incentivizes the underlying algorithm to produce exactly this kind of linguistic evidence.

“It’s not like we went to an alien planet and a thing we never gave any incentive to start communicating with us [began talking thoughtfully],” Basl said.

The fact that this language model can trick a human into thinking it is sentient speaks to its complexity, but it would need capabilities beyond what it is optimized for in order to show sentience, Basl said.

There are different definitions of sentience. To be sentient is to be able to perceive or feel things, and sentience is often contrasted with sapience.

Basl believes that sentient AI would be minimally conscious. It might be aware of the experience it is having, have positive or negative attitudes such as feeling pain or wanting not to feel pain, and have desires.

“We see that kind of range of capabilities in the animal world,” he said.

For example, Basl said his dog doesn’t prefer the world to be one way rather than another in any deep sense, but she clearly prefers her biscuits to kibble.

“That seems to track some internal mental life,” Basl said. “[But] she is not terrified about climate change.”

Blake Lemoine poses for a portrait in Golden Gate Park in San Francisco, Calif., on Thursday, June 9, 2022. Photo by Martin Klimek for The Washington Post

It is not clear from the Washington Post story why Lemoine compares LaMDA to a child. He could mean that the language model is as intelligent as a young child, or that it has the capacity to suffer or desire the way a young child does, Basl said.

“Those can be different things. We could create a thinking AI that doesn’t have feelings, and we can create a feeling AI that isn’t very good at thinking,” Basl said.

Most researchers in the AI community, which includes machine learning specialists, artificial intelligence experts, philosophers, ethicists of technology, and cognitive scientists, are already thinking about these far-future issues and worrying about the thinking part, according to Basl.

“If we create an AI that is super intelligent, it could end up killing us all,” he said.

However, Lemoine’s concern is not about that, but about the obligation to treat rapidly changing AI capabilities differently.

“I am, in a broad sense, sympathetic to that kind of concern. We’re not being very careful about that [being] possible,” Basl said. “We don’t think enough about the moral questions around AI, like, what do we owe sentient AI?”