Can computer systems develop consciousness of their own, and with it real intelligence and even feelings? In the USA, this question is currently being debated in earnest. Google employee Blake Lemoine, who has since been placed on leave, says the neural network LaMDA has a soul and a consciousness of its own, and should therefore enjoy the same rights as a human being.
The discussion is being viewed critically in professional circles, as most experts assume that artificial intelligence (AI) cannot develop consciousness of its own. Even those unwilling to rule it out entirely expect it to take decades before the first such machines could exist. BILD spoke to Prof. Dr. Julian Nida-Rümelin (67). The philosopher is a member of the German Ethics Council and studies the consequences of digitization.
His explanation for Lemoine’s observation: “IT engineers sometimes develop programs in such a way that they simulate human behavior. A simulation, however, should not be confused with reality. It is a grandiose self-deception. There are better explanations for how software systems behave than assuming they are conscious.”
So Nida-Rümelin assumes that LaMDA, which was developed to create chatbots, strings together sentences and statements so skillfully that even an expert like Lemoine can no longer tell the difference between human and machine communication. That is remarkable, because it is pretty much the definition of passing the Turing test.
Nida-Rümelin assumes that AIs cannot develop consciousness of their own: “The simulation of human characteristics rests on an interpretation of physical states that we as humans make; there is no reason for hocus-pocus. We are not becoming God, and we cannot technically create human beings according to our ideas. These adolescent dreams belong in Hollywood films; they will not come true in reality.”
Unimaginable step backwards
But how did the current discussion come about? The philosopher sees the cause in Lemoine’s cultural environment: “The USA is more susceptible to stories of abduction by extraterrestrials, visits by UFOs and various other oddities. If half the population there believes the earth is only 6,000 years old, then I’m not surprised that American software engineers want to play God.”
Should a system nevertheless succeed in developing consciousness, that would not be progress in Nida-Rümelin’s view. The philosopher told BILD: “If LaMDA and soon other software systems had consciousness and intelligence, including emotionality, this would be a serious setback for technical progress, because then we would have to show consideration for digital actors and could no longer simply use them for our purposes; they could claim the status of persons and the legal and ethical protections afforded to persons.”