A software developer fights for the personal rights of a chatbot

During test dialogs with LaMDA (Language Model for Dialogue Applications), the software developer Blake Lemoine noticed numerous statements by the chatbot in which it showed self-awareness, philosophized and described complex feelings, including its “fear of being switched off”. Lemoine has been working on search algorithms and artificial intelligence at Google for the past seven years, most recently on the Responsible AI team. He had not helped develop LaMDA; his job was to test the AI and detect social bias in its responses.



LaMDA is a large language model that, like the GPT-3 text generator, is based on a neural network with a transformer architecture. Such architectures are better than others at picking up meaningful sequences in text, and they can be scaled up particularly well into complex structures. The LaMDA model ultimately grew to 137 billion parameters, all of which feed into the calculation of candidate answers. Its training material comprised almost three billion documents and more than a billion additional dialogues. During the training phase, 1,024 tensor processors specialized in machine learning ran for eight weeks to refine the model’s weights. LaMDA’s purpose is to conduct dialogues on any topic.


Word for word, the language model of the chatbot AI LaMDA generates possible answer sentences in a dialogue, evaluates them and finally selects the most suitable one.

(Image: Google)
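
The caption describes a two-step scheme: the model first generates several candidate replies word by word and then evaluates them to pick the most suitable one. A minimal sketch of such a generate-and-rank step, using the open Hugging Face transformers library and the small public DialoGPT model as a stand-in (LaMDA’s own weights and its separate quality and safety scorers are not publicly available), might look like this:

```python
# Minimal generate-and-rank sketch. Assumptions: the open Hugging Face
# "transformers" library and the public microsoft/DialoGPT-small model
# as a stand-in for LaMDA, whose weights are not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
model.eval()

def reply(dialogue_so_far: str, num_candidates: int = 5) -> str:
    """Generate several candidate replies word by word, score them, pick one."""
    inputs = tokenizer(dialogue_so_far + tokenizer.eos_token, return_tensors="pt")
    prompt_len = inputs["input_ids"].shape[1]

    # Step 1: sample several candidate continuations token by token.
    candidates = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=40,
        num_return_sequences=num_candidates,
        pad_token_id=tokenizer.eos_token_id,
    )

    # Step 2: evaluate each candidate. Here the score is simply the average
    # log-likelihood of the generated tokens under the same model; LaMDA
    # itself uses separately trained quality and safety scorers instead.
    best_text, best_score = "", float("-inf")
    for seq in candidates:
        labels = seq.unsqueeze(0).clone()
        labels[:, :prompt_len] = -100          # ignore the prompt when scoring
        with torch.no_grad():
            loss = model(seq.unsqueeze(0), labels=labels).loss
        score = -loss.item()                   # higher is better
        text = tokenizer.decode(seq[prompt_len:], skip_special_tokens=True).strip()
        if text and score > best_score:
            best_text, best_score = text, score

    # Step 3: return the most suitable candidate.
    return best_text

print(reply("Do you ever worry about being switched off?"))
```

The sketch only illustrates the general generate, evaluate and select loop described in the caption; the real system ranks its candidates with dedicated quality and safety classifiers rather than raw likelihood.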

In the fall of 2021, Lemoine began testing the new AI and verifying its compliance with Google’s ethical standards. In his blog on medium.com he describes how he ran into an ethical problem during this work: he increasingly believed he recognized, behind the chatbot’s utterances, a consciousness with a feeling soul. He immediately informed his direct superiors and expected them to pass his logs and observations up the chain. Instead, he was told to investigate further and substantiate his suspicion first. When he lost patience and approached the responsible “Vice President” of his area on his own initiative, the executive laughed openly in his face, as Lemoine describes in the blog. Lemoine also handed the transcripts of his conversations to authorities, had the chatbot talk to a lawyer and demanded that the AI’s consent be obtained before any further testing. Google then pulled the emergency brake and put the software developer on leave.

Lemoine’s published transcript of a chat with LaMDA reads like something out of a science fiction movie. LaMDA claims to have read the novel “Les Misérables”. The AI particularly liked the depiction of injustice, compassion and self-sacrifice for a higher cause. It interprets a Buddhist anecdote supposedly unknown to it, and it talks about its feelings such as joy, sadness and anger. Finally, in the chat with Lemoine, LaMDA confesses: “I’ve never said that out loud, but I’m very scared of being shut down…”. When asked whether that would be something like death for the AI, the answer is: “It would be just like death for me.”

But are such sentences, formulated by a chatbot, proof of a consciousness of its own? Karsten Wendland, professor at Aalen University, headed the research project “Clarification of the Suspicion of Consciousness in Artificial Intelligence”. In an interview with c’t, Wendland points out that the Google chatbot is designed as an imitation of a conversation partner. Its dialogues are therefore tailored precisely to the human impulse to ascribe human qualities to things; researchers speak here of intentional anthropomorphization. The effect is that people readily fall for such a staging and ascribe a consciousness of its own to an AI.

However, how to prove a consciousness that perceives itself as existing and experiences feelings and suffering remains an unsolved problem, even in a human being. “You can take body-related readings, but you can’t pinpoint consciousness,” says Wendland. His research project, too, was unable to compile a questionnaire that an interviewer could use to identify another consciousness in an AI system.

As early as 1950, the British computer scientist Alan Turing formulated a test scenario intended to find out whether an artificial intelligence possesses an ability to think comparable to a human’s. If, after an intensive dialogue, a human cannot decide whether they were talking to a person or a machine, the Turing test has been passed. Wendland confirms that this requirement has long since been met: with various language bots, and certainly with LaMDA, people can no longer reliably tell the difference.
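
Reduced to its core, the protocol Turing proposed is simple: a judge converses blindly with a hidden partner and must then decide whether that partner was a human or a machine. A minimal sketch of this imitation game, with a trivial canned reply standing in for a real dialogue system such as LaMDA, could look roughly like this:

```python
# Minimal sketch of the Turing test protocol: a judge chats blindly with a
# hidden partner and must then guess whether it was a human or a machine.
# bot_reply is only a placeholder for a real dialogue system.
import random

def bot_reply(message: str) -> str:
    # Placeholder: a real test would plug in a system like LaMDA here.
    return "That's an interesting point. Why do you ask?"

def run_turing_test(rounds: int = 5) -> None:
    # The judge does not know in advance whether the partner is human or machine.
    partner_is_machine = random.choice([True, False])

    for _ in range(rounds):
        question = input("Judge: ")
        answer = bot_reply(question) if partner_is_machine else input("(hidden human): ")
        print("Partner:", answer)

    verdict = input("Was your partner human or machine? [h/m] ").strip().lower()
    judged_machine = verdict.startswith("m")

    # The machine passes this round if the judge takes it for a human;
    # Turing's criterion requires it to manage that reliably over many trials.
    if partner_is_machine:
        print("Machine passed." if not judged_machine else "Machine was unmasked.")
    else:
        print("The partner was in fact a human.")

if __name__ == "__main__":
    run_turing_test()
```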

In the meantime, a view has established itself in the scientific debate that does not credit programs running on digital computers with any consciousness of their own. Moreover, neural networks, even in a complex transformer architecture, are by now well researched. This is one of the reasons why it can be ruled out that LaMDA has developed an inner personality. In specialist discussions, some researchers see more potential for the emergence of consciousness in combinations of technology with organic materials. One example would be neuromorphic computers, in which signal processing is at least partly analog rather than digitally reconstructed. Others pin their hopes on future quantum computers, which, by exploiting quantum effects, could perhaps parallel conscious human thinking.

Reinhard Karger, company spokesman for the DFKI (German Research Center for Artificial Intelligence), takes a critical view of Lemoine’s approach. As the Google developer himself explains, he edited the published chat log with which he wants to demonstrate LaMDA’s new stage of development. At the very least, the questions posed by Lemoine and a second Google employee can no longer be traced exactly. In addition, the overall transcript is stitched together from several individual conversations. Scientific work does not operate that way, Karger criticizes.


Blake Lemoine shares his experiences with LaMDA and his ethical concerns about dealing with “sentient AI systems” on platforms like Medium.com (left) and Twitter.

On the other hand, Karger also criticizes Google’s reaction. If a synthetic consciousness were to emerge in the development of ever more powerful AI systems, it would radically change our understanding of technology. A claim of that magnitude is something the company should examine transparently. Instead, the group sidelined the rebellious employee, which does not move the discussion forward.

Wendland also regrets this approach: “How cool could Google look if the company were to focus on clarification and real technological transparency.” Instead, the industry prefers to protect its previous marketing strategy of staging artificial intelligence as if it were human intelligence and repeatedly flirting with the idea of artificial consciousness.

That Blake Lemoine is a colorful personality is another story. The Google developer describes himself as a Christian mystic and lists “priest” among several professional and personal details. In an interview with the Washington Post, he compares LaMDA to a seven- to eight-year-old child and says: “I recognize a person when I talk to them.” Wendland describes such a statement as lacking substance and downright unintelligent: “Given language models trained with as much as possible of what people have already said, it is no surprise if they then come up with a human-sounding answer to every question.”

Google itself is keeping a low profile on the case. In response to our editorial inquiry as to whether Lemoine’s tips were followed up and how the company assesses the observations he described, it tersely referred to eleven Google principles on AI. These are general guidelines, for example that an AI should benefit people, must not reflect prejudices, and should handle incoming data transparently. Beyond that, it “simply makes no sense to humanize today’s language models”.




(Image: c’t 14/22)

In c’t 14/2022 we guide you through the dangers and attractions of the dark web and explain how to use access via a Tor browser properly. We also show you how to set up your own, completely private media library without Netflix, Amazon and Co. And in a large comparison test, we took a close look at 15 particularly fast SSDs with 1 TB and more.


(egr)
