LaMDA, AI and Consciousness: Blake Lemoine, we need to philosophize!

(This article is also available in German)

Neural networks are parameterized function approximators — in principle, universal ones. When the number of parameters runs into the billions, even the smartest data scientists lose track of exactly what the networks are doing and where. The result is surprised people whose astonishment at the supposed black-box AI takes a wide variety of forms.
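The "parameterized function approximator" claim can be illustrated with a toy example. The following sketch (plain Python, no ML library; all names and hyperparameters are my own choices, not anything from LaMDA or GPT-3) trains a tiny two-layer network by gradient descent to approximate XOR, a function a single neuron cannot represent:

```python
import math
import random

random.seed(0)

# A 2-4-1 network: tanh hidden layer, sigmoid output.
H = 4
W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # hidden weights
b = [0.0] * H                                                      # hidden biases
V = [random.uniform(-1, 1) for _ in range(H)]                      # output weights
c = 0.0                                                            # output bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    h = [math.tanh(W[j][0] * x[0] + W[j][1] * x[1] + b[j]) for j in range(H)]
    y = sigmoid(sum(V[j] * h[j] for j in range(H)) + c)
    return h, y

# XOR: the target function we want the parameters to approximate.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)            # sigmoid' = y * (1 - y)
        for j in range(H):
            dh = dy * V[j] * (1 - h[j] ** 2)  # tanh' = 1 - h^2
            V[j] -= lr * dy * h[j]
            W[j][0] -= lr * dh * x[0]
            W[j][1] -= lr * dh * x[1]
            b[j] -= lr * dh
        c -= lr * dy

after = total_error()
print(f"squared error before: {before:.3f}, after: {after:.3f}")
```

Nothing here is more than "fully automatic statistics": nudging a few parameters downhill on an error measure. Scale the same recipe up by nine or ten orders of magnitude and you get systems like LaMDA — larger, not categorically different.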

Google employee Blake Lemoine is astonished at how self-reflectively the chat AI LaMDA writes, and wants to protect it. Since only a small fraction of Google's own employees have access to LaMDA, the story fits perfectly into the narrative of a Hollywood blockbuster: in a secret laboratory, shady scientists have committed hubris and created a mysterious being that is fascinating and perhaps even dangerous. And even if this recipe has produced many an exciting film, one should not be fooled: Google's AI department knows exactly what it has built. LaMDA, too, is just a big Transformer that is good at writing. That shouldn't come as a surprise to anyone who has seen GPT-3.

Pina Merkert lives on the border between dramaturgy and technology, between programming projects and hardware tinkering. The results are mostly useful, sometimes just cool. The same goes for her intensive engagement with what is commonly called artificial intelligence.

If you read the chat logs that Lemoine published, to Google's annoyance, you may still have doubts. The machine writes in a more reflective and lucid way than many people we meet in everyday life. Who are we to grant consciousness to the swaggering loudmouth on the subway, whose behavior is far from intelligent, while denying the same honor to the eloquent AI?

This question reveals that Lemoine's confusion is actually a philosophical problem: Descartes' famous phrase "I think, therefore I am" allows us to be sure of our own existence. So we are self-aware; we have consciousness. But what about everyone else? They look like us and behave like us. It is therefore natural to assume that they have the same kind of consciousness. But it is not provable. Or, to paraphrase Ludwig Wittgenstein: we have no criteria at all by which we could call machines conscious. Even if a machine were conscious, we could not detect it, because we have never sufficiently defined the concept of consciousness. So we base our assumption on behavior and spare ourselves drawing a clear line separating conscious life from unconscious things.

But it is precisely this borderline that the dispute between Lemoine and his employer is about: Lemoine simply applied his outside view of consciousness to the thing called LaMDA. He couldn't help putting the AI in the same group as his fellow human beings. Google's AI experts, however, know exactly how LaMDA's formulations come about, and they do not see the thing as any less of a thing just because it is a better tool.

The problem is that the AI experts' knowledge faces a large gap in knowledge elsewhere: how does human thinking actually work? A human brain has several orders of magnitude more synapses than the largest AI models simulated to date. Accordingly, brain research has only a limited overview of the processes taking place in a human head. We assume that something significantly better is happening there than in an animal brain or a simulated neural network. But in fact we know neither whether anything structurally different is happening at all, nor what exactly this crucial ability consists of. Neural networks do "nothing but" fully automatic statistics — and in some applications are right more often than humans. What if Descartes' thinking proved nothing more than that his automatic statistics found the famous sentence most fitting?

Google has suspended Blake Lemoine for breaching the confidentiality obligations in his contract. The internet turned it into a Hollywood story. Descartes wanted to be sure that he existed. And we? We should seize the opportunity to better understand our own thinking. The need to perceive the processes in one's own head as something special seems to exist in people of all cultures. But if we want to draw a line, then we should also know where it runs. Personally, I have a hunch: it has to do with drawing boundaries where the world actually presents us with a continuum. Somehow these boundaries get us further — otherwise we would not have come so far as humanity. But they can also introduce errors into our automatic statistics. And when we draw the lines differently, conflicts arise. In such a conflict, someone sometimes loses their job. "I dismiss, therefore I am." Descartes would be delighted.

