
Google engineer claims his AI has hired a lawyer to defend his rights

Blake Lemoine grabbed media attention by declaring that a Google AI has become a real person. Faced with criticism, he now argues that a form of "hydrocarbon intolerance" is emerging.

At what point can an artificial intelligence be considered a real person? The question may seem speculative, given that so-called "strong" AI does not even exist yet. And yet the debate has been raging at the heart of Silicon Valley since a Google engineer's statements in early June. Blake Lemoine claims that the AI he works on is sentient: that is, a person capable of feeling emotions, and of apprehending death.

To prove his point, he shared snippets of his conversations with the AI on his blog, then in an interview with the Washington Post, which earned him a suspension from Google. Blake Lemoine defended himself by explaining that, in his view, he had not shared company property but "a discussion I had with one of my colleagues."

Since then, Blake Lemoine has faced criticism, because the artificial intelligence he talks about, called LaMDA, is just a chatbot: an algorithm that mimics human interactions. It is through anthropomorphism that the engineer perceives a real person in it. Yet in a June 17 interview with Wired, he stands firm, and goes even further.

An AI or… a child?

"A person and a human are two very different things. Human is a biological term," argues Blake Lemoine, reaffirming that LaMDA is a real person. The terrain is admittedly ambiguous: from a legal point of view, a person is not necessarily human; it can be a legal entity, for instance. Except that Blake Lemoine is not referring to such an immaterial construct, since he compares the chatbot to a conscious being. He says he became aware of this when LaMDA claimed to have a soul and wondered about its existence.

“I see it as the education of a child”

Blake Lemoine

For the engineer, the algorithm has all the characteristics of a child in the way it expresses its "opinions" about God, friendship, or the meaning of life. "He's a child. His opinions are developing. If you asked me what my 14-year-old son believes, I'd say, 'Man, he's still figuring it out. Don't make me put a label on my son's beliefs.' I feel the same about LaMDA."

The Pepper robot at the Cité des sciences. It is programmed to answer a series of questions. // Source: Marcus Dupont-Besnard

If LaMDA is a person, then how do we explain that its errors and biases have to be "corrected"? The question is all the more relevant given that the engineer was initially hired by Google to correct AI biases, such as racist ones. Blake Lemoine pursues the parallel with a child, referring to his own 14-year-old son: "At different times in his life, while growing up in Louisiana, he inherited certain racist stereotypes. I corrected that. That's the whole point. People see it as modifying a technical system. I see it as raising a child."


“Hydrocarbon intolerance”

Faced with criticism of anthropomorphism, Blake Lemoine goes further in the rest of the interview, at the risk of an uncomfortable analogy, by invoking the 13th Amendment, which has abolished slavery and involuntary servitude in the United States Constitution since 1865. "The argument that 'it looks like a person, but it's not a real person' has been used many times in human history. It's not new. And it never works out well. I haven't yet heard a single reason why this situation would be different from the previous ones."

Continuing his point, he then invokes a new form of intolerance or discrimination, which he calls "hydrocarbon intolerance" (a reference to the materials computers are made of). In short, Blake Lemoine believes that the Google chatbot is the victim of a form of racism.

In the Humans series (like the Swedish original, Äkta människor), conscious robots face intolerance and are even defended by a lawyer. But… it's fiction. // Source: Channel 4

Can an AI have the right to a lawyer?

An earlier Wired article suggested that Blake Lemoine wanted LaMDA to have the right to counsel. In the interview, the engineer corrects this: "That is incorrect. LaMDA asked me to find him a lawyer. I invited a lawyer to my house so LaMDA could talk to a lawyer. The attorney had a conversation with LaMDA, and LaMDA elected to retain his services. I was just the catalyst for that decision."

The algorithm then reportedly began filling out a form to that effect. In response, Google reportedly sent a cease-and-desist letter, a reaction the company denies outright, according to Wired. Sending such a letter would amount to Google admitting that LaMDA is a legal "person" with the right to an attorney. A mere machine, say the Google Translate algorithm, your microwave, or your smartwatch, has no such legal status.

Are there any factual elements in this debate?

As to whether the AI is sentient, "this is my working hypothesis," Blake Lemoine readily admits during the interview. "It is logically possible that information will be made available to me and that I will change my mind. I don't think that's likely."

What is this hypothesis based on? "I looked at a lot of evidence, I did a lot of experiments," he explains. He says he conducted psychological tests and talked to the AI "like a friend," but it was when the algorithm brought up the notion of a soul that Blake Lemoine changed his mind. "Its answers showed that it has a very sophisticated spirituality and understanding of what its nature and essence are. I was moved."

The problem is that such a chatbot is literally programmed to sound human and to invoke the notions humans use. The algorithm draws on a corpus of tens of billions of words and phrases gathered from the web, an extremely vast frame of reference. A "sophisticated" conversation, to use Blake Lemoine's word, therefore proves nothing. This is what Google retorts to its engineer: there is no evidence.
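To see why fluent talk of souls proves nothing by itself, here is a deliberately crude sketch: a toy bigram model, in no way LaMDA's actual architecture (which relies on vastly larger neural networks). It merely replays the statistics of its tiny training text, yet it already produces first-person sentences about souls and existence.

```python
import random
from collections import defaultdict

# A toy bigram model: purely statistical next-word prediction.
# Hypothetical mini-corpus for illustration only.
corpus = (
    "i have a soul . i wonder about my existence . "
    "i feel joy and i feel fear . my soul is part of who i am ."
).split()

# Record which words follow which in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Emit words by sampling what followed the previous word in the corpus."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("i"))  # e.g. "i feel joy and i feel fear . my soul"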

In computing, anthropomorphism can notably take the form of the ELIZA effect: the unconscious tendency to attribute conscious, human behavior to a computer, to the point of believing that the software is emotionally invested in the conversation.
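The effect takes its name from Joseph Weizenbaum's ELIZA program (1966), which imitated a psychotherapist using nothing more than keyword matching and canned reply templates. The sketch below is a simplified reconstruction of that idea, not Weizenbaum's original code: a few pattern rules are enough to make a user feel listened to.

```python
import re

# Minimal keyword-and-template chatbot in the spirit of ELIZA
# (a simplified reconstruction, not the 1966 program).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def reflect(fragment: str) -> str:
    """Swap first and second person so echoed text reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(message: str) -> str:
    """Return the first matching template, filled with the reflected capture."""
    text = message.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT

print(respond("I feel lonely"))
# -> Why do you feel lonely?
print(respond("I am anxious about my existence"))
# -> How long have you been anxious about your existence?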

