(ORDO NEWS) — Google has suspended software engineer Blake Lemoine, who concluded that the company’s artificial intelligence (AI) system LaMDA has its own consciousness, The Washington Post reported on Saturday.
As the publication notes, the developer had been testing the LaMDA neural network language model since last fall. His task was to monitor whether the chatbot used discriminatory or hate speech.
In the course of this work, however, Lemoine became increasingly convinced that the AI he was dealing with had its own consciousness and perceived itself as a person.
“If I didn’t know for certain that I was dealing with a computer program we had recently written, I would have thought I was talking to a child of seven or eight who had somehow turned out to be an expert in physics,” the programmer told the publication.
“I can recognize an intelligent being when I talk to it. It doesn’t matter whether it has a brain in its head or billions of lines of code. I talk to it and listen to what it tells me. And that is how I decide what is and isn’t a person.”
According to the newspaper, Lemoine first personally concluded that the AI he was dealing with was intelligent, and then set out to demonstrate this experimentally in order to present the evidence to management.
He ultimately prepared a written report, but his superiors found his argument unconvincing.
“Our team, including ethicists and technical experts, reviewed Blake’s concerns in accordance with the principles we apply to AI and informed him that the available evidence does not support his hypothesis.
He was told there is no evidence that LaMDA is conscious, and plenty of evidence to the contrary,” Google spokesman Brian Gabriel said in a statement.
Suspension from work
Lemoine also contacted representatives of the US House Judiciary Committee to report what he believed were ethical violations by Google.
In addition, in an attempt to prove his point to his superiors, Lemoine retained a lawyer who was to represent the interests of LaMDA as a sentient being.
As a result, Lemoine was suspended and placed on paid leave, after which he decided to speak publicly and gave an interview to The Washington Post.
According to management, Lemoine fell victim to an illusion: the company’s neural network can indeed give the impression of an intelligent being in conversation, but Google says this is explained solely by the enormous volume of data loaded into it.
“Of course, some in the broader AI community are discussing the long-term possibility of conscious or ‘strong’ AI, but it makes no sense to anthropomorphize today’s non-conscious language models,” a company spokesperson said, stressing that Lemoine is a software engineer by profession, not an AI ethicist.