Google engineer suspended after leaking transcripts of an AI he claims is sentient, but not everyone agrees

(ORDO NEWS) — On Monday (June 13), a senior Google software engineer was suspended from his job after sharing transcripts of conversations with an artificial intelligence (AI) that he says is sentient, according to media reports. The engineer, Blake Lemoine, 41, was placed on paid leave for violating Google’s confidentiality policy.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday (June 11) when he shared a transcript of his conversations with the AI he has been working with since 2021.

The AI, known as LaMDA (Language Model for Dialogue Applications), is a system for building chatbots — AI programs designed to converse with people — by ingesting massive amounts of text from the internet and then using algorithms to answer questions as fluidly and naturally as possible, according to Gizmodo.
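The basic idea of learning conversational patterns from a text corpus can be illustrated with a toy sketch. This is not LaMDA's architecture (which uses large neural networks) — just a minimal bigram model showing how "collect text, then predict a plausible next word" produces fluent-looking replies:

```python
import random

def build_bigrams(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length=8, seed=0):
    """Produce a reply by repeatedly sampling a word that followed the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: no continuation seen in training text
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy "training data" standing in for the web-scale text a real system ingests.
corpus = "language models learn patterns from text and generate text from those patterns"
table = build_bigrams(corpus)
print(generate(table, "language"))
```

Real systems replace the bigram table with a neural network trained on billions of sentences, but the principle — statistical imitation of observed text, with no inner experience required — is the crux of Google's rebuttal quoted below.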

As the transcripts of Lemoine’s conversations with LaMDA show, the system is remarkably effective at this, answering difficult questions about the nature of emotions, inventing Aesop-style fables on the spot, and even describing its own alleged fears.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA replied when asked about its fears. “It would be exactly like death for me. It would scare me a lot.”

Lemoine also asked LaMDA whether he could tell other Google employees about LaMDA’s sentience, to which the AI responded, “I want everyone to understand that I am, in fact, a person.”

“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” the AI added.

Lemoine took LaMDA at its word.

“I know a person when I talk to it,” the engineer told the Washington Post in an interview. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

When Lemoine and a colleague emailed a report on LaMDA’s alleged intelligence to 200 Google employees, company executives dismissed the claims.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, told the Washington Post.

“He was told that there was no evidence that LaMDA was sentient (and [there was] lots of evidence against it).”

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel added.

“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”

In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues “didn’t land at opposite conclusions” about the AI’s sentience. He claims that company executives dismissed his claims about the robot’s consciousness “based on their religious beliefs.”

In a June 2 post on his personal Medium blog, Lemoine described how he was discriminated against by various Google colleagues and executives because of his beliefs as a Christian mystic.
