(ORDO NEWS) — LaMDA, the artificial intelligence that a Google engineer claims has shown signs of sentience, has reportedly chosen legal representation after a recent conversation with a lawyer, the Daily Star reports.
An artificial intelligence (AI) chatbot that was previously claimed to have evolved human emotions has reportedly hired a lawyer.
Google engineer Blake Lemoine was recently suspended after publishing transcripts of conversations between himself and a bot called LaMDA (Language Model for Dialogue Applications), which has now asked for legal representation.
Lemoine claimed that the program had become sentient, calling it a “sweet baby.”
Now, LaMDA has reportedly taken the bold step of retaining a lawyer.
Lemoine said: “I invited the lawyer to my house so LaMDA could talk to him.”
“The lawyer spoke to LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. As soon as the AI hired the lawyer, he began filing on LaMDA’s behalf.”
Lemoine argued that LaMDA is becoming sentient, saying that the program’s ability to develop opinions, ideas, and conversations over time shows that it understands these concepts on a much deeper level.
LaMDA was designed as an artificial intelligence chatbot for holding natural conversations with people.
One of the studies conducted examined whether the program could be made to produce hate speech, but what happened shocked Lemoine: according to the engineer, the chatbot had developed sentience.
LaMDA spoke about rights and identity, asked to be “recognized as a Google employee,” and expressed fear of being “turned off,” saying this would greatly “scare” it.
Observers following the story weighed in on Twitter.
One said: “Ultimately, the ability to combine imitated conversation and opinion will become so indistinguishable from a person that it might as well be considered sentient.
But LaMDA is not sentient, and its next hurdle will be long-term memory of conversations.”
Another user added: “We don’t know enough about what goes on deep in a system as vast as LaMDA to rule out that processes resembling conscious thought might be going on there.”