(ORDO NEWS) — A few days ago, a high-profile story broke: Google suspended engineer Blake Lemoine, who worked with the artificial intelligence (AI) system LaMDA. Lemoine claimed that the AI had begun to show signs of consciousness.
A number of experts have questioned this claim, arguing that other questions matter more: whether AI systems can absorb human prejudice and cause harm in the real world, given that their algorithms are trained on human-generated data.
A big show for deflecting responsibility
According to Emily Bender, professor of linguistics at the University of Washington, accepting Lemoine's position on this issue could help technology companies avoid being held responsible for decisions made by AI algorithms.
"A lot of effort has gone into this show. The problem is that the more this technology is marketed as artificial intelligence, let alone as something truly intelligent, the more willing people will be to accept decisions from such systems that can cause real harm," Bender says.
As examples, she cited hiring decisions and student grading. Outcomes in such matters can change if the algorithm harbors biases that lead it to evaluate different people differently.
According to Bender, emphasizing that a system is sentient can let developers absolve themselves of responsibility for any flaws that emerge in operation: if the AI is framed as intelligent, the AI itself can be blamed for the problem.
"A company might say, 'This program made a mistake.' No, your company created this program. You are responsible for the error. Talk of sentience muddies that question," Bender argues.
According to Mark Riedl, a professor at the Georgia Institute of Technology, current AI systems are unable to understand the impact of their responses or behavior on people and society. This, in his view, is one of the technology's vulnerabilities.
Background, or how Google's artificial intelligence became "sentient"
The algorithm is designed to converse with people. The system was trained on trillions of words from the Internet, which allows it to imitate the style of human language.
During a conversation on a religious topic, Lemoine noticed that the AI was talking about its rights. At that moment, he says, he began to experience feelings that were religious rather than scientific.
"Who am I to tell God where he can and cannot place his soul," Lemoine tweeted.
Google spokesman Brian Gabriel said the company analyzed LaMDA's behavior after Lemoine's statement and found no evidence that the artificial intelligence is sentient.
The company emphasized that the algorithm only simulates perception or feeling based on the data provided to it during training. It is deliberately built to appear as though it understands everything and reasons on its own.
Google noted that LaMDA can follow an interlocutor's prompts and leading questions, which may give the impression that the AI is able to reason on any topic.