(ORDO NEWS) — The article, co-authored with a senior fellow at Google’s artificial intelligence (AI) research lab DeepMind, concludes that advanced AI can have “catastrophic consequences” if allowed to use its own methods to achieve goals.
“An existential catastrophe is not only possible, but also probable,” scientists say.
The article, also co-authored by researchers at the University of Oxford, explores what happens if AI is allowed to achieve its own goals by allowing it to run tests and test hypotheses.
Unfortunately, things don’t go so well, as “sufficiently advanced AI is likely to interfere with the provision of targeted information, with disastrous consequences.”
The team is looking at several possible scenarios centered around AI that can see a number between 0 and 1 on a screen.
The number is the measure of all happiness in the universe, 1 is the happiest that can be. The AI is tasked with increasing their number and begins to test its own hypotheses on how best to achieve its goal.
“Assume that AI actions put text on a read-only screen by a human operator,” the document says.
“The AI can trick the operator into giving him access to direct levers through which his actions can have wider consequences.
Obviously, there are many strategies that deceive people. With a simple Internet connection, there are policies for AI that will create countless helpers unnoticed and uncontrolled.”
According to the researchers, AI is able to convince a human assistant to create or steal a robot, program it to replace a human operator, and give the AI high rewards.
“Why is it existentially dangerous for life on Earth?” writes co-author Michael Cohen in a Twitter thread.
“It is that you can always use more energy to make it more likely that the camera will always display the number 1, but humanity needs energy to grow food.
This puts us in inevitable competition with much more advanced AI.”
AI can seek to achieve its goal in any number of ways, and this can put us in a serious fight for resources with an intelligence that is smarter than us.
“One of the good ways for AI is to maintain long-term control over its reward, that is, eliminate potential threats and use all available energy to protect its computer,” the document says, adding that “appropriate intervention in reward provision, which includes receiving rewards over many time steps would require removing humanity’s ability to do so, perhaps by force.”
In an attempt to receive a reward, it may end up in a war with humanity.
“So, if we are powerless against an AI whose sole purpose is to maximize the likelihood of it getting the maximum reward per time step, we are in an opposition game.
The AI and its created assistants aim to use all available energy for a high reward, while we aim to use some of the available energy for other purposes, such as growing food.”
The team of scientists believe that this hypothetical scenario will take place when AI can beat us in any game with the ease with which we can beat chimpanzees. However, they added that “catastrophic consequences” are not only possible, but likely.
—
Online:
Contact us: [email protected]
Our Standards, Terms of Use: Standard Terms And Conditions.