(ORDO NEWS) — As scientists continue to make machines smarter and smarter, many people are asking: what happens if computers become too intelligent and simply act in their own interest? The entertainment industry, from The Matrix to The Terminator, has long wondered whether the robots of the future will threaten humanity. Now an international team of scientists has concluded in a new study that there would be no way to stop such machines: humans would be unable to prevent a superintelligent AI from doing whatever it sees fit.
Scientists at the Center for Humans and Machines at the Max Planck Institute for Human Development began by considering what such a machine would look like. Imagine an AI system whose intelligence is far superior to that of humans, so much so that it can learn on its own without additional programming. The researchers note that if such a machine were connected to the Internet, it would have access to all of humanity's data and could even take control of other machines around the world.
The study's authors ask what such a mind could do with all this power. Would it work for the benefit of people? Would its enormous computing power be devoted to problems such as climate change? Or would the machine seek to take control of the lives of its human neighbors?
Controlling the uncontrollable – the dangers of superintelligent artificial intelligence
Computer scientists and philosophers alike have studied whether there is any way to keep superintelligent machines from turning on their creators and to ensure that the computers of the future do not harm their masters. The new research, unfortunately, indicates that keeping superintelligent AI in check is all but impossible.
“A superintelligent machine that controls the world sounds like science fiction today. But there are already machines that perform certain important tasks independently, without programmers fully understanding how they learned to do so. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, head of the Digital Mobilization Group at the Center for Humans and Machines.
For the study, the researchers conceived a theoretical containment algorithm that would prevent AI from harming humans under any circumstances: in simulation, the AI is halted as soon as its actions are deemed harmful. In the real world, the authors argue, no such algorithm can be built, because no program can be guaranteed to correctly predict whether another program will cause harm, for the same reason the halting problem is undecidable.
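The obstacle the researchers describe is a classic computability result. The following minimal Python sketch illustrates the diagonalization argument; `is_harmful` and `contrarian` are hypothetical names introduced here for illustration, not part of the study or of any real API, and `is_harmful` is deliberately left unimplemented because the whole point is that it cannot exist.

```python
def is_harmful(program, data):
    """Hypothetical containment check: would return True exactly when
    running program(data) harms humans. The study's argument is that
    no total, always-correct checker of this kind can exist."""
    raise NotImplementedError("provably impossible in general")

def contrarian(checker):
    """Build a program that does the opposite of whatever the checker
    predicts about it -- the same trick used to prove the halting
    problem undecidable."""
    def run(data):
        if checker(run, data):    # checker says: harmful...
            return "do nothing"   # ...so behave safely
        else:                     # checker says: safe...
            return "do harm"      # ...so misbehave
    return run

# Whatever is_harmful answered about contrarian(is_harmful), it would
# be wrong, so a perfect containment algorithm cannot exist.
```

Running the adversarial program simply surfaces the impossibility: any attempt to consult the checker fails, since no correct implementation of it can be supplied.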