Researchers say it will be impossible to control super-intelligent AI


(ORDO NEWS) — The idea of artificial intelligence overthrowing humanity has been discussed for decades, and in 2021 scientists delivered a verdict on whether we could control a high-level computer superintelligence. The answer? Almost certainly not.

The catch is that controlling a superintelligence far beyond human understanding would require a simulation of that superintelligence that we can analyze (and control). But if the superintelligence is beyond our understanding, no such simulation can be built.

It’s impossible to set rules like “don’t hurt people” if we don’t understand the scenarios the AI is going to come up with, the authors of the paper suggest. Once a computer system operates at a level beyond the reach of our programmers, we can no longer set limits.

“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers wrote.

“This is because a superintelligence is multi-faceted and therefore potentially capable of mobilizing a variety of resources to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem posed by Alan Turing in 1936. The problem is to determine whether a computer program will arrive at an output and an answer (and so halt), or simply loop forever trying to find one.

As Turing proved with some clever math, while we can know the answer for some specific programs, it is logically impossible to find a general method that tells us for every potential program that could ever be written. That brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.
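Turing’s proof can be sketched with the classic diagonal construction. The short Python program below is purely illustrative (the function names `paradox` and `always_says_loops` are invented for the sketch, not from the paper): given any claimed halting-checker, it builds a program that behaves opposite to the checker’s verdict, so no checker can ever be right about every program.

```python
def paradox(halts, program):
    """Behave opposite to whatever the claimed halting-checker predicts."""
    if halts(program, program):
        # The checker claims we halt, so loop forever -- proving it wrong.
        while True:
            pass
    # The checker claims we loop forever, so halt at once -- also proving it wrong.
    return "halted"

# Any concrete checker must give *some* verdict; either way it fails on `paradox`.
def always_says_loops(program, data):
    return False  # a (wrong) checker that predicts every program loops forever

# The checker predicts `paradox` loops, so `paradox` immediately halts instead.
print(paradox(always_says_loops, paradox))  # -> halted
```

A checker that answered `True` would fare no better: `paradox` would then loop forever, again contradicting the verdict. This is why no algorithm can decide halting for all programs, which is the obstacle the containment argument rests on.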

Any program written to prevent AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or may not; it is mathematically impossible for us to be absolutely certain either way, which means the AI cannot be contained.

“Essentially, this renders the containment algorithm unusable,” said computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany in 2021.

The alternative to teaching AI some ethics and telling it not to destroy the world, something no algorithm can be absolutely certain of achieving, the researchers say, is to limit the capabilities of the superintelligence. It could, for example, be cut off from parts of the Internet or from certain networks.

The study rejected this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument being that if we are not going to use it to solve problems beyond human capabilities, why create it at all?

If we are going to push ahead with artificial intelligence, we may not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about where we are going.

“A superintelligent machine that rules the world sounds like science fiction,” said computer scientist Manuel Cebrian, also of the Max Planck Institute for Human Development, in 2021. “But there are already machines that perform certain important tasks independently, without the programmers fully understanding how they learned it.”

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”
