(ORDO NEWS) — Perhaps the uprising of the machines, as science fiction predicts, will never happen. But humanity can still fall into the trap of trusting artificial intelligence to solve its problems.
Why might even the smartest machine behave like a dull-witted genie, and how are scientists trying to prevent it? In mid-July, at an international chess forum, a chess-playing robot broke the finger of a boy it was playing against.
According to the organizers, the child made his move too quickly and thereby violated the safety rules. Perhaps the robot simply mistook the finger for a chess piece, or tried to stop an “intruder”. No one knows for certain.
A chess robot is hardly the most advanced example of artificial intelligence (AI). But here is a different story: during a recent test of Tesla’s latest driver-assistance system, the car knocked down a child-sized dummy three times. Even at a speed of 40 km/h, the car’s electronic “brain” could not identify the child.
AI is penetrating deeper into our lives, and this process is hardly possible to stop. The more tasks machines perform, the more we trust them.
But the risks don’t disappear. If a flaw in the program of a chess robot makes it unpredictably dangerous, then what will a subtle flaw in the architecture of the most powerful AI in history lead to?
The prediction comes true
In 1872, the writer Samuel Butler published the novel “Erewhon” (an anagram of the word “nowhere”). It described a country where, after a terrible civil war, all mechanical devices were banned. The war had been preceded by a dispute between supporters and opponents of machines.
The opponents’ position eventually prevailed, and what is striking is that their arguments remain relevant a century and a half later, even though Butler wrote at a time when there were no robots, no artificial intelligence, and no science-fiction books or films about the rise of the machines.
“Are we not ourselves creating the heirs to our supremacy on earth?” the opponents of the machines asked. “Daily adding to the beauty and subtlety of their construction, daily endowing them with greater skill and more of that self-regulating, autonomous power which will be better than any intellect?
Centuries will pass, and we shall find ourselves a subjugated race… We must choose between the alternatives: continue to endure our present suffering, or watch as we are gradually overtaken by our own creations until we lose all superiority over them, just as the wild beasts have none over us…
The yoke will settle on us little by little and quite imperceptibly.”
The image Butler created proved so compelling that in the middle of the 20th century Alan Turing, one of the founding theorists of artificial intelligence, referred back to it.
“It seems probable that once machine methods of thought are in place, it will not take long for them to outstrip our feeble powers,” he said in one of his lectures.
“Machines would not face the problem of dying, and they would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control, in the way described in Erewhon.”
Turing himself believed that machines would be intelligent when they were able to deviate from a given program and make decisions on their own.
To do this, in his opinion, one should not try to copy the intelligence of an adult, but give the machine the opportunity to learn.
Turing suggested writing one program that would imitate the mind of a child, and another that would train it through a system of rewards and punishments.
Just a year after Turing’s landmark lecture, in 1952, programmer Arthur Samuel wrote a checkers program that got better with every game.
Samuel used an evaluation function that scored the position of the pieces on the board and each side’s chances of winning. The machine remembered which positions led to success and used them to make new predictions.
Today this principle is called “reinforcement learning”. It imitates the workings of the human brain (more precisely, it reproduces a more or less realistic model of them), as Turing had wanted. The brain’s neurons are organized in layers that exchange data and use it to reinforce certain behaviors.
At the same time, the neurons evaluate one another’s work (give feedback) on a “hot or cold” principle. This is how learning happens. Programs built on this principle are called neural networks.
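Samuel’s idea can be illustrated with a minimal value-learning sketch. The positions, games, and learning rate below are invented for illustration, not taken from his actual checkers program: every position visited during a game has its estimated value nudged toward the game’s outcome.

```python
# Minimal sketch of Samuel-style value learning (hypothetical positions
# and outcomes, not his actual checkers program): positions seen in
# winning games drift upward in value, those in losing games downward.

def update_values(values, game_positions, outcome, lr=0.1):
    """Nudge each visited position's value toward the game outcome."""
    for pos in game_positions:
        old = values.get(pos, 0.0)
        values[pos] = old + lr * (outcome - old)
    return values

values = {}
# Two hypothetical games: +1 means a win, -1 a loss.
update_values(values, ["open", "center", "endgame_a"], +1)
update_values(values, ["open", "flank", "endgame_b"], -1)

# "open" appeared in both a win and a loss, so its value stays near
# zero, while positions unique to each game move toward +1 or -1.
print(values["center"] > values["flank"])  # True
```

After enough games, the machine prefers moves leading to high-valued positions, which is the “remembered which positions contributed to success” behavior described above.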
“The problem with early neural networks is that they could only solve toy problems,” says AI researcher and OpenAI co-founder Ilya Sutskever.
“They could not be scaled or applied to other purposes. Modern deep learning models, by contrast, are not only universal but also effective: if you want the best results on many complex problems, you should use deep learning. It scales.”
Deep learning systems are really producing impressive results: recognizing people in photographs, creating realistic paintings by copying the style of famous artists, solving problems in molecular biology.
And here the question arises: what is the relationship between these results and the potential threat? If today the computer helps us in certain areas, why should we assume that it will turn from an assistant into an adversary? In the end, the person still decides what and how the program should do.
Machines taking over the world are a common pop-culture story. In many ways it is inspired by cyberpunk, a genre that depicts the decline of human culture against a backdrop of technological progress.
AI in this setting acts as a more perfect form of life, like the Nietzschean Superman, to which ordinary people are at best slaves. At the same time, the “supercomputer” often has a semblance of personality, character, worldview, even a philosophy of its own (which it gladly expounds to its captive heroes).
But the real danger, as modern technoskeptics see it, lies elsewhere. AI can become dangerous not because of its own superiority complex, but because of our mistakes in designing it.
“We build optimizing machines, give them tasks, and they solve them,” writes artificial intelligence specialist Stuart Russell. “Machines are intelligent insofar as their actions can be expected to achieve their goals. But what if the goal is set incorrectly?”
In 2003, philosopher Nick Bostrom described the following thought experiment. Suppose we have created a superintelligent robot programmed for a single task: making paper clips. The robot learns through a reinforcement system and, over time, does its job better and better.
At some point, it realizes that to increase production further, it needs to turn the entire planet into one huge factory, putting all of its resources to work.
A simpler, though far less disturbing, example is a robotic vacuum cleaner that “swallows” an engagement ring while cleaning, unable to distinguish it from ordinary debris. But what if such a cleaner is entrusted with larger tasks, say, at the scale of a city?
“The real threat from AI is not malice but competence,” wrote theoretical physicist Stephen Hawking.
“Let’s say you manage a green-energy project at a hydroelectric power plant, and there is an anthill in the flood zone. You exterminate the ants not because you hate them; they simply get in your way.”
The more power the AI gains, the more resources it controls, and the broader its tasks, the greater the risk that it will do serious damage in carrying them out.
The paradox is that the most advanced AI could solve many of the problems humanity struggles with, such as the reasonable allocation of resources, the search for cures for deadly diseases, the creation of new types of fuel, the prediction of disasters.
But these same abilities carry mortal danger: what if, at some point in pursuing these goals, we humans find ourselves at the AI’s mercy?
A few years ago, researchers noticed that when a neural network was given the task of sorting content according to people’s preferences, it began to offer ever more radical options.
One researcher said that after watching videos of pro-Donald Trump rallies, YouTube offered her videos of “demagogic white supremacist speeches, claims that there was no Holocaust, and other disturbing content.”
The developers did not build such biases into the algorithm. They appear to be a side effect of its objective to “make us feel good”, since videos like these often rack up many views.
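The mechanism can be shown with a toy model. The videos and scores below are entirely invented: the point is only that an objective which never mentions “extremeness” can still favor the most extreme item whenever extreme content happens to score highest on engagement.

```python
# Toy model (invented items and scores): a recommender that greedily
# maximizes predicted engagement. Nothing in the objective mentions
# how extreme an item is, yet the policy drifts toward extreme content
# whenever such content happens to be the most engaging.

videos = {
    "cat_clip":       {"engagement": 0.30, "extremeness": 0.1},
    "news_report":    {"engagement": 0.45, "extremeness": 0.3},
    "angry_rant":     {"engagement": 0.60, "extremeness": 0.7},
    "conspiracy_vid": {"engagement": 0.75, "extremeness": 0.9},
}

def recommend(catalog):
    """Pick the item with the highest predicted engagement."""
    return max(catalog, key=lambda v: catalog[v]["engagement"])

choice = recommend(videos)
print(choice)  # 'conspiracy_vid'
```

The “bias” lives in the correlation between engagement and extremeness in the data, not in the code, which is exactly the kind of side effect described above.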
Even if AI never turns the entire planet into a factory or starts a nuclear war, the scenario of gradual, creeping degeneration is quite real. Who can guarantee that “caring” algorithms will not one day bring populist politicians to power, or fuel the spread of conspiracy theories and radical movements?
The goal is to achieve goals
In 1960, MIT professor Norbert Wiener published “Some Moral and Technical Consequences of Automation”. Here is how he formulated its main idea: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere, we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”
But such confidence is unattainable, argues the aforementioned AI specialist Stuart Russell. When we give a machine a goal, we cannot enumerate and correctly weigh all the goals, sub-goals, exceptions and caveats, or even determine which of them are correct.
Sending it out to “graze” across vast expanses of information (that is, to absorb and process data), we cannot anticipate every decision it will make. With the explosive growth of AI capabilities, the consequences of a single ill-defined condition could radically change our lives.
According to Russell, uncertainty built into the machine, obliging it to defer to human intervention, could serve as a safeguard. This approach is in some sense the opposite of reinforcement learning.
The AI in this case does not try to optimize a reward function of its own; instead, it tries to work out which reward function the person is optimizing. In other words, whereas in reinforcement learning the system determines which actions best lead to the goal, here it first finds out what the goal is.
To illustrate this approach, the scientist and his colleagues devised the so-called “off-switch game”. Its participants are a woman, Harriet, and a robot, Robbie.
Robbie must decide whether to act on Harriet’s behalf (say, book her a nice but expensive hotel room), but he is unsure of her preferences. Robbie estimates that his payoff (Harriet’s approval) lies somewhere between -40 and +60, for an average of +10.
If he does nothing, the payoff is 0. But there is a third option: Robbie can ask Harriet whether she wants him to proceed or would prefer to switch him off, that is, to remove him from the decision about booking the room.
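The arithmetic behind the game can be checked directly. The sketch below uses the payoff range from the text (-40 to +60, average +10) and adds one assumption of its own: that Robbie’s uncertainty is uniform over that range. If Harriet switches Robbie off exactly when the booking would displease her, asking discards only the negative outcomes, so it is never worse than acting blindly.

```python
# Off-switch game, by simulation. Payoff range (-40..+60) is from the
# text; the uniform distribution over it is an assumption made here.
import random

random.seed(0)
samples = [random.uniform(-40, 60) for _ in range(100_000)]

# Acting without asking: Robbie gets whatever the payoff turns out to be.
act_directly = sum(samples) / len(samples)  # about +10

# Asking first: Harriet turns Robbie off whenever the payoff would be
# negative, so those cases yield 0 instead of a loss.
ask_first = sum(max(0, s) for s in samples) / len(samples)  # about +18

print(round(act_directly), round(ask_first))
```

Deferring to Harriet raises Robbie’s own expected payoff, which is why, under this model, an uncertain robot has an incentive to allow itself to be switched off.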
However, this approach is also far from perfect. After all, the opinion of one person on one particular question is only the simplest case. How do we deal with the expectations of a society that demands reconciling many desires? With decisions that must be made quickly?
With processes that most people cannot understand? In other words, whom will the machine be guided by if its command of information becomes disproportionately greater than the average person’s?
Russell’s approach is gaining popularity, according to Yoshua Bengio, research director at the Mila Institute in Montreal, one of the world’s leading AI researchers. And it is quite possible to implement it. But this requires not only the efforts of developers and AI theorists.
We humans need to understand ourselves better: which values are essential and which are secondary; what we would like to place at the foundation of our strategies for society’s development; whether there are states in which our existence is optimal, and what should count as deviations from them.
Otherwise, we will find ourselves in the same situation as the heroes of the 1997 film “Wishmaster”. The genie who granted their wishes merely interpreted their words, without considering how much those wishes harmed his “customers”.