Can artificial intelligence be self-aware?

(ORDO NEWS) — How far will artificial intelligence go in the near or distant future? Should we be concerned about the potential risks of this technology, or will experts find a way to keep it safely under control? Can artificial intelligence ever be aware of its own existence?

In recent years, the Internet has been full of headlines such as:

– A neural network has learned to draw portraits!
– A neural network has learned to write April Fools’ jokes!
– Neural networks beat humans at Go, chess, and StarCraft!
– A neural network has learned to compose music!

Looking at such headlines, the average person might get the impression that any day now Skynet will send an army of biorobots to “kill all humans.”

All these developments, while of great practical importance, do not bring us much closer to creating a real AI with self-awareness and intelligence in the human sense.

Research on intelligent systems falls into two directions, known as weak and strong artificial intelligence.

Weak AI is an intelligent system designed to solve a single task, such as recognizing text in a photo or generating convincing portraits of people.

Strong AI is an artificial intelligence that possesses genuine intelligence and self-awareness.

About 99% of AI success stories in the press concern weak AI. The examples above show what neural networks have learned to do, but it is important to understand that each of them is a different, specialized network.

A neural network that draws pictures is useless for playing Go, and a neural network that plays Go well cannot write music. Moreover, a neural network often demonstrates a complete lack of understanding of the very problem it is solving.
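To make that specialization concrete, here is a minimal sketch, assuming PyTorch, of what a single-task network looks like; the architecture and shapes are purely illustrative toys, not taken from any of the systems mentioned above. Its inputs and outputs are wired for classifying small images of digits and for nothing else.

```python
# A minimal illustration of "weak AI": a network built for exactly one
# task. The architecture is a hypothetical toy, not a real product.
import torch
import torch.nn as nn

digit_classifier = nn.Sequential(
    nn.Flatten(),          # 28x28 image -> 784-dimensional vector
    nn.Linear(784, 128),   # one small hidden layer
    nn.ReLU(),
    nn.Linear(128, 10),    # exactly ten outputs: digit classes 0-9
)

# The task is baked into the shapes: feed it anything but a 28x28 image
# (a Go position, a musical score) and the forward pass either fails or
# produces meaningless digit scores.
fake_image = torch.randn(1, 1, 28, 28)   # one random stand-in "image"
logits = digit_classifier(fake_image)
print(logits.shape)  # torch.Size([1, 10]): ten digit scores, nothing more
```

Retraining the same weights on a new task would typically overwrite what the network learned before; the specialization sits in both the structure and the training, which is why each headline above refers to a different network.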

For example, AlphaStar, created by DeepMind, plays StarCraft 2 better than 99% of human players and performs confidently in games that follow standard scenarios, but as soon as it encounters an unconventional, creative strategy, it starts making absurd decisions. No one should expect self-awareness to one day awaken in such a network.

Strong artificial intelligence systems are a completely different matter. Most researchers believe that a strong AI would be aware of its own existence and intelligent in the same sense as a human.

One of the most promising approaches in this field is building a computer model of the human brain. The expectation is that in a sufficiently complex and accurate model of the human brain, self-awareness would inevitably arise.

However, our current technical capabilities do not allow us to build models of the human brain, even at the level of simple modeling of neurons and synaptic connections. Modern computing power can simulate a neural network comparable in complexity only to the brain of a cat.
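To give a sense of what even “simple modeling of neurons and synaptic connections” involves, below is a toy sketch of a single leaky integrate-and-fire neuron, one of the standard simplified neuron models. All the constants are illustrative placeholders rather than biological measurements, and a brain-scale simulation would need billions of such units plus the synaptic wiring between them.

```python
# A toy leaky integrate-and-fire neuron: the membrane voltage drifts
# toward its resting value, input current pushes it up, and crossing a
# threshold produces a "spike" followed by a reset. Constants are
# illustrative, not measured biology.
dt = 1.0           # time step, ms
tau = 20.0         # membrane time constant, ms
v_rest = -65.0     # resting potential, mV
v_thresh = -50.0   # spike threshold, mV
v_reset = -70.0    # post-spike reset potential, mV

v = v_rest
spike_times = []
for t in range(200):
    current = 20.0 if 50 <= t < 150 else 0.0  # inject current mid-run
    v += dt * (-(v - v_rest) + current) / tau  # leaky integration step
    if v >= v_thresh:
        spike_times.append(t)
        v = v_reset

print(f"spike times (ms): {spike_times}")
```

Even at this crude level of abstraction, scale is the obstacle: the human brain contains on the order of 86 billion neurons and trillions of synapses, each far richer in behavior than the update rule above.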

But the brain is not just a mass of neurons haphazardly connected to one another. Its functional structure is just as important, so a full-fledged model of even a cat’s brain is still a long way off.

I think that once we learn to model the functional structure of the brains of mammals such as cats and dogs, we will have a much better sense of how realistic it is to achieve self-awareness through a computer model of the brain.

At the same time, this approach has obvious drawbacks. First, there is no guarantee that intelligence and self-awareness will actually arise in a sufficiently complex model of the brain. Second, some researchers hold that modeling the human brain is simply the wrong path to strong artificial intelligence.

It is as if, needing a vehicle, we skipped the wheel, the cart, and the car, and instead labored long and tediously over a perfect model of mechanical human legs. It may be that we should be looking for a fundamentally different way to create artificial intelligence.

To conclude, it is worth mentioning the voices of prominent figures like Stephen Hawking and Elon Musk, who have expressed concern that artificial intelligence could endanger the future of humanity.

This does not mean they opposed technological progress in AI; rather, they argued that we should be well prepared before releasing such technology. As Stephen Hawking once said, there is no guarantee that AI will help us rather than turn against us, and there is a real risk that it could severely disrupt the global economy.

Elon Musk, for his part, has suggested that artificial intelligence could even trigger World War III.

We can only guess what will happen when technology reaches the level of advanced artificial intelligence. Do you think it will bring positive change or, on the contrary, disruption?

