(ORDO NEWS) — In a recent interview with Axel Springer CEO Matthias Döpfner, the head of SpaceX and Tesla, Elon Musk, spoke about what he fears most, listing three threats that could destroy humanity.
“I spent a lot of time talking about declining birth rates. Perhaps this is the biggest threat to the future of human civilization,” the businessman said.
Musk regularly argues that talk of the Earth’s overpopulation is unjustified and that people should instead think about having children to save civilization.
“It’s been bothering me for years now, because I just don’t see any change for the better. Every year it gets worse, and I drive my friends crazy with these worries,” the billionaire added.
In addition, Musk fears that artificial intelligence will get out of hand in the near future. “I have concerns that artificial intelligence will not develop according to plan,” the entrepreneur warned.
The head of SpaceX is far more pessimistic about AI than other tech entrepreneurs. Fearing humanity’s inability to control the power of artificial intelligence, Musk co-founded OpenAI, a company that conducts its own open-source research so that AI does not catch humanity by surprise.
The entrepreneur also views the Neuralink startup and its brain-chip implants in the context of countering AI: at first the chip will replace the smartphone, and eventually it will let a person connect directly with artificial intelligence.
As conceived by Musk, humanity needs to implement at least these two technologies in order to coexist peacefully with AI.
The billionaire named religious extremism as his third fear. In Musk’s view, adherence to extremely radical, fundamentalist interpretations of dogma threatens the development of science.
The businessman noted that growth in the number of followers of such religious movements can create problems both for the countries themselves and for their scientific progress.
“I think that religious extremism is another existential threat to humanity,” he assured.
Musk did not specify which countries and religious movements he had in mind. In the past, the entrepreneur has, on the contrary, said that he is close to Christian values.
“I agree with the principles Jesus stood for. There is great wisdom in the teachings of Jesus, and I agree with these teachings,” he said in an interview with the satirical website The Babylon Bee.
The coordinator of the Russian Association of Futurologists, Konstantin Frumkin, agreed in an interview with Gazeta.Ru that declining birth rates are the most clear-cut threat facing humanity in the foreseeable future.
“By itself this is not yet a problem, provided humanity has enough resources for everyone. But the question arises of which part of humanity will be able to work, and which part will live off the rest,” he said.
According to the expert, it is not yet possible to say how much working life will be extended as people age, or whether labor productivity will rise.
“But all this matters for whether humanity will become poorer or richer. This should be solved by the development of technologies and biotechnologies,” Frumkin added.
The futurist noted that powerful artificial intelligence is harder to regard as a real threat today. “So far, the problem exists only in science fiction. All current AI systems are fairly simple things that operate strictly within the framework set by people,” the expert said.
In his words, this is an abstract threat that could become real. “But there are still no real cases, and no understanding of who exactly could create a system that would lead to such an outcome,” the speaker stated.
Frumkin also stated that he considers religious extremism a threat to the present. “Its scope and danger are clearly visible even now, including in the current political situation in Iran and Afghanistan,” he said.
According to the futurologist, religious extremism so far remains almost entirely confined to Islamic countries. “Religious extremism exists in India, but only marginally; it does not endanger the country’s political system,” the specialist added.
Frumkin also named two more dangers that, in his opinion, threaten humanity. The futurist believes that solving the climate problem and the depletion of energy resources in the near future is far more important than out-of-control AI or religious extremism.
He noted that the hardest part will be finding replacements for those resources that renewable energy sources cannot substitute. To address this problem, he concluded, work is under way on mining asteroids.
Among other things, the head of SpaceX said the company intends to increase production of the Raptor rocket engines needed for its reusable spacecraft, which is designed to carry people to Mars.
One user asked Musk how many Raptor engines the company is aiming for and received the reply: “From 800 to 1000.”
“About that many are needed to create the fleet required to build a self-sufficient city on Mars in 10 years. The city itself will probably take about 20 years, so we hope it will be built around 2050,” said the head of SpaceX.
He shared these thoughts on the podcast of Lex Friedman, an American scientist of Russian origin.
According to Musk, $1 trillion is needed to send colonists to Mars under the current conditions. This is explained by the fact that people need not only a rocket, but also medical equipment, means of communication and more.
Musk also noted that he wants to build self-sustaining cities on Mars with underground hydroponic farms, in which the necessary conditions will be created for the uninterrupted cultivation of crops and providing food for the colonists. The farms will run on solar energy.
“We can’t colonize Mars because it’s too expensive for now. Sending humans to Mars today would cost $1 trillion. The fact is that one rocket is not enough; people also need medical equipment, means of communication and more,” Musk said.
An uprising of an out-of-control robot armada, a supercomputer that decides to take charge of humanity: this is the typical plot of many science-fiction films.
Countries around the world are trying to keep such a scenario from playing out in the real future. To that end, governments and technology companies are adopting codes and bills meant to cushion the risks of artificial intelligence.
These documents regulate AI ethics, trying to mitigate the harm that smart machines could potentially do to humanity.
Sekret Firmy spoke with AI developers and cybersecurity specialists about whether the Matrix (a reality run by out-of-control robots) could appear in the future, and what threats modern developments in artificial intelligence pose.
Laws of robotics of the 21st century
Countries around the world are adopting laws and codes to regulate AI. In Russia, a similar document was signed at the end of October 2021. It was developed by the largest technology companies (Yandex, VK, Rostelecom, etc.) together with the government.
The document says that the human being, with his rights and freedoms, must remain the highest value in the development of AI. Technology cannot be used to harm people, property or the environment. Responsibility for possible damage lies with the developer, not the machine.
Representatives of the business community, science and government agencies will develop guidelines and select the best and worst practices for resolving emerging ethical issues in the life cycle of artificial intelligence.
As Nikita Kulikov, CEO of the non-profit organization PravoRobotov, told Sekret Firmy, the Russian version of the code of ethics clearly has borrowings from European legislation. At the EU level, such a document was adopted in 2019.
The European Commission has developed seven requirements for robots. Above all, the Europeans were concerned with ensuring human control over robots.
The EU also urged greater care with data privacy, and even the environmental responsibility of robots was not forgotten: green themes are now in fashion in Europe.
According to Kulikov, the EU code of ethics is now considered the most advanced legal act in this area. “The Europeans have a certain edge here; the drafters of all similar documents in other countries looked to the EU code,” the expert explained.
Other parts of the world are not far behind. The US Department of Defense in 2020 followed on the heels of the Europeans, developing and endorsing similar ethical standards. With one small nuance: they concerned the use of robots in war. They include five principles.
- Responsibility – the developer is responsible for the creation and use of such weapons.
- Impartiality – developers will try to reduce the “unintentional bias” of robots, meaning that machines should not make decisions outside their area of responsibility.
- Manageability – the final decision should always be made by people, not robots.
- Reliability – AI systems will be tested continuously and must be completely safe for humans.
- Obedience – robots must not escape human control.
In 2021, China also introduced a code of ethical principles for robots. It focuses on human control over robotics. The document says that robots should improve people’s well-being and must not be used for crime. In addition, it emphasizes that robots should not violate privacy.
Matrix from the future
All the adopted ethics codes proceed from the premise that AI is capable of posing threats to humans; it is precisely to prevent these that the documents were developed.
For decades, philosophers and futurologists have placed this problem at the center of their intricate theories, and with each decade the threats from robots have been painted ever more fantastically, with robots often portrayed as hostile invaders capable of enslaving the world.
Today, such scenarios no longer seem entirely unrealistic. Therefore, scientists and developers have focused on working out specific scenarios of possible threats from artificial intelligence and how they can be avoided.
For example, will self-driving cars increase the number of accidents in the future? Or will deepfakes turn into weapons of mass destruction in the hands of scammers?
Stanislav Ashmanov, CEO of the Nanosemantika IT company, identifies three threats associated with the development of artificial intelligence.
“The first is a violation of privacy caused by the automatic analysis of large amounts of data – transactions, correspondence, face recognition in video, and so on. The second threat is the replacement of people in the workplace by robots.
First of all, this applies to mass professions – cashiers, call-center employees, drivers. This process has already begun. And finally, the third threat that worries people is the emergence of intelligent machines capable of enslaving humanity.
Such machines would gain control over weapons systems, leading to the destruction of humanity as a species. To me this looks more like fantasy, although even today we see artificial intelligence controlling some types of weapons – for example, drones that destroy ground targets,” the expert told Sekret Firmy.
According to him, negative scenarios can also be associated with the development of neural networks.
Sometimes neural networks get out of control even today. In 2019, Oleg, Tinkoff Bank’s voice assistant, suggested that a client cut off her fingers. The woman had written to the chatbot that the fingerprint login in the bank’s app was not working – and received that unexpected answer.
“Neural networks can also make mistakes when deciding whether to issue loans. For example, a loan application at one of the banks can receive an unjustified automatic refusal within 60 seconds, with the explanation: ‘Artificial intelligence made this decision based on big data,’” Ashmanov said.
In addition, the expert drew attention to the threats from deepfakes.
“A person’s voice and image could be synthesized before, but it was expensive and time-consuming, done to order at Hollywood studios. Now this technology has become mass-market, cheap and accessible.
This is dangerous, as scammers can start using it to create fake videos of a person. They can pressure the victim, try to use the fakes to get around banks’ defenses, and deceive people through social engineering,” the expert concluded.
The possibility of hacking a robot also poses a big threat, says Alexei Lukatsky, an information security expert at Cisco Systems.
“Against artificial intelligence systems, hackers use different attack methods than against company servers. The latter can simply be knocked offline. That is possible with AI servers too, but attackers often do more unusual things with them. For example, they attack the learning models.
As a result, the artificial intelligence starts making wrong decisions on the same input data as before. Alternatively, the data can be substituted or fake data mixed into it – the artificial intelligence will then draw incorrect conclusions,” the expert explained.
He added that hackers are also capable of attacking the infrastructure that powers AI, for example with ransomware. Another target is autopilots, such as those in driverless cars: if parts of road signs are obscured from them, they will make wrong decisions.
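The attack Lukatsky describes, where fake data is mixed into a training set, is commonly called data poisoning. A minimal sketch (an illustration of the general idea, not the expert’s own example or any real attack) shows how flipping training labels makes a toy classifier draw incorrect conclusions from the very same points:

```python
# Toy data-poisoning sketch: a nearest-centroid classifier trained on clean
# labels vs. on labels an "attacker" has mostly flipped.
import random

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(samples):
    """Build one centroid per label from (point, label) pairs."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label whose centroid is nearest to the point."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

def accuracy(model, samples):
    return sum(predict(model, p) == label for p, label in samples) / len(samples)

random.seed(0)
# Two well-separated clusters: label 0 near (0, 0), label 1 near (5, 5).
clean = [((random.gauss(0, 0.5), random.gauss(0, 0.5)), 0) for _ in range(100)]
clean += [((random.gauss(5, 0.5), random.gauss(5, 0.5)), 1) for _ in range(100)]

# "Mix fake data in": flip most (80%) of the training labels.
poisoned = [(p, 1 - label) if random.random() < 0.8 else (p, label)
            for p, label in clean]

clean_acc = accuracy(train(clean), clean)
poisoned_acc = accuracy(train(poisoned), clean)
print(f"accuracy with clean labels:    {clean_acc:.2f}")
print(f"accuracy with poisoned labels: {poisoned_acc:.2f}")
```

The input points never change; only the labels seen during training do, yet the poisoned model’s centroids end up on the wrong sides and its decisions invert, which is exactly the “wrong decisions on the same initial data” effect described above.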
Lukatsky believes that in the future, hacker attacks on artificial intelligence will rank among the three to five most serious problems in the computer world.
“However, the problem is not that acute today. There are no hacker groups yet that specialize in hacking artificial intelligence. That said, groups that use AI in their work have already appeared: they use it to send phishing messages and to create fake sites and deepfakes,” the expert concluded.
Nikita Kulikov added that before fearing robots going out of control, one must understand what kind of control they would be escaping.
“If a robot used in medicine is trained on test results and images showing a disease, it can learn to identify patients accurately.
Once the robot almost stops making mistakes, human oversight of it may no longer seem necessary. And that is exactly when there will be a really serious risk that a mistake is made with no one to correct it,” the expert noted.
In the second case, the robot will be used as a third opinion: people will merely consult it, and it will remain an addition. Then there are no risks for humanity; the robot will simply become an assistant, Kulikov concluded.
Devastating deepfakes, robots controlled by criminals, or super-smart electronic medical assistants and chatbots with superhuman mathematical powers: no one knows what artificial intelligence will be like in the future.
However, it is already clear that these technologies are worth controlling, even though regulation of robotics will become truly necessary only once the technologies have taken shape and the threats have become part of reality.
So far, all attempts to regulate this sphere resemble a ritual: humanity uttering magic words in the hope that they will be heard somewhere.
The population of our planet has reached 7.8 billion people, the German Foundation for World Population (DSW) recently announced. However, according to the foundation, world population growth has fallen by about a third over the past 30 years, and numbers are rising only in the countries of sub-Saharan Africa.
The population of many countries has declined, according to a new report published by DSW. The reason is a sharp drop in the birth rate: the average number of children per woman is steadily decreasing. If this figure falls below 2.1, a population begins to shrink. According to the foundation, the figure has fallen from 3.2 children per woman in 1990 to a world average of 2.3 today.
The only region of the world where the population is not declining, and where the birth rate is the highest in the world, is sub-Saharan Africa. According to DSW, there are 4.7 children per woman in this region, compared with an average of 1.8 in high-income countries.
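The arithmetic behind the 2.1 threshold can be sketched roughly (a back-of-the-envelope illustration, not a DSW projection): if women average f children against a replacement rate of about 2.1, each generation scales the population by roughly f / 2.1, so rates above the threshold compound into growth and rates below it into decline.

```python
def project(population, fertility, generations, replacement=2.1):
    """Scale a population by (fertility / replacement) once per generation.

    A crude toy model: it ignores mortality shifts, migration and age
    structure, and exists only to show why 2.1 is the tipping point.
    """
    for _ in range(generations):
        population *= fertility / replacement
    return population

# The article's figures: world average 2.3 vs. 1.8 in high-income countries.
start = 1_000_000
print(round(project(start, 2.3, 3)))  # above replacement: grows
print(round(project(start, 1.8, 3)))  # below replacement: shrinks
```

At exactly 2.1 children per woman the multiplier is 1.0 and the population holds steady, which is why demographers treat that rate as the replacement level.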
The problem, experts say, is that these countries lack modern contraceptives and have a high rate of unwanted teenage pregnancies.
“Early pregnancies put thousands of girls into a spiral of poverty each year,” said Jan Kreuzberg, head of DSW.
In sub-Saharan Africa, 16% of births are to adolescent girls. According to Kreuzberg, one in two women there is denied access to contraceptives – about half of all women who would like to avoid an unwanted pregnancy.
Infectious diseases, climate problems, the politicization of natural resources and the growing digital divide have been named as threats to humanity for the next 10 years.
This was reported by the authors of Global Risks 2022, a report on public risk perception prepared by the Swiss-based World Economic Forum (WEF). Survey participants also named growing youth discontent and economic inequality among the threats.
As for the near future, most respondents fear the spread of infectious diseases, including COVID-19, a lack of money and the growing digital divide.
“Progress towards digital inclusion is threatened by growing digital dependency, rapidly accelerating automation, suppression and manipulation of information,” the study says.
In the next 3–5 years, according to respondents, humanity is threatened by economic risks, price instability, debt crises and the politicization of natural resources. Among the risks for the coming years, respondents also named possible conflicts between states and the risk of “youth disillusionment”.
The younger generation has lived through the effects of the financial crisis, is exposed to social inequality and faces serious challenges in education, economic prospects and mental health, the report says.
However, relative to the next 10 years, the respondents are most concerned about climate change, including:
- Loss of biodiversity
- Natural resource crises
- Failures in the fight against climate change
- Extreme weather conditions
Respondents also pointed to “debt crises”, “geo-economic contradictions”, the proliferation of weapons of mass destruction, the collapse of statehood and the collapse of international relations as the most serious risks over the next 10 years.
All of the above problems can undermine social cohesion and provoke geopolitical risks, the authors of the report noted.
It is also noted that only 16% of respondents view the development prospects of their countries positively and optimistically, and only 11% believe that the recovery of the world economy will accelerate.
Most respondents instead expect the next 3 years to be characterized by drastic changes in the world that will divide society into winners and losers.