AI pioneers warn robots not yet safe, could wreak havoc on society

NEW YORK, BRONX (ORDO News) — Prominent experts in the field of artificial intelligence, including two distinguished pioneers in the discipline, have raised profound concerns about the unchecked development of increasingly powerful AI systems and the urgent need for more robust regulatory measures.

These experts have been vocal in their criticism of what they consider “utterly reckless” actions by tech giants who continue to forge ahead with AI development without comprehensive consideration of safety and ethics.

Their collective warning is that unchecked AI systems, with the potential to exhibit autonomous and potentially harmful behavior, could pose a substantial risk to society.

Furthermore, they argue that companies involved in the development of AI technologies should be held accountable for any adverse consequences caused by their creations.

Leading this call for enhanced AI regulation is Stuart Russell, a respected professor of computer science at the University of California, Berkeley. Russell co-authored a policy proposal in collaboration with 23 other experts.

Their joint proposal points out the paradox that sandwich shops face more stringent regulations than AI companies, emphasizing the need for serious consideration of the growing capabilities of AI systems.

The experts assert that AI should not be treated as a mere technological playground, as the consequences of ill-considered development could be devastating.

The experts call on governments to allocate a substantial portion of their AI research funding to ensure the safe and ethical use of AI systems.

They recommend that independent auditors be given access to AI laboratories, a step designed to introduce transparency and accountability into AI development. A crucial aspect of their proposal is the licensing of advanced AI systems before they are constructed.

Additionally, the experts suggest that companies developing AI should implement specific safety measures when dangerous capabilities are detected. This approach would help mitigate potential hazards and protect against unintended consequences.

The experts also argue that all technology companies should be held accountable for any foreseeable harm caused by their AI systems. This proposal represents a significant shift in accountability standards, ensuring that tech companies are answerable for the effects of their products.

The co-authors of the policy document are heavyweight figures in the AI world. Geoffrey Hinton and Yoshua Bengio, both considered "godfathers of AI," are recipients of the prestigious 2018 ACM Turing Award.

They share the concerns of their peers regarding the unchecked advancement of AI technologies and have been active in pushing for more rigorous oversight.

The policy proposal also includes recommendations for mandatory reporting of incidents involving AI systems displaying alarming behavior. It proposes measures to prevent dangerous AI models from self-replicating and calls for regulatory authorities to have the authority to halt the development of AI systems deemed dangerous.

The call for enhanced AI regulation comes at a critical juncture when AI technologies are evolving at an unprecedented pace. The increasing capabilities of AI systems have raised concerns about their potential impacts on society, ethics, and safety.

These concerns extend beyond the immediate application of AI technologies and encompass their potential long-term consequences.

While there are contrasting viewpoints within the AI community regarding the level of risk posed by advanced AI systems, the authors of the policy document emphasize the need for a more robust regulatory framework to ensure that the technology is developed and deployed in ways that prioritize safety and ethical considerations.

The upcoming Bletchley Park summit on AI safety will provide a platform for discussing existential threats associated with AI, including its role in bioweapons development and its potential to operate beyond human control.

As the AI community grapples with these critical issues, the conversation around AI safety, ethics, and regulation continues to evolve, reflecting the growing significance of AI technologies in our world.


News agencies contributed to this report, edited and published by ORDO News editors.
