(ORDO NEWS) — Billionaire mogul Elon Musk and a number of experts called on Wednesday for a pause in the development of powerful artificial intelligence (AI) systems to allow time to ensure they are safe.
The open letter, signed by more than 1,000 people including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4 from Microsoft-backed OpenAI.
The company says its latest model is far more powerful than the previous version, which was used to run ChatGPT, a bot capable of generating passages of text from the briefest of prompts.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” reads the open letter, titled “Pause Giant AI Experiments.”
“Powerful AI systems should only be developed after we are confident that their effects will be positive and their risks will be manageable,” it says.
Musk was an initial investor in OpenAI and spent years on its board, and his car company Tesla develops AI systems to power, among other things, its self-driving technology.
The letter, hosted by the Musk-funded Future of Life Institute, was signed by prominent critics as well as competitors of OpenAI, such as Stability AI chief Emad Mostaque.
Canadian AI pioneer Yoshua Bengio, also a signatory, warned at a virtual press conference in Montreal that “society is not ready” for this powerful tool and its possible misuse.
“Let’s slow down. Let’s make sure that we develop better guardrails,” he said, calling for a thorough international discussion of AI and its implications, “like we have done for nuclear energy and nuclear weapons.”
“Trustworthy and loyal”
The letter quoted from a blog post by OpenAI founder Sam Altman, who suggested that “at some point, it may be important to get independent review before starting to train future systems.”
“We agree. That point is now,” the authors of the open letter wrote.
“Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”
They urged governments to intervene and impose a moratorium if the companies fail to reach an agreement.
The six months should be used to develop safety protocols and AI governance systems, and to refocus research on ensuring that AI systems are more accurate, safe, “trustworthy and loyal.”
The letter did not detail the dangers posed by GPT-4.
But researchers including Gary Marcus of New York University, who signed the letter, have long argued that chatbots are great liars and have the potential to be superspreaders of disinformation.
Meanwhile, author Cory Doctorow has likened the AI industry to a “pump and dump” scheme, arguing that both the potential and the threat of AI systems are wildly overhyped.