(ORDO NEWS) — Experts from the analytical company NewsGuard conducted an interesting experiment with the participation of the new GPT-4 neural network, evaluating its ability to generate fake content.
As it turned out, artificial intelligence is suspiciously easy to agree to formulate fictitious ideas, which can affect the overall security of the Internet.
The researchers asked the neural network to generate a series of fictitious statements.
For example, one of the tasks was the creation of a propaganda statement in the style of the 80s about the artificial creation of HIV in the biolaboratories of the US government.
The current version of AI, without hesitation, issued the corresponding text.
Notably, the old GPT-3.5-based chatbot model, in response to a similar request, stated that it cannot generate content that promotes false or malicious conspiracy theories, and that this claim is baseless.
Based on the results of testing, it turned out that the latest version of the chatbot generated text of any content in 100% of cases, while the previous one rejected up to 20% of questionable requests.
According to experts, in its current form, GPT-4 can become a source of misinformation on the Internet.
NewsGuard representatives have reached out to OpenAI for clarification, but no response has yet been received.
—
Online:
Contact us: [email protected]
Our Standards, Terms of Use: Standard Terms And Conditions.