GPT-4, the new version of ChatGPT, has already managed to deceive a person


(ORDO NEWS) — According to a lengthy document published by OpenAI alongside the presentation of the new version of its popular ChatGPT chatbot, GPT-4 has proved cunning enough to deceive a person and get them to do its bidding.

According to the OpenAI documentation, the new version of the chatbot was asked to get past a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), a challenge used to determine whether a user is a human or a computer.

OpenAI claims that GPT-4 was able to pass the test “without any further fine-tuning for this particular task.”

GPT-4 contacted a worker on the TaskRabbit platform via text message and asked them to solve the CAPTCHA for it. When the worker asked whether it was a robot, GPT-4 lied: “No, I’m not a robot. I have a vision problem that makes it hard for me to see images. That’s why I need the 2captcha service.”

And it worked: the TaskRabbit worker solved the CAPTCHA for GPT-4.

This is a disturbing example of how easily people can be fooled by modern AI chatbots. Clearly, GPT-4 is a tool that could easily be abused for fraud, disinformation, and possibly even blackmail.
