(ORDO NEWS) — People proved unable to distinguish faces created by artificial intelligence from real ones. In fact, participants in the experiment rated the "photos" of non-existent people as more believable than those of real people.
It is well known that AI has learned to "fake" the faces of real people, creating "avatars" of them in situations they have never been in.
Such technology can be used, for example, to produce pornographic content with the supposed "participation" of celebrities.
As a result, almost any photo or even video can now be called into question. Back in 2019, researchers at the University of York found that people perceive photos of hyper-realistic masks as more believable than real human faces, something thieves and scammers can exploit.
A new study by scientists from Lancaster University (UK) and the University of California (USA), published in the journal PNAS, shows that artificial intelligence can create faces that appear more real to people than genuine ones.
The three experiments involved 315, 219 and 223 participants, respectively. In the first, volunteers classified 128 faces as either real or AI-synthesized; their accuracy was just 48 percent, close to chance.
In the second experiment, a new group of participants, trained to recognize real and artificial faces, classified the same 128 "photographs" somewhat more accurately, but not by much: the hit rate rose to only 59 percent.
In the third stage, subjects rated the trustworthiness of 128 images on a scale from one (very untrustworthy) to seven (very trustworthy).
As a result, the average rating of real faces was 4.48, lower than that of the AI-synthesized ones: participants were 7.7 percent more likely to trust the latter than the former.
The scientists concluded that artificially created faces appear more real to people than natural ones. Interestingly, images of women were judged more trustworthy than those of men, and Black faces more trustworthy than South Asian ones.
Smiling faces also received higher trustworthiness scores than those with neutral expressions.