We can’t tell deepfakes apart from real people, but we ‘trust’ them more

Sam Fried


Most people are unable to distinguish between real and AI-generated faces, and even tend to give the “fake people” higher ratings of trust, a new study found.

In a new success story for the tech industry and an alarming threat for societies and democracies, deepfakes have become so realistic that they are indistinguishable from real faces and even appear more trustworthy to us.

A series of experiments in a study published on Tuesday in the peer-reviewed journal PNAS found that people cannot distinguish high-quality fake imagery from pictures of real people, and tend to give higher ratings of trust to the synthetic media.

“Synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable, and more trustworthy, than real faces,” said researchers Sophie J. Nightingale of Lancaster University and Hany Farid of the University of California, Berkeley.

To evaluate the photorealism of computer-generated images, the researchers showed hundreds of participants a series of images that featured a diverse cast of fake and real faces.

They found that people were able to pick out the fakes with less than 50 percent accuracy if given no feedback and only slightly above chance with feedback.

And more surprisingly, people perceived fake faces as more trustworthy than real ones, which the researchers hypothesise is because “synthesised faces tend to look more like average faces.”

“We know that people show a preference for average or ‘typical-looking’ faces because this provides a sense of familiarity,” Nightingale told TRT World.

“Therefore, it might be this sense of familiarity that elicits, on average, higher trust for the synthetic faces. Essentially, we’re more likely to trust something that feels familiar to us,” she added.


Study figure showing “the most (Top and Upper Middle) and least (Bottom and Lower Middle) accurately classified real (R) and synthetic (S) faces.”
(Sophie J. Nightingale and Hany Farid)

Trustworthiness

Researchers asked 223 participants to rate people in a series of images on a scale of 1 (very untrustworthy) to 7 (very trustworthy). Some of the faces were of real people and some of computer-generated fakes.

The fake faces were created with generative adversarial networks (GANs), in which a generator network synthesizes faces from random noise while a discriminator network tries to tell them apart from real photographs.

Over many training iterations, the generator learns from the discriminator’s feedback to produce increasingly realistic versions, until the faces are indistinguishable from real ones.
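The adversarial loop described above can be sketched in miniature. The toy below is an illustrative assumption, not the StyleGAN-class system used in the study: the “images” are single numbers, the generator is one learned offset applied to noise, and the discriminator is a logistic classifier. The same push-and-pull still emerges, with the generator drifting toward the real data until the discriminator can no longer separate the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real samples cluster around REAL_MEAN; the generator shifts random
# noise by a learned offset; the discriminator is logistic regression.
REAL_MEAN = 4.0
g_offset = 0.0            # generator's single parameter
d_w, d_b = 0.0, 0.0       # discriminator's weights
lr = 0.05
history = []

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    noise = rng.normal(0.0, 1.0, 64)
    fake = noise + g_offset
    real = rng.normal(REAL_MEAN, 1.0, 64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                          # dBCE/dlogit
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)

    # Generator step: move the offset so D(fake) drifts toward 1.
    p = sigmoid(d_w * fake + d_b)
    g_offset -= lr * np.mean((p - 1.0) * d_w)     # chain rule through D
    history.append(g_offset)

# Late in training the generator's output hovers near the real mean.
print(np.mean(history[-500:]))
```

Real GANs replace the offset and the logistic classifier with deep convolutional networks, but the training signal, a discriminator gradient flowing back into the generator, is the same idea.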

The images shown were diverse across gender, age (from children to older adults), and race (Black, Caucasian, East Asian, and South Asian faces). 

They found that people rated fake faces as being slightly (7.7 percent) more trustworthy than real ones.  

Gender also played a significant role in ratings, with women’s faces voted significantly more trustworthy than men’s, while race had a more limited effect, with Black faces rated more trustworthy than South Asian faces.

Previous research has suggested that facial features such as eyebrows and cheekbones can shape perceived trustworthiness. Higher inner eyebrows and pronounced cheekbones are typically seen as trustworthy features, while the reverse is seen as untrustworthy.

However, a New York University study in 2014 monitored the brain’s amygdala activity and found that we judge a face’s trustworthiness even when we cannot consciously perceive its features.


Study figure showing “The four most (Top) and four least (Bottom) trustworthy faces and their trustworthy rating on a scale of 1 (very untrustworthy) to 7 (very trustworthy). Synthetic faces (S) are, on average, more trustworthy than real faces (R).”
(Sophie J. Nightingale and Hany Farid)

Which is real and which is fake?

In their first experiment, 315 participants determined whether a series of images were AI-generated or photos of real people, producing an accuracy rate of 48.2 percent.

Then, another 219 participants were asked to do the same but this time were given feedback about each guess along the way. This feedback slightly improved the overall accuracy to 59 percent.

“When made aware of rendering artifacts and given feedback, there was a reliable improvement in accuracy; however, overall performance remained only slightly above chance,” the researchers said.

“The lack of improvement over time suggests that the impact of feedback is limited, presumably because some synthetic faces simply do not contain perceptually detectable artifacts.”

For both experiments, White faces were the most difficult to classify with lower accuracy ratings, and male White faces were less accurately determined than female White faces. 

“We posit that this is because the synthesis techniques are trained on disproportionately more white male faces,” Nightingale told TRT World.

“I expect that any current differences will eventually vanish as the synthesis techniques improve and the training data sets expand in size and diversity,” she added.

All participants were aware of the purpose of the study, read an explanation of what a synthetic face is, and also were given a short tutorial to identify synthetic faces.


Call for safeguards

Realistic fake imagery of this kind poses serious threats to media consumers, from fraud to disinformation campaigns.

While the technology to produce such images is advancing rapidly, counter-techniques to detect deepfakes are not yet efficient or accurate enough to sift through the abundance of online content.

In order to prevent deepfakes from being used maliciously, the researchers called for ethical guidelines and safeguards to be put in place such as placing watermarks on fake imagery and videos. 
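To make the watermarking idea concrete, here is a deliberately naive sketch of one classic embedding scheme, hiding marker bits in the least significant bit of each pixel. This is an illustration of the concept only, not what the researchers propose: production watermarks must survive compression, resizing, and deliberate tampering, which simple LSB embedding does not.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the first pixels."""
    flat = pixels.astype(np.uint8).flatten()          # work on a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of a stamped image."""
    return pixels.flatten()[:n_bits] & 1

# Round trip on a random 8-bit grayscale "image".
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
mark = rng.integers(0, 2, size=64, dtype=np.uint8)
stamped = embed_watermark(image, mark)
print(np.array_equal(extract_watermark(stamped, 64), mark))  # recoverable
```

Because only the lowest bit of each pixel changes, the stamped image is visually identical to the original, which is exactly why robust schemes instead spread the watermark across frequency-domain coefficients.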

“Given the rapid rise in sophistication and realism of synthetic media (aka deepfakes), we propose that those creating these technologies should incorporate reasonable precautions into their technology to mitigate some of the potential misuses,” Nightingale told TRT World.

“More broadly, we recommend that the larger research community consider adopting specific best practices for those in this field to help them manage the complex ethical issues involved in this type of research,” she added.


Source: TRT World

https://www.trtworld.com/magazine/we-can-t-tell-apart-deepfakes-from-real-people-but-we-trust-them-more-55037
