The Brain Unconsciously Excels at Spotting Deepfakes


When looking at real and ‘deepfake’ faces created by AI, observers can’t consciously recognise the difference – but their brains can, according to new research which appears in Vision Research.

Convincing computer-generated fakes, known as deepfakes, can take the form of videos, images, audio, or text, and are rife in the spread of disinformation, fraud and counterfeiting.

For example, in 2016, a Russian troll farm deployed over 50,000 bots on Twitter, many using deepfakes as profile pictures, in an attempt to influence the outcome of the US presidential election; according to some research, this may have boosted Donald Trump’s vote share by as much as 3%. More recently, a deepfake video of Volodymyr Zelensky urging his troops to surrender to Russian forces surfaced on social media, muddying people’s understanding of the war in Ukraine with potentially high-stakes implications.

Fortunately, neuroscientists may have discovered a new way to spot these insidious fakes: people’s brains respond differently to AI-generated faces, even when the people themselves cannot consciously distinguish real faces from fake ones.

By analysing participants’ brain activity, the University of Sydney researchers found that deepfakes could be identified 54% of the time. When participants were asked to identify the deepfakes verbally, however, they succeeded only 37% of the time.

“Although the brain accuracy rate in this study is low – 54 percent – it is statistically reliable,” said senior researcher Associate Professor Thomas Carlson.

“That tells us the brain can spot the difference between deepfakes and authentic images.”
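Whether a 54 percent accuracy rate genuinely beats the 50 percent chance level depends on the number of trials behind it. As a rough illustration only (the trial count below is an assumption, not a figure from the paper), an exact one-sided binomial test shows how a small edge over chance can still be statistically reliable:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p_chance: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= successes) under chance guessing."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical numbers for illustration: the study reports ~54% accuracy,
# but the trial count here is assumed, not taken from the paper.
trials = 1000
successes = round(0.54 * trials)  # 540 correct classifications
p = binomial_p_value(successes, trials)
print(f"{successes}/{trials} correct, one-sided p = {p:.4f}")
```

With 1,000 hypothetical trials, 54% correct yields p well below 0.05; with only 50 trials, the same percentage would not reach significance, which is why the trial count matters.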

Spotting bots and scams

The researchers say their findings may be a starting point in the battle against deepfakes.

“The fact that the brain can detect deepfakes means current deepfakes are flawed,” Associate Professor Carlson said. “If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter.”

They project that in the more distant future, technology based on their and similar studies could be developed to alert people to deepfake scams in real time. Security personnel, for example, might wear EEG-enabled helmets that alert them to a deepfake.

Associate Professor Carlson said: “EEG-enabled helmets could have been helpful in preventing recent bank heist and corporate fraud cases in Dubai and the UK, where scammers used cloned voice technology to steal tens of millions of dollars. In these cases, finance personnel thought they heard the voice of a trusted client or associate and were duped into transferring funds.”

Method: eyes versus brain

The researchers conducted two experiments, one behavioural and one using neuroimaging. In the behavioural experiment, participants were shown 50 images of real and computer-generated fake faces and were asked to identify which were real and which were fake.

Then, a different group of participants was shown the same images while their brain activity was recorded using EEG; these participants were not told that half the images were fakes.

The researchers then compared the results of the two experiments, finding people’s brains were better at detecting deepfakes than their eyes.
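The article does not detail how accuracy was computed from the EEG recordings, but studies of this kind typically train a classifier on brain responses and report its cross-validated accuracy. The sketch below is purely illustrative, using synthetic data and a simple nearest-centroid classifier (both assumptions, not the study’s actual pipeline), to show how a weak but consistent neural difference can yield decoding accuracy modestly above chance:

```python
import random

random.seed(0)

def simulate_trial(is_fake: int, n_features: int = 16) -> list:
    # Synthetic "EEG feature" vector: fake-face trials get a small mean
    # shift, mimicking a weak but consistent neural difference.
    shift = 0.2 if is_fake else 0.0
    return [random.gauss(shift, 1.0) for _ in range(n_features)]

def nearest_centroid_accuracy(train, test):
    # train/test: lists of (features, label) pairs; label 1 = fake.
    def centroid(rows):
        return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]
    c_real = centroid([f for f, y in train if y == 0])
    c_fake = centroid([f for f, y in train if y == 1])
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Classify each test trial by its nearest class centroid.
    correct = sum(
        (1 if sq_dist(f, c_fake) < sq_dist(f, c_real) else 0) == y
        for f, y in test
    )
    return correct / len(test)

data = [(simulate_trial(y), y) for y in [0, 1] * 200]
random.shuffle(data)
split = len(data) // 2
acc = nearest_centroid_accuracy(data[:split], data[split:])
print(f"decoding accuracy ~ {acc:.2f}")  # modestly above the 0.5 chance level
```

The point of the sketch is that the classifier can aggregate a tiny per-feature signal across many features and trials, which is plausibly why brain-based decoding outperformed participants’ explicit judgements.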

A starting point

The researchers stress that the novelty of their study makes it merely a starting point. It won’t immediately – or even ever – lead to a foolproof way of detecting deepfakes.

Associate Professor Carlson said: “More research must be done. What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected.

“Our finding about the brain’s deepfake-spotting power means we might have another tool to fight back against deepfakes and the spread of disinformation.”

Source: The University of Sydney