The brain recognizes deepfakes – but only subconsciously

Thanks to machine learning and advances in image-generation algorithms, computers have long been able to create fakes of people that are barely distinguishable from the real thing, in photos as well as in videos. Yet our brain apparently has the ability to identify deepfakes and similar forgeries: not consciously, surprisingly, but subconsciously. That is the conclusion of a group of researchers at the University of Sydney who measured participants' brain waves.

Electroencephalography (EEG) measures the electrical activity of the brain. The resolution achievable this way has improved considerably in recent years, even if the technology still falls far short of electrodes implanted in the brain. For the experiment, Thomas Carlson, Associate Professor at the university's Department of Psychology, and his colleagues formed two groups. The first was asked to classify a selection of 50 images, a mix of deepfakes and real photographs, as real or fake, without any measurements being taken. The second group performed the same task while wearing an EEG cap that recorded their brain activity.

The EEG profiles showed visible differences between responses to real images and deepfakes, and the hit rate of this "subconscious" measure was higher: the test subjects' brains "recognized" the deepfakes in 54 percent of cases, compared with only 37 percent in the group that had to state verbally, without an EEG measurement, whether an image was a deepfake. The brain's accuracy is still modest, but the difference from the control group is statistically reliable enough, says Carlson. "This shows us that the brain can tell the difference between deepfakes and authentic images."
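To illustrate what "statistically reliable" means for hit rates like these, here is a minimal sketch of a two-proportion z-test. The article does not report how many individual judgments produced the 54 and 37 percent figures, so the trial counts below are purely hypothetical and serve only to show how such a difference would be checked.

    # Minimal sketch of a two-proportion z-test for hit rates such as 54 % vs. 37 %.
    # The trial counts n1 and n2 are hypothetical; the article does not report them.
    from math import sqrt, erfc

    def two_proportion_z_test(k1, n1, k2, n2):
        """Two-sided z-test for the difference between two success proportions."""
        p1, p2 = k1 / n1, k2 / n2
        pooled = (k1 + k2) / (n1 + n2)                   # pooled success rate
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = erfc(abs(z) / sqrt(2))                 # two-sided tail probability
        return z, p_value

    # Hypothetical example: 1,000 judgments per group at the reported rates.
    z, p = two_proportion_z_test(k1=540, n1=1000, k2=370, n2=1000)
    print(f"z = {z:.2f}, p = {p:.2g}")  # large z with small p = reliable difference

With enough trials, a 17-percentage-point gap yields a vanishingly small p-value; with very few trials, the same percentages would not be conclusive, which is why the underlying sample size matters.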

What specific features cause the brain to flag deepfakes has not yet been determined. According to Carlson, the deepfakes must therefore contain "some sort of error". If researchers could pin down what that error actually is, an algorithm could be trained to recognize deepfakes, for example in social networks. That would not be entirely without risk, however, because deepfake creators could then adjust their methods to make the fakes better.
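In practice, such a detector would be an ordinary binary image classifier. The sketch below is a hypothetical illustration, not the researchers' method: the architecture, the 64x64 input size, and the placeholder training batch are all assumptions made for the example.

    import torch
    import torch.nn as nn

    # Hypothetical sketch of a binary real-vs-deepfake image classifier.
    class DeepfakeDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, 1),  # one logit: > 0 means "deepfake"
            )

        def forward(self, x):  # x: (batch, 3, 64, 64) face crops
            return self.classifier(self.features(x))

    model = DeepfakeDetector()
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Placeholder batch; real training would draw from a labeled
    # real/deepfake face dataset instead of random tensors.
    images = torch.randn(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8, 1)).float()

    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

The catch the article points to applies here as well: whatever "fingerprint" such a model learns to exploit, forgers can in turn train their generators to suppress.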

So far, deepfakes have rarely caused serious damage, although the technology has long been used for propaganda and criminal purposes. Carlson cites examples from the UK and Dubai, where voice-cloning technology was used to breach bank security systems. He envisions a future in which security personnel wear EEG caps that alert them to deepfakes. The technology is still in its infancy, however. "More research is needed. But we are hopeful that computer-generated deepfakes leave a kind of 'fingerprint' that can be tracked down."

(bsc)
