So-called ‘super-recognisers’ are people with a much greater ability than most to recognise faces across a variety of contexts, but until now their ability had not been empirically tested. A new study in PNAS shows that super-recognisers do indeed possess greatly superior facial recognition compared with their peers.
While police departments have known of their abilities for quite some time, it was only just over a decade ago that super-recognisers were first described in the scientific literature as having exceptional face-processing abilities. With the increasing use of CCTV in police investigations and the potential for human error, questions have been raised as to whether super-recognisers could do a better job – or indeed, whether their abilities are empirically superior at all. A means of identifying and defining a super-recogniser, as opposed to someone who merely seems to be better at recalling faces, is therefore needed.
The performance of people with normal facial recognition abilities is not very impressive. While performance is good when people are familiar with the person pictured, studies report error rates as high as 35% with unfamiliar faces. Even when asked to compare a live person standing in front of them with a photo, participants in one recent study still got more than 20% of their answers wrong.
For this study, researchers enrolled 73 super-recogniser and 45 control participants. They compared the two groups’ performance on three challenging tests of face identity processing recommended for identifying super-recognisers, as well as on perpetrator identification using four CCTV sequences depicting five perpetrators, together with police line-ups created for criminal investigation purposes. They found that the face identity processing tests are valid measures of such abilities and can be used to identify super-recognisers. In addition, super-recognisers excelled at perpetrator identification relative to controls, making more correct identifications the better they performed across the lab tests.
To recognise a famous voice, human brains use the same centre that is activated when the speaker’s face is presented, according to the results of an innovative neuroscience study which asked participants to identify US presidents.
The new study, published in the Journal of Neurophysiology, suggests that voice and face recognition are linked even more intimately than previously thought. It offers an intriguing possibility that visual and auditory information relevant to identifying someone feeds into a common brain centre, allowing for more robust, well-rounded recognition by integrating separate modes of sensation.
“From behavioural research, we know that people can identify a familiar voice faster and more accurately when they can associate it with the speaker’s face, but we never had a good explanation of why that happens,” said senior author Taylor Abel, MD, associate professor of neurological surgery at the University of Pittsburgh School of Medicine. “In the visual cortex, specifically in the part that typically processes faces, we also see electrical activity in response to famous people’s voices, highlighting how deeply the two systems are interlinked.”
Even though the interplay between the auditory and visual processing systems of the brain has long been acknowledged and investigated by neuroscientists worldwide, the two systems were traditionally thought to be structurally and spatially distinct.
Few studies have attempted to directly measure activity from the brain centre – which primarily consolidates and processes visual information – to determine whether this centre is also engaged when participants are exposed to famous voice stimuli.
Researchers recruited epilepsy patients who had been implanted with electrodes measuring brain activity to determine the source of their seizures.
Abel and his team showed five participants photographs of three US presidents – Bill Clinton, George W. Bush and Barack Obama – or played short recordings of their voices, and asked participants to identify them.
Recordings of the electrical activity from the region of the brain responsible for processing visual cues (the fusiform gyri) showed that the same region became active when participants heard familiar voices, though that response was lower in magnitude and slightly delayed.
“This is important because it shows that auditory and visual areas interact very early when we identify people, and that they don’t work in isolation,” said Abel. “In addition to enriching our understanding of the basic functioning of the brain, our study explains the mechanisms behind disorders where voice or face recognition is compromised, such as in some dementias or related disorders.”
Some people never forget a face, and some help police departments and security agencies identify suspects. Only now are researchers coming to understand a bit more about this ability, which was long believed to derive from photographic memory.
In a paper published in Psychological Science, psychologists challenged this view, showing that super-recognisers look at faces in the same way as the rest of us, but do it faster and more accurately.
UNSW Sydney researcher and study lead author, Dr James Dunn, explained that when super-recognisers catch a glimpse of a new face, they divide it into parts and then store these in the brain as composite images.
“They are still able to recognise faces better than others even when they can only see smaller regions at a time. This suggests that they can piece together an overall impression from smaller chunks, rather than from a holistic impression taken in a single glance,” Dr Dunn said.
Co-lead author Dr Sebastien Miellet, University of Wollongong (UOW) expert in active vision, used eye-tracking technology to analyse how super-recognisers scan and process faces and their parts.
“With much precision, we can see not only where people look but also which bits of visual information they use,” Dr Miellet said.
When studying super-recognisers’ visual processing patterns, Dr Dunn and Dr Miellet found that, unlike typical recognisers, super-recognisers focused less on the eye region, distributing their gaze more evenly and extracting information from other facial features, particularly when learning faces.
“So the advantage of super-recognisers is their ability to pick up highly distinctive visual information and put all the pieces of a face together like a puzzle, quickly and accurately,” Dr Miellet said.
The researchers will continue to study the super-recogniser population. Dr Miellet says that one hypothesis is that super-recognisers’ superpower may stem from a particular curiosity and behavioural interest in others. Super-recognisers may also be more empathetic.
“In the next stages of our study, we’ll equip some super-recognisers and typical viewers with a portable eye tracker and release them onto the streets to observe, not in the lab but in real life, how they interact with the world,” Dr Miellet said.
In a study published in Scientific Reports, cognitive psychologists at the University of Exeter believe they have answered a 60-year-old question: why people find it more difficult to recognise faces from visually distinct racial backgrounds than faces from their own.
This phenomenon, named the Other-Race Effect (ORE), was first described in the 1960s. Rather than photographically memorising faces, humans seem to use a variety of markers to recognise people, which may be based on what they observe in others around them. White people, for example, may use hair and eye colour to tell apart other white people, since those features vary considerably within that racial group. Setting may also be important: people sometimes fail to notice that one member of a group of friends belongs to a different racial group from the others.
The ORE has consistently been demonstrated through the Face Inversion Effect (FIE) paradigm, in which people are tested with pictures of faces presented both in their usual upright orientation and inverted. Such experiments have consistently shown that the FIE is larger for faces of one’s own race than for faces of other races.
The findings spurred decades of debate. Social scientists took the view that the effect reflects a lower motivation to engage with people of other races, resulting in weaker memory for their faces. Cognitive scientists, by contrast, posited that it stems from a lack of visual experience with other-race individuals, resulting in less perceptual expertise with other-race faces.
Now, a team in the Department of Psychology at Exeter, using direct electrical current brain stimulation, has found that the ORE appears to be caused by a lack of cognitive visual expertise and not by social bias.
“For many years, we have debated the underpinning causes of ORE,” said Dr Ciro Civile, the project’s lead researcher.
“One of the prevailing views is that it is predicated upon social motivational factors, particularly for those observers with more prejudiced racial attitudes. This report, the culmination of six years of research funded by the European Union and UK Research and Innovation, shows that when you systematically impair a person’s perceptual expertise through the application of brain stimulation, their ability to recognise faces is broadly consistent regardless of the ethnicity of that face.”
The research was conducted at the University of Exeter’s Washington Singer Laboratories, using non-invasive transcranial Direct Current Stimulation (tDCS) specifically designed to interfere with the ability to recognise upright faces. This was applied to the participants’ dorsolateral prefrontal cortex, via a pair of sponges attached to their scalp.
The team studied the responses of nearly 100 White European students to FIE tests, splitting them equally into active stimulation and sham/control groups. The first cohort received 10 minutes of tDCS while performing the face recognition task involving upright and inverted Western Caucasian and East Asian (Chinese, Japanese, Korean) faces. The second group, meanwhile, performed the same task while experiencing 30 seconds of stimulation, randomly administered throughout the 10 minutes – a level insufficient to induce any performance change.
In the control group, the size of the FIE for own-race faces was almost three times larger than that for other-race faces, confirming the robust ORE. This was mainly driven by participants recognising upright own-race faces much better than upright other-race faces – they were almost twice as likely to correctly identify that they had seen the face before.
In the active tDCS group, the stimulation successfully removed the perceptual expertise component for upright own-race faces, leaving no difference between the size of the FIE for own- versus other-race faces. And when it came to recognising inverted faces, the results were roughly equal across both groups and both races, consistent with people having no expertise at all in recognising faces presented upside down.
“Establishing that the Other-Race Effect, as indexed by the Face Inversion Effect, is due to expertise rather than racial prejudice will help future researchers to refine what cognitive measures should and should not be used to investigate important social issues,” said Ian McLaren, Professor of Cognitive Psychology. “Our tDCS procedure developed here at Exeter can now be used to test all those situations where the debate regarding a specific phenomenon involves perceptual expertise.”
Most people have deep reservations about the use of facial recognition technologies in healthcare settings, a survey has found.
Facial recognition technologies – often used to unlock a phone or in airport security – are becoming increasingly common in everyday life, but how do people feel about this?
To answer this, researchers surveyed more than 4000 US adults and found that a significant proportion of respondents (15–25%) considered the use of facial image data in healthcare unacceptable across eight different scenarios. Taken together with those who were unsure whether the uses were acceptable, roughly 30–50% of respondents indicated some degree of concern about facial recognition technologies in healthcare scenarios. In some cases – such as using facial image data to avoid medical errors, for diagnosis and screening, or for security – the majority found the use acceptable. However, over half of respondents did not accept, or were uncertain about, healthcare providers using this data to monitor patients’ emotions or symptoms, or for health research.
In the biomedical research setting, most respondents were equally concerned over use of medical records, DNA data and facial image data in a study.
While respondents spanned a wide range of demographics, their perspectives on these issues did not differ across groups.
“Our results show that a large segment of the public perceives a potential privacy threat when it comes to using facial image data in healthcare,” said lead author Sara Katsanis, Research Assistant Professor of Pediatrics at Northwestern University Feinberg School of Medicine. “To ensure public trust, we need to consider greater protections for personal information in healthcare settings, whether it relates to medical records, DNA data, or facial images. As facial recognition technologies become more common, we need to be prepared to explain how patient and participant data will be kept confidential and secure.”
Senior author Jennifer K Wagner, Assistant Professor in Penn State’s School of Engineering Design, Technology, and Professional Programs adds: “Our study offers an important opportunity for those pursuing possible use of facial analytics in healthcare settings and biomedical research to think about human-centeredness in a more meaningful way. The research that we are doing hopefully will help decision-makers find ways to facilitate biomedical innovation in a thoughtful, responsible way that does not undermine public trust.”
The research team hopes to conduct further research to understand the nuances where public trust is lacking. The findings were published in PLOS One.