Public Attitudes Towards The Use of Automatic Face Recognition Technology

A group of researchers at the University of Lincoln and the University of New South Wales (Australia) were the first to conduct an international survey of public opinion on the use of automatic face recognition technology (AFR) in the criminal justice system. The team, led by Kay Ritchie from the School of Psychology at Lincoln, received £33,000 from the British Academy to conduct their research. They found that public acceptance of the use of AFR depends on who uses it and what it is used for, and made several recommendations for policy.

AFR technology is based on algorithms that detect faces and compare them against existing images to determine the degree of similarity between them. In law enforcement, the existing images might be a watchlist of wanted individuals, and the target image might come from CCTV footage of a person committing a crime.
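To make that comparison step concrete, the sketch below shows one common way such a system might rank a probe image against a watchlist: each face is represented as a numeric embedding, and cosine similarity is used to score candidate matches. The embedding size, the similarity threshold, and the random stand-in embeddings are illustrative assumptions only; they are not details from the study or from any particular AFR product.

```python
# Minimal sketch of the comparison step in a face recognition pipeline.
# The 128-dimensional embeddings and the 0.6 threshold are illustrative
# assumptions, not details from the study or any specific AFR system.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return watchlist identities whose similarity to the probe exceeds
    the threshold, ranked from most to least similar."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in watchlist.items()]
    return sorted((s for s in scores if s[1] >= threshold), key=lambda s: -s[1])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for embeddings that a face recognition model would produce.
    watchlist = {f"person_{i}": rng.normal(size=128) for i in range(3)}
    # A noisy version of one enrolled face, standing in for a CCTV still.
    probe = watchlist["person_1"] + rng.normal(scale=0.1, size=128)
    print(match_against_watchlist(probe, watchlist))
```

In practice the quality of the embeddings, the choice of threshold, and the size of the watchlist all affect how often such a system returns false matches, which is central to the accuracy concerns discussed below.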

Although the accuracy of face recognition algorithms has improved vastly in recent years, media coverage is rife with reports of errors, and even with calls to ban the use of AFR. This research project aimed to find out what people think about the use of AFR, with a focus on criminal justice settings. Focus groups were carried out in the UK, Australia and China, and their findings informed a questionnaire answered by participants in the UK, Australia and the USA. In the focus groups, people were given prompts for discussion about AFR. In the questionnaire, people were asked specific questions about how much they trusted different users of the technology, and which uses they agreed with.

Both the focus groups and the questionnaire revealed broad agreement among people in different countries, with some notable differences. People in the UK viewed the technology as less accurate than people in China and Australia did, and people in the USA indicated lower trust in police use of AFR than people in the UK and Australia.

One of the key results showed that people trusted police more than governments, and governments more than private companies, to use AFR. The results also showed that people were more accepting of police use of AFR to search for someone on a watchlist, and that acceptance dropped if it were used to search for someone regardless of whether they were on a watchlist. People showed some confusion about the accuracy of AFR, and about whether it is equally accurate for people of different ethnic backgrounds and genders.

The results showed that support for the use of AFR depends greatly on what the technology is used for and who uses it. The study also showed that trust is a major concern for the public, and that there is a need for clear legislation governing the use of AFR by police, governments, and private companies, as well as in courts.

The researchers made several recommendations for users and vendors of AFR, as well as for policy. They recommended that developers, vendors, and users of AFR (including police) do more to publicise the use, data privacy, and accuracy of AFR. They also stated that it is important for users of AFR (including police and governments) to justify their use of the technology and to know the capabilities of their system. Crucially, from a policy point of view, the researchers recommended that governments provide clear legislation for the use of AFR in criminal justice systems around the world. In the UK, this could mean including guidance on AFR use in the Police and Criminal Evidence Act 1984 (PACE).

More information:

The full paper, together with the full list of questions and the full data, is published in the open access journal PLOS ONE.

Kay Ritchie