
Automatic Facial Recognition Use in Criminal Justice

By Kay Ritchie, Senior Lecturer, University of Lincoln

New research into public attitudes towards the use of automatic facial recognition technology in criminal justice systems around the world has implications for police and policy.

A group of researchers at the University of Lincoln and the University of New South Wales (Australia) were the first to conduct an international survey of public opinion on the use of automatic facial recognition technology (AFR) in the criminal justice system. The team, led by Kay Ritchie from the School of Psychology at Lincoln, received funding from the British Academy to conduct the research. They found that public acceptance of AFR depends on who uses it and what it is used for, and they made several recommendations for policy.

What is AFR?

Automatic facial recognition (AFR) technology is based on algorithms that perform a series of functions, including detecting a face, creating a digital representation, and comparing this against other images to determine the degree of similarity between them. AFR is increasingly being used in law enforcement settings to perform identification, a one-to-many (1:N) search of a database to find a match to a target image. For example, the database may be a criminal watchlist, and the target image may be a CCTV image of someone committing a crime.
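To make the 1:N identification step concrete, the sketch below gives a minimal Python illustration of how such a search can work. It is not the pipeline of any particular AFR system: the embeddings are stand-ins for the numerical face representations a real model would produce, and the cosine-similarity measure and the 0.6 threshold are illustrative assumptions.

    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two face embeddings (1.0 = identical direction).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe, watchlist, threshold=0.6):
        # One-to-many (1:N) search: compare a probe embedding against every
        # watchlist entry and return candidates above threshold, strongest
        # first. `watchlist` maps identity -> stored embedding.
        scores = [(name, cosine_similarity(probe, stored))
                  for name, stored in watchlist.items()]
        matches = [(name, s) for name, s in scores if s >= threshold]
        return sorted(matches, key=lambda pair: pair[1], reverse=True)

    # Demo with made-up 128-dimensional embeddings: the probe is a noisy
    # copy of one watchlist entry, so it should come back as the match.
    rng = np.random.default_rng(0)
    watchlist = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
    probe = watchlist["person_42"] + rng.normal(scale=0.3, size=128)
    print(identify(probe, watchlist))

In a real deployment the threshold is a critical operational choice: lowering it finds more genuine matches but also flags more innocent people.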

How is AFR used?

Algorithms underpinning AFR have improved rapidly in recent years, but trials of AFR deployed on city streets by police in the UK have reported high numbers of incorrect matches (i.e. false positives). There is also a lack of clear legislation governing AFR, which has led to debate about its use by the state and by private users, and even to calls for an outright ban.
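Simple base-rate arithmetic shows why street deployments can generate many false positives even when the underlying algorithm is highly accurate. The figures in the sketch below are assumed for illustration only; they are not numbers from the UK trials.

    # Illustrative base-rate arithmetic (assumed figures, not trial data).
    faces_scanned = 100_000      # faces passing cameras during a deployment
    watchlist_walkbys = 50       # watchlist members who actually pass by
    false_positive_rate = 0.001  # 0.1% of non-watchlist faces wrongly flagged

    false_alarms = (faces_scanned - watchlist_walkbys) * false_positive_rate
    print(f"Expected false alarms: {false_alarms:.0f}")   # ~100
    print(f"Genuine matches (at best): {watchlist_walkbys}")
    # Even a 99.9%-accurate system can flag roughly twice as many innocent
    # people as genuine watchlist members in this scenario.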

The New Research

This research project aimed to find out what people think about the use of AFR, with a focus on criminal justice settings. Focus groups were carried out in the UK, Australia, and China, and their findings informed a questionnaire answered by participants in the UK, Australia, and the USA. In the focus groups, people were given prompts for discussion about AFR. In the questionnaire, people were asked specific questions about how much they trusted different users of the technology and which uses they agreed with.

Both the focus groups and the questionnaire found broad agreement between people in different countries, with some notable differences. People in the UK rated the technology as less accurate than did people in China and Australia, and people in the USA reported lower trust in police use of AFR than people in the UK and Australia.

One of the key results showed that public trust was higher for police use of AFR (58%) than for government use (43%), and lowest for use by private companies (18%). The most common reason to trust each user was “It is beneficial for the security of society”, and the most common reasons not to trust each user were “I am concerned about my data being misused” and “I do not trust that my data will be stored securely”.

Another key result showed that public agreement with the use of AFR to track citizens was low overall, but higher for governments (26%) and police (25%) than for private companies (17%). Agreement with police use was high for searching for people who have committed a crime (89%), but low for searching for people irrespective of whether they have committed a crime (30%).

Agreement was high for the use of AFR in court in conjunction with other evidence (83%) but lower when used alone (34%). People showed some confusion about the accuracy of AFR, and about whether it is equally accurate for people of different ethnic backgrounds and genders.

Conclusions and Recommendations

The results showed that support for the use of AFR depends greatly on what the technology is used for, and who it is used by. The study also showed that trust is a major concern for the public, and that there is a need for clear legislation around the use of AFR by police, governments, and private companies, as well as in courts.

The researchers made several recommendations for users and vendors of AFR, as well as for policy. They recommended that developers, vendors, and users of AFR (including police) do more to publicise the use, data privacy, and accuracy of AFR. They also stated that it is important for users of AFR (including police and governments) to justify their use of the technology and to know the capacity of their system. Crucially, from a policy perspective, the researchers recommended that governments provide clear legislation for the use of AFR in criminal justice systems around the world. In the UK, this could mean including guidance for AFR use in the Police and Criminal Evidence Act 1984 (PACE).

The full paper, together with the full list of questions and the full data, is published in the open access journal PLOS ONE; see the reference below.

By-line:

Dr Kay Ritchie is a Senior Lecturer in Cognitive Psychology at the University of Lincoln, UK. Her research focuses on face recognition and identification.

Read more:

Ritchie, K., et al. (2021). Public attitudes towards the use of automatic facial recognition technology in criminal justice systems around the world. PLOS ONE, 16(10), e0258241. Available from: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0258241

National Institute of Standards and Technology (NIST). (2021). FRVT 1:N Identification. Available from: https://pages.nist.gov/frvt/html/frvt1N.html

Fussey, P., & Murray, D. (2019). Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology. University of Essex Human Rights Centre. Available from: http://repository.essex.ac.uk/24946/

Twitter handle:
@kayritchiepsych
