
Biometric Recognition Systems: Better Keep Them Close

John Phillips, June 5, 2023

Introduction

I wrote the first version of this article as a LinkedIn post in 2019 (https://www.linkedin.com/pulse/biometric-recognition-systems-better-keep-them-close-john-phillips/). Much has happened since then, and two events in particular have caused me to revisit the topic: the first is the news about the personal biometric data left behind when the US and its allies pulled out of Afghanistan in 2021 (e.g. https://www.hrw.org/news/2022/03/30/new-evidence-biometric-data-systems-imperil-afghans); the second is a more recent discussion on the Economist’s Babbage podcast about the potential for generative AI to “destroy” biometric security (https://www.economist.com/biometrics-pod).

The first (Afghanistan) provided graphic evidence of the potential harm that accurate, non-repudiable identification systems can enable when placed in the wrong hands. In this instance it could be argued that the more accurate the system, the more harm it can cause. The second (the Economist) explored how new generative AI models, with their ability to create deep fakes, could risk breaking a number of identification methods. What I found interesting about the second source was that none of the very impressive group of participating experts described the harm that can be caused by identification; rather, the conversation focused on the harm of mis-identification (identity “theft”, masquerading and fakes). I think the discussion missed an opportunity to consider whether and when we should use identification at all, and how best to protect people’s rights when doing so.

This article focuses on the general concept of biometric identification and its potential weaknesses, and argues that while it can be a valuable approach, we need to be careful how we use and trust it.

Main Course

First things first: I’m an emerging technology evangelist. For all my professional career, and even before I started being paid for it, I’ve loved the way emerging technologies can create new opportunities and solve problems. I’m also increasingly aware that, too often, emerging technology can present unexpected new problems.

I believe that biometric systems can be a boon to convenience and security. Biometric technology can provide ways for us to authenticate ourselves that can increase security, accessibility, and ease of use. Biometric data can provide early warning signs of otherwise hard to detect health issues such as diabetes and dementia. I even chose to unlock my computer with my face to type this article.

So long as biometric data capture and processing is done for the right reasons, in the right way, with the right governance and with informed consent, then count me in, assuming I have a choice of course.

The thing is, we have wildly different views being pursued by government organisations. In October 2019, an Australian Parliamentary Joint Committee report stated that the facial recognition bill (the “Identity-matching Services Bill 2019”) needed to be “re-drafted”, and made several recommendations for governance and review; the need for the bill itself isn’t questioned. [https://lnkd.in/fNNE72Z]

Also in October 2019, the European Union’s Independent Data Protection Authority posted a piece questioning whether facial recognition was a “solution in search of a problem”. [https://lnkd.in/fAxW4-4].

An interesting and rather disturbing difference of opinion. Since then, the EU has adopted an outright ban on facial recognition (cite source), while Australia continues to explore it (cite source).

I’m worried that all too often we seem to be skipping reasonable sanity checks, let alone ethics checks, with respect to biometric recognition systems. Driven by the gee-whiz factor, vendors with deep pockets and influential lobbyists, and a non-sequitur that suggests it’s either this (national surveillance) or paedophiles and terrorists, we are being steadily herded into the surveillance pen.

Here’s my very simple rule of thumb (hah) about whether to “trust” a biometric system. Ask yourself where the biometric data about you is being stored and how you can be sure that it can only be used for the purpose that you have agreed to. 

If the data is being stored only on your device, locally, securely, and only for you to use, then I mostly have no worries. If it is being used to access an online system as part of some remotely controlled authentication process, then I mostly have worries, and so should you.

Let’s dig a little deeper to see why…

Here are the elements of a typical biometric recognition system:

Elements of a typical biometric recognition system

Biometric recognition systems involve capturing physical data about you from a sensor. The way you walk, the way you talk, the way you look, the shape of your ears, your fingerprints, your heartbeat, and of course your DNA have all been shown to serve as a signature of you.

The raw sensor data is processed and a selection of data points is made that corresponds to a “template” for the recognition process. When “enrolling”, the newly generated template is stored (often via a process with many iterations, to ensure better matching against variable sensor data).
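To make the enrolment step concrete, here is a minimal Python sketch under simple assumptions: extract_template is a hypothetical stand-in for a real feature extractor (a face-embedding model, say), and templates are plain unit-length vectors.

```python
import numpy as np

def extract_template(raw_sample: np.ndarray) -> np.ndarray:
    # Hypothetical feature extractor: a real system would run a model
    # (e.g. a face-embedding network) over the sensor data. Here we
    # simply flatten and normalise to a unit-length vector.
    flat = raw_sample.astype(float).ravel()
    return flat / np.linalg.norm(flat)

def enrol(samples: list[np.ndarray]) -> np.ndarray:
    # Enrolment usually captures several samples (all assumed to share
    # one shape here) and combines them, so the stored template
    # tolerates variation in sensor readings.
    templates = [extract_template(s) for s in samples]
    mean = np.mean(templates, axis=0)
    return mean / np.linalg.norm(mean)
```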

There are two basic matching processes:

  1. 1 to many. The template is matched against a set of stored templates to see if it matches one of them. 
  2. 1 to 1. The template is matched against a single stored template. 

Unlocking your phone uses a 1 to 1 search. Unlocking your computer uses a 1 to 1 search. Unlocking your digital wallet uses a 1 to 1 search. Voice recognition by your bank? That can be 1 to 1, but other purposes may use 1 to many.
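Continuing that toy template model, here is a minimal sketch of the two matching modes; verify, identify, and the 0.8 threshold are all illustrative assumptions, not any vendor’s API (real systems tune thresholds carefully against false-accept and false-reject rates).

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two unit-length templates.
    return float(np.dot(a, b))

def verify(probe: np.ndarray, stored: np.ndarray,
           threshold: float = 0.8) -> bool:
    # 1 to 1: does the probe match this single stored template?
    return similarity(probe, stored) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    # 1 to many: find the best-scoring template in a gallery,
    # returning its identifier only if it clears the threshold.
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```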

Even though this is a simplified model (there are variations such as liveness checks that improve resistance to attacks), we can use the illustration above to consider the locality of data and processing for a number of scenarios. 

For example, what happens when you present your Australian ePassport at a SmartGate booth on arrival at passport control in Melbourne? Your Australian passport contains a chip which has your facial biometric template stored on it. The data on the chip is generated from the same photo that you can see in the passport, and the chip provides the stored template. This means that the SmartGate barrier can use a local matching process: does the data retrieved from the passport chip match the data captured by the camera on the booth? For this biometric test, no data needs to be sent outside the local system. However, this doesn’t mean no data is sent outside the system: other border checks might be necessary, and these would almost certainly need a data exchange with other systems. The most likely non-green-light scenario is that the matching process confirms that you are the authenticated holder of the passport, and that your passport is on a list of passports that need further discussion.
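Here is a hedged sketch of that locality property, reusing the toy unit-vector templates from earlier; local_gate_decision and THRESHOLD are hypothetical names, and a real gate would add liveness checks and far more careful matching.

```python
import numpy as np

THRESHOLD = 0.8  # illustrative only; real gates tune this carefully

def extract_template(frame: np.ndarray) -> np.ndarray:
    # Stand-in for the booth's real feature extractor.
    flat = frame.astype(float).ravel()
    return flat / np.linalg.norm(flat)

def local_gate_decision(chip_template: np.ndarray,
                        camera_frame: np.ndarray) -> bool:
    # Both inputs are local: the template is read from the passport
    # chip, the probe comes from the booth camera. The 1 to 1 match
    # completes without any biometric data leaving the booth.
    probe = extract_template(camera_frame)
    return float(np.dot(probe, chip_template)) >= THRESHOLD
```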

The problem with centrally stored templates is that they increase the vulnerability of the system to a number of attacks, and raise a number of privacy issues, whether the templates are encrypted or not (just ask the more than 1 million people in the UK whose biometric data was found on a public database).

There is a lot of research and development work going into ensuring that biometric recognition systems are secure: that they can’t be fooled into giving a false positive or a false negative, and that the personal information (both biometric and associated data) can’t be hacked or leaked. Securing and improving these systems is an arms race against those who want to break them. As with every complicated system, every device and every connection is an attack point. This gives at least 14 possible attack points in our simple model, as shown below.

Attack points in a biometric recognition system

The Economist podcast asked some serious questions of some seriously respected people (Bruce Schneier, Matthias Marx, Katina Michael, Joseph Lindley, Scott Shapiro) about whether generative AI, with its ability to create deep-fake audio and video, might “destroy” biometric security. It’s a good question, and their answers explore some of the attack points illustrated above. In general they argued for adding additional factors to any biometric matching process, and they argued for caution.

My concern here isn’t so much that we trust these systems to identify us reliably and they don’t; it’s that we trust these systems to identify us reliably, and they do.

My key security concern here is that centralised systems require that the biometric data template is linked to additional data about the person. This data needs to be relevant to the service, role, products and access permission(s) that the person is associated with, but often it includes other data that is “useful” to the organisation about the person (such as their address, date of birth, full name etc.). Concentrating this data increases the size and the value of the honeypot for hackers.

It also leads us to the key privacy issue… if the biometric recognition system does provide an accurate identification of you, then it is by definition using a unique identifier. Not only is biometric data unique, you cannot reset it, you cannot generate a new one, and you cannot revoke it. And in centralised systems, this unique identifier is necessarily linked to a bunch of stuff about you.

Why is that a problem? If you believe that privacy is a human right, then you need to ensure that the systems we build and use don’t unintentionally deny this right. As a design principle, we should only need to prove that it is uniquely ‘us’ when that is essential to the service being provided.

There is a much, much darker risk here. Biometric data, data that uniquely identifies individuals in a non-repudiable way, carries uniquely harmful risks. “Good” identification systems can be used for bad purposes if they fall into the wrong hands. When the US-led forces pulled out of Afghanistan in 2021, they left behind a “treasure trove” of personal data (including biometric data), on devices and in databases, about the people who had aided them during their occupation; some of it was collected for as simple (and necessary) a process as running a payroll (see, for example, this MIT Technology Review article: https://www.technologyreview.com/2021/08/30/1033941/afghanistan-biometric-databases-us-military-40-data-points/). A new regime taking over a country will see the people who cooperated with the previous regime as a threat. Biometric data makes plausible deniability implausible.

If you are uniquely identified on every interaction, we have a correlation issue (not merely a risk): everything that you do, everywhere you do it, whoever you do it with, whenever you do it, can be linked. One organisation that we spoke to in 2023 proudly told us that, given sufficient image data, they could identify anyone in the world within 8 minutes.
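A toy example makes the correlation problem concrete. If different services all key their records by the same non-resettable biometric identifier, assembling a profile is a trivial join; the datasets and key below are invented for illustration.

```python
# Toy illustration: three unrelated services, one shared biometric key.
gym_visits   = {"bio-7f3a": ["2023-05-01 gym"]}
pharmacy     = {"bio-7f3a": ["2023-05-01 prescription"]}
travel_gates = {"bio-7f3a": ["2023-05-02 airport"]}

def link_records(key: str, *datasets: dict[str, list[str]]) -> list[str]:
    # Because the key is the same everywhere and can never be reset,
    # anyone holding the datasets can merge them with a simple join.
    profile = []
    for dataset in datasets:
        profile.extend(dataset.get(key, []))
    return profile

print(link_records("bio-7f3a", gym_visits, pharmacy, travel_gates))
# ['2023-05-01 gym', '2023-05-01 prescription', '2023-05-02 airport']
```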

These problems are bad enough, but we get another issue when we believe these systems work. The product/service provider’s default position has to be that their biometric systems “work”, and therefore it must have been you at the end of the phone, camera, sensor, or banking machine. But no complex system is foolproof or fault-proof. There will be errors. Try proving it wasn’t your finger or your face that the sensor detected when it authorised that payment transfer. Franz Kafka and Joseph Heller (I read old books sometimes) would have loved the material this provides.
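Some back-of-the-envelope arithmetic shows why “there will be errors” is a certainty at scale; the rates and volumes below are assumptions for illustration, not measured figures.

```python
# Assumed, illustrative figures only; not vendor data.
genuine_attempts  = 1_000_000   # legitimate users per day
impostor_attempts = 10_000      # fraudulent attempts per day
false_reject_rate = 1e-3        # 0.1% of genuine users wrongly refused
false_accept_rate = 1e-4        # 0.01% of impostors wrongly accepted

print(genuine_attempts * false_reject_rate)   # 1000.0 locked-out users/day
print(impostor_attempts * false_accept_rate)  # 1.0 impostor accepted/day
```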

Let’s take this a little further with reference (and apologies) to Casablanca and Cheers (I watch old stuff sometimes)…

Imagine a future where the start of the joke “a stranger walks into a bar” makes no sense to the audience. How can you be a stranger anywhere when your face is known everywhere? Do you really want to be able to walk into all the bars, in all the world, and have everyone know your name?

And finally, this whole fascination with biometric recognition systems has given me a sinister new interpretation of the George and Ira Gershwin standard “They Can’t Take That Away from Me”.

George and Ira Gershwin, “They Can’t Take That Away from Me”

“The way you wear your hat, the way you sip your tea, the memory of all that, no, no, they can’t take that away from me.”

Indeed.

About the Author
John Phillips
John believes that there are better models for digital trust for people, organisations, and things on a global scale. He sees verifiable credentials, trustworthy communication, and trustworthy identifiers as a disruptive force for change for good, and wants to be a catalyst for that change, helping people and organisations navigate their way to a better future.
