
Algorithmic Bias and How It Can Affect Digital Identity Systems

March 2022

Bailey Kursar & Savina Kim

In our previous article, we defined algorithmic bias and examined its impact on credit risk decisions.

But algorithmic bias doesn’t just have the potential to affect credit risk; it can affect any technology where algorithms are used.

One example is digital identity systems.

What is Digital Identity?

In 2021, the UK Government published its policy paper for a UK Digital Identity and Attributes Trust Framework, proposing the introduction of a trusted digital identity system.

The UK digital identity and attributes trust framework sets out requirements so that organisations know what ‘good’ identity verification looks like. The framework defines digital identity as:

...a digital representation of a person. It enables them to prove who they are during interactions and transactions. They can use it online or in person.

Within the trust framework, there are various roles organisations can take on when providing services. One such role is the “Identity Service Provider”.

An identity service provider does not need to do all parts of the identity checking process. They can specialise in designing and building components used during a specific part of the process.

For example, they could develop software to:

  • validate identity evidence (e.g. mobile account validation)
  • check identity evidence is genuine (e.g. passport chip reading)
  • manage authentication
  • provide biometrics-enabled identity verification (e.g. face biometrics)
  • provide biometric authentication (e.g. fingerprint biometrics on a mobile device), and
  • provide identity fraud services (e.g. fraud database checking).

Many, if not all, of the services and technologies outlined above rely on algorithms to automate decision-making in digital identity systems.
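
To make that concrete, here is a minimal, hypothetical sketch in Python of how component checks might be combined into an automated identity decision. The check names, weights and thresholds are illustrative assumptions only, not taken from the trust framework.

    # Hypothetical sketch: combining component identity checks into one
    # automated decision. Check names, weights and thresholds are
    # illustrative assumptions, not taken from the trust framework.

    CHECK_WEIGHTS = {
        "evidence_validated": 0.3,   # e.g. mobile account validation
        "evidence_genuine": 0.3,     # e.g. passport chip reading
        "biometric_match": 0.3,      # e.g. face biometrics
        "fraud_check_clear": 0.1,    # e.g. fraud database checking
    }

    ACCEPT_THRESHOLD = 0.8
    REFER_THRESHOLD = 0.5

    def decide(check_results: dict) -> str:
        """Return 'accept', 'refer' (manual review) or 'reject'."""
        score = sum(
            weight
            for check, weight in CHECK_WEIGHTS.items()
            if check_results.get(check, False)
        )
        if score >= ACCEPT_THRESHOLD:
            return "accept"
        if score >= REFER_THRESHOLD:
            return "refer"
        return "reject"

    # Example: the passport chip read fails but everything else passes.
    print(decide({
        "evidence_validated": True,
        "evidence_genuine": False,
        "biometric_match": True,
        "fraud_check_clear": True,
    }))  # -> "refer"

Even in a toy example like this, the weights and thresholds encode judgements that determine who sails through and who gets referred or rejected.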

The Rise in Biometrics

One of the big growth areas within the digital identity space has been biometric systems, which are used to authenticate the identity of customers.

Biometric solutions were already on the rise due to increasing smartphone penetration and the availability of biometric technology in smartphones. But the pandemic disrupted industries across the board and drove increased demand for touchless biometric solutions, such as facial biometrics.

Rising technology applications for identity verification, transactions, access control, banking and payments mean the global market for contactless biometrics technology is projected to reach $44 billion by 2026.

Biometric systems use algorithms.
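
As a rough illustration of what that means in practice, face verification systems typically reduce an image to a numeric embedding and compare it against an enrolled template. The Python sketch below shows the thresholded similarity comparison at the heart of such systems; the embeddings, dimensions and threshold are invented for illustration and do not reflect any particular vendor’s method.

    import numpy as np

    # Rough sketch of the matching step in a face verification system:
    # compare a fresh "probe" embedding against an enrolled template
    # using cosine similarity. All values here are invented.

    MATCH_THRESHOLD = 0.7  # illustrative; real systems tune this carefully

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(enrolled: np.ndarray, probe: np.ndarray) -> bool:
        """Accept the claimed identity if similarity clears the threshold."""
        return cosine_similarity(enrolled, probe) >= MATCH_THRESHOLD

    rng = np.random.default_rng(0)
    template = rng.normal(size=128)                  # enrolled embedding
    same_person = template + rng.normal(scale=0.3, size=128)
    stranger = rng.normal(size=128)

    print(verify(template, same_person))  # True: small drift, high similarity
    print(verify(template, stranger))     # False: unrelated embedding

Where that threshold sits, and how cleanly the embeddings separate genuine users from impostors for different groups of people, is precisely where bias can creep in.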

What Are the Risks of Algorithms?

In April 2021, the Information Commissioner in the UK responded to the DCMS policy paper, noting the following:

“Automated decision making has the potential to cause discriminatory effects. Bias in system design, algorithms or datasets can lead to outputs that affect particular groups. Some effects are expected or even desirable, such as age verification systems that restrict access to under-18s for particular services. Other bias in automated verification systems is either undesirable or discriminatory or both.”

Our last article noted that human decision-making is also prone to inaccuracy and subconscious bias; algorithms are often subject to greater scrutiny due to their inherent lack of transparency and their ability to scale. In other words, they can impact thousands, if not millions, of people at an unprecedented scale. Consequently, an algorithm can reproduce “pre-existing patterns of exclusion and inequality”.

How Biometric Algorithmic Bias Can Affect People

There have been many reported bias issues with biometric systems and the way they disproportionately affect certain groups. Biometric systems often do not work as accurately for Black women in the 18 to 30 age bracket or for older people.

In the USA, allegations that face biometrics systems perform poorly for women and people of colour have resulted in legal action in one case and the threat of a lawsuit in another.

Inaccuracies in these systems can mean that individuals either cannot be identified or are misidentified, leaving people in specific groups to face unnecessary difficulty when using these systems.
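
One way such disparities are surfaced in practice is to break error rates down by demographic group. The Python sketch below, using invented records, computes a false non-match rate per group: the rate at which genuine users are wrongly rejected.

    from collections import defaultdict

    # Sketch: per-group false non-match rate (genuine users wrongly
    # rejected). The records below are invented for illustration.
    # Each record: (group, system_accepted) for a genuine attempt.

    genuine_attempts = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_a", True), ("group_b", False), ("group_b", False),
        ("group_b", True), ("group_b", True),
    ]

    totals = defaultdict(int)
    rejections = defaultdict(int)
    for group, accepted in genuine_attempts:
        totals[group] += 1
        if not accepted:
            rejections[group] += 1

    for group in sorted(totals):
        fnmr = rejections[group] / totals[group]
        print(f"{group}: false non-match rate = {fnmr:.0%}")
    # group_a: 25%, group_b: 50% -- the system fails group_b twice as often

A gap like that, measured on representative data rather than a toy sample, is the kind of evidence providers and regulators need in order to act.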

[Image: Black faces vs white faces algorithmic bias. Source: WIRED, https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/. Image: Getty Images]

At Smart Data Foundry, we’re researching this subject. With our skills and resources, we’re looking at ways to support the financial services industry in the UK to identify and mitigate some of these potential issues. One of our core missions is to open finance for all – you can read more about that here.

No matter what form the bias takes, an algorithm’s recommendations can have a real impact on individuals and groups in Open Banking and Digital Identity. Algorithms that include bias can help to perpetuate it in a self-fulfilling way. Therefore, it’s important to detect bias in these models and eliminate it as much as possible.