Face the Bias: How facial recognition fails people of colour
By Angela Garcia

Facial recognition technology is the shiny new thing of the future. You see it everywhere: when you unlock your phone, at the airport. It’s like a friend that’s here to stay. Yet while it is built to detect faces, it somehow struggles with the basic concept that not all brown skin is the same shade, and that not every non-white person is a blurry shadow in need of “better lighting.” Unless, of course, you’re a white guy named John smiling in good lighting with a perfect jawline, in which case it’s practically a love letter.

The technology was designed to chew you up and spit you back out: it is trained on biased data, coded by mono-cultural teams, and weaponised disproportionately against Black and brown communities. This isn’t a glitch. It’s a system working exactly as designed.

One main reason for this racial bias is the lack of non-white faces in the datasets used to develop the algorithms, which means the tech fails hardest when it matters most. It is sold as sleek, smart and secure, a futuristic fix to modern life that helps you breeze through airports, unlock your phone, or supposedly keep cities “safe.” But for all its high-tech promise, it still hasn’t figured out how to reliably tell brown faces apart, or even see them clearly.
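
To make that mechanism concrete, here is a minimal sketch, in Python, of the kind of per-group audit that studies like NIST’s run: compare false match rates across demographic groups on a labelled evaluation set. Every field name, score and threshold below is an illustrative assumption, not data from any real system.

```python
# Hypothetical audit: compare false match rates across demographic groups.
# All field names, scores and the 0.6 threshold are illustrative assumptions.
from collections import defaultdict

def false_match_rate_by_group(comparisons, threshold=0.6):
    """comparisons: list of dicts with keys
       'group'       -- demographic label of the probe face
       'same_person' -- True if probe and candidate really are the same person
       'score'       -- similarity score returned by the matcher (0..1)
    Returns {group: false match rate} computed over impostor pairs only."""
    impostors = defaultdict(int)   # non-matching pairs seen, per group
    false_hits = defaultdict(int)  # non-matching pairs wrongly accepted
    for c in comparisons:
        if not c["same_person"]:
            impostors[c["group"]] += 1
            if c["score"] >= threshold:
                false_hits[c["group"]] += 1
    return {g: false_hits[g] / impostors[g] for g in impostors}

# Illustrative toy data: groups the matcher has "seen" less often in training
# tend to cluster closer together in score space, so more impostor pairs
# clear the threshold and the false match rate climbs.
sample = [
    {"group": "white", "same_person": False, "score": 0.30},
    {"group": "white", "same_person": False, "score": 0.45},
    {"group": "black", "same_person": False, "score": 0.65},
    {"group": "black", "same_person": False, "score": 0.55},
]
print(false_match_rate_by_group(sample))  # e.g. {'white': 0.0, 'black': 0.5}
```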

A major 2019 study by the U.S. National Institute of Standards and Technology (NIST) found that some facial recognition algorithms were up to 100 times more likely to misidentify Asian and Black faces than white ones. The same pattern holds true in the UK, but the government keeps backing the technology anyway.

In London, trials of live facial recognition tech by the Met Police produced a staggering 81% false positive rate, according to Big Brother Watch. In other words, the vast majority of people flagged as potential criminals weren’t suspects at all. And who’s more likely to be flagged? Spoiler: not white guys in suits.
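
For anyone wondering where a figure like that comes from, the arithmetic is simple: of everyone the live system flags, how many turn out not to be the person on the watchlist? The counts in this quick sketch are hypothetical, chosen only to reproduce the reported 81% rate.

```python
# How a headline figure like "81% false positives" is typically derived:
# of everyone the live system flagged as a match, what share turned out
# not to be the person sought? Counts below are illustrative, not sourced.
def flagged_error_rate(total_flagged, confirmed_correct):
    """Share of system alerts that were wrong."""
    wrong = total_flagged - confirmed_correct
    return wrong / total_flagged

# Hypothetical trial: 42 alerts, of which only 8 were genuine matches.
print(f"{flagged_error_rate(42, 8):.0%} of flagged people were not the person sought")
# -> 81% of flagged people were not the person sought
```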

Meanwhile, South Wales Police was taken to court after unlawfully scanning thousands of people’s faces without consent, disproportionately impacting marginalised communities. That ruling came all the way back in 2020, yet facial recognition cameras are still being used in UK cities today, with barely any public debate or clear legislation.

These tools don’t just mess up photos; they risk real harm. Misidentification can lead to wrongful stops, harassment, or even arrests, especially when the technology is used in policing or border control. And the tech doesn’t exist in a vacuum: it reflects the systemic bias already baked into our institutions, now automated and harder to question.

Despite repeated warnings, facial recognition is being normalised in shops, streets and schools. Surveillance is being sold as safety, but the reality is much murkier. There is still no UK-wide law regulating facial recognition, no transparency about how it’s used, and no way to opt out, especially if you’re already over-policed.

So what do we do? We demand regulation, and we stop pretending that an algorithm trained on a narrow, biased dataset can somehow deliver justice.

Because tech should serve the people, not just the ones it recognises.

Sign up to our 'Weekly WTF Now' newsletter.

One email. No fluff. Everything you should be angry about this week.
