Is the UK Turning Into a Surveillance State With Facial Recognition Vans?

(Commonwealth_Europe) The UK Government’s decision to roll out 10 new Live Facial Recognition (LFR) vans to police forces across the country has sparked deep concern, and for good reason. While ministers say it’s about catching “high-harm offenders,” the reality on the ground tells a different, more troubling story, one where the rights of ordinary people are quietly being traded away in the name of security.

These new vans, which will be deployed in places like Greater Manchester, West Yorkshire, Bedfordshire, and others, are equipped to scan thousands of faces every hour. That means commuters, shoppers, and parents walking their children to school could be scanned every day without ever knowing it. It’s not just happening in one place anymore; it’s becoming national policy. With this rollout, the number of police forces using facial recognition regularly rises to ten.

Look at South Wales Police, which leads the UK in facial recognition use. In the last year, they scanned more than 819,000 faces. The Metropolitan Police scanned 144,000 in London and made only eight arrests. Most of those arrests weren’t for serious crimes but for things like shoplifting or drug possession. In other words, hundreds of thousands of people going about their daily lives are being scanned, for results that are, frankly, disappointing.

The government claims this technology is only used to catch dangerous criminals. However, the watchlists present a more nuanced picture. These lists can include missing persons, people wanted for petty offenses, or those on bail—thousands of names and faces, many of which may have no business being there. During one deployment at Oxford Circus, there were more than 9,700 images on a single watchlist. That’s not targeted policing; that’s a dragnet.

There’s also the legal side of things. Currently, there’s no specific law in the UK that governs how facial recognition should be used. Police forces rely on a messy patchwork of existing rules, mainly common law. Even during recent parliamentary hearings, police officials admitted there’s no clear legal framework. Compare that with the European Union, where the use of this kind of technology in public spaces is tightly regulated or, in many cases, banned outright. Their approach sees the technology for what it is: high-risk, and in need of clear boundaries.

The Information Commissioner’s Office has tried to step in with guidance, reminding forces that the field isn’t a legal free-for-all. But guidance isn’t law, and it doesn’t come with the kind of accountability that proper legislation would bring. Essentially, we are witnessing the widespread deployment of a powerful surveillance tool, while the laws meant to safeguard the public continue to lag behind.

And the human cost is real. This isn’t just about privacy on paper; it’s about real people, in real places, being stopped, questioned, and even humiliated because of a system that makes mistakes. Big Brother Watch has documented several troubling incidents, including the case of a 14-year-old Black schoolboy who was surrounded by plainclothes officers after being wrongly flagged by facial recognition in Romford. He wasn’t doing anything wrong. He was in uniform. He was just a kid on his way to or from school. Imagine how terrifying that moment must have been for him and his family.

It’s worth pointing out that these systems, while getting more accurate, still carry biases. Facial recognition has consistently been less accurate when it comes to identifying people of color. And when the system gets it wrong, the consequences fall on individuals who often have the least power to speak up.

Police forces point to low error rates; some even claim “zero false positives.” But those statistics only measure whether the software returned a technical match. They don’t count how many innocent people were scanned, or how many felt their dignity compromised by being treated like suspects just for walking down the street.
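To make that concrete, here is a back-of-the-envelope sketch in Python. It uses the South Wales scan volume cited above, but the false-match rate is a purely illustrative assumption, not an official police figure; the point is how even a tiny per-scan error rate adds up at this scale.

```python
# Back-of-the-envelope: how a small per-scan false-match rate scales up.
# The false-match rate below is purely illustrative, NOT an official figure.

scans_per_year = 819_000        # South Wales Police scan volume cited above
false_match_rate = 1 / 10_000   # assumption: one wrong match per 10,000 scans

expected_false_flags = scans_per_year * false_match_rate
print(f"Expected wrongful flags per year: {expected_false_flags:.0f}")
# Prints: Expected wrongful flags per year: 82
```

Even a system that is right 99.99% of the time would, on those volumes, wrongly flag dozens of people a year, and a “zero false positives” figure from one deployment says nothing about the thousands of innocent passers-by who were scanned to produce it.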

And now this kind of surveillance is spilling into the private sector, too. Retailers like Asda, Sports Direct, and Southern Co-op have started using facial recognition technology in their stores. They’re creating their own watchlists, deciding whom to monitor and whom to flag. These decisions aren’t being made by judges or even police officers, but by store managers and private security. The House of Lords has aptly labeled this trend “privatized policing.” It’s deeply concerning.

All of this is happening while the government is rushing ahead with broader AI deployments across the public sector, from policing to education and healthcare. But where are the safeguards? Where is the debate? The ICO urges caution, saying that any use of facial recognition must be necessary and proportionate. Yet regulators are lagging behind while police and private companies race ahead.

Honestly, most of us had no idea this was happening. There’s never been a proper public debate. Parliament hasn’t voted on this. There has been no national conversation about whether we want to be constantly scanned in public. Instead, we’re learning about it after the fact, through press releases and subtle policy changes.

The government has promised a consultation this autumn. But by then, the vans will already be out there. The technology will already be deployed. It feels less like a genuine effort to listen and more like an attempt to validate a decision that’s already been made. As Rebecca Vincent from Big Brother Watch put it, this is “carte blanche” to keep expanding surveillance without real accountability.

Yes, there have been some successes: arrests of wanted criminals, including sex offenders. But even if the tech sometimes works, is it worth the cost of scanning and tracking millions of innocent people? There are other ways to catch dangerous individuals, ways that don’t involve turning every public space into a surveillance zone.

This isn’t just a policy issue. It’s a question of what kind of society we want to live in. Do we want to be watched, tracked, and categorized every time we go to the shop or walk to school? Or do we want to protect the simple, fundamental right to move through public life without being treated like a suspect?

Facial recognition technology may well be here to stay. But how we choose to use it, or not use it, is still up to us. This change isn’t inevitable. It’s a decision. And right now, that decision is being made without us.
