The Ethics of Facial Recognition Technology
Walk through Sydney Airport and cameras scan your face. Shop at certain retail stores and your face gets logged. The technology is here, it works reasonably well, and it’s expanding rapidly.
The question isn’t whether facial recognition is possible—it obviously is. The question is whether we should be deploying it everywhere, and what safeguards should exist.
What the Technology Actually Does
Facial recognition analyses distinctive features of your face—distance between eyes, nose shape, jawline—and converts them into a mathematical representation. This gets compared against a database of known faces to find matches.
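To make the matching step concrete, here is a minimal sketch in Python. It assumes the face has already been reduced to a fixed-length vector (an "embedding") by a neural network; the names, the 128-dimension size, and the distance threshold are illustrative only, not any vendor's actual values.

```python
import numpy as np

# Hypothetical gallery: one 128-dimensional embedding per enrolled person.
# In a real system, a neural network converts each face image into a vector like this.
gallery = {
    "person_a": np.random.rand(128),
    "person_b": np.random.rand(128),
}

def match_face(probe: np.ndarray, threshold: float = 0.6):
    """Return the closest enrolled identity, or None if nothing is close enough.

    The threshold is illustrative; real systems tune it to trade off
    false matches against missed matches.
    """
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        dist = np.linalg.norm(probe - embedding)  # distance between face vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A camera frame, once processed, becomes another vector to compare.
probe_embedding = np.random.rand(128)
print(match_face(probe_embedding))
```

Everything interesting happens in that comparison: the system never "recognises" you in any human sense, it just finds the stored vector closest to the one the camera produced.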
Modern systems are surprisingly accurate, particularly with high-quality cameras and controlled conditions. Google Photos can identify your friends in years-old photos. Facebook's tag suggestions were uncannily precise before Meta retired its face recognition system in 2021.
Law enforcement uses it to identify suspects from security footage. Retailers use it to track shoppers and identify return fraud. Airports use it to streamline boarding. Schools use it for attendance.
The use cases are expanding faster than the regulatory frameworks to govern them.
The Accuracy Problem
Facial recognition isn’t equally accurate for everyone. Multiple studies, including research from MIT and Stanford, have documented that these systems perform worse for women and people of colour.
Error rates for white men might be 1%. For Black women, error rates can be 35% or higher.
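Disparities like that only show up if you break evaluation results out by group instead of averaging over everyone. A minimal sketch of that audit calculation, using invented records purely for illustration:

```python
# Sketch of a per-group false-match audit. The records are made up to show
# the calculation; real audits use large labelled evaluation sets.
results = [
    # (demographic_group, system_said_match, actually_same_person)
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_b", True,  False),   # a false match
    ("group_b", True,  True),
]

def false_match_rate(records, group):
    """Share of different-person pairs the system wrongly declared a match."""
    non_matches = [r for r in records if r[0] == group and not r[2]]
    if not non_matches:
        return None
    false_matches = [r for r in non_matches if r[1]]
    return len(false_matches) / len(non_matches)

for g in ("group_a", "group_b"):
    print(g, false_match_rate(results, g))
```

A single headline accuracy number can hide exactly this kind of gap, which is why audits that report per-group error rates matter.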
This isn’t theoretical. False matches have led to wrongful arrests. A Black man in Detroit was arrested because facial recognition incorrectly matched him to security footage. He spent 30 hours in custody before the error was discovered.
When the consequences of false positives include arrest, job loss, or denied services, accuracy disparities become serious civil rights issues.
The Consent Problem
Most facial recognition deployment happens without meaningful consent. You walk past a camera in a shopping centre or on a street. Your face gets scanned, analysed, and possibly stored. You weren’t asked. You weren’t informed. You have no practical ability to opt out.
This is fundamentally different from other biometric identification. Fingerprint or iris scanning requires active participation. You can choose not to provide your fingerprint. You can't choose not to have a face in public.
The argument that “you have no expectation of privacy in public” might be legally true, but it doesn’t address whether mass automated facial scanning is ethically acceptable.
The Surveillance State Risk
China’s deployment of facial recognition for social monitoring demonstrates the technology’s dystopian potential. Faces tracked everywhere, matched against databases, tied to behaviour scoring systems.
Australia isn’t China, but the infrastructure for similar surveillance exists or is being built. The NSW government’s facial recognition database contains millions of faces from driver’s licences. The federal government is pushing for expanded facial recognition capabilities for law enforcement.
The argument is always security and safety. The risk is that we build surveillance infrastructure that future governments might use in ways we wouldn’t accept today.
Once the cameras and databases exist, expanding their use is easy. Rolling them back is nearly impossible.
The Commercial Surveillance Angle
Retail deployment of facial recognition is about profit, not security. Stores track individual shoppers across visits, building profiles of behaviour, preferences, and purchase patterns.
This enables sophisticated targeted marketing but also raises questions about data ownership and consent. Should a store be allowed to track your face across multiple visits to understand your shopping habits without your knowledge?
Some retailers claim they use facial recognition only for security—identifying known shoplifters or banned customers. Others are more honest about using it for commercial purposes.
Either way, you’re being scanned without your knowledge or permission.
Where Regulation Stands
Australia’s privacy laws weren’t written with facial recognition in mind. The Privacy Act 1988 technically applies, but enforcement is weak and penalties are trivial compared to the commercial value of the data.
The EU has been more aggressive. The GDPR places strict limitations on biometric data processing. Some European cities have banned facial recognition in public spaces.
Several US cities (San Francisco, Boston, Portland) have banned government use of facial recognition. Others are considering similar restrictions.
Australia has guidelines, inquiries, and recommendations. Actual enforceable restrictions? Not really.
The Law Enforcement Argument
Police argue facial recognition helps solve serious crimes. Match a suspect’s face against surveillance footage. Identify victims who can’t identify themselves. Find missing people.
These are legitimate use cases. The problem is scope and oversight.
Should police be able to match anyone’s face against surveillance footage without a warrant? Should they be able to run searches against social media photos? Should they be able to use real-time facial recognition to identify people at protests or public gatherings?
Different jurisdictions answer these questions differently. In Australia, the rules are inconsistent and often unclear.
The Function Creep Reality
Technology deployed for one purpose inevitably gets used for others. This is “function creep.”
Facial recognition installed for security gets used for marketing. Systems meant for finding serious criminals get used for identifying protesters. Databases created for driver’s licences get accessed by law enforcement for general surveillance.
Once the infrastructure exists, restricting its use requires constant vigilance and enforcement. Neither is guaranteed.
What Good Regulation Looks Like
Effective facial recognition regulation would include:
Transparency requirements: People should know when they’re being scanned and why.
Consent requirements: Scanning should require opt-in consent, with a genuine way to decline (airport boarding that keeps a manual alternative, for example).
Accuracy standards: Systems should meet minimum accuracy thresholds across demographics before deployment.
Limited retention: Facial data shouldn’t be stored indefinitely. Specific time limits should apply.
Audit and oversight: Independent review of how systems are used and whether they comply with regulations.
Meaningful penalties: Violations should carry penalties significant enough to deter misuse.
None of this exists comprehensively in Australian law.
The Technology Isn’t Neutral
Proponents argue facial recognition is just a tool—neutral, with good and bad uses determined by how we deploy it.
This is naive. The technology inherently enables surveillance at scale. It shifts power toward those doing the watching and away from those being watched.
Even well-intentioned deployments create infrastructure that can be misused. The question isn’t whether current use is acceptable; it’s whether we’re comfortable with the worst-case scenarios that the technology enables.
The Practical Resistance
Some people wear masks or makeup designed to confuse facial recognition. Others avoid locations known to use the technology. Privacy advocates push for bans and restrictions.
But practical resistance is limited. You can’t avoid being recorded in public spaces in major cities. The cameras are everywhere.
This means resistance must be regulatory and political, not just individual.
Where This Goes Next
Facial recognition will become more accurate, faster, and cheaper. Deployment will accelerate unless actively restricted.
We’re at a decision point. Do we accept ubiquitous facial recognition as inevitable and try to regulate its use? Or do we decide some uses should be prohibited entirely?
Different societies will answer this differently. The EU seems inclined toward significant restrictions. The US is fragmented—some cities banning it, others embracing it. Australia is largely letting it expand with minimal oversight.
The Uncomfortable Questions
Should police be able to identify everyone at a protest? Should stores be able to track your face to understand your shopping habits? Should schools be able to monitor students through facial recognition?
Should anyone be able to build a database of faces scraped from social media and sell searches to whoever will pay?
These aren’t hypothetical. They’re all happening now. We need to decide which uses we accept and which we don’t, and then actually enforce those decisions.
The technology exists. The regulatory frameworks don’t. We’re building a surveillance infrastructure without seriously debating whether we should.
That’s how you end up with powerful technologies deployed widely before society has figured out the implications. By the time we’re uncomfortable with the results, the infrastructure is already built and the precedents are set.
Facial recognition might be useful for specific, limited applications with proper oversight. Ubiquitous facial scanning without consent or transparency is surveillance, not security. We should be much more cautious about accepting it than we currently are.