
Facial recognition technology (FRT) is rapidly reshaping law enforcement across India. While it promises improved efficiency in identifying suspects and solving crimes, it also introduces serious ethical, legal, and procedural challenges—particularly concerning wrongful arrests and violations of constitutional rights. The convergence of artificial intelligence and policing has sparked an important national debate on accuracy, due process, and the balance between security and privacy.
Understanding Facial Recognition and Its Adoption
Facial recognition uses artificial intelligence to analyze visual data and match faces against pre-existing databases. In India, FRT is now active in several major cities—Delhi, Chennai, Hyderabad, and Mumbai—as well as in states like Punjab and Maharashtra. Law enforcement agencies employ these systems for crowd monitoring, criminal investigations, and tracing missing persons.
Delhi Police, for instance, uses a sophisticated network of CCTV cameras and mobile units that can scan and compare millions of face records within seconds. This has transformed surveillance and investigation methods—but not without controversy.
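At a high level, such systems reduce each face image to a numeric embedding and search an enrolled gallery for the closest stored vector. The sketch below is a simplified illustration of that matching step only, not any vendor's actual pipeline; the embeddings, names, and threshold are invented for the example (real systems use 128- to 512-dimensional vectors produced by a neural network).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical gallery of enrolled face embeddings.
gallery = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.2, 0.8, 0.5],
}

def best_match(probe, gallery, threshold=0.95):
    """Return the closest enrolled identity, or None if below threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, emb)) for n, emb in gallery.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else (None, score)

probe = [0.88, 0.12, 0.31]  # hypothetical embedding from a CCTV frame
print(best_match(probe, gallery))
```

The threshold is the critical policy lever: set too low, unrelated faces "match"; set too high, genuine matches are missed. How deployments tune it, per camera and per image quality, is rarely disclosed.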
Major Cases: FRT and Criminal Arrests
Recent cases highlight both the power and pitfalls of this technology.
- In Jammu & Kashmir, police used FRT to arrest an accused under the Unlawful Activities (Prevention) Act (UAPA).
- During the 2020 northeast Delhi riots, FRT reportedly led to the identification of more than 137 suspects from CCTV footage, though legal scholars questioned the reliability of such evidence.
- Several theft and missing-persons cases in Delhi, solved with the technology's help, show FRT's growing operational role.
While these examples demonstrate FRT’s usefulness, they also expose its inherent risks—especially when technology becomes the primary rather than a supplementary basis for arrests.
Risks of Wrongful Arrest
Facial recognition systems are far from flawless. Their inherent weaknesses can lead to grave miscarriages of justice:
- Algorithmic bias: Studies show that AI-based systems often exhibit bias against minorities, women, and marginalized groups, leading to disproportionate targeting.
- Low-quality input data: Poor-resolution images or partial frames frequently result in false matches, with little corroborating evidence.
- Mass surveillance and privacy erosion: Widespread deployment at public gatherings raises concerns about oversight, legality, and the potential for abuse.
Such vulnerabilities jeopardize fair trial rights and the presumption of innocence—core tenets of criminal justice.
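The first two risks can be made concrete. If a low-resolution or heavily compressed frame perturbs the face embedding enough, the nearest gallery entry can change entirely, and the system confidently names the wrong person. The vectors below are invented purely to illustrate this failure mode; they come from no real system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def top_match(probe, gallery):
    """Name of the enrolled identity closest to the probe embedding."""
    return max(gallery, key=lambda n: cosine_similarity(probe, gallery[n]))

# Invented embeddings for two enrolled people.
gallery = {
    "suspect":   [1.00, 0.00, 0.00],
    "bystander": [0.70, 0.70, 0.00],
}

clean_probe    = [0.72, 0.69, 0.00]  # sharp image of the bystander
degraded_probe = [0.90, 0.30, 0.00]  # same face after heavy blur/compression

print(top_match(clean_probe, gallery))     # the bystander, correctly
print(top_match(degraded_probe, gallery))  # now flagged as the suspect
```

The second lookup still produces a confident-looking similarity score (about 0.95 in this toy example), which is exactly why corroborating evidence, not the raw match score, has to carry an arrest decision.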
Legal and Constitutional Concerns
India’s legal framework is still evolving. The Digital Personal Data Protection Act, 2023 (DPDPA) permits the collection and use of biometric data for security purposes but grants broad exemptions to state agencies, with limited checks against misuse.
Civil society advocates, including the Internet Freedom Foundation, have called for a temporary moratorium on FRT until stronger safeguards are implemented. Wrongful arrests based on faulty facial recognition can infringe fundamental rights enshrined in Articles 14 and 21 of the Indian Constitution—equality before law and protection of life and personal liberty.
Case Study: Bias and Its Human Impact
Investigations following the Delhi riots reveal possible systemic and technological bias. Analysts observed that surveillance disproportionately focused on Muslim neighborhoods, leading to targeted arrests and community distrust.
The case of Mohammad Shahid, identified primarily through a blurry CCTV image flagged by an algorithm, underscores the human cost. His arrest, based on minimal corroborative evidence, reflects the dangers of overreliance on automated systems over traditional investigative processes.
Technology and Due Process
Experts emphasize that FRT should act as a supporting instrument, not a decisive one. Without corroborative evidence, algorithmic results cannot satisfy the standards of proof required in criminal law. Arrests based solely on FRT undermine procedural fairness and contravene the presumption of innocence.
Regulatory Gaps and Reform Needs
AI-driven policing demands transparency, accountability, and defined boundaries. Yet, in India, FRT is often deployed with little public awareness or oversight. Leading digital rights groups such as SFLC and Panoptic have recommended reforms including:
- Independent audits and accuracy testing of FRT systems.
- Restrictive guidelines specifying when and how FRT can be used.
- Legal remedies for victims of false identification.
- Parliamentary oversight and regular public disclosure of use cases.
Global Lessons for India
Worldwide, the debate over facial recognition continues. The United States has faced multiple lawsuits over wrongful arrests of Black individuals due to faulty matches, underscoring the dangers of unchecked deployment. For India, such examples highlight the need for:
- Clear legislative safeguards and accountability measures.
- Empowered monitoring bodies to prevent misuse.
- Human rights–centric frameworks that prioritize justice over technological convenience.
Conclusion: Building a Responsible Future
Facial recognition technology can aid policing, but without robust regulation and transparency, it risks undermining justice and public trust. As India continues to integrate AI into law enforcement, safeguards must prioritize fairness, data integrity, and human rights.
The true test of progress lies not merely in technological innovation but in ensuring that technology serves justice—never replaces it.