From Fiction to Reality: Predictive Policing and Facial Recognition in Law Enforcement
Max Stone

Twenty years ago, Tom Cruise starred as a detective in 2002's Minority Report, set in the future 'Precrime' unit of Washington, D.C. Using technology, police could predict crimes before they actually happened and apprehend suspects before they committed offenses, including murder.
Never has this dystopian technology seemed closer than it does now. In the past few years, advances in artificial intelligence have introduced new crime-prediction and identification applications that could shape the future of law enforcement.
The main technologies under development are facial recognition and predictive policing surveillance systems.
“In 2019, the Facial Identification Section received 9,850 requests for comparison and identified 2,510 possible matches, including possible matches in 68 murders, 66 rapes, 277 felony assaults, 386 robberies, and 525 grand larcenies.”
Facial recognition technology uses stored facial structure data, a form of biometrics, to help establish probable cause in the event of a crime. The FBI began experimenting with the technology in 2010 and now holds millions of biometric records spanning multiple states.
Law enforcement can now use it to compare images from criminal investigations with arrest photos, contributing to crime resolution and public safety. Trained investigators analyze potential matches in conjunction with supporting evidence; a match by itself does not result in an immediate arrest. Protocols include human review to avert misidentification, and the technology is not employed for crowd monitoring, identifying attendees at rallies, or routine cross-referencing against databases such as driver's licenses or social media.
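Conceptually, these systems reduce each face image to a numeric embedding vector and rank database photos by how similar their vectors are to the probe image. The sketch below is a minimal illustration of that comparison step, assuming embeddings have already been extracted by some face-recognition model; the function names, threshold, and random demo data are hypothetical, not any agency's actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe, gallery, threshold=0.6):
    """Rank gallery photos by similarity to the probe image.

    Returns (photo_id, score) pairs above the threshold, best first.
    Anything returned is only a lead for trained human reviewers,
    not a confirmed identification.
    """
    scores = ((pid, cosine_similarity(probe, emb)) for pid, emb in gallery.items())
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Demo with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
gallery = {f"arrest_photo_{i}": rng.normal(size=128) for i in range(1000)}
print(rank_candidates(probe, gallery, threshold=0.2)[:5])
```

The threshold is the critical policy knob: set it too low and innocent people surface as candidates, which is why the human-review step described above matters.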
Even with those safeguards in place, misidentifications still occur.
In Detroit, Michigan, Porcha Woodruff, 32, was mistakenly arrested after police relied on an AI facial recognition match generated from an eight-year-old photo. The outdated photo led both police and the victim to identify Woodruff as the robbery suspect, resulting in her arrest. After further investigation, the Prosecutor's Office dropped the original robbery case for 'insufficient evidence.' The incident highlights concerns that biased AI algorithms contribute to discriminatory arrest practices.
Predictive policing uses data analysis, statistical modeling, and machine learning to forecast and deter potential criminal activity by identifying patterns in past crime data. Various forms of it were developed over several decades as computing advanced, but police departments largely began adopting them in the early 2010s.
There are two main types: place-based and person-based predictive policing. Place-based predictive policing analyzes masses of past crime data to identify patterns and hotspots, forecasting areas where crime is likely to recur. Person-based predictive policing uses data analysis and predictive analytics to pinpoint individuals with an elevated likelihood of engaging in criminal activity, drawing on their previous behaviors and characteristics.
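To make the place-based idea concrete, here is a deliberately simplified sketch that bins historical incident coordinates into a grid and flags the busiest cells as hotspots. The data, cell size, and scoring are hypothetical; real systems weight recency, crime type, and many other factors.

```python
from collections import Counter

# Hypothetical past incidents as (x, y) coordinates, e.g. kilometers
# from a fixed reference point in the city.
incidents = [(1.2, 3.4), (1.3, 3.5), (1.1, 3.6), (1.2, 3.3),
             (5.0, 0.9), (5.1, 0.95), (8.7, 6.2)]

CELL_SIZE = 1.0  # grid resolution, in the same units as the coordinates

def to_cell(x, y):
    """Map a coordinate to the grid cell containing it."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

# Count past incidents per cell. The naive "forecast" is simply that the
# historically busiest cells are the likeliest sites of future crime.
counts = Counter(to_cell(x, y) for x, y in incidents)
print(counts.most_common(2))  # -> [((1, 3), 4), ((5, 0), 2)]
```

Even this toy version exposes the feedback-loop critique raised later in this piece: patrolling the flagged cells generates more recorded incidents there, which flags the same cells again.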
The Chicago Police Department launched a major person-based predictive policing effort in 2012. Named the 'heat list' or 'strategic subjects list,' it identified individuals at the highest risk of involvement in gun violence, using an algorithm developed by researchers from the Illinois Institute of Technology. The program was heavily criticized for its ineffectiveness, its broad inclusion of arrestees, and its bias against communities of color, leading to its discontinuation in January 2020.
“We can gather information more quickly than ever in the past, analyze it, and from that, actually begin to predict that certain actions, based on intelligence, are going to occur and seek to prevent them.”
Meanwhile, the NYPD, America's largest police force, began testing predictive policing software in 2012, trialing products from firms such as Azavea, KeyStats, and PredPol. By 2013, the NYPD had developed its own in-house predictive algorithms for various crimes to aid officer assignments. While the department has revealed its input sources, including complaints and shooting incidents, the underlying data sets have not been disclosed to the public. The program is still ongoing.

Transparency and bias worries surround predictive policing, with concerns about both the data analysis methods and how law enforcement uses the results. Legal and civil rights issues also emerge, including Fourth Amendment challenges and the potential reinforcement of racial biases through reliance on historical police data. Critics suggest predictive policing might obscure biased practices under the guise of technology-driven objectivity.
Still, in the dynamic realm of law enforcement technology, the emergence of predictive policing and facial recognition signifies notable progress. While these innovations offer the potential for crime prevention and resolution, substantial concerns regarding transparency and bias persist. As law enforcement organizations strive to strike a delicate equilibrium between innovation and responsibility, the future of policing remains an enduring subject of discussion and examination.