Should We Trust Crime-Stopping AI?

Around the world, city, state, and federal police agencies are adopting artificial intelligence to spot criminals before they act, and more agencies than we know of are rumored to be using the technology without public disclosure. Why the secrecy? Perhaps because crime-stopping AI is trained on historical crime data, which opens the door to entrenched bias, self-fulfilling police predictions, and overlooked crimes.

For example, African Americans face a higher risk of being included in facial recognition databases accessible to the FBI because of the over-policing of Black communities. Law enforcement agencies deflect scrutiny by arguing that predictive policing is simply an application of broken-windows theory. One example of crime-stopping AI that leans into broken-windows policing is PredPol, a system developed by the LAPD and UCLA in 2008. It forecasts hotspots for minor crimes and can narrow a patrol target down to a 500-square-foot area.

The problem with predictive policing lies in AI's data feedback loop: predictions direct patrols to a neighborhood, those patrols generate new arrest records there, and those records feed the next round of predictions. Take a look at the infographic for more instances of crime-stopping AI gone wrong.
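The feedback loop described above can be sketched in a few lines of Python. This is a hypothetical toy model, not any real system's algorithm: two neighborhoods have the same true crime rate, but one starts with more recorded crime due to historical over-policing, and patrols always go where the record looks worst.

```python
# Toy model of a predictive-policing feedback loop (illustrative only;
# all numbers and names are made up, not from any real deployment).
recorded = {"A": 60, "B": 40}  # biased historical record of incidents
TRUE_RATE = 0.1                # identical underlying crime rate everywhere
PATROLS = 100                  # patrols dispatched each year

for year in range(5):
    # The "prediction": send patrols where recorded crime is highest.
    target = max(recorded, key=recorded.get)
    # Patrols discover and record new incidents at the same true rate
    # both neighborhoods share, so only the patrolled one accumulates data.
    recorded[target] += int(PATROLS * TRUE_RATE)

print(recorded)  # {'A': 110, 'B': 40}
```

Even though A and B are equally crime-prone in this sketch, A's recorded count grows every round while B's never changes, so each year's data "confirms" the previous year's prediction.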
Infographic source link:

