Predictive policing using artificial intelligence (AI) is controversial, but proponents present it as anything but. They argue that having a computer predict where crime will happen removes bias from the system. The computer just uses numbers, they say; it just uses data. When departments send officers to the hot spots the software identifies, proponents claim, that is the fairest way to decide where patrols go.
Critics, however, say this isn't true at all. They argue that the technology can actually make existing bias worse.
After all, where does the AI get its numbers? From police officers. When officers make arrests, that data goes into the system, which then maps out where arrests happened and uses those locations to predict future crime.
But it’s not mapping out where crime happened, just where arrests were made.
Say that the same amount of drug crime happens in a wealthy neighborhood and a poor neighborhood. A biased officer, though, thinks that it is more common in the poor neighborhood. That officer ignores the wealthy area and targets the poor one. This means that the AI only gets arrest data from the poor neighborhood. It interprets this as where crimes are likely and keeps sending officers back, but it doesn’t know how many crimes are just being overlooked in the wealthy area.
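This feedback loop can be sketched in a few lines of code. The simulation below is purely illustrative (it does not model any real predictive-policing product): both neighborhoods have the same true crime rate, but an initial bias in the arrest record determines where patrols go, and only patrolled crime produces new arrest data.

```python
# Hypothetical feedback-loop simulation. All names and numbers here
# (TRUE_CRIME_RATE, the initial arrest counts) are assumptions for
# illustration, not data from any real system.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.3               # identical underlying rate in both areas
neighborhoods = ["wealthy", "poor"]
arrests = {"wealthy": 1, "poor": 5}  # biased officer's early arrests skew the data

for day in range(1000):
    # The model sends the patrol wherever past arrests are highest --
    # which is exactly where the next arrest will be generated.
    total = sum(arrests.values())
    weights = [arrests[n] / total for n in neighborhoods]
    patrolled = random.choices(neighborhoods, weights=weights)[0]

    # Crime occurs at the same rate everywhere, but only crime in the
    # patrolled neighborhood ever becomes an arrest record.
    if random.random() < TRUE_CRIME_RATE:
        arrests[patrolled] += 1

print(arrests)  # arrest counts diverge even though crime rates are equal
```

Run repeatedly with different seeds, the gap between the two neighborhoods' arrest counts keeps widening, while the unrecorded crime in the wealthy area simply never enters the data.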
You can see how problematic this can be, and how it can reinforce the very biases that officers already hold. If you believe you were arrested unfairly for any type of crime, it is important to know exactly what steps you can take.