On AboutIntel, a discussion was started: “Why policing is not predictable”:
Discussion Prompt: When, if ever, is predictive policing effective, fair, and legitimate? What is the role of data reliability in this?
One of the contributors is Nina Galla. Her article, Why policing is not predictable, ends with:
> To find out whether AI can effectively support the reduction of crime, more research is necessary, as well as internal trials for specific authority purposes, e.g. detecting misuse of database access or racist structures. These can be used to train the competence and reliability of systems. Using AI systems to improve internal processes can also build public trust in the practice of predictive policing.
>
> We also need transparency on how systems are procured, how people are trained on these systems, who makes decisions, and how those people are qualified. Public monitoring is needed to ensure that the implementation of the systems is stopped if relevant requirements are not met, if evaluation shows that they discriminate against minorities or vulnerable populations, or if decisions cannot be reproduced or explained.
Other articles in the discussion:
- The full force of the state
- Data in policing: a responsibility to use responsibly
- Predictive Policing: A force for public good
- Fresh, fair, and smart: data reliability in predictive policing