Technological advances in algorithms and artificial intelligence continue to push boundaries, even when they confront us with moral dilemmas. Suppose a statistical model predicts a high probability that a crime will be committed in a block of flats within the next three days: what should be done with this information? Should we act? If so, how, and above all, who should act? Crime-prediction models can provide powerful statistical tools, but researchers must keep important considerations in mind to prevent them from becoming counterproductive and dangerous. A new algorithm developed by US researchers uses publicly available data to accurately predict crime in eight US cities, while revealing an increased police response in affluent neighborhoods at the expense of poorer ones.
Advances in machine learning and artificial intelligence have sparked the interest of governments looking to use predictive policing tools to deter crime. However, early efforts to predict crime were controversial because they failed to account for systemic biases in law enforcement and its complex relationship to crime and society.
The new University of Chicago (UChicago) study, led by Rotaru and colleagues, appears to be inspired by the film Minority Report, in which a future society, in 2054, eradicates crime by arming itself with the world's most sophisticated prevention, detection and suppression system: hidden from everyone, three psychics transmit images of crimes to the "Precrime" police officers. Here, it is instead an algorithm programmed by humans, with an accuracy of around 90%. The team's work is published in the journal Nature Human Behaviour.
A more accurate and scalable algorithm
The research team, specializing in data and social sciences, has developed a new algorithm that predicts crime by learning patterns gleaned from public data on violent and property crime. The models match geographic locations with crime risk at a given point in time, outperforming previous predictive models.
In fact, previous efforts to predict crime often used an epidemic or seismic approach. In other words, crime is assumed to start from specific "hot spots" (like the epicenter of an earthquake or the first case of an infection) and then spread to surrounding areas (like seismic waves or contact cases of a disease). However, these tools do not take into account the complex social environment of cities, nor the relationship between crime and the effects of police enforcement. Beyond this design bias, there are many human biases, often rooted in racism or social prejudice.
Sociologist and study co-author James Evans, a professor at UChicago and the Santa Fe Institute, said in a statement: "Spatial models ignore the natural topology of the city. Transport networks include streets, sidewalks, train and bus lines. Communication networks take into account areas of similar socio-economic background. Our model enables us to discover these connections."
The tool was tested and validated using historical data from the City of Chicago on two broad categories of reported events: violent crime (homicide, assault, and battery) and property crime (burglary, larceny, and motor vehicle theft). These data were used because such crimes are more likely to be reported to the police, even in urban areas with a history of distrust of and lack of cooperation with law enforcement. These crimes are also less prone to enforcement bias than drug offenses, traffic stops, and other infractions.
The new model isolates crime by examining the temporal and spatial coordinates of events and recognizing patterns to predict future events. It divides the city into tiles about 1,000 feet (305 meters) across and forecasts crime within those tiles, rather than relying on traditional neighborhoods or political boundaries, which are also subject to bias. The model performed equally well with data from seven other US cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco. It can predict future crimes a week in advance with about 90% accuracy.
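The tiling-and-binning step described above can be sketched roughly as follows. This is a minimal illustration under assumed inputs (a list of geocoded incident records in planar feet), not the authors' actual implementation; the per-tile event series it produces is the kind of input a sequence model could then learn temporal patterns from.

```python
from collections import defaultdict

TILE_FEET = 1000  # approximate tile width used in the study


def tile_of(x_feet, y_feet):
    """Map planar coordinates (in feet) to a discrete tile index."""
    return (int(x_feet // TILE_FEET), int(y_feet // TILE_FEET))


def bin_events(events):
    """Count events per (tile, day); events are (x_feet, y_feet, day) tuples."""
    counts = defaultdict(int)
    for x, y, day in events:
        counts[(tile_of(x, y), day)] += 1
    return counts


# Hypothetical incidents: two fall in the same tile on day 3, one elsewhere on day 4
events = [(120.0, 450.0, 3), (900.0, 60.0, 3), (2500.0, 300.0, 4)]
counts = bin_events(events)
print(counts[((0, 0), 3)])  # prints 2
```

Discretizing on a fixed grid, rather than on neighborhood or precinct boundaries, is what lets the model sidestep administrative borders that may themselves encode bias.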
Evans points out: "We demonstrate the importance of uncovering city-specific patterns in predicting reported crime, creating a new view of neighborhoods that allows us to ask new questions and evaluate policing in new ways."
A response bias by the police
In a separate model, the research team also examined police response to crime by analyzing the number of arrests following incidents and comparing those rates between neighborhoods of different socioeconomic status. They found that crime in affluent areas led to more arrests, while arrest rates for crimes in poorer neighborhoods fell. This finding suggests a bias in police response and enforcement.
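The kind of comparison described here can be illustrated with a toy computation. The neighborhood labels and counts below are invented for illustration and are not data from the study.

```python
# Hypothetical per-neighborhood counts of reported crimes and resulting arrests
records = {
    "affluent_a": {"crimes": 200, "arrests": 60},
    "affluent_b": {"crimes": 180, "arrests": 50},
    "deprived_a": {"crimes": 260, "arrests": 40},
    "deprived_b": {"crimes": 240, "arrests": 30},
}


def arrest_rate(prefix):
    """Pooled arrests-per-crime rate over neighborhoods matching the prefix."""
    crimes = sum(r["crimes"] for n, r in records.items() if n.startswith(prefix))
    arrests = sum(r["arrests"] for n, r in records.items() if n.startswith(prefix))
    return arrests / crimes


rich = arrest_rate("affluent")
poor = arrest_rate("deprived")
print(f"affluent: {rich:.2f}, deprived: {poor:.2f}")
```

A persistent gap between the two pooled rates, controlling for the number of reported crimes, is the sort of signal the team's second model interprets as a disparity in enforcement response.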
Ishanu Chattopadhyay, assistant professor of medicine at UChicago and lead author of the study, notes that the tool's accuracy does not mean it should be used to direct law enforcement, or to have police departments proactively flood neighborhoods to prevent crime. Instead, it should be added to a toolbox of city policies and policing strategies for fighting crime.
He says: "Now you can use it as a simulation tool to see what happens when crime increases in one area of the city or law enforcement increases in another. If you apply all of these different variables, you can see how the systems change in response."
Relatedly, law enforcement in Illinois will aggregate information on firearms used in crimes across the state and create a database allowing police to better track the illegal arms trade, Attorney General Kwame Raoul announced at a news conference in Chicago on Wednesday.
The Illinois database makes it easy to find and share information, and algorithms flag suspicious patterns, such as weapons purchased to be resold on the black market.
On the way to algorithmic justice?
Finally, in March, the Santa Fe Institute, working with the University of Chicago, brought together experts from a variety of disciplines, including computer science, law, philosophy, and the social sciences, to discuss the following question: can algorithms tip the balance toward justice? The workshop was organized by Professors Melanie Moses and Sonia Gipson Rankin (University of New Mexico) and Tina Eliassi-Rad (Northeastern University).
First, the group analyzed the concept of justice itself, which is understood very differently by computer scientists, ethicists, and lawyers. Computer scientists tend to have a narrow but well-defined view of fairness, one that is useful for writing or analyzing algorithms but often too narrow to capture what social scientists, philosophers, lawyers, and ordinary people mean by "justice".
One of the challenges is finding practical ways to extend algorithmic fairness to encompass these broader definitions. The group also discussed the regulations or incentives needed to ensure such algorithms operate fairly and ethically. Melanie Moses concludes: "We are learning from each other and designing a way forward where artificial intelligence advances justice, rather than exacerbating or accelerating injustices."