Artificial intelligence already helps determine your future in some way: when you search for something in a search engine, when you use a service like Netflix, or when a bank assesses your suitability for a mortgage. But what if artificial intelligence had to decide whether you are guilty or innocent in court? Strange as it sounds, in some countries this may already be happening. Recently, US Chief Justice John Roberts was asked whether he could foresee a day when "smart machines, driven by artificial intelligence, will assist in finding evidence or even in making judicial decisions." He replied: "That day is already here, and it is putting a significant strain on how the judiciary goes about doing things."
Perhaps Roberts was referring to the recent case of Eric Loomis, who was sentenced to six years in prison partly on the recommendation of secret proprietary software belonging to a private company. Loomis, who already had a criminal record and was being sentenced for fleeing from police in a stolen car, now claims that his right to due process was violated because neither he nor his representatives were able to examine or challenge the algorithm's recommendation.
The report was produced by a program called Compas, which Northpointe sells to courts. The program embodies a new trend in AI research: helping judges make "better" (or at least more data-driven) decisions in court.
Although the specific details of the Loomis case remain sealed, it almost certainly contains charts and figures describing Loomis's life, behavior and likelihood of reoffending: age, race, gender identity, habits, browser history and perhaps some measurements of the skull. No one knows for sure.
What we do know is that the prosecutor in the case told the judge that Loomis demonstrated "a high risk of violence, a high risk of recidivism, and a high pre-trial risk." This is standard when it comes to sentencing. The judge agreed and told Loomis that "the Compas assessment identified him as an individual who poses a high risk to the community."
The Wisconsin Supreme Court ruled against Loomis, adding that the Compas report brought valuable information to the decision, but it noted that he would have received the same sentence without it. There is, of course, no way to verify that for certain. And what kinds of cognitive bias are at play when an all-powerful "smart" system like Compas is advising judges what to do?
Unknown use
Let's be honest: there is nothing "illegal" in what the Wisconsin court did; it is simply an example. Other courts can and will do the same.
Unfortunately, we do not know the extent to which AI and other algorithms are used in sentencing. It is believed that some courts are "testing" systems like Compas in closed trials, but they may not disclose these partnerships. There is also a view that several startups are developing similar smart AI systems.
However, the use of AI in law does not begin and end with sentencing; it begins with the investigation. In the UK, a system called VALCRI has been developed that performs laborious analytical work in seconds: it wades through tons of data such as texts, lab reports and police documents to highlight things that may require further investigation.
West Midlands Police in the UK will test VALCRI over the next three years using anonymized data containing more than 6.5 million records. A similar trial is being conducted by the police in Antwerp, Belgium. In the past, however, AI and deep-learning projects involving massive data sets have been problematic.
Benefits for the few
Technology has brought many useful tools to courtrooms, from photocopiers to DNA extraction from fingerprints to sophisticated surveillance techniques. But that does not mean any technology is an improvement.
Although the use of AI in investigations and sentencing could potentially save time and money, it creates acute problems. A ProPublica report on Compas stated clearly that the program mistakenly judged black defendants to be more prone to recidivism than white defendants. Even the most sophisticated AI systems can inherit the racial and gender biases of those who create them.
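To make that kind of finding concrete, here is a minimal sketch of the sort of check such audits perform: comparing false-positive rates, that is, how often people who did not reoffend were nevertheless labelled high risk, across groups. The column names, the risk threshold and the tiny synthetic dataset below are illustrative assumptions, not ProPublica's actual data or methodology.

```python
# A minimal sketch (not ProPublica's actual analysis) of auditing a risk-score
# tool for group disparity: compare false-positive rates between groups.
import pandas as pd

def false_positive_rate(df: pd.DataFrame, group: str, threshold: int = 7) -> float:
    """Share of non-reoffenders in `group` that the tool still labelled high risk."""
    subset = df[(df["race"] == group) & (df["reoffended"] == 0)]
    if subset.empty:
        return float("nan")
    return (subset["risk_score"] >= threshold).mean()

# Synthetic example records: (race, risk score on a 1-10 scale, reoffended?).
records = [
    ("black", 8, 0), ("black", 9, 0), ("black", 3, 0), ("black", 7, 1),
    ("white", 2, 0), ("white", 4, 0), ("white", 8, 0), ("white", 6, 1),
]
df = pd.DataFrame(records, columns=["race", "risk_score", "reoffended"])

for group in ("black", "white"):
    print(f"{group}: false-positive rate = {false_positive_rate(df, group):.2f}")
```

If the false-positive rate is consistently higher for one group, the tool is making harsher mistakes about that group, even when its overall accuracy looks acceptable.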
Moreover, what is the point of shifting decision-making (even partially) onto an algorithm for questions that are uniquely human? In the US, defendants are judged by a jury of their peers. That standard was never a perfect one, but juries remain the most democratic and effective system of judgment we have. We make mistakes, but over time we accumulate knowledge of how not to make them, and we update the system.
Compas and similar systems are "black boxes" in the legal system, and they should not be. The legal system depends on continuity, transparency of information and the ability to review decisions. Society does not want a system that encourages a race among AI startups to deliver quick, cheap and exclusive solutions. A hastily made AI would be terrible.
An updated, open-source version of Compas would be better. But first, the standards of the justice system must be raised before we start handing responsibility over to algorithms.
Why is it dangerous for an AI to sentence criminals?
Ilya Hel