Artificial intelligence gets its day in court


Last September, the ACLU filed an amicus brief in a California case that brings to a head a controversy over the use of algorithms and artificial intelligence in criminal law.

DNA evidence drawn from a mixed sample containing more than one person's DNA implicated Billy Ray Johnson in a series of burglaries and sexual assaults. Johnson denied committing the crimes, but he was sentenced to life in prison without parole. Prosecutors based their case on the output of TrueAllele, a probabilistic genotyping program that uses sophisticated statistical algorithms to interpret mixed DNA samples.

Johnson's attorneys were never allowed to examine TrueAllele's source code to check whether it harbored errors or bias. Prosecutors successfully argued that the algorithm was protected by law as a trade secret.
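What a program like TrueAllele reports is, at bottom, a likelihood ratio: how much more probable the DNA evidence is if the suspect contributed to the mixture than if an unrelated person did. A minimal sketch in Python (with invented numbers; TrueAllele's actual model is precisely the trade secret at issue) shows why defense experts want to look inside: the headline statistic can swing by a factor of twenty on a single modeling assumption.

```python
# Toy likelihood-ratio calculation, loosely in the spirit of probabilistic
# genotyping. This is NOT TrueAllele's method -- its model is the trade
# secret at issue -- it only illustrates how the reported number depends
# on assumptions a defense expert might want to audit.

def likelihood_ratio(p_if_contributor: float, p_if_random: float) -> float:
    """LR = P(evidence | suspect contributed) / P(evidence | unrelated person)."""
    return p_if_contributor / p_if_random

P_DETECT = 0.95  # assumed chance of observing a true contributor's alleles

# The same evidence looks roughly 20x stronger or weaker depending on the
# assumed population frequency of the matching profile (all hypothetical).
for profile_freq in (0.01, 0.05, 0.20):
    lr = likelihood_ratio(P_DETECT, profile_freq)
    print(f"assumed profile frequency {profile_freq:.2f} -> LR = {lr:,.1f}")
```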

For Jessica Gabel Cino, associate dean for academic affairs and associate professor of law, the decision raised a big red flag.

"No technology is foolproof," she said. "Tech is designed by humans and run by humans, so there is definitely room for human error all through the process. Moreover, the results are only as good as the input."

"Systems are developed for proprietary purposes, by private companies looking to make a profit," Cino continued. "But we're talking about access to information that affects a person's freedom. To deny access to validation sequences, source code or proprietary data when it affects a person's freedom? Using a wizard hiding behind a curtain to get a conviction dilutes the integrity of the system."

Vera Eidelman, a William J. Brennan Fellow with the ACLU's Speech, Privacy, and Technology Project, explained the concerns further in September.

"Racial bias also often creeps into algorithms, both because the underlying data reflects existing racial disparities and because inaccurate results for smaller minority groups may be hidden in overall results," Eidelman wrote. "And, of course, there's the possibility that financial incentives will pervert the goals of companies that build these algorithms. In the context of DNA typing, the prosecution, backed by the substantial resources of the state, is a company's most likely customer — and that customer is likely to be most satisfied with an algorithm that delivers a match. So companies may build programs to skew toward matches over the truth."

Similar concerns arise regarding the use of AI in bail and sentencing decisions.

Vic Reynolds (J.D. '86), district attorney for Cobb County, Georgia, has watched technology sweep through law practice over the course of his career. He has worked on both sides of the bench: as a judge, a defense attorney and, today, a prosecutor.

"We're talking about an area of law where there is very little precedent," Reynolds said. "If we commit to a system where AI is being used to help formulate a criminal sentence, we do in fact have an ethical obligation to share the foundation of that system with the very people whose lives are affected."

He said there are both pros and cons to using AI as a sentencing tool.

"On the positive side — any time we have the human element involved, it carries with it human frailties," he said. "Judges are human, so even good judges can have a really bad day, a day when there's a fight with a spouse or something else affecting him or her.

"Using an AI system has the potential for removing some of this human frailty," Reynolds said.

And the cons?

"Having been a judge, I know that the essence of that position is discretion," Reynolds said. "Even the federal sentencing guidelines have recently begun giving some of that discretion back to sentencing judges.

"Let's say we plug a person's history into an artificial intelligence system.Based on prior felony convictions, the system recommends a 10-year sentence. But with a closer look, we may see that prior convictions were based on a drug abuse situation and that a better long-term decision might be a sentence which emphasizes substantive treatment. To me, this is where a judge's discretion would come into play.

"If you don't have that human element involved, and you lose the element of discretion, there's a chance for mistakes."

Cino cites a study by Julia Dressel and Hany Farid, published in Science Advances, on the credibility of the computer program COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which predicts a defendant's risk of reoffending to guide bail and related decisions. The study's authors found that COMPAS, which has been used to assess more than one million defendants throughout the U.S., "is no more accurate or fair than the predictions of people with little to no criminal justice expertise who responded to an online survey."
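The same study reported that a plain linear classifier using only two features, the defendant's age and number of prior convictions, performed about as well as COMPAS's 137-feature assessment. A rough sketch of that kind of baseline (on synthetic data; the generative rule and coefficients below are invented) might look like this:

```python
# Two-feature recidivism baseline in the spirit of the Science Advances
# study. The data is synthetic: the "ground truth" rule below is invented
# purely so the classifier has a signal to recover.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(18, 70, n)
priors = rng.poisson(2.0, n)

# Invented generative rule: younger defendants with more priors reoffend
# more often in this synthetic world.
logit = -0.05 * (age - 35) + 0.4 * priors - 0.8
reoffend = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, priors])
X_train, X_test, y_train, y_test = train_test_split(X, reoffend, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Two-feature accuracy: {model.score(X_test, y_test):.1%}")
```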

"The criminal justice system is hungry for technology that makes decision-making less subjective," Cino said, "but early adoption without proper validation creates long-term consequences that are difficult to rectify."

Cino, Reynolds and many others in the legal system will be watching the Johnson case in California. Attorneys will present oral arguments in early 2018. Cino hopes for a decision supporting transparency.

"If we're going to use artificial intelligence," she said, "we'll hopefully move to an open-source model that moves us past this Chinese wall of proprietary trade secrets. Either open source, or the accused gets access to the algorithm. There's a reason we have the right to a fair trial."
