Legal AI and Bias

Justice is blind, but legal AI may be biased.

Like many advanced technologies, artificial intelligence (AI) comes with its advantages and disadvantages. Some of the potentially negative aspects of AI regularly make headlines. There is a fear that humans could be replaced by AI, and that AI might take our jobs. (As pointed out in a previous article, lawyers are less at risk of such a scenario: AI would perform certain tasks, but not take jobs, as only 23% of the work lawyers do can be automated at present). Others, like Elon Musk, predict doomsday scenarios if we start using AI in weapons or warfare. And there could indeed be a problem there: what if armed robotic soldiers are hacked, or have bad code and go rogue? Some predict that superintelligence (where AI systems become vastly more intelligent than human beings) and the singularity (i.e. the moment when AI systems become self-aware) are inevitable. The combination of both would lead to humans being the inferior species, and possibly being wiped out.

John Giannandrea, who leads AI at Google, does not believe these are the real problems with AI. He sees another problem, and it happens to be one that is very relevant to lawyers. He is worried about intelligent systems learning human prejudices. “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Giannandrea said.

The case that comes to mind is COMPAS, risk assessment software that is used to predict the likelihood of somebody being a repeat offender. It is often used in criminal cases in the US by judges and parole boards. ProPublica, a Pulitzer Prize-winning non-profit news organization, decided to analyse how accurate COMPAS was in its predictions. It discovered that COMPAS’ algorithms correctly predicted recidivism for black and white defendants at roughly the same rate. But when the algorithms were wrong, they were wrong in different ways for each race. African American defendants were almost twice as likely to be labelled higher risk even though they did not actually re-offend. For Caucasian defendants the opposite mistake was made: they were more likely to be labelled lower risk by the software even though they did go on to re-offend. In other words, ProPublica discovered a double bias in COMPAS: one in favour of white defendants, and one against black defendants. (Note that the makers of COMPAS dispute those findings and argue the data were misinterpreted.)
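
To make that double bias concrete, here is a minimal sketch of the kind of error-rate audit ProPublica performed. The records, group labels and numbers below are purely illustrative toy data, not ProPublica’s actual dataset or methodology: for each group, the sketch compares the false positive rate (labelled high risk but did not re-offend) with the false negative rate (labelled low risk but did re-offend).

```python
# Illustrative only: invented toy records, not the actual COMPAS or ProPublica data.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("A", False, True),  ("A", True,  True),
    ("B", False, False), ("B", False, True),  ("B", True,  True),
    ("B", False, True),  ("B", True,  False), ("B", False, False),
]

def error_rates(rows):
    """False positive rate (labelled high risk, did not re-offend) and
    false negative rate (labelled low risk, did re-offend)."""
    fp = sum(1 for _, pred, actual in rows if pred and not actual)
    fn = sum(1 for _, pred, actual in rows if not pred and actual)
    negatives = sum(1 for _, _, actual in rows if not actual)  # did not re-offend
    positives = sum(1 for _, _, actual in rows if actual)      # did re-offend
    return fp / negatives, fn / positives

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    fpr, fnr = error_rates(rows)
    print(f"group {group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

In the toy data both groups are scored wrongly at the same overall rate, yet one group absorbs most of the false positives and the other most of the false negatives, which is exactly the kind of asymmetry ProPublica reported.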

The problem of bias in AI is real. AI is being used in more and more industries, like housing, education, employment, medicine and law. Some experts are warning that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it. “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems,” Giannandrea added.
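
As a hypothetical illustration of the kind of training-data check Giannandrea is calling for, the snippet below compares how often each group appears in a training set and how often each group carries the positive label. The dataset and labels are invented for the example; a real audit would of course look at far more than two numbers.

```python
from collections import Counter

# Hypothetical training examples: (group, outcome_label); not real data.
training_data = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

group_counts = Counter(group for group, _ in training_data)
positive_counts = Counter(group for group, label in training_data if label == 1)

for group, total in group_counts.items():
    share = total / len(training_data)
    base_rate = positive_counts[group] / total
    print(f"group {group}: {share:.0%} of the data, positive label rate {base_rate:.2f}")

# Large gaps in representation or in base rates between groups are a signal
# to investigate before training, since a model will happily reproduce them.
```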

Giannandrea correctly points out that the underlying problem is a lack of transparency in the algorithms that are being used. “Many of the most powerful emerging machine-learning techniques are so complex and opaque in their workings that they defy careful examination.”

Apart from the ethical implications, the fact that it is unclear how the algorithms reach a specific conclusion could also have legal implications. The U.S. Supreme Court might soon take up the case of a Wisconsin convict who claims his right to due process was violated when the judge who sentenced him consulted COMPAS. The defence argues that the workings of the system were opaque to the defendant, making it impossible to know which arguments a defence had to be built against.

To address these problems, a new institute, the AI Now Institute (ainowinstitute.org), was founded. It produces interdisciplinary research on the social implications of artificial intelligence and acts as a hub for the emerging field focused on these issues. Its main mission is “Researching the social implications of artificial intelligence now to ensure a more equitable future.” The institute wants to make sure that AI systems are sensitive and responsive to the complex social domains in which they are applied. To that end, we will need to develop new ways to measure, audit, analyse, and improve them.
