Technology and Law Policy Commission: Algorithms in the Justice System

By Dr Tom Smith and Ed Johnston

Wales Evidence Session, 7 February 2019

This event featured three panels, each speaking for around 30 minutes on the use of algorithms in the justice system. The talks covered a mix of practical and managerial uses of algorithms. Both had aspects that were interesting to our teaching and research, as well as elements that were less relevant.

The commission opened by making the point that we are not asking the correct questions concerning the use of algorithms in the justice system. We are not asking what ‘values’ underpin their usage (for example, issues like transparency and ‘explainability’). All panellists agreed that this is a growth area, but that such swift growth raises a number of concerns. Firstly, how do we define the ‘values’ that need to underpin the technology and its usage in law?

A major concern rests on the fact that algorithms are often opaque systems for decision-making, and there is a problem with ‘explainability’ (i.e. we cannot extract from machine learning the rationale for why the algorithm arrived at a particular conclusion). Arguably, this raises a red flag for lawyers in terms of justifying decisions to those affected, and especially in relation to the potential conflict between freedom of expression and the use of AI to tackle forms of extremism in England and Wales.

There is a further problem with the lack of emotional intelligence associated with the use of AI. This raises questions about the lack of discretion afforded to humans in legal systems when machine learning is allowed to make decisions. Much of modern policing is done using discretionary powers – a concern is the potential for the use of AI to allow the criminal justice ‘net’ to widen disproportionately and without adequate safeguards. Additionally, if there is an element of human discretion operating alongside AI, to whom do we defer in making final decisions (a classic man vs. machine argument)? This raises questions about the risk of humans delegating responsibility (and thus accountability) to machines.

As well as these elements, we need to answer questions concerning data control. What happens to the data that is generated by machine learning?

A further problem concerns the language being unpicked by the AI. We have many different languages spoken and written in society, coupled with local dialects. Finally, we have code used by offenders to avoid detection on social media (for example, in organised dogfighting). The dogfighting article suggests that there is an informal code spoken on social media to alert like-minded individuals to events and dogs for sale – how can the AI pick up such information? This would require continuous human input and updating to ensure that those targeted by such technologies cannot evade justice by ‘gaming’ the systems.

Ed asked a question about the Harm Assessment Risk Tool (HART) being used by Durham Constabulary, but sadly it was not answered. I wonder what risks exist in using an algorithm to make bail decisions post-conviction. However, with the advent of the ‘Released under Investigation’ status used frequently by police officers and the reduction in the use of bail, this is perhaps not an issue (although that feels very much like fudging the numbers to appear successful – this new unregulated status may in fact be a retrograde step which undermines attempts to reduce unnecessary use of bail).

There are positives to the technology. The facial recognition software described by a Police Inspector appeared to be very beneficial. There are some 12 million images in the Police National Database, and the average officer will upload 30 new images per day. Previously, there would be a 12-day wait to identify a suspect from the database; the new software provides a result in 5 minutes. This is of particular benefit when tackling crowd disorder at sporting events. Previously, officers would have to stick their heads out of the window of a police van to identify someone; now the software can scan all individuals in a crowd. Whilst this has clear practical benefit, there was little regard for the potential breach of civil liberties, or discussion of training for officers on responsible and effective use.

Finally, the panels spoke of the need for regulation and tried to centre on accountability, oversight and transparency. We need to know (a) how the use will be regulated (by soft regulation or by legislation) and (b) what happens if the evidence is wrongly used. We can currently exclude evidence under s.78 PACE 1984, but does this broad protection go far enough?

Lots of questions, not many answers. It’s clear that this is a ‘sexy’ and attractive area of law, which is being pioneered primarily in other jurisdictions. Whilst the desire not to be left behind and to utilise technology effectively in the digital age is understandable, this area also potentially poses great danger. The use needs to be carefully considered from a protective, due process standpoint rather than focusing solely on the practical benefits of the technology to crime control and enforcement.
