The Justice and Home Affairs Committee has published a report, “Technology Rules? The advent of new technologies in the justice system”.
The development of artificial intelligence (AI) has affected most parts of our lives. It began with email, databases and automatic number plate recognition, all of which are now in common use. The technology has developed far beyond this, and it is natural for agencies and police forces to consider how it could make their operations more effective and successful.
The Committee’s report comes from an investigation into how these technologies are used in the justice system. By way of example, algorithms are used to improve crime detection, assess prisoner categorisation and streamline entry clearance processes into the country. The investigation did not cover court technologies that are not regarded as ‘new’, such as the giving of remote evidence, as these have been the subject of previous studies.
AI refers to machines that perform tasks normally requiring human intelligence, especially where the machines learn from data how to perform those tasks.
An algorithm is a series of instructions used to perform a calculation or solve a problem. Algorithms form the basis for everything a computer does and are a fundamental aspect of all AI systems.
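As a purely illustrative example (not taken from the report), the short Python function below is an algorithm in this sense: a fixed series of instructions that solves one well-defined problem.

```python
def average(values):
    """A simple algorithm: a fixed series of steps that computes
    the arithmetic mean of a list of numbers."""
    total = 0
    for v in values:            # step 1: add up every value
        total += v
    return total / len(values)  # step 2: divide by the count

print(average([3, 5, 7]))       # prints 5.0
```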
Examples of the technology currently being used are:
- A “bot” used by a police force to run procedural checks on vetting enquiries. The key information is passed to an officer for assessment and a decision.
- Qlik Sense is used to present data in an interactive way. An officer can view rising crime trends, see what types of crime are driving an increase, and drill down to the specifics of the relevant offences.
- The Serious Fraud Office uses an automated tool to pre-screen documents, saving time and costs compared with manual processing.
- The Home Office uses an algorithm to screen documents and review applications for marriage licences. The tool can flag potential sham marriages.
- The Harm Assessment Risk Tool (HART) is used by Durham police to predict how likely an individual is to commit a violent or non-violent offence over the following two-year period (a hedged sketch of this kind of risk model appears after this list).
- Polygraphs are used to monitor sex offenders on parole and manage their compliance with conditions.
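HART has been publicly described as a random forest model. The sketch below shows, in outline only, how such a risk classifier might be trained using scikit-learn; the features, data and parameters are invented for illustration and do not reflect the real tool.

```python
# Illustrative sketch only: the features, data and parameters below are
# invented and do not reflect the real HART tool.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: [age, prior offences, years since last offence]
X = [
    [23, 4, 1],
    [45, 0, 10],
    [31, 2, 3],
    [19, 6, 0],
    [52, 1, 8],
    [28, 3, 2],
]
# Hypothetical outcomes over the following two years:
# 0 = no offence, 1 = non-violent offence, 2 = violent offence
y = [2, 0, 1, 2, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Class probabilities for a new individual; in a deployment of this kind
# the output would inform an officer's decision rather than replace it.
print(model.predict_proba([[30, 2, 1]]))
```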
Issues
The report begins by acknowledging the benefits of technology but goes on to examine AI tools being used without proper oversight, particularly within police forces. Facial recognition technology has attracted considerable press attention recently, resulting in guidance being produced for police forces to adopt.
The use of such technologies can have serious implications for a person’s human rights and civil liberties, particularly if someone is imprisoned on the strength of technology whose workings cannot be fully explained. Informed scrutiny is needed to make sure the technology is safe, effective, necessary and proportionate.
At the moment, police forces are free to purchase or commission technology individually as they see fit. The security and testing of systems may not be known, with suppliers insisting on commercial confidentiality even where the systems will be harvesting data from the general public.
The investigation discovered there is no central register of such technologies, which makes it hard to find out where they are being used or to scrutinise them. Concerns have also been raised about AI being used in predictive policing, that is, forecasting crime before it occurs. Historical data is used to predict where and when certain types of crime are most likely to take place, and the predictions are then used to plan policing priorities and strategies.
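To make the mechanism concrete, here is a minimal, entirely hypothetical sketch of hotspot forecasting in Python: past incident counts per area are treated as a proxy for future risk and used to rank areas for patrol. The area names and records are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical historical incident records: (area, crime type)
history = [
    ("Area A", "burglary"), ("Area A", "burglary"),
    ("Area B", "vehicle crime"), ("Area A", "vehicle crime"),
    ("Area C", "burglary"), ("Area B", "burglary"),
]

# Naive forecast: past frequency is taken as a proxy for future risk.
counts = Counter(area for area, _ in history)
for area, n in counts.most_common():
    print(f"{area}: {n} past incidents -> prioritised for patrol")
```

Note where bias can enter: areas that were patrolled more heavily in the past generate more recorded incidents, which raises their forecast rank and directs yet more patrols there, a feedback loop that leads into the concern discussed next.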
The risk is that discrimination could increase as a result of human bias becoming embedded in decisions made by algorithms. The ultimate decision-maker should always be a human rather than an algorithm, to act as a check against things going wrong (a minimal sketch of this human-in-the-loop pattern follows).
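The sketch below, again invented for illustration with a trivial stand-in scoring rule, shows the pattern the report favours: the algorithm only recommends, and a person records the final decision.

```python
def algorithmic_score(case: dict) -> float:
    # Trivial stand-in for a real model, scoring on one invented field.
    return min(1.0, 0.2 * case["prior_offences"])

def assess(case: dict) -> str:
    score = algorithmic_score(case)
    recommendation = "refer for review" if score > 0.5 else "no further action"
    print(f"Tool recommends: {recommendation} (score {score:.2f})")
    # Human-in-the-loop: the tool's output is advisory only; an officer
    # would confirm or override it here and record the final decision.
    return recommendation

assess({"prior_offences": 4})
```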
Although medical science uses a proper trials methodology, there are no minimum ethical or scientific standards that a tool must meet before it can be used in the justice system. Public bodies do not have the expertise to properly evaluate the technology, adding a further risk that unsuitable technologies will be deployed.
Report Recommendations:
- A mandatory register of algorithms used in relevant tools.
- The introduction of a duty of candour on the police to ensure there is transparency in their use of AI.
- The establishment of a national body to set appropriate standards and to certify new technology against those standards. Although police forces should have the ability to address the particular issues in their area, any tool should achieve the requisite kitemark certification before use.
- The establishment of a proper governance structure to carry out regular inspections.
- The system should be streamlined; more than thirty public bodies and programmes currently play a role in the governance of new technologies. Roles overlap or are unclear, there is no coordination, and it is not clear where ultimate responsibility lies. As part of the streamlining, there is a need for a robust legal framework and regulation.
- Local specialist ethics committees to be established and empowered.
- Proper training for individuals in the limitations of the technology they use, including mandatory training on the use of the tools themselves and general training on the legislative context. The possibility of bias and the need for cautious interpretation of outputs should also be addressed.