Force makes custody decisions using artificial intelligence


Techie1

The system has been tested for three years and is now undergoing a live pilot.

Custody sergeants are trialling a system which will aid them in making difficult risk-based judgements.

The tool, created by researchers at the University of Cambridge, helps identify detainees who pose a major danger to the community, and whose release should be subject to additional layers of review.

“The police officers who make these custody decisions are highly experienced, but all their knowledge and policing skills can’t tell them the one thing they need to know most about the suspect – how likely is it that he or she is going to cause major harm if they are released?

“This is a job that really scares people – they are at the front line of risk-based decision-making,” says Dr Geoffrey Barnes.

“Imagine a situation where the officer has the benefit of 100,000 or more real previous experiences of custody decisions? No one person can have that number of experiences, but a machine can,” Professor Lawrence Sherman added.

In 2016, the researchers installed the world’s first AI tool for helping police make custodial decisions in Durham Constabulary.

Called the Harm Assessment Risk Tool (HART), the AI-based technology uses 104,000 histories of people previously arrested and processed in Durham custody suites over the course of five years.

Using a method called “random forests”, the tool aggregates the votes of a large ensemble of decision trees across thousands of combinations of predictors and outcomes, the majority of which focus on the suspect’s offending history, as well as age, gender and geographical area.

“Imagine a human holding this number of variables in their head, and making all of these connections before making a decision. Our minds simply can’t do it,” explains Dr Barnes.
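
To make the “random forests” idea concrete, here is a minimal toy sketch in Python: many one-split “trees” each cast a vote on a case, and the forest aggregates the votes into a single score. The feature names, thresholds and trees are all invented for illustration and bear no relation to HART’s actual model.

```python
import random

def make_stump(feature, threshold):
    """A one-split 'tree': votes 1 (higher risk) if the feature exceeds the threshold."""
    def stump(case):
        return 1 if case[feature] > threshold else 0
    return stump

def forest_vote(trees, case):
    """Fraction of trees voting 'higher risk': the forest's aggregated score."""
    return sum(tree(case) for tree in trees) / len(trees)

# Invented stand-ins for the kinds of predictors the article mentions
# (offending history, age, geographical area), all scaled to a 0-10 range.
random.seed(0)
features = ["prior_offences", "age_scaled", "area_offence_rate"]
trees = [make_stump(random.choice(features), random.uniform(0, 10))
         for _ in range(500)]

case = {"prior_offences": 7.0, "age_scaled": 3.0, "area_offence_rate": 6.0}
score = forest_vote(trees, case)  # somewhere between 0.0 and 1.0
```

A real random forest also trains each tree on a bootstrap sample of past cases and splits on many variables at once; the point here is only the aggregation of many simple votes that no single human could hold in their head.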

The aim of HART is to categorise each suspect as high risk, moderate risk or low risk of offending over the next two years.
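
Turning a forecast into those three bands is a simple thresholding step. The sketch below uses invented cut-offs purely to illustrate the idea; HART’s real categorisation logic is not public in this form.

```python
def risk_band(score, low_cut=0.33, high_cut=0.66):
    """Map a 0-1 predicted-harm score to the article's three bands.
    The cut-offs are invented for illustration, not HART's real thresholds."""
    if score >= high_cut:
        return "high"
    if score >= low_cut:
        return "moderate"
    return "low"

print(risk_band(0.8))  # high
print(risk_band(0.5))  # moderate
print(risk_band(0.1))  # low
```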

“The need for good prediction is not just about identifying the dangerous people,” explains Prof. Sherman. “It’s also about identifying people who definitely are not dangerous. For every case of a suspect on bail who kills someone, there are tens of thousands of non-violent suspects who are locked up longer than necessary.”

Durham Constabulary wants to identify the ‘moderate-risk’ group, which accounts for just under half of all suspects according to the statistics generated by HART.

These individuals might benefit from their Checkpoint programme, which aims to tackle the root causes of offending and offer an alternative to prosecution that they hope will turn moderate risks into low risks. 

However, the system cannot reprioritise offences on its own, and offending patterns change over time, so it has to be supplied frequently with up-to-date information.

An independent study found an overall accuracy of around 63 per cent, but the tool avoids the most dangerous kind of error, a ‘false negative’ – an offender predicted to be relatively safe who then goes on to commit a serious and violent crime – around 98 per cent of the time.
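
The two figures answer different questions, as this toy confusion-table arithmetic shows. The counts are invented, chosen only so the arithmetic reproduces the quoted percentages; they are not the study’s data.

```python
# Invented counts, chosen only so the arithmetic reproduces the quoted figures.
# Keys are (predicted band, whether a serious violent offence actually followed).
outcomes = {
    ("low", False): 500,   # forecast safe, stayed safe        (correct)
    ("low", True): 10,     # forecast safe, serious offence    (dangerous false negative)
    ("high", True): 130,   # forecast dangerous, offended      (correct)
    ("high", False): 360,  # forecast dangerous, stayed safe   (cautious error)
}

total = sum(outcomes.values())                                           # 1000 cases
accuracy = (outcomes[("low", False)] + outcomes[("high", True)]) / total # 0.63

low_forecasts = outcomes[("low", False)] + outcomes[("low", True)]
dangerous_error_rate = outcomes[("low", True)] / low_forecasts           # ~0.02
```

Overall accuracy counts every kind of error equally, while the false-negative rate looks only at the “safe” forecasts, which is why a model can be modestly accurate overall yet rarely make the one error that matters most.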

The researchers also stress the technology is not a “silver bullet for law enforcement” and the ultimate decision is that of the officer in charge.

Prof. Sherman said: “The police service is under pressure to do more with less, to target resources more efficiently, and to keep the public safe. 

“The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review. At the same time, better triaging can lead to the right offenders receiving release decisions that benefit both them and society.”

View on Police Oracle

10 hours ago, chaos4122 said:

The minority report ???!!

Sums it up pretty well. Sounds very interesting. With all the will in the world and whatever technology people come up with, you still can’t account for human behaviour. We all know the lowest-risk people can actually end up doing the worst things, and vice versa.

I don't really see how that's going to have an impact.

For example, we all know prolific shoplifters addicted to drugs with no income are likely to continue to offend to fund said habit, but the custody sergeant will rarely remand because we know the courts won’t remand or imprison.

I'm in two minds about this.

Maybe it will take the poor decisions judges make out of their hands and produce more reasonable decisions that don’t let out people who are highly likely to reoffend on bail. But then, don’t decisions involving people need some sort of human interaction? There’ll be an outcry when people are let out, do something bad, and it’s found that an error in the algorithm was the cause.

The USA has kinda been using this type of system for years by way of an offender matrix tied in to mandatory minimums, and look at their incarceration rate and the fact that a black guy is eight times more likely to get sent down than a white dude.

29 minutes ago, jimmyriddle said:

I'm in two minds about this. […]

I don’t think it’s anything to do with court bail or judge or magistrates’ decisions.

It’s to do with custody officers.

My concern isn’t about the technology but the transparency around it.

By all accounts, if you were a male aged between 18 and 24 and lived in a particular micro-geography, then you would be deemed high risk on the basis of everyone else who lived there.

This technology prophesies risk when compared to a control group, but demographics change over time. Neighbourhoods change. And while people may not change, they are often quite mobile and move on.

The efficacy of the technology needs to be continuously tested for it to remain an ethical tool, but I do nonetheless think the pros outweigh the cons.
