Our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous number of poorly designed systems.

As a little nod to colleagues elsewhere, I must admit that I do not believe that AI has got talent. It does not. Let’s take a look at why.

Take the recent GCSE and A-Level exam grading fiasco in England, and its counterpart in Scotland, under COVID: sometimes it’s the humans behind the tech, rather than the tech itself, that are sloppy.

The algorithm itself was complex but somewhat dumb. The system was designed by the exams regulator, Ofqual, to ensure results were standardised across the country. But that’s not why it was dumb. It was dumb because of how it was designed: it placed constraints on how many pupils at each school could achieve certain grades and based its outputs on a school’s prior performance, downgrading around 40 per cent of teacher-predicted results, mostly in disadvantaged areas.
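To make that concrete, here is a deliberately simplified sketch of that kind of standardisation step. This is not Ofqual’s actual model: the grade list, the historical-share input and the quota arithmetic are all my own illustrative assumptions. What it shows is the core mechanism the paragraph above describes: rank the pupils, throw away their individual predictions, and re-impose the school’s historical grade distribution.

```python
# A deliberately simplified sketch of a grade "standardisation" step --
# NOT Ofqual's actual model. The grade list, the historical-share input
# and the quota arithmetic are all illustrative assumptions.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst, simplified

def standardise(predicted, historical_share):
    """Re-impose a school's historical grade distribution on this year's
    cohort. The teacher predictions matter only through the rank order
    of pupils (strongest first); their values are discarded."""
    n = len(predicted)
    awarded = []
    for grade in GRADES:
        quota = round(historical_share.get(grade, 0) * n)
        awarded.extend([grade] * quota)
    # Absorb any rounding slack with the lowest grade, then trim to size.
    return (awarded + [GRADES[-1]] * n)[:n]

# A historically mid-performing school with an unusually strong year:
predicted = ["A"] * 6 + ["B"] * 4                   # predictions, ranked
history = {"A": 0.1, "B": 0.3, "C": 0.4, "D": 0.2}  # past distribution
print(standardise(predicted, history))
# ['A', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D'] -- 9 of 10 downgraded
```

Notice that the pupils’ own predicted grades never influence the grades awarded; only the school’s past results do. A strong cohort at a historically average school gets downgraded wholesale, regardless of individual merit.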

Prof. Chris Dent (Professor of Industrial Mathematics and Director of the Statistical Consulting Unit at the University of Edinburgh, and a Turing Fellow at the Alan Turing Institute) has a few interesting thoughts on what went wrong here.

Wired UK, a subsidiary of the American magazine Wired, had a different take on the matter. It said that the UK government’s attempt to grade thousands of students by algorithm was a disaster. Hundreds of student protesters gathered outside the Department for Education in Westminster on August 16 to make this abundantly clear. ‘Fuck the algorithm,’ they chanted.

What that algorithm has done is open up a conversation about the potential danger and unfairness that this kind of model poses to so many people, especially the already disadvantaged. Going back to where we started: to some, AI may have talent, but where it does, it favours the already advantaged. I could make a tenuous connection between algorithms that reinforce advantage and racism (link to property is racist blog), but that needs more data to untangle.

Ofqual’s misfortune was that it was caught out. Rightly so. The Home Office, slightly more fortunate, recently did a U-turn on its use of an algorithm for assessing immigration applications. This did not hit the headlines, but details can be found here.

In this case, the Law Society recently noted that the UK government was suspending the use of algorithms in visa and immigration pre-decisions while it reviewed them. The algorithm was racist, the Society proclaimed. Why? Because it raised a red flag for applicants from specific countries, such as those in Africa, and sped up applications for white people from other countries.
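For illustration only, here is a hypothetical sketch of a nationality-based ‘streaming’ rule of this kind. The country names and traffic-light labels are my assumptions, not the Home Office’s actual tool, but the sketch shows why using nationality as a direct input is discriminatory by construction.

```python
# A hypothetical sketch of a nationality-based "streaming" rule -- the
# country names and traffic-light labels are my assumptions, not the
# Home Office's actual tool.

HIGH_RISK = {"CountryA", "CountryB"}   # hypothetical red-flag list
FAST_TRACK = {"CountryX", "CountryY"}  # hypothetical green list

def stream(application):
    """Assign a traffic-light rating using nationality alone."""
    nationality = application["nationality"]
    if nationality in HIGH_RISK:
        return "red"    # extra scrutiny, slower decision
    if nationality in FAST_TRACK:
        return "green"  # expedited
    return "amber"

# Two applications identical in every respect except nationality:
a = {"name": "Applicant 1", "nationality": "CountryA"}
b = {"name": "Applicant 2", "nationality": "CountryX"}
print(stream(a), stream(b))  # red green -- different treatment by origin
```

Because the protected characteristic is the input itself, the discrimination here is not some emergent statistical quirk that needs untangling; it is written directly into the rule.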

The Court of Appeal in the UK also recently ruled that the use of automated facial recognition by the police force in South Wales breached the public sector equality duty because of potential bias. This too did not get much press, much to Ofqual’s displeasure, I suspect. Details here.

The hunt for talented, unbiased, fair & intelligent algorithms continues.