A leading expert in artificial intelligence has issued a stark warning against the use of race- and gender-biased algorithms for making critical decisions.
Across the globe, algorithms are beginning to oversee processes ranging from job applications and immigration requests to bail terms and welfare claims.
Military researchers are even exploring whether facial recognition technology could enable autonomous drones to identify their own targets.
However, University of Sheffield computer expert Noel Sharkey told the Guardian that such algorithms are ‘infected with biases’ and cannot be trusted.
Calling for a halt on all AI with the potential to change people's lives, Professor Sharkey advocates rigorous testing before such systems are used in public.
A leading expert in artificial intelligence has issued a warning against the use of race- and gender-biased algorithms — such as those used in facial recognition — for critical decisions
‘There should be a moratorium on all algorithms that impact on people’s lives. Why? Because they are not working and have been shown to be biased across the board,’ Professor Sharkey told the Guardian.
‘There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail,’ he added.
‘It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation.
‘But then I realised eventually that some innovations are well worth stifling, or at least holding back a bit. So I have come down on the side of strict regulation of all decision algorithms, which should stop immediately.’
According to the Guardian, Professor Sharkey has had discussions with some of the world’s leading tech and social media organisations — including Facebook and Google — about the bias presently seen in AI.
‘They know it’s a problem and they’ve been working, in fairness, to find a solution over the last few years but none so far has been found,’ he told the Guardian.
‘Until they find that solution, what I would like to see is large-scale pharmaceutical-style testing.’
This, he says, would require testing and refining these systems against hundreds of thousands or millions of people until they show no inbuilt bias.
‘These algorithms have to be subjected to the same rigorous testing as any new drug produced that ultimately will be for human consumption,’ he added.
Pictured: Professor Sharkey campaigning against the development of autonomous weapons, which are often dubbed 'killer robots'
Professor Sharkey also noted that his concern over algorithmic bias overlaps with his previous work campaigning against autonomous weapons and so-called ‘killer robots’.
‘Now the new idea that you could send autonomous weapons out on their own, with no direct human control, and find an individual target via facial recognition is more dangerous,’ he told the Guardian.
‘Because what we have found out from a lot of research is that the darker the skin, the harder it is to properly recognise the face.
‘In the laboratory you get a 98 per cent recognition rate for white males without beards. It’s not very good with women and it’s even worse with darker-skinned people.
‘In the latter case, the laboratory results have shown it comes to the point where the machine cannot even recognise that you have a face.
Pictured: a US Air Force MQ-9A Reaper drone, which can fly autonomously, equipped with a Hellfire missile
‘So, this exposes the fantasy of facial recognition being used to directly target enemies like al-Qaida, for instance. They are not middle-class men without beards, of whom there is a 98 per cent recognition rate in the lab,’ Professor Sharkey continued.
‘They are darker-skinned people and AI-driven weapons are really rubbish at that kind of recognition under the current technology.
‘The capacity for innocent people being killed by autonomous weapons using a flawed facial recognition algorithm is enormous.’
For Professor Sharkey, the development of autonomous, facial-recognition-driven weapons should not be allowed to even reach the planning stage at present.
‘In relation to decision-making algorithms generally, these flaws in facial recognition are yet another argument — along with all the other biases — that they too should be shut down, albeit temporarily, until they are tested just like any new drug should be.’