
AI expert warns against 'racist and misogynist algorithms'


A leading expert in artificial intelligence has issued a stark warning against the use of race- and gender-biased algorithms for making critical decisions.

Across the globe, algorithms are beginning to oversee various processes from job applications and immigration requests to bail terms and welfare applications. 

Military researchers are even exploring whether facial recognition technology could enable autonomous drones to identify their own targets. 

However, University of Sheffield computer expert Noel Sharkey told the Guardian that such algorithms are ‘infected with biases’ and cannot be trusted.

Calling for a halt to the use of all AI that has the potential to change people’s lives, Professor Sharkey instead advocates rigorous testing of such systems before they are used in public.


A leading expert in artificial intelligence has issued a warning against the use of race- and gender-biased algorithms — such as those used in facial recognition — for critical decisions

‘There should be a moratorium on all algorithms that impact on people’s lives. Why? Because they are not working and have been shown to be biased across the board,’ Professor Sharkey told the Guardian.

‘There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail,’ he added. 

‘It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation.

‘But then I realised eventually that some innovations are well worth stifling, or at least holding back a bit. So I have come down on the side of strict regulation of all decision algorithms, which should stop immediately.’

According to the Guardian, Professor Sharkey has had discussions with some of the world’s leading tech and social media organisations — including Facebook and Google — about the bias presently seen in AI.

‘They know it’s a problem and they’ve been working, in fairness, to find a solution over the last few years but none so far has been found,’ he told the Guardian.

‘Until they find that solution, what I would like to see is large-scale pharmaceutical-style testing.’ 

This, he says, would require testing and refining these systems against hundreds of thousands or millions of people until they show no inbuilt bias.

‘These algorithms have to be subjected to the same rigorous testing as any new drug produced that ultimately will be for human consumption,’ he added.
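Sharkey does not spell out a protocol, but the kind of large-scale, disaggregated audit he describes can be sketched in a few lines of Python. The data fields, group labels and 20 per cent disparity threshold below are illustrative assumptions, not anything prescribed in the article:

```python
# Minimal sketch of a disaggregated audit for a decision algorithm.
# Assumptions (not from the article): decisions are binary approve/deny,
# every record carries a demographic group label, and any gap of more
# than 20 percentage points between groups is flagged for review.
from collections import defaultdict

def audit_approval_rates(records, disparity_threshold=0.2):
    """records: iterable of (group, approved) pairs from a large-scale trial."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, was_approved in records:
        total[group] += 1
        approved[group] += int(was_approved)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best - r > disparity_threshold}
    return rates, flagged

# Made-up trial data: group_b is approved far less often than group_a.
trial = ([("group_a", True)] * 80 + [("group_a", False)] * 20
         + [("group_b", True)] * 55 + [("group_b", False)] * 45)
rates, flagged = audit_approval_rates(trial)
print(rates)    # {'group_a': 0.8, 'group_b': 0.55}
print(flagged)  # {'group_b': 0.55} -- the disparity exceeds the threshold
```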

According to the Guardian, Professor Sharkey has had discussions with some of the world’s leading tech and social media organisations — including Facebook and Google — about the bias presently seen in AI

HOW DO RESEARCHERS DETERMINE IF AN AI IS ‘RACIST’?

In a 2018 study titled Gender Shades, a team of researchers discovered that popular facial recognition services from Microsoft, IBM and Face++ can discriminate based on gender and race.

The data set was made up of 1,270 photos of parliamentarians from three African nations and three Nordic countries, all chosen for their high proportion of women in parliament.

The faces were selected to represent a broad range of human skin tones, using a labelling system developed by dermatologists, called the Fitzpatrick scale.

All three services worked better on white, male faces and had the highest error rates on dark-skinned males and females.

Microsoft was unable to correctly identify darker-skinned females 21% of the time, while IBM and Face++ failed on darker-skinned females in roughly 35% of cases.
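In other words, the study’s core method is to report error rates separately for each combination of skin type and gender rather than quoting a single accuracy figure. A rough sketch of that breakdown is below; the field names and the `classify_gender` call are placeholders standing in for a commercial API, not the study’s actual code:

```python
# Sketch of a Gender Shades-style per-group error analysis. Each image record
# is assumed to carry a ground-truth gender and a Fitzpatrick skin-type label;
# classify_gender stands in for a call to a commercial face-analysis service.
from collections import defaultdict

def error_rates_by_group(images, classify_gender):
    errors, counts = defaultdict(int), defaultdict(int)
    for img in images:
        group = (img["skin_type"], img["true_gender"])  # e.g. ("darker", "female")
        counts[group] += 1
        if classify_gender(img["pixels"]) != img["true_gender"]:
            errors[group] += 1
    # One error rate per subgroup, rather than one overall accuracy number.
    return {group: errors[group] / counts[group] for group in counts}
```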

In a 2018 study titled Gender Shades, a team of researchers discovered that popular facial recognition services from Microsoft, IBM and Face++ can discriminate based on gender and race

University of Sheffield computer expert Noel Sharkey told the Guardian that algorithms are ‘infected with biases’ and cannot be trusted. Pictured, Professor Sharkey campaigning against the development of autonomous weapons, which are often dubbed ‘killer robots’

Professor Sharkey also noted that his concern over algorithmic bias overlaps with his previous work campaigning against autonomous weapons and so-called ‘killer robots’.

‘Now the new idea that you could send autonomous weapons out on their own, with no direct human control, and find an individual target via facial recognition is more dangerous,’ he told the Guardian.

‘Because what we have found out from a lot of research is that the darker the skin, the harder it is to properly recognise the face.

‘In the laboratory you get a 98 per cent recognition rate for white males without beards. It’s not very good with women and it’s even worse with darker-skinned people.

‘In the latter case, the laboratory results have shown it comes to the point where the machine cannot even recognise that you have a face.

Professor Sharkey also noted that his concern over algorithmic bias overlaps with his previous work campaigning against autonomous weapons and so-called ‘killer robots’. Pictured, a US Air Force MQ-9A Reaper drone, which can fly autonomously, equipped with a Hellfire missile

‘So, this exposes the fantasy of facial recognition being used to directly target enemies like al-Qaida, for instance. They are not middle-class men without beards, of whom there is a 98% recognition rate in the lab,’ Professor Sharkey continued.

‘They are darker-skinned people and AI-driven weapons are really rubbish at that kind of recognition under the current technology.

‘The capacity for innocent people being killed by autonomous weapons using a flawed facial recognition algorithm is enormous.’

For Professor Sharkey, the development of autonomous, facial-recognition-driven weapons should not be allowed to even reach the planning stage at present.

‘In relation to decision-making algorithms generally, these flaws in facial recognition are yet another argument — along with all the other biases — that they too should be shut down, albeit temporarily, until they are tested just like any new drug should be.’

HOW DOES FACIAL RECOGNITION TECHNOLOGY WORK?

Facial recognition software works by matching real-time images to a previous photograph of a person. 

Each face has approximately 80 unique nodal points across the eyes, nose, cheeks and mouth which distinguish one person from another. 

A digital video camera measures the distance between various points on the human face, such as the width of the nose, depth of the eye sockets, distance between the eyes and shape of the jawline.

A different smart surveillance system (pictured) that can scan 2 billion faces within seconds has been revealed in China. The system connects to millions of CCTV cameras and uses artificial intelligence to pick out targets. The military is working on applying a similar AI-driven version to track people across the country

This produces a unique numerical code that can then be linked with a matching code gleaned from a previous photograph.
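As a toy illustration of that matching step, a face can be reduced to a vector of measurements and compared with a stored template using a simple distance check. The specific measurements and the threshold below are invented for the example and are not taken from any real system:

```python
# Toy illustration of facial template matching: distances between facial
# landmarks form a numerical code, and two codes match if they are close
# enough. All values and the threshold are illustrative only.
import math

def face_code(measurements):
    """measurements: dict of facial distances in millimetres."""
    return [measurements[k] for k in sorted(measurements)]

def is_match(code_a, code_b, threshold=5.0):
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(code_a, code_b)))
    return distance < threshold

stored = face_code({"eye_distance": 62.0, "nose_width": 34.0, "jaw_width": 120.0})
live = face_code({"eye_distance": 63.1, "nose_width": 33.5, "jaw_width": 121.0})
print(is_match(stored, live))  # True: the live image matches the stored template
```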

A facial recognition system used by officials in China connects to millions of CCTV cameras and uses artificial intelligence to pick out targets.

Experts believe that facial recognition technology will soon overtake fingerprint technology as the most effective way to identify people. 


