Facial recognition: authoritarian intrusion or crime-fighting tool?


How do you feel about facial recognition technologies? That question is sparking angst in Britain right now, after it emerged last week that London’s Metropolitan Police plan to deploy cameras with facial recognition and artificial intelligence capabilities.

Local newspapers dubbed it “Big Brother Goes London Wide” – and civil liberties groups howled about potential breaches of privacy. However, the police insist that these cameras are merely the latest tool to fight crime.

I suspect that many Londoners feel torn: should safety take priority over privacy? And if so, where do you draw the line in a world where “normal” surveillance cameras (those without AI) are already filming us all the time? There are few easy answers. But as the debate bubbles on, it is worth looking beyond London and making our focus as broad and cross-cultural as possible.

One intriguing research paper was recently published by a team of social scientists funded by Intel, the West Coast giant, which is involved in this technology. For “AI Among Us: Agency in a World of Cameras and Recognition Systems”, Ken Anderson, Maria Bezaitis, Carl DiSalvo and Susan Faulkner spent three years conducting ethnographic (or fly-on-the-wall) research in six locations where facial recognition tools are used: two in the US (a school and a police force) and four institutions in China, including more schools.

Some findings are unsurprising. The researchers note that in the US, developments in AI are apt to spark “moral panic”. Thus, the institutions in the study are using this technology discreetly – within a thicket of rules.

One American school, for example, has been using the tech to track adults but refrained from watching children. An American police force applied facial recognition tools to private-sector footage but refused to touch video from public-sector cameras. The mere presence of these rules offered a safety blanket that enabled a level of acceptance.

In China, by contrast, the Intel researchers found that the technology is already so ubiquitous that it sparked little comment among those they interviewed. “Facial recognition interactions in China are stunning because they are so normative and normalised,” the paper notes, describing how the researchers observed people “smiling” at cameras as they extracted money from bank machines, purchased items or entered buildings.

Occasionally, there was resistance to the technology. When AI cameras were installed in the canteen of one Chinese school, to monitor pupils’ food selection, some parents complained. However, this attention was rare. As the researchers put it: “In a society that has had overt and everyday surveillance in human and institutional form for over 70 years, the emergence and deployment of recognition through cameras has been less controversial than in the USA.”

Part of this may reflect the difficulty of expressing dissent in China’s authoritarian system, as well as censorship of news stories about the more controversial uses of facial recognition, such as the surveillance of the Uighurs in Xinjiang. It is noteworthy that a (rare) anonymous survey last month in Beijing found three-quarters of respondents disliked having mandatory facial recognition on their phone. This was primarily due to fears that data would be leaked, but just over half of the respondents were also uneasy about surveillance.

However, the Intel researchers believe there are other reasons why facial recognition seems more accepted. As the Chinese poll showed, consumers love the convenience of these tools. They also take pride in national innovation. Intriguingly, the Intel team thinks that Chinese citizens sometimes view AI-enabled recognition systems as being easier and fairer to deal with than human bureaucrats.

“The use of these systems removed and obviated the unnecessary social complications often inherent in transactions”, they argue. Or to put it another way: whereas in America it is generally presumed that humans are more trustworthy than machines, in China this dynamic might be reversed, since human officials are sometimes seen as capricious, biased or even corrupt. (However, it is worth noting that at present even the most advanced facial recognition technologies are often inaccurate, particularly when attempting to identify women and people of colour.)


By highlighting the Intel research, I do not mean to suggest that we should ignore the very real dangers of government and corporate surveillance; nor do I want to downplay the way in which the Chinese state has used AI surveillance to target Muslim minorities. This, like other forms of repression, should be deplored.

For me, though, the key point is this: what makes facial recognition and AI technology so emotive is that it challenges us to think about what it means to be human and to have human “agency”. This is often contradictory and can change over time. Moreover, it varies across cultures in ways we often ignore. Therein lies the challenge of building a morally acceptable form of modern tech; and the reason why it pays to study the issue with a wider cultural lens.

Follow Gillian on Twitter @gilliantett or email her at gillian.tett@ft.com

Follow @FTMag on Twitter to find out about our latest stories first. Listen to Culture Call, the FT’s transatlantic culture podcast, which interviews people shifting culture in London and New York. Subscribe at ft.com/culture-call, Apple Podcasts, Spotify or wherever you listen to podcasts.
