Clearview AI, the facial recognition company that’s been used by more than 600 law enforcement agencies around the country, reveals security breach that exposed details of ALL its clients
- Clearview AI sent a statement to customers alerting them to a security breach
- The intruders accessed the company’s full client list and number of searches
- The company did not say how the breach happened but claims its servers weren’t accessed
Facial recognition software provider Clearview AI has revealed that its entire client list was stolen by someone who ‘gained unauthorized access’ to company documents and data.
According to a notice sent to its customers, Clearview AI said that in addition to its client list, the intruder had gained access to the number of user accounts associated with each client, as well as the number of searches conducted through those accounts.
The company didn’t specify how the security breach had occurred nor who might have been responsible, and it claimed its servers and internal network hadn’t been compromised.
Facial recognition software company Clearview AI has revealed a security breach that exposed its client list and the number of searches those clients made
‘Unfortunately, data breaches are part of life in the 21st century,’ Clearview attorney Tor Ekeland told The Daily Beast, which broke the story.
‘Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.’
Clearview AI software allows its customers to identify people by uploading photos to the company’s servers, where they’re compared against a database of more than 3 billion photos pulled from Facebook, YouTube, Twitter, and even Venmo.
The service was reportedly used by at least 600 different law enforcement agencies in the last year, including the Chicago Police Department, the Department of Homeland Security, and the FBI.
The company originally worked on smaller projects, including a 2013 initiative with the New Jersey Attorney General’s office to analyze driver’s license photos and identify people using the same photo for multiple identities.
The company has since expanded its business to include general law enforcement activity, among other projects.
The company claims its servers and internal network were not breached but hasn’t specified how exactly the data was accessed
The company has amassed a database of over 3 billion photos from sites like Facebook, YouTube, Twitter, and even Venmo, which its proprietary AI scans to try to match people in photos uploaded by its clients
In 2013, Clearview worked with the New Jersey Attorney General’s office on an ID fraud detection program, helping to identify instances where the same person was trying to use multiple identities, though in that case it relied on state-provided photos, not ones taken from online sources like Facebook and Twitter
In 2019, the New York Police Department used Clearview AI for a 90-day trial period, during which officers were able to upload photos of unidentified suspects to find leads.
The department declined to continue using Clearview’s services after the trial period ended, but a report from the New York Post found that there were still 36 accounts connected to individual officers who had kept the app on their personal phones.
After reports questioning the company’s privacy policies earlier this year, the New Jersey Attorney General’s office announced state prosecutors and law enforcement officers would stop using Clearview AI.
‘I’m not categorically opposed to using any of these types of tools or technologies that make it easier for us to solve crimes, and to catch child predators or other dangerous criminals,’ Attorney General Gurbir S. Grewal said.
‘But we need to have a full understanding of what is happening here and ensure there are appropriate safeguards.’