Face-recognition technology raises thorny questions for society and the consumers it affects.
Entering 2020, few had ever heard of facial recognition startup Clearview AI. Several weeks and a series of negative news cycles later, the company has drawn the ire of lawmakers and privacy activists who fear that it might, in the words of The New York Times, “end privacy as we know it.”
So, what is Clearview AI? How has it pushed the envelope on a technology that’s been around since the 1960s? And why are its breakthroughs so dangerous for the average consumer?
Here’s a quick backgrounder. The secretive New York startup developed a groundbreaking face-recognition system that produces more accurate matches from more angles than previous systems. The company sells the technology to law enforcement agencies and other businesses. What really distinguishes the application is its ability to match photos of unidentified people against a database of more than 3 billion images Clearview AI has scraped from the internet, including pictures people post to Facebook, Twitter and other online sites.
It’s easy to see the benefit. The company touts its technology’s success hunting down murderers, child molesters and other criminals, crediting more precise face vectors (numerical representations of facial features) and a wider pool of images.
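For readers curious what “vectors” means here, the sketch below shows how face matching generally works in systems of this kind. It is not Clearview AI’s actual code, which is proprietary: each photo is reduced to a fixed-length numeric vector, an “embedding,” and two photos count as a match when their vectors point in nearly the same direction. The 128-dimension size, the 0.8 threshold, and the random vectors standing in for a real embedding model are all illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: near 1.0 means same face, near 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe: np.ndarray, database: dict, threshold: float = 0.8) -> list:
    """Return (label, score) pairs whose embeddings clear the threshold.

    `database` maps a label (e.g. a scraped photo's URL) to its embedding.
    A lower threshold finds more candidates but also more false matches --
    exactly the misidentification risk critics raise.
    """
    scores = [(label, cosine_similarity(probe, vec))
              for label, vec in database.items()]
    return sorted([s for s in scores if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)

# Toy demonstration: random vectors stand in for a real embedding model.
rng = np.random.default_rng(0)
db = {f"photo_{i}": rng.standard_normal(128) for i in range(1000)}
probe = db["photo_42"] + 0.1 * rng.standard_normal(128)  # a noisy re-shot of photo_42
print(find_matches(probe, db)[:3])  # photo_42 should rank first
```

In a real system the embeddings would come from a neural network trained on faces, and the database would hold billions of entries indexed for fast nearest-neighbor search, but the matching logic is the same idea.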
But it’s just as easy to see the potential downsides.
Privacy advocates argue that widespread use of facial recognition technology breaks unwritten rules regarding technology and privacy. For years, tech companies capable of developing face-recognition apps, including Google, have held back for ethical reasons. Large cities such as San Francisco have banned its use by government agencies. Clearview AI has forged ahead, selling the tech to more than 600 law enforcement agencies without any public scrutiny.
People also raise questions about whether Clearview AI’s facial recognition technology could be misused. Hunting down dangerous criminals is one thing. But what if a rogue law enforcement officer uses picture matching to snoop on enemies or stalk potential dates? What if a totalitarian government uses the technology to dig up dirt on activists? What if an end user misidentifies someone and blocks their access to flights or government benefits? Could a law enforcement entity use a match to gain access to your bank accounts or emails?
“The weaponization possibilities of this are endless,” Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told The New York Times.
Clearview AI argues that it’s within its First Amendment rights to use pictures posted on the web because they’re in the public realm. But for many people, this crosses a line.
The tech companies Clearview AI has been depending on are drawing lines of their own. Twitter, YouTube and Facebook have all sent Clearview AI cease-and-desist letters demanding that it delete the data collected from their sites. Apple followed suit, blocking Clearview AI’s iPhone app for violating its developer rules.
Clearview AI’s troubles intensified this past week when a data breach exposed its customer list. Despite the company’s claim that it marketed only to law enforcement agencies, the list of 2,200 organizations in 27 countries included non-government entities such as Walmart, Best Buy, Bank of America and the NBA.
What can consumers do about the pictures that are already out there, getting matched? “They can delete all of their photos on Facebook,” suggests Martin Hron, senior researcher at Avast. “However, if the shots have already been cached on the web, they can still be found. They can ask Facebook to delete their data, but they’d have to delete their whole account,” Hron adds.
Privacy laws offer consumers some refuge. Under California’s new privacy law, the CCPA, residents can ask to see the data Clearview AI has collected about them and have it removed from the database. Residents of the European Union can likewise request deletion of their data under the GDPR.
This case has been an eye-opener. The moves Clearview AI made, quietly scaling up a powerful system outside of public view, illustrate why governments have to get ahead of this issue.