When does pervasive public surveillance start to infringe on our basic rights?
My respect for Satya Nadella deepened after Microsoft’s erudite CEO recently voiced strong views about the need for clear societal parameters governing the rising commercialization of facial recognition systems.
Speaking at the World Economic Forum in Davos, Switzerland last month, Nadella expressed his concern about facial recognition, or FR, being used for intrusive surveillance and said he welcomed any regulation that helps the marketplace “not be a race to the bottom.”
Granted, his comments come after the horses have already left the starting gate. Most recently, CBS News disclosed how FR systems have been deployed by half a dozen North Texas law enforcement agencies, in some cases with little public awareness.
Yet, there is ample opportunity for captains of industry and political leaders to tilt towards a greater public good on this issue. This will only happen if consumers stay informed, consider what’s at stake and make their voices heard.
Thanks to advances in digital sensors, processing power, data analytics and neural networks, FR has become lightning fast and stunningly accurate. FR works by applying algorithms to images of a human face, correlating the contours of the eyes, nose, lips, ears and chin. It can be very passive: non-intrusive sensors adept at tracking faces on the move can be placed in hidden nooks, and algorithms can even be set to gauge emotions.
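Under the hood, the matching step of an FR system typically reduces facial contours to a numeric descriptor (an "embedding") and compares descriptors for similarity. Here is a minimal, hypothetical sketch of that comparison step in Python, assuming the embeddings have already been extracted by some upstream model; the function names and the threshold value are illustrative, not from any particular product:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_face(emb1, emb2, threshold=0.8):
    """Declare a match if the embeddings are similar enough.

    The threshold is a tunable trade-off: lower values produce
    more matches (and more false positives), higher values fewer.
    """
    return cosine_similarity(emb1, emb2) >= threshold

# Toy example with 3-dimensional "embeddings" (real systems use
# hundreds of dimensions):
known_face = [0.9, 0.1, 0.3]
camera_capture = [0.88, 0.12, 0.31]
print(is_same_face(known_face, camera_capture))
```

The key point for the privacy debate is how cheap this step is: once a face has been reduced to a vector, checking it against millions of stored identities is just arithmetic.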
This fantastically capable technology is being increasingly deployed to manage authorized access to schools and workplaces. And FR systems designed to support shopping, travel and even the dispensing of healthcare services are on a fast track for wide public use in the next few years. The intensity of this growth is reflected in Allied Market Research’s projection that the “image recognition market” will rise from $17.91 billion in 2017 to $86 billion in annual revenue by 2025, a compounded annual growth rate of 22 percent. Facial ID systems will be a big driver of that growth with both law enforcement and general commercial uses expected to grow dramatically over that span.
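The growth rate cited above follows directly from the two revenue figures. As a quick sanity check, the implied compound annual growth rate (CAGR) over the eight years from 2017 to 2025 can be computed like this:

```python
# Check the compound annual growth rate implied by the Allied
# Market Research projection: $17.91B (2017) -> $86B (2025).
start, end, years = 17.91, 86.0, 8
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 21.7%, matching the article's ~22 percent
```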
However, Nadella’s call for regulation focuses the spotlight where it ought to be at this pivotal juncture: on first coming to grips with some profound privacy and civil liberties questions. These are weighty issues, touching on freedoms that free societies fought two World Wars to defend last century.
This century we’ve become complacent about always-on sensors. We expect them to be in operation upon entering a bank, shopping mall or airport, or even just slipping into a bodega or walking down a city street. We’ve accepted this as a benign part of modern public safety. But advanced FR systems introduce a critical nuance. Here’s how Jay Stanley, senior policy analyst for the American Civil Liberties Union, described it for me:
“Right now everybody knows that when you walk down the street you’re recorded by a lot of video cameras, and that the video will just sit on some hard drive somewhere and nothing really happens to it unless something dramatic goes down. The ultimate concern with this technology is that we’ll end up in a surveillance society where your I.D. is your face, and everybody is checking on you at every moment, monitoring you.”
It's now commonplace for high-resolution video cams to feed endless streams of image data into increasingly intelligent data mining software. Along with this comes the rising potential for abuse of the technology. “We’re talking about an enormously powerful surveillance capability that no government has ever had in the history of humanity,” Stanley says.
Here in the U.S., privacy advocates have been quick to vigorously question any development that could lead down the slippery slope of authoritarian abuse. For instance, the Department of Homeland Security recently unveiled an FR pilot program, ostensibly to affirm the identity of Secret Service agents patrolling the White House grounds. Privacy advocates noticed that a couple of the video cams were set up to capture faces of individuals strolling by on the public sidewalk. DHS had trained the FR system to classify anyone caught by those cameras as “subjects of interest.”
DHS later said it would not use that data to support active Secret Service operations. But we’ve moved beyond this sort of cat-and-mouse accountability. Just take a look at what’s happening in China, where President Xi Jinping has rolled out a national surveillance network comprising 200 million cameras, roughly four times the number in the United States. Xi has indicated he plans to have 300 million cameras in place by 2020.
China has been using this surveillance net to track down criminals and scofflaws, including jaywalkers, whose punishment is to have their faces displayed on giant outdoor digital screens, alongside lists of names of people who don’t pay their bills. The line between maintaining public safety and abusing potent technology has become very fine indeed in China.
Satya Nadella is absolutely right. There’s no way we should let the commercialization of FR become a race to the bottom. The city of San Francisco is tackling that issue right now. Aaron Peskin, a member of the city’s Board of Supervisors, has proposed banning city agencies from using facial recognition technology and has called for an approval process for any new surveillance technology purchases.
Hopefully others in positions of influence and power will give this due consideration, as well – and weigh in for the greater public good. Talk soon.
Byron Acohido is a guest blogger on the Avast Blog where you can catch up on all the latest security news. Avast is a global leader in cybersecurity, protecting hundreds of millions of users around the world with award-winning free antivirus and keeping their online activities private with VPN and other privacy products. Join in the conversation with Avast on Facebook and Twitter.