Face recognition technology has come a long way before arriving on the internet, on our phones, and in governmental security initiatives. The technology has a bright future, and it can certainly deliver real societal benefits.
But there are myriad issues around the usage of the technology.
According to Brad Smith, the President and Chief Legal Officer at Microsoft, face recognition technology can be abused. Smith is so worried about the spread of surveillance systems with powerful facial recognition that he is calling on fellow technology companies and lawmakers to act before the technology becomes too pervasive.
The reasons are easy enough to see:
Sophisticated facial-recognition technology has been deployed in many parts of China for its more dystopian security initiatives. The country has hundreds of millions of surveillance cameras placed in every public space imaginable to track its citizens. The technology is also ubiquitous elsewhere, from smartphones to services run by Facebook, Google, Apple, Amazon and others.
The governments of the world can’t wait to implement this particular technology in a broader way, and here, plenty of tech companies are more than happy to help.
This is where, according to Smith, the technology should be regulated.
The technology can indeed improve security: it can help catch criminals in the act, detect undocumented immigrants, find missing people, provide evidence in court cases and more. But facial recognition can also harm democratic freedoms or enable discrimination.
Smith is concerned that unchecked facial recognition will increase the risk of biased decisions and outcomes, and may invade people's privacy.
He argues that face recognition laws should require tech companies to provide transparent documentation that explains the capabilities and limitations of their facial-recognition technology.
The laws should also require providers of facial-recognition services to undergo third-party checks and tests for accuracy and unfair bias.
"While we're hopeful that market forces may eventually solve issues relating to bias and discrimination, we've witnessed an increasing risk of facial-recognition services being used in ways that may adversely affect consumers and citizens -- today," said Smith.
The legislation should also force organizations that use facial recognition to review its impact and ensure that the technology isn't used as an escape route from complying with anti-discrimination laws.
People should also be notified.
For example, areas covered by face recognition technology should at least display a clear message notifying people that the technology is in use, and where. The laws should also require consumers to give consent to the use of facial recognition when entering such premises.
Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. People should have the right to reject the use of these technologies on them in both public and private contexts. Public notice of their use alone is not sufficient; there should be a high threshold for any consent, given the dangers of increasing and continual mass surveillance.
There should also be constraints on law enforcement's use of facial recognition to monitor people of interest in public places: such use should be permitted only under a court order, or in an emergency, such as a risk of death or serious injury to a person.
As Smith put it: "In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law."