Facial recognition technology has come a long way. Its challenges still have a long way to go.

Is the search-by-face function on photo apps the most useful tool around? Probably not, but it does show us that computers are great at doing some things that we aren’t.

Like identifying individual faces among many, a task that years of technological development have now made relatively easy.

A study from Mozilla fellow Deborah Raji and Genevieve Fried — a congressional advisor on algorithmic accountability — shows these developments have come at a cost: Researchers have gradually abandoned asking for people’s consent.

Let’s look at a little history of facial recognition.

  • The 1960s-1990s: Manual data gathering involved measuring distances between facial features in consented photographs (see the sketch after this list for how that kind of matching works).
  • The 1990s-2007: The DOD spent $6.5 million building the first large face dataset, capturing 14,000 consented images and sparking academic and commercial interest.
  • 2007-2014: Researchers began scraping images from the web “without people’s consent”.
  • 2014-present: In 2014, Facebook used 4 million images from 4,000 users’ profiles to train its DeepFace model, making web-based deep learning the industry standard.
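
For a sense of how that early, distance-based approach worked, here is a minimal sketch in Python. The landmark measurements, names, and acceptance threshold are all hypothetical; real systems of that era used carefully chosen anthropometric points and hand-tuned metrics, but the basic idea is just nearest-neighbor matching on measured distances.

```python
import math

# Hypothetical feature vectors: hand-measured distances (in pixels) between
# facial landmarks, e.g. eye-to-eye, nose-to-chin, mouth width.
ENROLLED = {
    "person_a": [62.0, 88.5, 47.2],
    "person_b": [58.3, 92.1, 44.8],
}

def euclidean(u, v):
    """Straight-line distance between two measurement vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(probe, gallery, threshold=5.0):
    """Return the closest enrolled identity, or None if nothing is close enough.

    `threshold` is a made-up acceptance cutoff; tuning it trades false
    matches against false non-matches.
    """
    best_name, best_dist = None, float("inf")
    for name, features in gallery.items():
        d = euclidean(probe, features)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# A probe photo measured by hand, the way a 1960s-era operator might have done it.
print(identify([61.5, 89.0, 46.9], ENROLLED))  # -> "person_a"
```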

Back in the ’90s, photoshoots made up 100% of facial recognition data sources. Today, they make up just 8.7%, with web-searched images making up the rest. That’s great for the technology, but bad for consent.

Training computer models on millions of images improves the technology, but it also raises serious concerns regarding wide-scale bias and privacy issues. Studies have shown that white males are falsely matched with mug shots less often than other groups. A test of Amazon’s “Rekognition” software showed it misidentified 28 NFL players as criminals.
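
To make the bias concern concrete: a false match rate is just the share of comparisons between two different people that the system wrongly accepts as the same person, computed separately for each group. The groups, labels, and numbers below are entirely hypothetical, a sketch of the metric rather than any study’s data.

```python
from collections import defaultdict

# Hypothetical comparison results: (group, system_said_match, truly_same_person)
comparisons = [
    ("group_x", True, False),   # false match
    ("group_x", False, False),
    ("group_x", True, True),
    ("group_y", True, False),   # false match
    ("group_y", True, False),   # false match
    ("group_y", False, False),
    ("group_y", True, True),
]

def false_match_rate_by_group(results):
    """Share of truly-different pairs the system wrongly matched, per group."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for group, said_match, same_person in results:
        if not same_person:          # only pairs of different people count
            total[group] += 1
            if said_match:
                wrong[group] += 1
    return {g: wrong[g] / total[g] for g in total}

print(false_match_rate_by_group(comparisons))
# e.g. {'group_x': 0.5, 'group_y': 0.666...} -- unequal rates are the bias concern
```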

Facial recognition is moving a bit too fast for some: Boston, San Francisco, Oakland, and Portland have outlawed “city use of the surveillance technology.”

This seems to be a reasonable measure; at a minimum, it’ll give people less of a reason to put on anti-facial-recognition makeup (and yes, that’s actually a thing, google it). It’s estimated that over half of US citizens’ faces are in these databases, and I don’t remember consenting to anyone capturing my image. Oh wait, maybe I DID allow this by simply posting a profile picture on Facebook. (Read the privacy policy, folks.) We are our own worst enemy when it comes to our privacy.

Massachusetts has announced that it’s kicking off a year-long study, with a newly commissioned panel examining the effects of facial recognition and just where the technology needs to improve. With cameras pointed at us every day in so many different places, it’s well past time to get a handle on the technology before it’s fully implemented and legally in wide use for tracking and reporting purposes.

It will be interesting to see what this newly appointed commission comes up with. However, for the time being, the use of facial recognition applications is still illegal in Massachusetts except in “limited cases”. That does not mean our likenesses aren’t being skimmed and stored on servers elsewhere.

You can read more about this here:
Massachusetts Passes Bill With Facial Recognition Rules
https://www.govtech.com/policy/Massachusetts-Passes-Bill-With-Facial-Recognition-Rules.html