Face masks play a pivotal role in stopping the spread of coronavirus, but they have an unexpected side effect: shielding wearers from facial recognition algorithms. For people whose phones unlock with their face, it is often an annoyance to have to type a password or remove their mask. That same failure, however, can work to one's advantage by stopping intrusive algorithms from recognizing one's face. In a leaked document, law enforcement expressed serious concerns about the widespread use of protective masks in security operations. Meanwhile, numerous facial recognition companies have been working to dismiss these concerns, treating the situation as an opportunity to improve their already faulty and criticized tools.
Protection of Privacy
The US National Institute of Standards and Technology (NIST) recently published findings showing that masks significantly reduce the accuracy of facial recognition algorithms, with declines ranging from 5% to 50%. The more of the nose a mask covers, the more accuracy drops, and black masks were observed to degrade the algorithms more than lighter-colored ones.
Law Enforcement Concerns
The Department of Homeland Security is concerned that public health measures are undermining these algorithms' accuracy, as mask-wearing clashes with police at all levels who rely on facial recognition tools. Video cameras, image processing hardware and software, and recognition algorithms are all negatively affected by masks. The current wave of anti-police-brutality sentiment across much of the nation has also heightened tension between the public and law enforcement. Criminals have historically tried to evade facial recognition, and law enforcement has expressed concern that they may use the public health guidance as cover to hide their faces without drawing attention.
This tension between law enforcement and masks predates the pandemic. During the 2011 Occupy Wall Street protests, demonstrators donned the Guy Fawkes masks associated with Anonymous, a hacktivist movement known for cyber attacks and leaks against governments and governmental institutions as a form of vigilantism. Interestingly, the group has recently resurfaced, promising to expose police injustice and quickly gaining immense popularity among younger audiences. At the 2011 protests, the NYPD arrested masked demonstrators, citing an 1845 law that banned groups of two or more people from covering their faces in public except at a masquerade or similar entertainment. Following this precedent, many states banned masks when protests erupted. Some anti-mask laws were designed to prevent Ku Klux Klan gatherings, but most were made to protect white elites. Homeland Security cites the protests in Hong Kong as a cause for concern about masks. Social analysts, however, have argued that banning masks in the name of preventing violent protest is more often intended to suppress peaceful assemblies and marches than to target militants.
Improving the Algorithms
Numerous facial recognition companies have declared that they are shifting their priorities toward identifying people from the portion of the face above the nose, which masks leave uncovered. The stakes are high: masks cover a substantial portion of the face, making this a difficult but necessary challenge for these companies.
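To make the idea concrete, here is a minimal toy sketch of matching on only the region above the nose. This is purely illustrative and is not any vendor's actual pipeline: the 45–50% crop fraction is an assumption, and real systems compare learned embeddings rather than raw pixels with cosine similarity.

```python
import numpy as np

def periocular_crop(face: np.ndarray, top_fraction: float = 0.45) -> np.ndarray:
    """Keep only the top portion of an aligned face image (roughly the eyes
    and forehead), discarding the mask-covered region below the nose.
    The 0.45 default is an illustrative assumption, not a vendor setting."""
    rows = max(1, int(face.shape[0] * top_fraction))
    return face[:rows, :]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two flattened crops as raw pixel vectors; production systems
    would compare learned embeddings instead."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Two synthetic aligned 112x112 grayscale "faces" that agree in the upper
# half but differ below the nose, simulating the same person with and
# without a mask occluding the lower face.
rng = np.random.default_rng(0)
upper = rng.random((56, 112))
face_a = np.vstack([upper, rng.random((56, 112))])
face_b = np.vstack([upper, rng.random((56, 112))])

full = cosine_similarity(face_a, face_b)
crop = cosine_similarity(periocular_crop(face_a, 0.5),
                         periocular_crop(face_b, 0.5))
print(f"full-face similarity:  {full:.3f}")
print(f"periocular similarity: {crop:.3f}")  # identical upper halves -> ~1.0
```

Because the mask-covered lower half contributes only noise to the comparison, the periocular crop scores a near-perfect match while the full-face comparison does not, which is the intuition behind retraining on the uncovered region.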
Shaun Moore, CEO of Trueface, stated, “If the [facial recognition] companies aren’t looking at this, aren’t taking it seriously, I don’t foresee them being around much longer.” Trueface is a leader in the industry, and its technology is used by the US Air Force to identify personnel. Facial recognition algorithms have also been heavily scrutinized for racial bias, with critics charging that they are far more error-prone when identifying minorities.