Facial recognition software has become increasingly effective with the improvement of deep learning. At the same time, privacy concerns about facial recognition software have grown. Many facial recognition systems build their databases by indexing publicly available images on the Internet, which means your face may be in some database without your knowledge. One way to avoid this problem is to keep your face off the Internet entirely. In the age of social media, however, that may not be possible. Another solution is to modify your images so that they fool facial recognition software while preserving image quality, so you can still use them. This is the approach taken by the “LowKey” method, invented by researchers at the University of Maryland.

LowKey takes advantage of the fact that most facial recognition systems are built on neural networks, which are known to be vulnerable to adversarial attacks. Adversarial attacks are small changes to a neural network’s input that cause the network to misclassify it. Ideally, the method is used as follows. You run the LowKey adversarial attack on a selfie and upload the result to the Internet. The LowKey image gets scraped into a facial recognition database. Later, you go outside and a surveillance camera takes a picture of you (called a “probe image”). However, the system cannot match the probe image to the LowKey image in the database. You’re safe.
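To make the idea concrete, here is a minimal FGSM-style sketch in PyTorch. The randomly initialized torchvision ResNet is a stand-in for a face recognition network, and the image, label, and step size are all placeholders; this is not LowKey’s actual attack, just an illustration of how a small input change can flip a network’s output.

```python
# Minimal adversarial-perturbation sketch (FGSM-style). The ResNet here is a
# stand-in for a face recognition network, not one of the models LowKey uses.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # placeholder network
x = torch.rand(1, 3, 224, 224)                 # placeholder "selfie"
label = model(x).argmax(dim=1)                 # the network's current prediction

x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), label)    # loss w.r.t. the current prediction
loss.backward()

eps = 0.01                                     # small, visually negligible step
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
# x_adv looks essentially identical to x, yet the prediction can now differ.
```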

Source: LowKey paper

LowKey aims to work well against all deep learning systems for face recognition. However, we do not know the architecture of some of the deep learning systems we are trying to defeat. If we train our adversarial attack to defeat one particular facial recognition neural network to which we have access, we cannot guarantee that it will work in the wild against other networks. There is no perfect solution to this problem.

The LowKey researchers decided to train their adversarial attack on an ensemble of the best open-source facial recognition neural networks, hoping that the ensemble of models would give their attack better generalizability. First, for each model in the ensemble, the researchers computed that model’s output on the input image. They then applied the LowKey adversarial attack to the input image and computed the model’s output with the LowKey-modified image as input. Next, they computed the difference between the two outputs. They did this for each ensemble model and then took the sum of the differences. Their goal was to maximize this sum: the larger it is, the less likely a facial recognition neural network is to classify the actual image and the LowKey-modified image as the same person.
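As a sketch of that computation, the loop below sums a feature-space distance over an ensemble of models. Here `models` and `preprocess` are hypothetical stand-ins for the paper’s specific open-source networks and their preprocessing, and the squared-distance formulation is one plausible choice; the paper’s exact distance and normalization may differ.

```python
import torch

def ensemble_distance(models, preprocess, x, x_adv):
    """Sum of embedding distances across the ensemble; larger = harder to match."""
    total = torch.zeros(())
    for f in models:
        feat_orig = f(preprocess(x)).flatten()     # embedding of the original image
        feat_adv = f(preprocess(x_adv)).flatten()  # embedding of the LowKey candidate
        total = total + (feat_adv - feat_orig).pow(2).sum()
    return total / len(models)                     # average over the ensemble
```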

Second, the researchers wanted the modified image to remain recognizable to humans. To achieve this goal, they decided to minimize the LPIPS metric between the original and LowKey images. LPIPS (Learned Perceptual Image Patch Similarity) is a measure of the similarity between two images that is aligned with human perception. A smaller LPIPS means greater similarity.
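For reference, the metric is available as the `lpips` Python package (pip install lpips); whether LowKey uses this exact implementation is an assumption here. The library expects NCHW tensors scaled to [-1, 1]:

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')          # AlexNet-backed LPIPS variant
img0 = torch.rand(1, 3, 256, 256) * 2 - 1  # placeholder images in [-1, 1]
img1 = torch.rand(1, 3, 256, 256) * 2 - 1
d = loss_fn(img0, img1)                    # smaller value = more similar images
```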

LowKey thus has two objectives: to maximize the distance between the original and LowKey images as measured by the open-source facial recognition models, and to minimize the LPIPS between the same two images. In mathematical notation, the overall goal can be written as follows:

Source: LowKey paper
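In rough form, using the notation defined below, the objective is the following; the LPIPS weight α and the exact normalization constants are assumptions here, not quoted from the paper:

$$\max_{x'} \; \frac{1}{2n} \sum_{i=1}^{n} \Big( \big\| f_i(A(x')) - f_i(A(x)) \big\|_2^2 + \big\| f_i(A(G(x'))) - f_i(A(x)) \big\|_2^2 \Big) \;-\; \alpha \,\mathrm{LPIPS}(x, x')$$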

Clarifications:

  • x is the original image
  • x′ is the LowKey image
  • n is the number of models in the ensemble
  • f_i is the i-th ensemble model
  • A is the image preprocessing function
  • G is the Gaussian smoothing function

Note that there are two versions of the first objective – one with Gaussian smoothing and one without. The researchers included the Gaussian-smoothed version because it improved the results. The overall objective is optimized with gradient methods, and the final x′ is output as the LowKey image.
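Putting the pieces together, a gradient-based loop might look like the sketch below, reusing the hypothetical `ensemble_distance` from above and an LPIPS loss. The optimizer, step count, blur kernel size, and weight `alpha` are illustrative choices, not the paper’s settings.

```python
import torch
from torchvision.transforms.functional import gaussian_blur

def lowkey_style_attack(models, preprocess, lpips_fn, x, steps=50, lr=0.01, alpha=0.05):
    x_adv = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Both versions of the first objective: plain and Gaussian-smoothed.
        dist = ensemble_distance(models, preprocess, x, x_adv) \
             + ensemble_distance(models, preprocess, x, gaussian_blur(x_adv, kernel_size=7))
        objective = dist - alpha * lpips_fn(x, x_adv).mean()
        (-objective).backward()      # Adam minimizes, so negate to maximize
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0, 1)       # keep x_adv a valid image
    return x_adv.detach()
```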

The LowKey researchers released an online tool if you want to try it for yourself. It can be found here. As an example, here’s how it works with a sample image:

Source: Ali Kazal on Unsplash

The researchers tested LowKey by attempting to break two commercially available facial recognition APIs, namely Amazon Rekognition and Microsoft Azure Face. With both APIs, LowKey was able to protect the user’s face so that it was recognized less than 3% of the time. Without LowKey protection, the two face recognition systems recognized faces more than 90% of the time. That is a monumental difference.
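As an illustration of what such an evaluation could look like (not the researchers’ actual test harness), here is a sketch against Amazon Rekognition’s CompareFaces endpoint via boto3; the file names are placeholders.

```python
import boto3

client = boto3.client("rekognition")

# Placeholder files: a "probe" photo and a LowKey-protected gallery image.
with open("probe.jpg", "rb") as probe, open("lowkey_selfie.jpg", "rb") as gallery:
    response = client.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": gallery.read()},
        SimilarityThreshold=80,
    )

# Faces below the threshold land in UnmatchedFaces; a well-protected image
# should rarely produce any FaceMatches entries.
print(len(response["FaceMatches"]), "matches")
```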

However, it remains to be seen whether LowKey works just as well against other, perhaps proprietary, facial recognition systems. One way for face recognition systems to circumvent LowKey protection would be to include LowKey images as part of their training data. This could lead to an arms race in which an anti-face-recognition algorithm such as LowKey is released, face recognition developers respond by training new models that take the algorithm into account, a new algorithm is released in response, and so on and so forth. In other words, it is possible that LowKey will one day lose its effectiveness.

Despite these doubts, however, LowKey is an important step toward privacy in the age of the Internet and machine learning. LowKey shows that an intuitively simple adversarial attack can trick existing image recognition systems while maintaining image quality. See the original paper here for more information.
