Facial Recognition Software with Deep Learning

With advances in Deep Learning, Facial Recognition software has become increasingly powerful. Many Facial Recognition systems construct their databases by trawling publicly available images on the internet, which means your face could be in a database without your knowledge.

One approach to avoid this issue is to not post photos on the internet at all, but that may be impossible in the age of social media. Another is to manipulate the image so that it fools Facial Recognition software while preserving image quality, so that the photo can still be used.

LowKey exploits the fact that the majority of Facial Recognition systems are based on Neural Networks, which are notoriously vulnerable to adversarial attacks.

Adversarial attacks are tiny modifications to a Neural Network’s input that cause it to misclassify the data. Suppose you perform the LowKey adversarial attack on a selfie and then post it to the internet; a Facial Recognition database then scrapes and stores the LowKey image.
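As a concrete illustration of the general idea (not LowKey’s actual method), here is a minimal PyTorch sketch of a gradient-sign adversarial attack; the network, image, and label are all placeholders.

```python
# Minimal sketch of a generic adversarial attack (FGSM-style) on a stand-in
# classifier. Everything here is a placeholder, not LowKey's actual pipeline.
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # stand-in network
image = torch.rand(1, 3, 224, 224, requires_grad=True)    # stand-in photo
label = torch.tensor([0])                                  # stand-in true class

loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03  # perturbation budget: keep the change visually imperceptible
adv_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
# adv_image looks nearly identical to image, yet can flip the model's output.
```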

When you step outside later, a surveillance camera snaps a photo of you, referred to as the “probe image”. However, the system cannot match the probe image with the database’s LowKey image, so you remain unidentified.
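To see why the match fails, it helps to know how a gallery lookup typically works: the system compares feature embeddings of the probe and the stored photos, for example by cosine similarity. The sketch below uses a hypothetical `embed` function as a stand-in for a real face-recognition network.

```python
# Sketch of a typical gallery lookup: compare the probe's face embedding
# against each stored (possibly LowKey-protected) photo by cosine similarity.
import torch
import torch.nn.functional as F

def embed(image: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in; a real system would run a face-recognition CNN.
    return image.flatten(1)

gallery = {"lowkey_selfie": torch.rand(1, 3, 112, 112)}  # scraped photos
probe = torch.rand(1, 3, 112, 112)                       # surveillance shot

probe_feat = F.normalize(embed(probe), dim=1)
for name, photo in gallery.items():
    similarity = (probe_feat * F.normalize(embed(photo), dim=1)).sum().item()
    print(name, similarity)  # LowKey aims to push this below the match threshold
```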

LowKey’s mission is to defeat every Facial Recognition software system built with Deep Learning. However, the architecture of some of the Deep Learning systems we’re attempting to defeat is unknown, so we cannot ensure that an adversarial strategy trained to beat one specific Facial Recognition neural network to which we have access would work in the field against other networks. This problem does not have a perfect solution.

The researchers decided to train their adversarial approach on an ensemble of the top current open-source Facial Recognition Neural Networks, in the hope of improving the generalizability of their attack.

For each model in the ensemble, they compute its output on the original input image, then apply the LowKey adversarial attack and compute the model’s output on the edited image. The difference between these two outputs is then computed.

They repeat this for each model in the ensemble and add the differences together. Their goal is to make this sum as large as possible: the larger the sum, the less likely a Facial Recognition Neural Net is to classify the true image and the LowKey-modified image as the same person.
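A rough PyTorch sketch of that objective is below; the two `torch.nn.Flatten` modules are toy stand-ins for the real open-source face-recognition networks in the ensemble, and the step size and iteration count are illustrative only.

```python
# Sketch of the ensemble objective described above: maximize the summed
# feature-space difference between the original and the perturbed image.
# The "models" here are toy stand-ins, not the actual ensemble LowKey uses.
import torch

def total_feature_distance(ensemble, original, perturbed):
    total = torch.tensor(0.0)
    for model in ensemble:
        diff = model(perturbed) - model(original)  # per-model output difference
        total = total + diff.pow(2).sum()          # accumulate across the ensemble
    return total                                   # larger => harder to match

ensemble = [torch.nn.Flatten(), torch.nn.Flatten()]  # toy stand-in "models"
original = torch.rand(1, 3, 112, 112)                # the true selfie
perturbed = (original + 0.001 * torch.randn_like(original)).clamp(0, 1)
perturbed.requires_grad_(True)

for _ in range(10):  # gradient ascent on the summed difference
    loss = total_feature_distance(ensemble, original, perturbed)
    loss.backward()
    with torch.no_grad():
        perturbed += 0.01 * perturbed.grad.sign()
        perturbed.clamp_(0, 1)
    perturbed.grad.zero_()
```

LowKey additionally keeps the perturbation small enough that the photo remains visually usable, as noted earlier; that image-quality constraint is omitted from this sketch for brevity.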
