Deep Learning and the Loss of Privacy in Facial Recognition


Facial recognition has grown to become one of the most scrutinized technologies of our time, penetrating nearly every walk of human life. Convenience stores, online banking, personal devices, social media platforms, and public spaces are just some of its fields of intervention.

The first use of facial recognition was recorded back in 1964, when the American mathematician and computer scientist Woodrow Bledsoe used a computer program to match suspects against mugshots. Facial recognition technology has come a long way since then, harnessing advanced machine learning and artificial intelligence to recognize human faces. Deep neural networks were incorporated into facial recognition systems at scale for the first time in 2014, with Facebook’s roll-out of DeepFace. Face data is biometric information, as unique and identifiable as a fingerprint, yet it is casually available in many forms and can therefore be passively collected, enabling severe privacy violations such as the extraction of facial features without consent.

Research

Researchers have surveyed more than 133 face datasets, comprising 145 million images of over 17 million subjects drawn from varied sources, demographics, and conditions between the 1960s and 2019. The study discusses how the overbearing data requirements of deep learning models can be harmful, since people’s consent is largely ignored when their facial information is collected. DeepFace, with its 97.35% accuracy on the Labeled Faces in the Wild (LFW) benchmark, a 27% reduction in error over the previous state of the art, requires rich datasets for training and testing.
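The “27%” figure is a relative reduction in error rate, not a gain in accuracy points. As a quick sanity check, here is a minimal Python sketch of the arithmetic, assuming a prior state-of-the-art accuracy of roughly 96.33% on LFW (the figure commonly cited alongside DeepFace):

```python
# Worked arithmetic: converting an accuracy gain on LFW into the
# relative error reduction cited for DeepFace.
prior_accuracy = 0.9633     # assumed prior state of the art on LFW
deepface_accuracy = 0.9735  # DeepFace's reported LFW accuracy

prior_error = 1 - prior_accuracy        # 0.0367
deepface_error = 1 - deepface_accuracy  # 0.0265

relative_reduction = (prior_error - deepface_error) / prior_error
print(f"Relative error reduction: {relative_reduction:.1%}")  # ~27.8%
```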

The obtained material was analyzed chronologically to surface trends in how benchmarks and datasets are designed, and to map how these design choices create misunderstandings about the technology’s limitations.

Understanding the scenario

According to the researchers, deep learning-based facial recognition systems are heavily shaped by the people who create and fund the datasets. The goal of the developed technology is often made explicit and specifically encoded in the design of the evaluation. The researchers noted that many of the datasets analyzed use photos of minors and carry racist or sexist labels. They also observed that stakeholders’ needs have consistently shaped model-development practice: as datasets grew larger, obtaining subject consent or recording demographic distributions became cumbersome and was neglected.

The paper also claims that Amazon manipulates accuracy reporting by evaluating its facial recognition products at a high confidence threshold to claim better performance, even though real-world applications often run at a much lower default threshold.
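To see why the evaluation threshold matters, consider the minimal sketch below. The scores and labels are invented for illustration, and the two thresholds (a strict 99% versus an 80% default) mirror the values usually discussed in connection with Amazon Rekognition; this is not Amazon’s code or data.

```python
# Toy illustration of how the same face-matching model looks better
# when benchmarked at a stricter confidence threshold. Scores and
# labels are invented for this example (1 = genuine match, 0 = not).
scores = [0.99, 0.97, 0.93, 0.88, 0.85, 0.80, 0.72, 0.65]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

def precision_at(threshold):
    """Fraction of predicted matches (score >= threshold) that are genuine."""
    predicted = [label for score, label in zip(scores, labels) if score >= threshold]
    return sum(predicted) / len(predicted) if predicted else float("nan")

# Benchmarked at a strict 99% threshold, precision looks perfect...
print(precision_at(0.99))  # 1.0
# ...but at the 80% default, a third of the reported matches are false.
print(precision_at(0.80))  # 0.666...
```

The point is simply that a single accuracy number is meaningless without the threshold at which it was measured.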

Beware!

Neglecting these complexities does a disservice to those most affected by careless deployment: the general public at large. Deep learning systems have already consumed thousands of such datasets, leaving vulnerable, unknowing populations exposed to gross invasions of privacy that are beginning to have serious repercussions. If this phenomenon is not checked soon, privacy itself risks becoming a thing of the past.
