Misinformation or Artifact in Machine Learning

Deep neural networks, multi-layered systems built to process images and other data through mathematical modeling, are a cornerstone of artificial intelligence. They are capable of producing sophisticated results, but they can also be fooled in ways that range from the relatively harmless, such as misidentifying one animal as another, to the potentially deadly, as when the network guiding a self-driving car misinterprets a stop sign as a signal to proceed.
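
To make the "multi-layered" description concrete, the sketch below stacks a few layers that transform an image into scores for a set of classes. The framework (PyTorch) and the layer sizes are illustrative assumptions, not details from the article.

```python
import torch
import torch.nn as nn

# A minimal multi-layered classifier: each layer transforms the data it
# receives from the previous one, ending in one score per class.
classifier = nn.Sequential(
    nn.Flatten(),              # turn a 28x28 grayscale image into a vector
    nn.Linear(28 * 28, 128),   # hidden layer: learn intermediate features
    nn.ReLU(),                 # non-linearity between layers
    nn.Linear(128, 10),        # output layer: one score per class
)

# Illustrative usage: class scores for one random "image".
scores = classifier(torch.rand(1, 1, 28, 28))
print(scores.shape)  # torch.Size([1, 10])
```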

New research suggests that the common assumptions about the cause of these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks. As automation, machine learning, and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, understanding the source of these apparent failures becomes ever more important.

Researchers call these failures "adversarial examples": cases in which a deep neural network misjudges an image or other data when confronted with information outside the training inputs used to build the network. They are rare, and they are called "adversarial" because they are often created or discovered by another machine learning network, a sort of brinksmanship within the machine learning world. The researchers argue that some of these adversarial events could instead be artifacts, and that we need to better understand what they are to know how reliable these networks really are.
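
As a rough illustration of how one machine learning system can be used to craft an adversarial example for another, the sketch below applies the fast gradient sign method, a standard technique chosen here for brevity and not one described in the article; the model, the input image, and the perturbation size epsilon are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A pretrained classifier standing in for the network being fooled.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_example(image, label, epsilon=0.01):
    """Nudge `image` by at most `epsilon` per pixel to raise the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clip back
    # to valid pixel values; the change is nearly invisible to a person.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Illustrative usage: a random 224x224 RGB "image" and an arbitrary class label.
image = torch.rand(1, 3, 224, 224)
label = torch.tensor([0])
adversarial = fgsm_example(image, label)
print(model(adversarial).argmax(dim=1))  # prediction may now differ from `label`
```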

The stakes are high: a security system based on facial recognition technology, for example, could be hacked to allow a breach. Previous research has found that, counter to historical assumptions, some adversarial examples occur naturally; they arise rarely and can be discovered only through the use of artificial intelligence.

According to Cameron Buckner, associate professor of philosophy at UH, these anomalies, or artifacts, are real, and researchers need to rethink how they approach them, because they have not been well understood. He offers the analogy of a lens flare in a photograph, a phenomenon that is not caused by a defect in the camera lens but is instead produced by the interaction of light with the camera. A lens flare potentially offers useful information, such as the location of the sun, if you know how to interpret it. That raises the question of whether adversarial events in machine learning that are caused by an artifact might also have useful information to offer.