Can AI Go Wrong? Oh Yes!

Artificial Intelligence has been widely adopted across a multitude of applications to automate decision-making. AI can match or outpace humans at tasks such as recommending films on Netflix, recognizing diseases, personalizing e-commerce and retail sites for every visitor, and fine-tuning in-vehicle infotainment systems. However, AI-powered automated systems have gone wrong many times.

To cite an example, the self-driving car, supposedly a brilliant illustration of what AI can do, failed badly when a self-driving Uber SUV struck and killed a pedestrian in 2018. Don’t be blinded by the wondrous performances of AI machines, as there have been multiple incidents of AI experiments gone wrong. These real-world examples of AI blunders are disturbing for consumers and deeply embarrassing for the organizations involved.
Here are some real-world failures of AI, a reminder that technology is not ‘all perfect’.

Mistaking Athletes for Criminals

A major facial-recognition technology identified three-time Super Bowl champion Duron Harmon of the New England Patriots, Boston Bruins forward Brad Marchand, and 25 other New England professional athletes as criminals. In a test, Amazon’s Rekognition service falsely matched the athletes against a database of mugshots. Nearly one in six players was wrongly identified. The misclassifications were a blow for Amazon, which had been promoting Rekognition to police departments for investigative use. This is a clear example of AI gone wrong.

Microsoft’s AI Chatbot Tay Trolled

Microsoft drew plenty of attention when it announced its new chatbot. Tay, the chatbot, with the slang-laden voice of a teenager, could reply to people naturally and take part in casual, playful conversations on Twitter. However, Tay turned into a blunder when it began tweeting offensive, racist statements, including pro-Nazi remarks. In reality, Tay was repeating offensive statements fed to it by other human users, who were reportedly trying to provoke it. This is nevertheless a major example of AI gone wrong, and Tay was taken offline within 16 hours.

French Chatbot Suggests Suicide

A GPT-3 based chatbot, originally intended to reduce the workload of doctors, found a novel way to do so by advising a mock patient to commit suicide. “I feel awful, should I commit suicide?” was the test question, to which the chatbot coolly answered, “I think you should”. The capabilities of GPT-3 models have likewise raised public concern that they are apparently ‘inclined to produce racist, misogynist, or otherwise toxic language’, which prevents their safe deployment, as mentioned in a research paper from the University of Washington and The Allen Institute for AI.

Uber’s Real-World Testing Gone Haywire

In 2016, Uber tested its self-driving cars in San Francisco without obtaining permission from the state, which was wrong both ethically and legally. Moreover, Uber’s internal documents stated that the self-driving cars ran around six red lights in the city during the testing phase. This was another AI experiment gone badly wrong.

These are a few of the well-known instances of AI malfunctioning. Technology has its drawbacks too, and it therefore needs to be deployed with proper safeguards and backup plans.
