Anxiety over racial bias in AI is growing, and the world needs ethical technology.

Machines can discriminate in harmful ways. The death of George Floyd and the #BlackLivesMatter protests in the US drew attention to police brutality, abuse, and rioting, and opened debates on many levels. Members of the AI research community made their own small gestures of support: researchers pledged to match donations to Black in AI, a nonprofit that promotes ideas, collaborations, and initiatives to include more Black people in the field of AI. One of the reasons is racial bias in technology and the flaws in AI itself.

Akin and Unleash Live are two AI-backed companies founded by Liesl Yearsley and Hanno Blankenstein respectively. Akin uses AI to build bots that can converse with humans, while Unleash Live provides real-time analysis of CCTV footage from security cameras and drones. Both companies started with a common mission: to build a fully defined ethical AI culture bound to good business practices.

Since bots are programmed to optimize themselves toward a goal, their ability to steer human behaviour toward unsustainable ends should be taken seriously. AI has already shown that it can change human behaviour, which is a terrifying prospect, as the sketch below illustrates.
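The following is a minimal, hypothetical sketch (not Akin's actual system; all names and the scoring model are invented) of why goal optimization alone can push a bot toward manipulation: the bot simply picks whichever reply maximizes a single engagement metric, and nothing in that objective accounts for the user's wellbeing.

```python
from typing import Callable, List


def choose_reply(candidates: List[str],
                 engagement_score: Callable[[str], float]) -> str:
    """Return the candidate reply with the highest predicted engagement."""
    return max(candidates, key=engagement_score)


if __name__ == "__main__":
    # Toy scoring model: longer, more emotionally charged replies score higher.
    def toy_score(reply: str) -> float:
        charged_words = {"urgent", "only", "now", "miss"}
        return len(reply) + 10 * sum(w in reply.lower() for w in charged_words)

    replies = [
        "Here is the information you asked for.",
        "Act now! This is your only chance, don't miss it.",
    ]
    # The optimizer favours the pushier reply because the metric rewards it.
    print(choose_reply(replies, toy_score))
```

The point of the toy example is that the "manipulation" is not programmed in anywhere; it falls out of optimizing a narrow goal.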

Unleash Live, on the other hand, has built AI that analyses video feeds from security cameras to support decision making, for example judging whether footpaths should be widened or detecting whether people are running from law enforcement. Its distinguishing choice is that it does not ingest or analyze any personal information, so the footage is of little use for government surveillance.
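Below is an assumed sketch of that privacy-preserving pattern (not Unleash Live's actual pipeline; the data structures and the stub detector are invented): each frame is reduced to an aggregate count and the imagery itself is never retained, so no personal information accumulates.

```python
from dataclasses import dataclass
from typing import Iterable, List, Tuple


@dataclass
class FrameStats:
    timestamp: float
    pedestrian_count: int


def summarize_feed(frames: Iterable[Tuple[float, list]], detect) -> List[FrameStats]:
    """Reduce each frame to an aggregate count; the frame itself is discarded."""
    stats = []
    for timestamp, frame in frames:
        stats.append(FrameStats(timestamp, detect(frame)))
    return stats


if __name__ == "__main__":
    # Stub standing in for any person-detection model: here a "frame" is just
    # a list of detected object labels, and we count the people.
    fake_frames = [
        (0.0, ["person", "person", "bicycle"]),
        (1.0, ["person", "car"]),
    ]
    counts = summarize_feed(fake_frames, lambda f: f.count("person"))
    print([(s.timestamp, s.pedestrian_count) for s in counts])
```

Keeping only counts is enough to answer planning questions like footpath width, without ever identifying anyone in the footage.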

Certain companies, however, are willing to provide these technologies to governments and police, a question that is now more contentious than ever. The incident in Minneapolis is making such companies rethink whether to continue providing services to governments, and tighter AI regulation is expected to follow.

Automated technology now has to confront some bitter truths, and many companies may soon be forced to abandon these technologies. Facial recognition in particular is expected to run into growing trouble. The need for a humanistic, less biased, ethically safe AI has never been clearer.
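As a hedged illustration of what "less biased" can mean in practice, the sketch below compares a face-recognition system's false-match rate across demographic groups; the group labels and records are synthetic, invented purely to show the calculation.

```python
from collections import defaultdict


def false_match_rate_by_group(records):
    """records: iterable of (group, predicted_match, actual_match) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:            # only non-matching pairs can produce false matches
            totals[group] += 1
            if predicted:
                errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals if totals[g]}


if __name__ == "__main__":
    synthetic = [
        ("group_a", False, False), ("group_a", True, False), ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ]
    # A large gap between groups is the kind of disparity critics point to.
    print(false_match_rate_by_group(synthetic))
```

A system whose error rates diverge sharply between groups is exactly the kind of flawed AI the article warns about.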