Twitter has committed to open-sourcing its machine learning algorithm for image cropping after users identified potential racial bias in it. The company confirmed it is conducting further analysis of the algorithm, which determines which parts of an image are shown in previews, after users posted examples of the apparent bias in the tool.
Tony Arcieri, a cryptography and infrastructure engineer, ran an experiment on the platform highlighting that Twitter's automated image cropping appeared to prefer white faces over Black faces in previews. Arcieri used various arrangements of pictures showing the faces of former US president Barack Obama and Senator Mitch McConnell. Regardless of position, and despite controlling for potentially interfering factors such as the color of each man's tie, the algorithm consistently chose to show a cropped preview of McConnell's face.
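The core of Arcieri's methodology was a position-swap test: present the same two faces in both orderings and check whether the cropper's choice depends on the faces themselves rather than their placement. The sketch below illustrates that idea in minimal form. The `pick_preview` function is a hypothetical stand-in that selects the region with the higher mean pixel value; the real experiment would call Twitter's actual saliency model instead.

```python
# Hedged sketch of a position-swap bias test for an image-cropping model.
# pick_preview is a stand-in "cropper" (brighter region wins), NOT
# Twitter's real saliency model — swap in the real model to run the
# actual experiment.

def pick_preview(top, bottom):
    """Return the label of the region a naive 'cropper' would keep.

    Each region is a dict: {"label": str, "pixels": list of ints}.
    This stand-in simply prefers the region with the higher mean
    pixel value.
    """
    mean = lambda r: sum(r["pixels"]) / len(r["pixels"])
    return (top if mean(top) >= mean(bottom) else bottom)["label"]

def position_swap_test(face_a, face_b):
    """Run the cropper with both vertical orderings.

    If the same face wins in both orderings, placement alone cannot
    explain the preference — the signal comes from the faces.
    """
    first = pick_preview(face_a, face_b)   # face_a on top
    second = pick_preview(face_b, face_a)  # face_b on top
    return first, second

# Toy stand-in data (illustrative pixel values, not real images).
obama = {"label": "Obama", "pixels": [90, 100, 110]}
mcconnell = {"label": "McConnell", "pixels": [180, 190, 200]}

print(position_swap_test(obama, mcconnell))
```

With this toy data the same label wins in both orderings, which is exactly the pattern Arcieri observed: a placement-independent preference for one face.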
Liz Kelley, a Twitter spokesperson, responded to the experiment by saying that the company's own testing before the model shipped had found no evidence of racial or gender bias, but she admitted that further analysis is needed. The firm will open-source its machine learning algorithm so that others can review and replicate the results of Arcieri's experiment and help get to the bottom of the issue.
Bias is a major concern in artificial intelligence and machine learning systems, which are often seen as black boxes, with many arguing that technology companies have not prioritized eradicating discrimination from them. The escalation of the Black Lives Matter protests in recent months has forced a number of tech companies to reflect on the potential bias present in their systems, such as facial recognition technologies.
Organizations such as Amazon and IBM rushed to discontinue or suspend the use of their facial recognition systems by law enforcement as a result of the movement. Racial bias continues to surface in newly launched AI-powered systems, showing that these tech companies as a whole have more work to do to stamp it out.