Facebook working to halt dangerous and false posts using AI

Facebook has faced criticism this past year from people who say the company is not doing enough to stem hate speech, online harassment, and the spread of false news stories.

Protecting the everyday activity of 1.62 billion daily users, who generate four petabytes of data including millions of photos, is no small task. But the company has been denounced for allowing large numbers of malicious groups to spread offensive and threatening posts, and for allowing ultra-right-wing conspiracy-theory groups such as QAnon to freely spread false political statements.

Facebook employs 15,000 content moderators to review reports of misbehavior ranging from political scheming and harassment to terroristic threats and child exploitation. They have typically handled reports chronologically, regularly allowing more severe allegations to go unaddressed for days while lesser issues were reviewed. On Friday, Facebook announced that it will bring machine learning (ML) into the moderation process.
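
To make the change concrete, one simple way severe reports could be surfaced ahead of older, lesser ones is to replace a first-in-first-out list with a priority queue. The sketch below is purely illustrative; the Report fields and severity values are hypothetical and not drawn from Facebook's system.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    # sort_key drives the ordering; lower values are handled first.
    sort_key: float
    post_id: str = field(compare=False)
    category: str = field(compare=False)

def enqueue(queue, post_id, category, severity):
    # Negate severity so the most severe reports surface first,
    # regardless of when they arrived.
    heapq.heappush(queue, Report(sort_key=-severity, post_id=post_id, category=category))

queue = []
enqueue(queue, "post-101", "spam", severity=0.2)
enqueue(queue, "post-102", "violent threat", severity=0.9)
enqueue(queue, "post-103", "harassment", severity=0.6)

# Reports come out ordered by severity, not by arrival order.
while queue:
    report = heapq.heappop(queue)
    print(report.post_id, report.category)
```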

Facebook will employ algorithms to detect the most severe issues and assign them to human moderators, while software will handle lower-level abuse such as copyright infringement and spam. Facebook said it will evaluate problematic posts according to three criteria: virality, severity, and the likelihood that they violate the rules. An offensive post threatening violence at the site of racial unrest, for instance, would be given high priority, either removed automatically or assigned to a moderator for immediate evaluation and action. During the COVID-19 pandemic, a study by a non-profit organization found that misleading content related to COVID-19 drew 3.8 billion views on Facebook.
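
As a rough illustration of that triage, the following Python sketch combines the three criteria into a single score and routes each post accordingly: clear-cut, high-impact violations are removed automatically, severe cases go to a moderator right away, and lower-level abuse is handled by software. The weights, thresholds, and routing labels are hypothetical assumptions, not Facebook's actual values.

```python
def triage(post):
    """Combine the three criteria Facebook described (virality, severity,
    likelihood of violating the rules) into one score. The weights and
    thresholds below are hypothetical placeholders, not Facebook's values."""
    score = (0.3 * post["virality"]
             + 0.4 * post["severity"]
             + 0.3 * post["violation_likelihood"])
    if score >= 0.9 and post["violation_likelihood"] >= 0.95:
        return "remove automatically"      # clear-cut, high-impact violation
    if score >= 0.5:
        return "priority human review"     # surfaced to a moderator immediately
    return "automated handling"            # lower-level abuse such as spam

# Example: a viral post threatening violence is flagged for automatic removal.
post = {"virality": 0.9, "severity": 0.95, "violation_likelihood": 0.98}
print(triage(post))
```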

Facebook officials said that applying machine learning is part of a continuing effort to stop the spread of offensive, dangerous, and misleading information while ensuring that legitimate posts are not censored.

One challenge Facebook faced was the virtually overnight creation of a massive protest group contesting the 2020 election count. A Facebook group of 40,000 members gathered to demand a recount. Facebook has not blocked the page.

There is nothing illegal about requesting a recount, but the unusual surge of chatter about alleged voting abuses, charges that were categorically dismissed this past week by officials in all 50 states, Republicans and Democrats alike, is a troubling reminder of the power of false information to shape political views.

Chris Palow, a member of Facebook’s Integrity team, commented that the system is about combining AI and human reviewers to make fewer total mistakes, adding that AI is never going to be perfect.