YouTube Using AI to Detect Videos that Violate Age Restrictions

YouTube will be using artificial intelligence to automatically restrict videos that are inappropriate for children. At present, the video-hosting site relies on human reviewers to flag videos considered unsuitable for viewers under 18; YouTube will soon hand much of that decision-making over to machine learning.

YouTube has more than 2 billion monthly users, making it the world’s biggest online video platform and a top destination for children’s videos in particular. Content for kids is one of the site’s most-watched categories, but YouTube has come under fire for a range of scandals, many of them involving children.

YouTube is building on its existing approach of using machine learning to detect content for review, developing and adapting the technology to automatically apply age restrictions. Uploaders can appeal a decision if they believe a restriction was applied incorrectly. YouTube also said it does not expect these changes to make much difference to the revenue creators earn from these videos.
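To make the idea concrete, here is a minimal sketch of how a model score could be mapped to an age-restriction decision with an appeal path. This is purely illustrative and assumes a hypothetical classifier that outputs a score between 0 and 1; the field names, threshold, and workflow are assumptions, not YouTube’s actual system.

```python
# Hypothetical sketch: mapping a classifier score to an age-restriction decision.
# The model, threshold, and appeal flow are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    restriction_score: float  # assumed output of an upstream ML model, in [0, 1]
    age_restricted: bool = False
    appealed: bool = False

RESTRICTION_THRESHOLD = 0.8  # assumed cutoff above which a restriction is applied

def apply_age_restriction(video: Video) -> Video:
    """Automatically mark a video as 18+ when the model score exceeds the threshold."""
    if video.restriction_score >= RESTRICTION_THRESHOLD:
        video.age_restricted = True
    return video

def appeal(video: Video) -> Video:
    """Flag a restricted video for human re-review when the uploader appeals."""
    if video.age_restricted:
        video.appealed = True  # in this sketch, an appeal routes the video back to a person
    return video

if __name__ == "__main__":
    v = apply_age_restriction(Video("abc123", restriction_score=0.91))
    print(v.age_restricted)    # True: restriction applied automatically
    print(appeal(v).appealed)  # True: the uploader can contest the decision
```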

Many of the videos likely to be picked up by these settings already violate the advertiser-friendly guidelines, so they already run limited or no adverts. YouTube also increased its use of artificial intelligence to detect harmful content during the coronavirus pandemic, and the company removed more videos in the second quarter of 2020 than it ever had before.

With fewer human moderators available, the site leaned more heavily on automated filters to check which videos violate its policies. YouTube has acknowledged that this automated removal system is less accurate than human review, and that it accepted a lower level of accuracy in order to remove as many pieces of violative content as possible. Other social media companies that rely mainly on artificial intelligence to keep their platforms safe have likewise run into issues with users attempting to subvert their systems.
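The trade-off described above can be shown with a toy precision/recall calculation: lowering the decision threshold removes more violative content (higher recall) at the cost of more mistaken removals (lower precision). The scores and labels below are made-up examples, not real moderation data.

```python
# Toy illustration of the accuracy trade-off: a lower threshold catches more
# violative videos but also removes more videos by mistake.
from typing import List, Tuple

def precision_recall(scored_videos: List[Tuple[float, bool]], threshold: float):
    """Compute precision and recall for removals at a given score threshold."""
    removed = [(s, violative) for s, violative in scored_videos if s >= threshold]
    true_positives = sum(1 for _, violative in removed if violative)
    total_violative = sum(1 for _, violative in scored_videos if violative)
    precision = true_positives / len(removed) if removed else 1.0
    recall = true_positives / total_violative if total_violative else 1.0
    return precision, recall

if __name__ == "__main__":
    # (model score, is the video actually violative?) -- fabricated example data
    data = [(0.95, True), (0.85, True), (0.7, False), (0.65, True),
            (0.6, False), (0.4, False), (0.3, True), (0.1, False)]
    for t in (0.8, 0.5):
        p, r = precision_recall(data, t)
        print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
    # threshold=0.8: precision=1.00, recall=0.50
    # threshold=0.5: precision=0.60, recall=0.75
```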