YouTube’s brand safety woes have continued into 2018, but the video sharing giant is attempting to bring advertisers and marketers back on side with new steps to delete inappropriate content before it is actually published on the platform.
Google revealed late last week that additional vetting procedures will now take place for premium content, which means its ever-increasing army of moderators will now cast an eye over every video in its exclusive ‘Google Preferred’ channels.
Fake news and extreme content were among the major social trends in 2017, but there are no signs that the industry’s major players have truly won the battle against them, despite both Google and Facebook’s continued commitment to rolling out important updates.
Google is now using artificial intelligence software to root out content not deemed suitable for advertising, alongside its team of more than 10,000 moderators, who delete inappropriate content flagged by users.
“We built Google Preferred to help our customers easily reach YouTube’s most passionate audiences and we’ve seen strong traction in the last year with a record number of brands,” an Alphabet spokesperson said. “As we said recently, we are discussing and seeking feedback from our brand partners on ways to offer them even more assurances for what they buy in the Upfronts.”
In other social media news, Facebook revealed over the weekend that it would make changes to its news feed to prioritise personal content posted by friends and family rather than videos and articles from publishers. CEO Mark Zuckerberg said the move was designed to bring people “closer together”.