YouTube is littered with extreme and misleading videos, and the company has been criticized for not doing enough to limit the dreck. But one place the Google unit has managed to clean up is YouTube’s homepage. Behind the scenes, Google has deployed artificial intelligence software that analyzes reams of video footage without human help, identifies troubling clips and blocks them from the homepage and home screen of the app. Its internal name is the “trashy video classifier,” according to three people familiar with the project.
The system, which has not been reported before, plays a key role in attracting and keeping viewers on YouTube’s homepage, building a foundation for a flurry of new advertising coming to the video service. Google tested the classifier as early as 2015, but deployed it broadly after a series of ugly incidents with children’s videos in 2017, according to a former YouTube staffer. One episode, dubbed “Elsagate,” featured popular videos showing the Disney princess in a variety of untoward situations. That year, YouTube also faced a torrent of advertiser boycotts over inappropriate videos that threatened to dent sales.
A Google spokeswoman confirmed the company has a classifier that screens videos for the YouTube homepage as well as the “watch next” panels that recommend other clips. The system analyzes feedback from users who report videos that are misleading, clickbait-y and sensational. It also taps other data on audience retention, likes and dislikes. This shows YouTube is capable of regulating the spread of troubling content. However, current and former employees say the company has only seriously focused on the problem when money is at stake, or — in the case of terrorist content — when outside pressure has forced it to act.
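To make the idea concrete, here is a minimal, purely illustrative sketch of how a classifier might combine the signals the article describes — user reports, audience retention, and like/dislike data — into a single score used to gate homepage eligibility. The field names, weights, and threshold are all assumptions for illustration; nothing here reflects YouTube’s actual system.

```python
from dataclasses import dataclass

@dataclass
class VideoSignals:
    """Aggregated feedback for one video (hypothetical fields)."""
    report_rate: float    # "misleading"/"clickbait" reports per 1,000 impressions
    avg_retention: float  # average fraction of the video viewers watch, 0..1
    dislike_ratio: float  # dislikes / (likes + dislikes), 0..1

def trashiness_score(v: VideoSignals) -> float:
    """Toy linear score: higher means more likely to be kept off the homepage.

    Weights and the report-rate cap are illustrative, not YouTube's.
    """
    report_term = min(v.report_rate / 10.0, 1.0)  # cap so extreme values saturate
    return 0.5 * report_term + 0.3 * (1.0 - v.avg_retention) + 0.2 * v.dislike_ratio

def eligible_for_homepage(v: VideoSignals, threshold: float = 0.5) -> bool:
    """A video is surfaced only if its trashiness score stays under the threshold."""
    return trashiness_score(v) < threshold

# Example: a well-received clip vs. a heavily reported, low-retention one.
good = VideoSignals(report_rate=0.5, avg_retention=0.7, dislike_ratio=0.05)
trashy = VideoSignals(report_rate=12.0, avg_retention=0.2, dislike_ratio=0.6)
print(eligible_for_homepage(good))   # True
print(eligible_for_homepage(trashy)) # False
```

A production system would learn such weights from labeled data rather than hand-tune them, but the gating step — score each candidate, then filter before ranking for the homepage — is the general pattern.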
The trashy video classifier was, in part, driven by financial incentives. As more people used smartphones to get online, they increasingly went straight to YouTube’s app and website, rather than watching YouTube clips on other destinations online. This was potentially good news for YouTube. When people watch ads directly on the service, rather than elsewhere on the internet, Google often gets more of the revenue, said a person familiar with the company. They asked not to be identified sharing private information. YouTube also doubled down on the trend by adding new homepage features including a “Trending” tab and personalized carousels of clips.
The problem, though, was that some popular videos surfacing on the homepage were cringe-worthy — clips of people fighting or someone taking a bruising fall, for instance. One former engineer recalled an internal meeting involving a still image from a video with male genitalia. This type of content got clicks, but it also turned some people away from the homepage and loomed as a potential publicity nightmare. YouTube’s terms of service prohibit pornography.
This is what drove YouTube to embrace the trashy video classifier. The system has been a success, helping to rid the homepage of many unsavory clips and keep viewers coming back. Google recently told marketers that “watch time” on YouTube’s homepage and app grew tenfold in the past three years. The value of this AI system became even clearer this month, when Alphabet Inc.’s Google released new advertising features that target YouTube’s growing direct audience.
For the first time, marketers can run promotions on smartphones as users scroll through YouTube’s app. That makes the video service look a lot more like Facebook Inc.’s Instagram, and could help Google grab more of the ad dollars that currently flow to social-media rivals. The company doesn’t share YouTube financials, but RBC Capital Markets estimates the video service generated more than $20 billion in sales last year. Improvements to the homepage audience and new ways of showing them ads could generate billions of extra dollars in revenue.
Still, YouTube’s AI software can only do so much. Machines still struggle to automatically parse what happens, and what is said, in video footage, particularly without text descriptions of the clips. More than 450 hours of video are uploaded to YouTube every minute. The company cites this gargantuan sum to justify repeated mistakes. The latest AI techniques are getting better at recognizing both the content and context of videos, although the data signals remain “noisy,” said Serena Yeung, a researcher at Stanford University who was advised by former Google AI executive Fei-Fei Li.
Could software be trained to identify something like “people fighting” from footage alone? “You could,” said Yeung. “It will probably have a lot of errors; it will work some of the time, but not all of the time.” She didn’t know about YouTube’s trashy video classifier. YouTube has mostly brushed off the advertiser boycotts at this point. Marketers are more confident the company can handle brand safety concerns, said Chris Apostle, chief media officer at iCrossing, an ad agency. He predicts advertisers will immediately test the latest YouTube ad tools. “YouTube is still very much a part of the conversation,” he said.