Explained: How YouTube’s recommendation system works

YouTube's VP of engineering Cristos Goodrow has shed some light on the matter in a detailed blog post.

Written by Shruti Dhapola, Edited by Explained Desk | New Delhi |
Updated: September 17, 2021 2:31:43 pm
YouTube’s recommendation system works in two main places: the YouTube homepage and the ‘Up Next’ panel.

How does YouTube determine which videos to recommend? How exactly does it decide which videos end up on one’s YouTube homepage? YouTube’s VP of engineering Cristos Goodrow has shed some light on the matter in a detailed blog post.

He tries to address concerns about whether sensationalist and misleading content, or what the company calls ‘borderline content’, gets more views on the platform. The post also explains how YouTube has been trying to ensure that it does not end up recommending such content.

In the blog post, Goodrow states that recommendations “drive a significant amount of the overall viewership on YouTube,” and that this is higher than “even channel subscriptions or search.” He also notes that YouTube wants to limit “views of borderline content from recommendations to below 0.5% of overall views” on the platform.

So let’s take a look at just how YouTube’s recommendation system works.

What is the ‘recommendation system’ on YouTube?

YouTube’s recommendation system works in two main places. One is the YouTube homepage, which usually shows a mix of content, from videos by subscribed channels to recommended videos the platform thinks you are likely to watch.

Recommendations also appear in the ‘Up Next’ panel: when you are done watching a video, YouTube lines up another video it thinks you are likely to watch.

The post explains that YouTube’s recommendation systems “don’t operate off of a ‘recipe book’ of what to do,” but are constantly evolving and rely on certain signals.

So what are these signals used by YouTube’s recommendation system?

The signals include clicks, watchtime, survey responses, and actions around videos such as sharing, liking, or disliking.

Clicks: If one clicks on a video, this is seen as a strong indicator that one will watch it. But the blog post notes that over the years YouTube has realised that clicking a video doesn’t necessarily mean it is high up on the viewer’s preference list. After all, deceptive, click-bait thumbnails are used to lure viewers in, who then realise that the video isn’t something they wanted.

Watchtime: This looks at which videos one has watched and for how long, in order to provide “personalised signals” to YouTube’s systems. For instance, if one is a fan of comedy content on the platform and spends hours watching it, such videos are likely to be all over the recommendations; it is a safe bet that the user will watch them. This is important considering an average US adult user spends around 41.9 minutes on the platform per day, according to an eMarketer report (https://www.emarketer.com/content/us-youtube-advertising-2020).
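
To make the idea concrete, here is a minimal, hypothetical sketch of how watchtime could be aggregated into a personalised signal. The topic labels, numbers and function names are illustrative assumptions, not YouTube’s actual code or data.

```python
from collections import defaultdict

# Hypothetical watch-history records: (topic, minutes_watched).
watch_history = [
    ("comedy", 35.0),
    ("comedy", 22.5),
    ("cricket", 8.0),
    ("news", 4.5),
]

def watchtime_by_topic(history):
    """Aggregate minutes watched per topic as a simple personalised signal."""
    totals = defaultdict(float)
    for topic, minutes in history:
        totals[topic] += minutes
    return dict(totals)

def top_topics(history, k=2):
    """Topics the viewer spends the most time on; candidates for recommendation."""
    totals = watchtime_by_topic(history)
    return sorted(totals, key=totals.get, reverse=True)[:k]

print(top_topics(watch_history))  # ['comedy', 'cricket']
```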

But not “all watchtime is equal,” which is why YouTube also considers other signals when deciding recommendations.

Survey Responses: YouTube says this is done to measure “valued watchtime”—the time spent watching a video that a user considers valuable. Surveys include asking users to rate videos out of five stars, and if a user rates a video low or high, there are usually follow-up questions. Only videos rated highly, with four or five stars, are counted as valued watchtime. YouTube has used responses from these surveys to train “a machine learning model to predict potential survey responses for everyone.”
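
As a rough illustration, a “valued watchtime” tally under that four-or-five-star rule might look like the sketch below; the data, threshold parameter and helper name are assumptions for illustration only.

```python
# Hypothetical survey records: (video_id, star_rating, minutes_watched).
survey_responses = [
    ("vid_a", 5, 12.0),
    ("vid_b", 2, 30.0),   # long watch, but rated low: not "valued"
    ("vid_c", 4, 7.5),
]

def valued_watchtime(responses, threshold=4):
    """Sum watchtime only for videos the viewer rated at or above the threshold."""
    return sum(minutes for _, stars, minutes in responses if stars >= threshold)

print(valued_watchtime(survey_responses))  # 19.5
```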

Sharing, Likes, Dislikes: Likes, shares, and dislikes on a video are also considered. The assumption is that if one enjoyed a video, one will press the like button or might even share it. This information is further used to “try to predict the likelihood that you will share or like further videos.” A dislike, obviously, is a strong indicator that the video didn’t appeal to the user.

But the blog post also explains that the importance allotted to each signal depends on the user. “If you’re the kind of person to share any video that you watch, including the ones that you rate one or two stars, our system will know not to heavily factor in your shares when recommending content,” explains the post.
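A toy sketch of that per-user weighting is shown below; the signal names, weights and scoring formula are invented for illustration and are not YouTube’s actual formula.

```python
# Hypothetical per-user signal weights. A habitual sharer gets a lower weight
# on the "share" signal, along the lines the blog post describes.
DEFAULT_WEIGHTS = {"click": 0.5, "watchtime": 2.0, "like": 1.0, "share": 1.0}

def adjusted_weights(share_rate, weights=DEFAULT_WEIGHTS):
    """Down-weight shares for users who share nearly every video they watch."""
    w = dict(weights)
    if share_rate > 0.8:          # shares more than 80% of watched videos
        w["share"] *= 0.2         # this signal says little about preference
    return w

def score_video(signals, weights):
    """Combine a video's per-user signals into a single recommendation score."""
    return sum(weights[name] * value for name, value in signals.items())

signals = {"click": 1, "watchtime": 0.6, "like": 1, "share": 1}
print(score_video(signals, adjusted_weights(share_rate=0.9)))   # habitual sharer
print(score_video(signals, adjusted_weights(share_rate=0.1)))   # selective sharer
```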

YouTube says that the recommendation system doesn’t have a “fixed formula” but “develops dynamically” and keeps up with changes in viewing habits.

What about misinformation? How does YouTube make sure it does not get recommended?

YouTube, like other social media platforms such as Facebook and Twitter, is facing criticism that it does not do enough to curtail the spread of misinformation. US President Joe Biden in particular has been very critical of both Facebook and YouTube for allowing the spread of misinformation about COVID-19 vaccines.

This is the context in which YouTube is opening up about how its recommendation system works. The company says it does not want to recommend low-quality or ‘borderline’ content, which is problematic but doesn’t violate its rules outright. Examples include videos claiming the Earth is flat or those that claim to offer a cure for cancer with ‘natural remedies.’


YouTube says it has been limiting recommendations of low-quality content since 2011, when it built “classifiers to identify videos that were racy or violent and prevented them from being recommended.” Then, in 2015, it started demoting videos with “sensationalistic tabloid content that was appearing on homepages.”

This classification system, in which a video in the news and information category gets tagged as either authoritative or borderline, relies on human evaluators. The blog post explains that “these evaluators hail from around the world and are trained through a set of detailed, publicly available rating guidelines.” They also rely on “certified experts, such as medical doctors when content involves health information.”

Evaluators try to answer questions about the video: whether the speaker has expertise, the reputation of the channel and the speaker, and so on. “The higher the score, the more the video is promoted when it comes to news and information content,” states the blog.

Videos are also assessed for whether the content is misleading, inaccurate, deceptive, hateful, or has the potential to cause harm. Based on all of these factors a video gets a score; the higher the score, the more YouTube’s recommendation system will promote it. A lower score means the video is classified as borderline and is demoted in recommendations.
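
To illustrate how such a score could feed into ranking, here is a minimal sketch in which low-scoring videos are demoted rather than removed. The threshold, the demotion multiplier and the example videos are assumptions, not YouTube’s actual values.

```python
# Hypothetical quality scores (0 to 1) from the evaluation process described above.
BORDERLINE_THRESHOLD = 0.3

def rank_candidates(candidates):
    """Sort candidate videos by relevance, demoting low-scoring (borderline) ones."""
    ranked = []
    for video_id, relevance, quality_score in candidates:
        if quality_score < BORDERLINE_THRESHOLD:
            relevance *= 0.1      # demote: rarely surfaced in recommendations
        ranked.append((video_id, relevance))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

candidates = [
    ("flat_earth_vid", 0.9, 0.1),   # high predicted interest, borderline quality
    ("news_explainer", 0.7, 0.9),
]
print(rank_candidates(candidates))  # news_explainer outranks the demoted video
```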

The company says these human evaluations have then been used to train its systems “to model their decisions, and we now scale their assessments to all videos across YouTube.”
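
A rough sketch of what scaling human assessments with a model could look like in code is given below; the toy features, labels and the choice of scikit-learn are assumptions for illustration, not a description of YouTube’s system.

```python
from sklearn.linear_model import LogisticRegression

# Each row: [channel_reputation, sources_cited, expert_speaker] as 0/1 features,
# with labels supplied by human evaluators. All values here are made up.
evaluator_features = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0]]
evaluator_labels = [1, 1, 0, 0]   # 1 = authoritative, 0 = borderline

model = LogisticRegression().fit(evaluator_features, evaluator_labels)

# Score a new, unlabelled video the same way evaluators would have.
new_video = [[1, 0, 0]]
print(model.predict_proba(new_video)[0][1])  # probability it is authoritative
```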

So does this borderline content get more engagement?

YouTube claims that “through surveys and feedback,” it has found that “most viewers do not want to be recommended borderline content, and many find it upsetting and off-putting.” It further claims that when it started demoting “salacious or tabloid-type content” there was an increase in watchtime of about 0.5 per cent over the course of 2.5 months.

The company says it has not seen evidence that such content is more engaging than other content. The post gives the example of flat earth videos, adding that even though many such videos are uploaded to the platform, they get far fewer views.

It has also revealed that when it started demoting borderline content in 2019, it saw “a 70% drop in watchtime on non-subscribed, recommended borderline content in the US.” It further claims that today “consumption of borderline content that comes from our recommendations is significantly below 1 per cent.” What this also means is that despite YouTube’s best efforts, some borderline content does end up getting recommended, even if it is a very small percentage.

The post also states that advertiser guidelines are such that many of these borderline content channels will find it impossible to monetise their videos. It notes that advertisers do not wish to be associated with such content.

So why doesn’t YouTube remove the misleading content?

YouTube admits that this kind of content isn’t good for it and that it hurts its image in the press, with the public and with policy-makers. But just like Facebook, YouTube doesn’t actually remove such content, saying that “misinformation tends to shift and evolve rapidly” and, “unlike areas like terrorism or child safety, often lacks a clear consensus.”


It also adds that “misinformation can vary depending on personal perspective and background.” This is a defence that has frustrated critics, who argue that YouTube is not doing enough to remove problematic content. The post admits that the company knows it is, at times, leaving up “controversial or even offensive content,” but adds that “it wants to focus on building a recommendation system that doesn’t promote this content.”

YouTube admits the problem is far from being solved but states that “it will continue refining and investing in its system to keep improving it.”
