A paper accepted to this year’s RecSys conference hints at a potential change to the way YouTube’s algorithms suggest videos to users.

In the paper’s introduction, the authors present a (perhaps purposefully) weak case for why an algorithmic change is necessary, suggesting that the current model contains an ‘implicit bias’ which causes a ‘feedback loop effect’.

Karen Hao of the MIT Technology Review has suggested that this paper may be Google’s attempt to rectify the widely reported problems with a recommendation engine that tends to suggest videos of a fringe political and/or conspiratorial nature.

The proposed change would take into account the position of a recommended video in the sidebar, discounting the influence of clicks on videos positioned near the top in order to reduce the feedback loop effect.
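
As a rough illustration of what ‘discounting’ position might look like, here is a simplified sketch using inverse-propensity weighting, where a click on a top slot earns less credit because part of its click-through rate comes from placement rather than relevance. The propensity values below are assumptions for illustration only; the paper itself learns the position effect with a small auxiliary model rather than fixing weights by hand.

```python
import numpy as np

# Assumed probability that a user even examines each sidebar slot (top first).
# These numbers are hypothetical, chosen only to illustrate the idea.
examination_propensity = np.array([0.95, 0.80, 0.60, 0.45, 0.30])

# Observed clicks per slot from an impression log.
clicks = np.array([1, 0, 1, 0, 0])

# Debiased credit: the click in slot 3 counts for more than the click in slot 1,
# since the top slot would have attracted clicks from position alone.
debiased_credit = clicks / examination_propensity
print(debiased_credit)  # ≈ [1.05, 0.00, 1.67, 0.00, 0.00]
```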

The authors of the paper acknowledge the difficulties of designing and evaluating systems which reduce implicit bias and openly state that asking users for explicit feedback on their recommendation engine ‘can hardly scale up’.

But can this simple change really be expected to collapse alt-right echo chambers? Or is something more fundamental amiss with YouTube’s system?

A 2016 paper (also presented at RecSys) describing YouTube’s deep-learning-based recommendation engine shows that the model trains over one billion parameters on data that includes details about both the video being watched and the user doing the watching.
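
To give a sense of the scale involved, here is a heavily simplified sketch of the candidate-generation idea described in that 2016 paper. All dimensions, feature choices, and names are illustrative rather than taken from the paper; the production system embeds millions of videos, which is where most of its parameters come from.

```python
import torch
import torch.nn as nn

NUM_VIDEOS = 100_000  # the real corpus is in the millions
EMBED_DIM = 256

class CandidateGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Embedding tables like this one account for the bulk of the parameter count.
        self.video_embed = nn.Embedding(NUM_VIDEOS, EMBED_DIM)
        # Averaged watch-history embeddings concatenated with dense user features,
        # passed through ReLU layers.
        self.tower = nn.Sequential(
            nn.Linear(EMBED_DIM + 8, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        # Output layer scores every video in the corpus (softmax at training time).
        self.output = nn.Linear(256, NUM_VIDEOS)

    def forward(self, watch_history_ids, user_features):
        history = self.video_embed(watch_history_ids).mean(dim=1)
        user_vector = self.tower(torch.cat([history, user_features], dim=-1))
        return self.output(user_vector)

model = CandidateGenerator()
logits = model(torch.randint(0, NUM_VIDEOS, (1, 50)),  # 50 recently watched videos
               torch.randn(1, 8))                      # dense user features
top_candidates = logits.topk(10).indices               # 10 candidate videos
```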

Working out why a particular video was recommended, then, is effectively impossible. That leaves Google with few options beyond trialling near-random algorithm changes and seeing which ones positively impact the behaviour of its users.

It is important to remember, however, how positive behaviour changes are assessed. In Google’s eyes, a recommendation system is doing its job if it keeps the viewer engaged. And perhaps it’s this false equivalence between engagement and satisfaction that makes ousting the echo chambers so difficult.

In the absence of explicit feedback, the paper’s authors have chosen to measure user ‘satisfaction’ via like and comment counts, forgetting that these metrics can be gamed by content creators (simply by asking for them, for example).
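
To see why that matters, here is one hypothetical way such ‘satisfaction’ proxies might be folded into a ranking score. The weights and signal names are assumptions, not values from the paper, which learns several objectives jointly and combines them with a tuned weighted sum; the point is only that every input to a score like this can be inflated by a creator’s call to action.

```python
# Hypothetical combination of predicted engagement and satisfaction signals.
def ranking_score(p_click: float, p_like: float, p_comment: float) -> float:
    weights = {"click": 1.0, "like": 0.5, "comment": 0.3}  # illustrative weights
    return (weights["click"] * p_click
            + weights["like"] * p_like
            + weights["comment"] * p_comment)

# A creator who successfully asks viewers to like and comment raises the inputs
# to this score without necessarily producing a more satisfying video.
print(ranking_score(p_click=0.20, p_like=0.05, p_comment=0.01))
```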

Relying on metrics around behaviours that are subject to varied social pressures, viewer suggestibility, and creator manipulation will never lead to algorithms that represent the true intentions and desires of users.

As long as YouTube equates viewing time with algorithmic success (and does so inside a black box), it will rightly suffer criticism for creating environments that reward radical attitudes.


