Just for you
Content selected “just for you” on social media feeds may seem harmless, but new research shows that it can shape preferences in potentially harmful ways, such as by promoting videos about political conspiracy theories. The algorithms making these suggestions determine what appears in our social media feeds, which videos YouTube recommends to us and which ads we are shown across the internet.
A team of Berkeley engineers has developed a model that can evaluate whether these algorithms, known as recommender systems, are manipulative. The research, which focused on reinforcement learning (RL)-based recommender systems, is particularly timely because RL recommenders are starting to be used on many popular platforms, including YouTube.
Their findings showed that RL-based recommenders can lead to manipulative behavior toward users. When the team modeled users’ natural preference shifts, meaning the shifts that occur in the absence of any recommender, these proved to be very different from the preference shifts induced by the RL recommenders. To counter the risks from this technology, the team proposed a way for recommenders to mimic natural shifts while still optimizing key metrics like engagement. This framework could be used to assess whether current recommendation algorithms are already manipulating users and to design new algorithms that avoid undesirable effects on user preferences.
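To make the general idea concrete, here is a minimal sketch of what penalizing induced preference shifts could look like: a recommender’s reward combines an engagement term with a penalty on how far its induced preference shift strays from an estimate of the user’s natural shift. This is an illustration only; the toy drift models, function names and penalty weight below are assumptions, not the authors’ actual implementation.

```python
import numpy as np

# Hypothetical sketch: reward = engagement minus a penalty for pushing user
# preferences away from where they would have drifted naturally.
# All models and names here are illustrative, not from the paper's code.

def natural_shift(prefs, drift_rate=0.01, rng=None):
    """Stand-in model of how preferences drift on their own (no recommender)."""
    rng = rng or np.random.default_rng(0)
    drifted = prefs + drift_rate * rng.normal(size=prefs.shape)
    return drifted / drifted.sum()  # keep preferences a probability distribution

def induced_shift(prefs, recommendation, influence=0.1):
    """Stand-in model of how showing an item nudges preferences toward it."""
    shifted = (1 - influence) * prefs
    shifted[recommendation] += influence
    return shifted / shifted.sum()

def penalized_reward(engagement, induced_prefs, natural_prefs, penalty_weight=5.0):
    """Engagement minus a penalty on the gap between induced and natural shifts."""
    gap = np.abs(induced_prefs - natural_prefs).sum()  # simple L1 distance
    return engagement - penalty_weight * gap

# Toy usage: one step of recommending item 2 to a user with 4 interest categories.
prefs = np.array([0.4, 0.3, 0.2, 0.1])
nat = natural_shift(prefs)
ind = induced_shift(prefs, recommendation=2)
engagement = ind[2]  # pretend engagement tracks interest in the shown item
print(penalized_reward(engagement, ind, nat))
```

In this sketch, a recommender that chases engagement by aggressively reshaping preferences pays a cost proportional to how far it pulls them from the natural trajectory, which captures the spirit of mimicking natural shifts while still optimizing engagement.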
The study was conducted at the laboratory of Anca Dragan, associate professor of electrical engineering and computer sciences, with Ph.D. student Micah Carroll, Dylan Hadfield-Menell (Ph.D.’21 EECS) and computer science professor Stuart Russell.
Learn more: Just for you; Estimating and penalizing induced preference shifts in recommender systems (PDF)