Abstract:
Recommendation engines are interactive systems that aim to predict users' top preferences based on their choices. A limitation of recommender systems is that, over time, the recommendations narrow in scope: popular items are favored while less frequently visited ones are censored. Consequently, users are prevented from exploring new alternatives. This so-called filter bubble is the inevitable outcome of self-reinforcing feedback loops, in which the preference estimates depend on the user choices. Self-reinforcing feedback loops are both cause and effect of the over- and under-presentation of some content in interactive recommender systems. This results in inaccurate user preference estimates, namely, overestimation of over-presented content and underestimation of under-presented content. The burden is on the recommender system to eliminate these self-reinforcing feedback loops and obtain more accurate preference estimates. In this regard, we define "fairness" criteria for an interactive recommender system that account for the adverse impacts of these self-reinforcing feedback loops. We also claim that designing an intelligent presentation mechanism is essential to meet those criteria. To support this claim, we study two models that either explicitly incorporate or ignore the systematic and limited exposure to alternatives. By simulating real-world biases, we demonstrate that ignoring systematic presentation results in overestimation of promoted options and underestimation of censored alternatives. Simply conditioning on the limited exposure remedies these biases.