Recommending Ourselves to Death: Values in the Age of Algorithms

In his article, published in 2023 as a chapter in the anthology "Recommender Systems: Legal and Ethical Issues", Scott Robbins problematizes the algorithms behind so-called recommender systems, using YouTube as his central example.

Recommender systems (RSs) are increasingly being used for many purposes, and this creates a deeply problematic situation: when used for these purposes, RSs are likely to go wrong, because several distorting forces work against them. First, RSs are built on past evaluative standards, which will often fail to align with current ones. Second, their algorithms must reduce everything to computable information, which in these cases will often be incorrect and will leave out information we normally consider important for such evaluations. Third, these algorithms must rely on proxies for the evaluative 'good'; since these proxies are not equal to the good itself, they will often go off track. Finally, these algorithms are opaque: we have no access to the considerations that lead to a particular recommendation, and without those considerations we are taking the machine's output on faith.

These algorithms also have the potential to modify how we evaluate. YouTube, for instance, has explicitly modified its algorithm to 'expand our tastes'. This is an extraordinary amount of power, and one that, if my first argument goes through, is likely to take us away from the good. The algorithm influences our behavior, which in turn feeds back into the algorithms that make the recommendations. It is important that we establish some meaningful human control over this process before we lose control over the evaluative.