Recommender systems (RSs) are increasingly being used for many purposes, and this creates a deeply problematic situation: for these purposes, RSs are likely to go wrong, because several distorting forces work against them. First, RSs are built on past evaluative standards, which will often fail to align with current evaluative standards. Second, the algorithms powering RSs must reduce everything to computable information, which in these cases will often be incorrect and will leave out information we normally consider important for such evaluations. Third, these algorithms must use proxies for the evaluative 'good'; the proxies are not identical to the good, so they will often go off track. Finally, the algorithms are opaque: we do not have access to the considerations that lead to a particular recommendation, and without those considerations we are taking the machine's output on faith.

These algorithms also have the potential to modify how we evaluate. YouTube, for example, has explicitly modified its algorithm to 'expand our tastes'. This is an extraordinary amount of power, and one that, if my first argument goes through, is likely to take us away from the good. That influence on our behavior feeds back into the algorithms that make the recommendations. It is important that we establish meaningful human control over this process before we lose control over the evaluative.
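To make the proxy worry concrete, consider a toy ranking function that scores videos by predicted watch time. This is a minimal, hypothetical sketch: the video data, field names, and numbers are invented for illustration and are not drawn from YouTube's actual system. It shows how optimizing a proxy can promote an item the user would not endorse on reflection.

```python
# Minimal, hypothetical sketch of the proxy problem described above.
# 'predicted_watch_time' stands in for the evaluative 'good'; all
# fields and values here are invented for illustration only.

videos = [
    {"title": "Calm documentary", "predicted_watch_time": 12.0, "endorsed_on_reflection": True},
    {"title": "Outrage clip",     "predicted_watch_time": 47.0, "endorsed_on_reflection": False},
]

def recommend(candidates):
    """Rank purely by the proxy metric, as the argument assumes."""
    return max(candidates, key=lambda v: v["predicted_watch_time"])

top = recommend(videos)
print(top["title"])                   # -> "Outrage clip"
print(top["endorsed_on_reflection"])  # -> False: the proxy and the good come apart
```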
4 August 2023
Recommending Ourselves to Death: Values in the Age of Algorithms
In his contribution, published in 2023 as a chapter in the edited volume "Recommender Systems: Legal and Ethical Issues", Scott Robbins problematizes the algorithms of so-called recommender systems and illustrates this with the example of YouTube.
Universität Bonn