Algorithm credulity: Human and algorithmic advice in prediction experiments
Seminar link: https://loyola.webex.com/meet/rede3c
Abstract: This study examines algorithm credulity, the tendency to follow faulty algorithmic advice without critical evaluation. Using a prediction task that compares human and algorithmic advisors, we find that participants are more likely to follow the same deficient advice when it is issued by an algorithm than when it is issued by a human. We show that algorithm credulity reduces expected earnings by 13%. To explain this finding, we propose the Algo-Intelligibility-Credulity Model, which posits that people are more likely to perceive an unpredictable and deficient piece of advice as intelligible when it is produced by an algorithm than by a human. These results imply that humans may be particularly susceptible to the influence of malicious algorithmic advice, potentially because our evolved epistemic vigilance is limited when applied to interactions with automated agents.
In-person attendance: Room E1-1-01, Córdoba Campus
Keywords: Algorithm credulity, Algorithmic advice, Intelligibility, Laboratory experiments, Trust