ALGORITHMS AND JUSTICE: AN ERA OF UNCERTAINTY AND DOUBT
DOI: https://doi.org/10.29121/shodhkosh.v5.i3.2024.5268
Keywords: Algorithmic Complacency, Epistemic Agency, Choice Architecture
Abstract [English]
This paper examines the emerging concept of algorithmic complacency and the questions it raises about human epistemic agency in AI-governed decision scenarios. Reading case studies of algorithmic choice architectures through the philosophical frameworks of epistemic and hermeneutical injustice, it analyses how algorithmic decision environments systematically interfere with our ability to form, assess, and revise beliefs. It argues that adaptive choice architectures and hyper-nudging techniques create situations in which epistemic autonomy is undermined, while algorithmic opacity introduces new hermeneutical gaps that impair our capacity for meaning-making and for navigating the world. Drawing on case studies of recommendation systems, decision-support tools, and content-curation platforms, the analysis charts how AI-based architectures of choice may fundamentally reshape human epistemic practice, and identifies points where intervention could help preserve epistemic agency.
License
Copyright (c) 2024 Nibedita Basu, Dr. Rhishikesh Dave

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY) License.
Under the CC BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author.
No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.