ALGORITHMIC BIAS AND SOCIAL INEQUALITY IN AI DECISION-MAKING SYSTEMS FROM A SOCIOLOGICAL PERSPECTIVE

Authors

  • Rahul Jha, Dr. B. R. Ambedkar University, Delhi

DOI:

https://doi.org/10.29121/shodhkosh.v5.i6.2024.6167

Keywords:

Algorithmic Bias, Social Inequality, Artificial Intelligence, Sociological Perspective, Decision-Making Systems, Intersectionality, Critical Race Theory, Institutional Discrimination

Abstract [English]

The rapid adoption of Artificial Intelligence (AI) in decision-making systems has brought efficiency gains but has also intensified debates about fairness, equity, and social justice. From a sociological standpoint, algorithmic bias is not merely a technical anomaly but a structural reflection of pre-existing social inequalities embedded in historical data, institutional practices, and cultural norms. This study investigates how algorithmic systems in domains such as criminal justice, health care, housing, and employment perpetuate, and sometimes exacerbate, inequality. Through an extensive review of empirical studies, this work examines mechanisms of bias across the AI lifecycle and the sociological theories that explain them, including intersectionality, critical race theory, and social stratification. It also presents case-based statistical evidence, such as disparities in facial recognition accuracy, credit scoring, and risk assessment tools. The study further outlines methodological approaches for studying AI bias sociologically, discusses results through both quantitative metrics and qualitative interpretations, and suggests future directions for designing systems that are substantively fair. The findings emphasize that addressing algorithmic bias requires an interdisciplinary approach bridging sociology, computer science, and policy.
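
To make the quantitative side of this discussion concrete, the sketch below computes two group-fairness metrics of the kind commonly used to quantify disparities in risk assessment and similar tools: the demographic parity gap (difference in positive-prediction rates between groups) and the false positive rate gap. This is a minimal illustrative example, not code or data from the study; the synthetic labels, group names, and function names are invented for demonstration.

# Minimal sketch of two group-fairness metrics (demographic parity gap and
# false positive rate gap). All data below are synthetic and illustrative;
# they are not drawn from the study or from any cited dataset.

def rate(flags):
    # Share of positive outcomes (1s) in a list of binary flags.
    return sum(flags) / len(flags) if flags else 0.0

def demographic_parity_gap(y_pred, group):
    # Difference in positive-prediction rates between group "a" and group "b".
    preds_a = [p for p, g in zip(y_pred, group) if g == "a"]
    preds_b = [p for p, g in zip(y_pred, group) if g == "b"]
    return rate(preds_a) - rate(preds_b)

def false_positive_rate_gap(y_true, y_pred, group):
    # Difference in false-positive rates: predicted 1 when the true label is 0.
    def fpr(label):
        negatives = [p for t, p, g in zip(y_true, y_pred, group)
                     if g == label and t == 0]
        return rate(negatives)
    return fpr("a") - fpr("b")

if __name__ == "__main__":
    # Hypothetical risk-assessment outputs for two demographic groups.
    y_true = [0, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
    group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("False positive rate gap:", false_positive_rate_gap(y_true, y_pred, group))

A sizable gap on either metric (here 0.6 and roughly 0.67 for the toy data) is the kind of quantitative signal that, from a sociological perspective, would then be interpreted against the institutional and historical context in which the system operates.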

References

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bartlett, R., Morse, A., Stanton, R., & Wallace, N. (2022). Consumer-lending discrimination in the FinTech era. Journal of Financial Economics, 143(1), 30–56. https://doi.org/10.1016/j.jfineco.2021.05.047

Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. MIT Press. https://doi.org/10.7551/mitpress/6352.001.0001

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91). PMLR.

Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047

Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989(1), 139–167.

Crenshaw, K. (1991). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6), 1241–1299. https://doi.org/10.2307/1229039

Delgado, R., & Stefancic, J. (2023). Critical race theory: An introduction (4th ed.). NYU Press. https://doi.org/10.18574/nyu/9781479818297.001.0001

Feagin, J. R., & Feagin, C. B. (2011). Racial and ethnic relations (9th ed.). Pearson.

Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19. https://doi.org/10.1111/j.1740-9713.2016.00960.x

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 469–481). ACM. https://doi.org/10.1145/3351095.3372828

Suresh, H., & Guttag, J. (2019). A framework for understanding sources of harm throughout the machine learning life cycle. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 113–123). ACM. https://doi.org/10.1145/3287560.3287598

Published

2024-06-30

How to Cite

Jha, R. (2024). ALGORITHMIC BIAS AND SOCIAL INEQUALITY IN AI DECISION-MAKING SYSTEMS FROM A SOCIOLOGICAL PERSPECTIVE. ShodhKosh: Journal of Visual and Performing Arts, 5(6), 3151–3156. https://doi.org/10.29121/shodhkosh.v5.i6.2024.6167