ETHICS AND ARTIFICIAL INTELLIGENCE (AI) IN THE AGE OF AUTOMATION: REEXAMINING MORAL FRAMEWORKS IN TECHNO-ETHICAL DILEMMAS

Authors

  • Dr. Jetal J. Panchal, Assistant Professor, M. B. Patel College of Education (CTE), Sardar Patel University, Vallabh Vidyanagar, Anand, Gujarat, India. https://orcid.org/0000-0002-0663-6339

DOI:

https://doi.org/10.29121/shodhkosh.v5.i1.2024.4851

Keywords:

Ethics, Artificial Intelligence (AI), Automation, Moral Framework

Abstract [English]

This research paper investigates the intricate relationship between ethics and artificial intelligence (AI) in the era of automation, with a primary focus on reevaluating how existing moral frameworks address techno-ethical dilemmas. As AI and automation technologies advance rapidly, they raise profound ethical concerns, from algorithmic bias and privacy breaches to the ethical implications of job displacement. The research question at the core of this study is: How do existing moral frameworks in technology ethics, such as utilitarianism, deontology, virtue ethics, rights-based ethics, care ethics, and feminist ethics, address the ethical challenges posed by AI and automation? To answer this question, a comprehensive literature review is conducted, analyzing the historical development of AI and automation, ethical concerns in these fields, and prominent case studies. Methodologically, a qualitative approach is employed, drawing on case studies and examples to illustrate the real-world application of different moral frameworks to techno-ethical dilemmas. The key findings of this research demonstrate that existing moral frameworks provide valuable insights into the ethical dimensions of AI and automation, but they also exhibit limitations when addressing complex, multifaceted challenges. The study reveals that no single framework can comprehensively address all ethical concerns in this domain; instead, a more pluralistic and context-aware approach to AI ethics is recommended, one that acknowledges the importance of diverse perspectives and cultural contexts. The implications of this research are profound, as they call for a reevaluation of current practices in AI development, policy-making, and ethical guidelines. By recognizing the limitations of traditional moral frameworks and embracing a more inclusive and adaptable approach, we can navigate the ethical complexities of AI and automation more effectively, fostering a future in which technology aligns more harmoniously with human values and societal well-being.

Published

2024-06-30

How to Cite

Panchal, J. (2024). ETHICS AND ARTIFICIAL INTELLIGENCE (AI) IN THE AGE OF AUTOMATION: REEXAMINING MORAL FRAMEWORKS IN TECHNO-ETHICAL DILEMMAS. ShodhKosh: Journal of Visual and Performing Arts, 5(1), 3072–3082. https://doi.org/10.29121/shodhkosh.v5.i1.2024.4851