ALGORITHMIC ETHICS IN AI-CREATED ARTWORKS
DOI: https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6740

Keywords: Algorithmic Ethics, AI-Generated Art, Cultural Representation, Bias Mitigation, Style Mimicry, Dataset Governance, Deepfakes

Abstract
This paper explores the evolving ethical issues raised by AI-generated artworks and how algorithms, datasets, model-layer behavior, cultural sensitivity, and governance structures shape artistic outcomes. The research evaluates several dimensions of ethical risk, including the propagation of bias, stylistic appropriation, misrepresentation of culture, and manipulation of identity. Drawing on a multi-layered analytical framework supported by graphs, tables, and case studies, the study finds that the principal ethical vulnerabilities stem from imbalanced datasets, opaque semantic-layer reasoning, and inadequate protection of culturally sensitive material. To illustrate how ethical failures occur in practice, the paper presents five real-world case studies: unauthorized style mimicry, distortion of Indigenous symbols, demographic bias in portrait generation, exploitation of community heritage datasets, and identity manipulation through deepfakes. The results show that governance interventions, including consent-driven datasets, fairness auditing, cultural-protection mechanisms, and identity safeguards, can substantially reduce harmful outputs when applied consistently. The paper concludes with an overview of future directions centered on contextual cultural intelligence, adaptive governance, semantic interpretability, and community-oriented data rights. The overall aim is to inform the development of ethically accountable generative AI applications that respect creative authorship, culture, and social trust.
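One of the governance interventions noted above, fairness auditing of generated outputs, can be illustrated with a minimal sketch. The example below is not drawn from the paper; the group labels, the parity threshold, and the audit_representation() helper are hypothetical, and it assumes a batch of generated portraits has already been annotated with demographic group labels.

# Minimal sketch (assumed workflow, not the paper's method): flag demographic
# groups that are under-represented in a batch of generated portraits.
from collections import Counter

def audit_representation(group_labels, parity_threshold=0.8):
    """Return each group's share of outputs and the groups whose share falls
    below parity_threshold times the share of the best-represented group."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    max_share = max(shares.values())
    flagged = {g: s for g, s in shares.items() if s < parity_threshold * max_share}
    return shares, flagged

if __name__ == "__main__":
    # Toy annotations standing in for labels on a batch of generated portraits.
    labels = ["group_a"] * 60 + ["group_b"] * 25 + ["group_c"] * 15
    shares, flagged = audit_representation(labels)
    print("Observed shares:", shares)
    print("Groups below parity threshold:", flagged)

In a fuller audit, the threshold and group taxonomy would be set with the affected communities, consistent with the consent-driven and community-oriented data governance the paper advocates.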
License
Copyright (c) 2025 Dr. Rajita Dixit, Dr. Jayashree Patil, Dr. Ketaki Anay Pujari, Dr. Devendra Puntambekar, Yuvraj Parmar, Gurpreet Kaur

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission is required from the author or the journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.