VISUAL SEMANTICS OF AI-GENERATED PAINTINGS
DOI: https://doi.org/10.29121/shodhkosh.v6.i3s.2025.6801
Keywords: AI-Generated Art, Visual Semantics, Diffusion Models, Computational Aesthetics, Machine Creativity
Abstract [English]
This paper examines the visual semantics of AI-generated paintings, focusing on the intersection of computational creativity and human aesthetic interpretation. With the rise of powerful image-generation models, including Generative Adversarial Networks (GANs) and diffusion models, machines can now produce artworks of considerable visual complexity and symbolic richness. It remains an open question, however, whether these visual products carry genuine semantic depth or merely imitate human artistic intent. The study adopts a mixed-method design combining computational image analysis with qualitative semantic interpretation to explore how meaning is constructed and perceived in AI-generated art. A curated dataset of AI-generated paintings was analyzed using CLIP, DALL·E, and Midjourney to extract visual features and project them onto conceptual and emotional planes. Drawing on semiotic and aesthetic theory, the paper identifies recurring patterns in color, composition, and symbolism through which AI models encode cultural and perceptual information. The findings indicate that although AI systems can approximate human-like semantics through visual correlations learned in training, their outputs remain essentially derivative: grounded in training data and probabilistic associations rather than in original creative intent. The discussion considers how AI-generated paintings challenge traditional boundaries of authorship and artistic meaning, proposing a new paradigm in which human users collaborate in constructing semantics out of algorithmic aesthetics.
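The abstract describes projecting CLIP visual features onto conceptual and emotional planes. The sketch below illustrates one plausible way such a projection could be set up; it is not the authors' actual pipeline. It scores a single painting against a handful of hypothetical concept prompts using an off-the-shelf CLIP checkpoint via the Hugging Face transformers library; the checkpoint name, concept list, and image path are all illustrative assumptions.

# Minimal sketch (assumed setup, not the paper's exact method): scoring one
# AI-generated painting against a small set of conceptual/emotional anchors
# with CLIP. Requires torch, transformers, and Pillow.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical conceptual/emotional anchors onto which the painting is projected.
concepts = ["serenity", "melancholy", "chaos", "sacred symbolism", "urban decay"]
prompts = [f"a painting evoking {c}" for c in concepts]

image = Image.open("ai_painting_001.png")  # placeholder path to one painting
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into
# a distribution over the concept anchors for this painting.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for concept, score in zip(concepts, scores.tolist()):
    print(f"{concept:>18}: {score:.3f}")

Aggregating such per-concept scores across a curated dataset of DALL·E and Midjourney outputs would yield the kind of conceptual and emotional mapping the abstract refers to.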
Copyright (c) 2025 Yogesh

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY licence, authors retain copyright, and anyone may download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author.
It is not necessary to ask for further permission from the author or journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.