REINFORCING CULTURAL NARRATIVES USING AI-GENERATED DIGITAL ART

Authors

  • Akhilesh Kumar Khan Greater Noida, Uttar Pradesh 201306, India.
  • Prince Kumar Associate Professor, School of Business Management, Noida International University, 203201, India
  • Gunveen Ahluwalia Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, Solan, 174103, India
  • Dr. Umakanth S., Professor, Department of Management Studies, JAIN (Deemed-to-be University), Bengaluru, Karnataka, India
  • Amit Kumar Centre of Research Impact and Outcome, Chitkara University, Rajpura- 140417, Punjab, India
  • Kirti Jha Assistant Professor, Department of Development Studies, Vivekananda Global University, Jaipur, India
  • Suhas Bhise Department of E&TC Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, 411037, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i3s.2025.6754

Keywords:

AI-Generated Art, Cultural Narratives, Digital Heritage, Human–Machine Collaboration, Algorithmic Semiotics, Generative Models, Folk Art Preservation

Abstract [English]

This paper examines how artificial intelligence (AI) can support and reshape cultural narratives through digital art. By applying generative models such as StyleGAN2 and Stable Diffusion to a corpus of traditional folk motifs, the research develops an algorithmic framework of cultural semiotics that treats AI as a collaborative meaning-making agent. Using Madhubani art as a case study, the work combines computational modeling with community-based analysis to investigate whether generative algorithms can preserve aesthetic authenticity without constraining creative innovation. Quantitative results, measured with the Fréchet Inception Distance (FID), the Structural Similarity Index (SSIM), and viewer perception scores, indicate that hybrid human-AI collaboration offers the best balance, preserving symbolic depth while increasing emotional resonance. Qualitative analysis further shows that AI systems, when trained under ethical safeguards, can encode, reconstruct, and recontextualize cultural symbols and thereby sustain narratives across generations. The results confirm that AI is not a substitute for cultural tradition but an extension of it, offering a sustainable approach to digital heritage preservation and cross-cultural creativity. The paper concludes by recommending participatory, transparent, and explainable AI models to safeguard cultural integrity in emerging digital art practices.
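
The two image-level metrics named in the abstract can be reproduced with standard tooling. The snippet below is a minimal sketch, not the authors' released pipeline: it assumes the reference and generated motifs are already loaded as uint8 RGB tensors, and the function name evaluate_batch, the choice of torchmetrics, and the tensor shapes are illustrative assumptions.

```python
# Minimal sketch of the two image metrics cited in the abstract (FID and SSIM),
# computed here with torchmetrics; the FID metric additionally requires the
# torch-fidelity backend to be installed. Names and shapes are illustrative.
import torch
from torchmetrics.image import StructuralSimilarityIndexMeasure
from torchmetrics.image.fid import FrechetInceptionDistance


def evaluate_batch(real_imgs: torch.Tensor, generated_imgs: torch.Tensor) -> dict:
    """Compare AI-generated motifs with reference scans.

    Both inputs are expected as uint8 tensors of shape (N, 3, H, W), e.g. crops
    of digitised Madhubani panels and their generated counterparts. FID needs
    at least two images per set to estimate feature covariances.
    """
    # FID compares Inception-feature statistics of the two image sets;
    # lower values mean the generated distribution is closer to the real one.
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_imgs, real=True)
    fid.update(generated_imgs, real=False)

    # SSIM is a paired structural comparison; it expects float images, here
    # rescaled to [0, 1], and equal numbers of real and generated samples.
    ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
    ssim_score = ssim(generated_imgs.float() / 255.0, real_imgs.float() / 255.0)

    return {"fid": fid.compute().item(), "ssim": ssim_score.item()}


if __name__ == "__main__":
    # Random stand-ins for real data, used only to show the expected call pattern.
    real = torch.randint(0, 256, (8, 3, 299, 299), dtype=torch.uint8)
    fake = torch.randint(0, 256, (8, 3, 299, 299), dtype=torch.uint8)
    print(evaluate_batch(real, fake))
```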

Published

2025-12-20

How to Cite

Khan, A. K., Kumar, P., Ahluwalia, G., S, U., Kumar, A., Jha, K., & Bhise, S. (2025). REINFORCING CULTURAL NARRATIVES USING AI-GENERATED DIGITAL ART. ShodhKosh: Journal of Visual and Performing Arts, 6(3s), 12–21. https://doi.org/10.29121/shodhkosh.v6.i3s.2025.6754