EMOTION-DRIVEN GENERATIVE SYSTEMS PRODUCING PERSONALIZED VISUAL ART BASED ON USER PREFERENCES
DOI: https://doi.org/10.29121/shodhkosh.v7.i4s.2026.7496

Keywords:
Emotion-Driven Generation, Affective Computing, Personalized Visual Art, Multimodal Emotion Recognition, Generative Adversarial Networks (GANs), Human–AI Co-Creation

Abstract [English]
Emotion-driven generative systems represent a transformative approach in computational creativity, enabling the creation of visual artworks tailored to an individual user and their emotional state. This paper proposes a unified framework that integrates multimodal emotion recognition with powerful generative models to produce adaptive, expressive artworks. The system combines multiple input modalities, including facial expressions, voice signals, and physiological data, and detects and encodes emotions using deep learning architectures such as convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and transformer-based models. A structured algorithmic sequence is proposed, comprising parameterization, real-time emotion recognition, feature encoding, and adaptive art generation. Experimental evaluations show substantial improvements in personalization accuracy, emotional consistency, and interactivity over conventional non-interactive art systems. Visualization results and case studies further demonstrate the system's ability to dynamically adjust artistic styles, color palettes, and compositions to the preferences of individual users. This study highlights the potential of emotion-aware generative models to reshape human–machine co-creation and to provide scalable approaches for interactive digital art, therapeutic applications, and immersive user-centered design.
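The algorithmic sequence described above — recognizing an emotional state and encoding it into parameters that steer the art generation — can be illustrated with a minimal sketch. The emotion labels, the palette mapping, and the probability-weighted blending rule below are illustrative assumptions, not the authors' implementation; a real system would feed such parameters into a GAN or diffusion model.

```python
# Hypothetical sketch: map a recognized emotion distribution to
# generation parameters (dominant emotion, blended base palette).
# Emotion labels and RGB palettes are assumed for illustration.
EMOTIONS = ["joy", "sadness", "anger", "calm"]
PALETTES = {
    "joy": (255, 200, 60),
    "sadness": (70, 90, 160),
    "anger": (200, 40, 40),
    "calm": (120, 180, 150),
}

def encode_emotion(probabilities):
    """Blend per-emotion palettes weighted by the recognized distribution."""
    if len(probabilities) != len(EMOTIONS):
        raise ValueError("expected one probability per emotion")
    total = sum(probabilities)
    weights = [p / total for p in probabilities]  # normalize to sum to 1
    # Transpose per-emotion RGB tuples into three channel tuples (R, G, B).
    channels = zip(*(PALETTES[e] for e in EMOTIONS))
    palette = tuple(
        round(sum(w * c for w, c in zip(weights, channel)))
        for channel in channels
    )
    dominant = EMOTIONS[max(range(len(weights)), key=weights.__getitem__)]
    return {"dominant_emotion": dominant, "base_palette": palette}

params = encode_emotion([0.7, 0.1, 0.1, 0.1])
print(params["dominant_emotion"])  # joy
```

In a full pipeline, the probability vector would come from the CNN/LSTM/transformer recognizers over facial, vocal, and physiological inputs, and the resulting parameters would condition the generative model rather than just select a palette.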
License
Copyright (c) 2026 Prasanna Kumar E , Kiran Ingale , Damodaran B, Mistry Roma Lalitchandra , Dr. Rahul Amin , Simran Kalra

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright, allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author.
It is not necessary to ask for further permission from the author or journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.