EMOTION-DRIVEN GENERATIVE SYSTEMS PRODUCING PERSONALIZED VISUAL ART BASED ON USER PREFERENCES

Authors

  • Prasanna Kumar E Assistant Professor, Meenakshi College of Arts and Science, Meenakshi Academy of Higher Education and Research, Chennai, Tamil Nadu 600080, India
  • Kiran Ingale Assistant Professor, Department of E&TC Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, 411037, India
  • Damodaran B Associate Professor, Psychology, Meenakshi College of Arts and Science, Meenakshi Academy of Higher Education and Research, Chennai, Tamil Nadu 600080, India
  • Mistry Roma Lalitchandra Assistant Professor, Department of Design, Vivekananda Global University, Jaipur, India
  • Rahul Amin Associate Professor, Department of Journalism and Mass Communication, ARKA JAIN University, Jamshedpur, Jharkhand, India
  • Simran Kalra Centre of Research Impact and Outcome, Chitkara University, Rajpura 140417, Punjab, India

DOI:

https://doi.org/10.29121/shodhkosh.v7.i4s.2026.7496

Keywords:

Emotion-Driven Generation, Affective Computing, Personalized Visual Art, Multimodal Emotion Recognition, Generative Adversarial Networks (GANs), Human–AI Co-Creation

Abstract [English]

Emotion-driven generative systems represent a transformative approach to computational creativity, enabling the production of visual artworks tailored to an individual user and their emotional state. This paper proposes a unified framework that integrates multimodal emotion recognition with powerful generative models to produce adaptive, expressive artwork. The system combines input modalities such as facial expressions, voice signals, and physiological data, detecting and encoding emotions with deep learning architectures including convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and transformer-based models. A structured algorithmic pipeline is proposed, comprising parameterization, real-time emotion recognition, feature encoding, and adaptive art generation. Experimental evaluation shows substantial improvements in personalization accuracy, emotional consistency, and interactivity over conventional non-interactive art systems. Visualization results and case studies further demonstrate the system's ability to adapt artistic styles, color palettes, and compositions dynamically to individual user preferences. The study highlights the potential of emotion-aware generative models to reshape human–machine co-creation and to provide scalable approaches for interactive digital art, therapeutic applications, and immersive user-centered design.
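The pipeline described in the abstract (multimodal emotion recognition, feature fusion, and adaptive art parameterization) can be illustrated with a minimal sketch. All names, emotion labels, fusion weights, and hue anchors below are illustrative assumptions for exposition, not the authors' implementation; a real system would replace the toy fusion with CNN/LSTM/transformer outputs conditioning a generative model.

```python
# Hypothetical sketch of the emotion-to-art mapping stage described in the
# abstract. Emotion labels, fusion weights, and palette anchors are
# illustrative assumptions, not the paper's actual implementation.
from dataclasses import dataclass

EMOTIONS = ("joy", "sadness", "anger", "calm")

@dataclass
class ArtParameters:
    palette: list        # dominant hues, as hue angles in degrees
    saturation: float    # 0.0 (muted) .. 1.0 (vivid)
    complexity: float    # 0.0 (sparse) .. 1.0 (dense composition)

def fuse_modalities(face, voice, physio, weights=(0.5, 0.3, 0.2)):
    """Late fusion: weighted average of per-modality emotion distributions."""
    fused = {e: 0.0 for e in EMOTIONS}
    for scores, w in zip((face, voice, physio), weights):
        for e in EMOTIONS:
            fused[e] += w * scores.get(e, 0.0)
    total = sum(fused.values()) or 1.0
    return {e: s / total for e, s in fused.items()}

def emotion_to_art(fused):
    """Map the fused emotion distribution to generator conditioning parameters."""
    # Illustrative hue anchors: warm hues for high-arousal emotions, cool for low.
    hue_anchor = {"joy": 50, "sadness": 220, "anger": 0, "calm": 160}
    dominant = max(fused, key=fused.get)
    arousal = fused["joy"] + fused["anger"]  # crude arousal proxy
    return ArtParameters(
        palette=[hue_anchor[dominant]],
        saturation=round(0.3 + 0.7 * arousal, 2),
        complexity=round(fused[dominant], 2),
    )

# Example: a user whose face reads mostly joyful, voice is ambiguous,
# and physiological signals indicate calm.
fused = fuse_modalities(
    face={"joy": 0.7, "calm": 0.3},
    voice={"joy": 0.5, "sadness": 0.5},
    physio={"calm": 1.0},
)
params = emotion_to_art(fused)
```

In this sketch the late-fusion step stands in for the learned multimodal encoders, and `ArtParameters` stands in for the conditioning vector that would steer a GAN or diffusion model's style, palette, and composition.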

References

Ali, W., Kumar, J., Mawuli, C. B., She, L., and Shao, J. (2023). Dynamic Context Management in Context-Aware Recommender Systems. Computers & Electrical Engineering, 107, 108622.

Brisco, R., Hay, L., and Dhami, S. (2023). Exploring the Role of Text-to-Image AI in Concept Generation. Proceedings of the Design Society, 3, 1835–1844.

Feng, W., Zhu, W., Fu, T. J., Jampani, V., Akula, A., He, X., and Wang, W. Y. (2023). LayoutGPT: Compositional Visual Planning and Generation with Large Language Models. arXiv. arXiv:2305.15393

Fredricks, J. A., Blumenfeld, P. C., and Paris, A. H. (2004). School Engagement: Potential of the Concept, State of the Evidence. Review of Educational Research, 74, 59–109.

Gao, Y., Sheng, T., Xiang, Y., Xiong, Y., Wang, H., and Zhang, J. (2023). Chat-Rec: Towards Interactive and Explainable LLMs-Augmented Recommender System. arXiv. arXiv:2303.14524

Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., and Wang, H. (2023). Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv. arXiv:2312.10997

Kim, W. B., and Choo, H. J. (2023). How Virtual Reality Shopping Experience Enhances Consumer Creativity: The Mediating Role of Perceptual Curiosity. Journal of Business Research, 154, 113378.

Li, B., Li, G., Xu, J., Li, X., Liu, X., Wang, M., and Lv, J. (2023). A Personalized Recommendation Framework Based on MOOC System Integrating Deep Learning and Big Data. Computers & Electrical Engineering, 106, 108571.

Li, Y., Li, Z., Zhang, K., Dan, R., Jiang, S., and Zhang, Y. (2023). ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Using Medical Domain Knowledge. Cureus, 15, e40895.

Liu, Y., He, H., Han, T., Zhang, X., Liu, M., Tian, J., and Ge, B. (2024). Understanding LLMs: A Comprehensive Overview from Training to Inference. arXiv. arXiv:2401.02038

Pandey, R., Kambale, S., Bhalekar, P., and Gawande, D. (2025). An Analytical Study of the Role of Augmented Reality (AR) in Online Shopping Experience Using Amazon app. International Journal of Research and Development in Management Review, 14(1), 86–90.

Qian, Y. (2025). Pedagogical Applications of Generative AI in Higher Education: A Systematic Review of the Field. TechTrends, 69, 1105–1120.

Vartiainen, H., and Tedre, M. (2023). Using Artificial Intelligence in Craft Education: Crafting with Text-To-Image Generative Models. Digital Creativity, 34, 1–21.

Wang, H., Huang, W., Deng, Y., Wang, R., Wang, Z., Wang, Y., and Wong, K. F. (2024). UniMS-RAG: A Unified Multi-Source Retrieval-Augmented Generation for Personalized Dialogue Systems. arXiv. arXiv:2401.13256

Wang, Y., and Xue, L. (2024). Using AI-Driven Chatbots to Foster Chinese EFL Students' Academic Engagement: An Intervention Study. Computers in Human Behavior, 159, 108353.

Yang, Z., and Shin, J. (2025). The Impact of Gen AI on Art and Design Program Education. The Design Journal, 28, 310–326.

Zhu, D., Chen, J., Shen, X., Li, X., and Elhoseiny, M. (2023). MiniGPT-4: Enhancing Vision–Language Understanding with Advanced Large Language Models. arXiv. arXiv:2304.10592


Published

2026-04-11

How to Cite

Kumar E, P., Ingale, K., B, D., Lalitchandra, M. R., Amin, R., & Kalra, S. (2026). EMOTION-DRIVEN GENERATIVE SYSTEMS PRODUCING PERSONALIZED VISUAL ART BASED ON USER PREFERENCES. ShodhKosh: Journal of Visual and Performing Arts, 7(4s), 363–371. https://doi.org/10.29121/shodhkosh.v7.i4s.2026.7496