EMOTION MODELING IN SCULPTURE DESIGN USING NEURAL NETWORKS

Authors

  • Sahil Suri, Centre of Research Impact and Outcome, Chitkara University, Rajpura-140417, Punjab, India
  • Dr. Lakshman K, Associate Professor, Department of Management Studies, JAIN (Deemed-to-be University), Bengaluru, Karnataka, India
  • Eeshita Goyal, Assistant Professor, School of Business Management, Noida International University, 203201
  • Gopal Goyal, Professor, Architecture, Vivekananda Global University, Jaipur, India
  • Gourav Sood, Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, Solan, 174103, India
  • Gayatri Shashikant Mirajkar, Department of Electronics and Telecommunication Engineering, Arvind Gavali College of Engineering
  • Prashant Anerao, Department of Mechanical Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, 411037, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i3s.2025.6756

Keywords:

Emotion Modeling, Computational Aesthetics, Neural Networks, 3D Sculpture, Graph Neural Networks, Emotion–Form Embedding, Affective Computing, Human–AI Collaboration

Abstract [English]

This paper introduces a unified, emotion-aware neural network approach to sculpture design. The proposed Emotion-Form Neural Embedding Network (EFNEN) combines Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs) to learn correlations between sculptural form and emotion. The system was trained and tested on 1,200 annotated 3D models, each carrying both geometric features and emotion labels (valence and arousal). EFNEN achieved a correlation coefficient of r = 0.88 and 92.4% accuracy against human perceptual ratings, outperforming the baseline models. Visualizations of the latent emotion space and feature-emotion heatmaps showed that curvature, symmetry, and balance are the strongest predictors of positive affect. The model supports both emotion classification and emotion-driven three-dimensional form generation, enabling collaborative co-creation between artists and AI systems. The findings indicate that emotion can be computationally formulated and synthesized as a measurable aesthetic dimension, positioning EFNEN as a platform for affective computational art and human-AI creative synergy.
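The abstract's pipeline (graph-based processing of mesh geometry, pooling into an emotion embedding, and correlation against human ratings) can be illustrated in miniature. The sketch below is an assumption-laden toy, not the authors' EFNEN implementation: `gnn_step` performs one mean-aggregation message-passing step over a tiny mesh graph, `embed_sculpture` pools the result and projects it into a two-dimensional (valence, arousal) space, and `pearson_r` computes the same correlation statistic used in the paper's evaluation. All function names, the random features, and the ring-graph adjacency are invented for illustration.

```python
import numpy as np

def gnn_step(node_feats, adj):
    """One GNN-style message-passing step: mean of each node's neighbors plus itself."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # per-node degree for normalization
    return adj_hat @ node_feats / deg         # mean aggregation

def embed_sculpture(node_feats, adj, W):
    """Pool node features after message passing, project to a (valence, arousal) embedding."""
    h = gnn_step(node_feats, adj)
    pooled = h.mean(axis=0)                   # graph-level mean readout
    return pooled @ W                         # linear projection to emotion space

def pearson_r(a, b):
    """Pearson correlation, the r-statistic reported for model-human agreement."""
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(0)
# Toy "mesh": 5 vertices with 4-D geometric descriptors, connected in a ring.
feats = rng.normal(size=(5, 4))
adj = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
W = rng.normal(size=(4, 2))                   # projection to (valence, arousal)

emb = embed_sculpture(feats, adj, W)
print(emb.shape)                              # one 2-D emotion embedding per sculpture
```

In a full system the CNN branch would contribute appearance features alongside these geometric ones, and `W` would be a trained network head rather than a random matrix; the sketch only shows how a graph of vertices collapses to a single emotion vector that can be correlated with human ratings.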

Published

2025-12-20

How to Cite

Suri, S., K, L., Goyal, E., Goyal, G., Sood, G., Mirajkar, G. S., & Anerao, P. (2025). Emotion Modeling in Sculpture Design Using Neural Networks. ShodhKosh: Journal of Visual and Performing Arts, 6(3s), 31–40. https://doi.org/10.29121/shodhkosh.v6.i3s.2025.6756