EMOTION-AWARE TUTORING SYSTEMS FOR PERFORMING ARTS
DOI: https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6746
Keywords:
Emotion-Aware Tutoring, Affective Computing, Multimodal Emotion Recognition, Dance Emotion Coaching, Theatre Dialogue Correction, Vocal Expressivity Assessment, Emotional Accuracy, Creative Pedagogy
Abstract [English]
Emotion-aware tutoring systems are an emerging development in digital performing arts education, combining affective computing with multimodal learning analytics to support the growth of expressive skill. Conventional online training systems emphasize technical accuracy (pitch, movement, or dialogue delivery) but overlook the emotional nuance required for artistic excellence. This study introduces and evaluates an AI-based framework that captures emotional cues through facial expression analysis, voice prosody analysis, gesture tracking, and movement patterns. The system detects emotional deviations, provides corrective feedback, and supports learner self-awareness, demonstrated through three case studies in dance, theatre, and vocal music. Quantitative results show substantial improvements in emotional accuracy (32-40%), expressive clarity, and performance coherence across all domains. The system's adaptive feedback mechanism increases engagement and builds emotional literacy, enabling learners to fine-tune their emotional and expressive behaviour. The findings underscore the value of incorporating emotion-sensitive technologies into arts pedagogy and highlight the importance of hybrid human-AI co-learning models for balancing technical instruction with creative autonomy. Ethical concerns, including emotional privacy, cultural difference, and algorithmic bias, are addressed to ensure responsible deployment. Overall, the paper shows that emotion-aware tutoring systems offer a transformative approach to performing arts education, fostering expressive growth and overcoming traditional limitations of teaching the arts in a digital setting.
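The pipeline the abstract describes (per-modality emotion recognition, fusion of face, voice, and gesture cues, then feedback on deviations from the intended expression) can be sketched as a simple late-fusion step. This is a minimal illustrative sketch only: the modality names, weights, emotion classes, and deviation threshold below are assumptions for demonstration, not the authors' implementation.

```python
# Minimal late-fusion sketch: each recognizer yields a probability
# distribution over emotion classes; fused scores are compared with
# the target expression profile to produce corrective feedback.
EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]

def fuse(modality_scores, weights):
    """Weighted average of per-modality emotion distributions."""
    fused = [0.0] * len(EMOTIONS)
    total = sum(weights.values())
    for modality, scores in modality_scores.items():
        w = weights[modality] / total
        for i, s in enumerate(scores):
            fused[i] += w * s
    return fused

def deviation_feedback(fused, target, threshold=0.25):
    """Flag the largest gap between intended and detected emotion."""
    gaps = {e: target[i] - fused[i] for i, e in enumerate(EMOTIONS)}
    emotion, gap = max(gaps.items(), key=lambda kv: kv[1])
    if gap > threshold:
        return f"Strengthen {emotion}: detected level is {gap:.2f} below target."
    return "Expression matches the intended emotion profile."

# Example: a performer aiming for a strongly joyful expression
scores = {
    "face":    [0.6, 0.1, 0.0, 0.0, 0.3],
    "voice":   [0.3, 0.2, 0.1, 0.1, 0.3],
    "gesture": [0.4, 0.1, 0.1, 0.1, 0.3],
}
weights = {"face": 0.4, "voice": 0.35, "gesture": 0.25}
target = [0.9, 0.0, 0.0, 0.0, 0.1]
print(deviation_feedback(fuse(scores, weights), target))
```

In a real system the per-modality distributions would come from trained recognizers (facial expression, prosody, and gesture models) and the weights would be tuned per domain; the fusion and feedback logic, however, captures the deviation-detection idea the framework relies on.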
References
Ahmed, F., Bari, A. H., and Gavrilova, M. L. (2019). Emotion Recognition from Body Movement. IEEE Access, 8, 11761–11781. https://doi.org/10.1109/ACCESS.2019.2963113
Al-Fraihat, D., Joy, M., Sinclair, J., and Masa'deh, R. (2020). Evaluating E-Learning Systems Success: An Empirical Study. Computers in Human Behavior, 102, 67–86. https://doi.org/10.1016/j.chb.2019.08.004
Alshaikh, Z., Tamang, L., and Rus, V. (2020). A Socratic Tutor for Source Code Comprehension. In Artificial Intelligence in Education (AIED 2020) (pp. 15–19). Springer. https://doi.org/10.1007/978-3-030-52240-7_3
Anwar, A., Haq, I. U., Mian, I. A., Shah, F., Alroobaea, R., Hussain, S., Ullah, S. S., and Umar, F. (2022). Applying Real-Time Dynamic Scaffolding Techniques During Tutoring Sessions Using Intelligent Tutoring Systems. Mobile Information Systems, 2022, 6006467. https://doi.org/10.1155/2022/6006467
Bermudez-Edo, M., Elsaleh, T., Barnaghi, P., and Taylor, K. (2016). IoT-Lite: A Lightweight Semantic Model for the Internet of Things. In Proceedings of the 2016 IEEE UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld (pp. 90–97). IEEE. https://doi.org/10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0035
Canal, F. Z., Müller, T. R., Matias, J. C., Scotton, G. G., de Sá Junior, A. R., Pozzebon, E., and Sobieranski, A. C. (2022). A Survey on Facial Emotion Recognition Techniques: A State-Of-The-Art Literature Review. Information Sciences, 582, 593–617. https://doi.org/10.1016/j.ins.2021.10.005
Elansary, L., Taha, Z., and Gad, W. (2024). Survey on Emotion Recognition Through Posture Detection and the Possibility of its Application in Virtual Reality. arXiv preprint arXiv:2408.01728.
Elsheikh, R. A., Mohamed, M. A., Abou-Taleb, A. M., and Ata, M. M. (2024). Improved Facial Emotion Recognition Model Based on a Novel Deep Convolutional Structure. Scientific Reports, 14, 29050. https://doi.org/10.1038/s41598-024-79167-8
Försterling, M., Gerdemann, S., Parkinson, B., and Hepach, R. (2024). Exploring the Expression of Emotions in Children's Body Posture Using OpenPose. In Proceedings of the Annual Meeting of the Cognitive Science Society, Rotterdam, Netherlands.
Gursesli, M. C., Lombardi, S., Duradoni, M., Bocchi, L., Guazzini, A., and Lanata, A. (2024). Facial Emotion Recognition (FER) Through Custom Lightweight CNN Model: Performance Evaluation in Public Datasets. IEEE Access, 12, 45543–45559. https://doi.org/10.1109/ACCESS.2024.3380847
Hasan, M. A., Noor, N. F. M., Rahman, S. S. B. A., and Rahman, M. M. (2020). The Transition from Intelligent to Affective Tutoring System: A Review and Open Issues. IEEE Access, 8, 204612–204638. https://doi.org/10.1109/ACCESS.2020.3036990
Khare, S. K., Blanes-Vidal, V., Nadimi, E. S., and Acharya, U. R. (2024). Emotion Recognition and Artificial Intelligence: A Systematic Review (2014–2023) and Research Recommendations. Information Fusion, 102, 102019. https://doi.org/10.1016/j.inffus.2023.102019
Mohana, M., and Subashini, P. (2024). Facial Expression Recognition using Machine Learning and Deep Learning Techniques: A Systematic Review. SN Computer Science, 5, 432. https://doi.org/10.1007/s42979-024-02792-7
Mousavinasab, E., Zarifsanaiey, N., Niakan Kalhori, S. R., Rakhshan, M., Keikha, L., and Ghazi Saeedi, M. (2021). Intelligent Tutoring Systems: A Systematic Review of Characteristics, Applications, and Evaluation Methods. Interactive Learning Environments, 29, 142–163. https://doi.org/10.1080/10494820.2018.1558257
Sun, L., Kangas, M., and Ruokamo, H. (2023). Game-Based Features in Intelligent Game-Based Learning Environments: A Systematic Literature Review. Interactive Learning Environments. Advance online publication. https://doi.org/10.1080/10494820.2023.2179638
Trinh Van, L., Dao Thi Le, T., Le Xuan, T., and Castelli, E. (2022). Emotional Speech Recognition Using Deep Neural Networks. Sensors, 22, 1414. https://doi.org/10.3390/s22041414
Wu, J., Zhang, Y., Sun, S., Li, Q., and Zhao, X. (2022). Generalized Zero-Shot Emotion Recognition from Body Gestures. Applied Intelligence, 52, 8616–8634. https://doi.org/10.1007/s10489-021-02927-w
License
Copyright (c) 2025 Dr. Jairam Poudwal, Praney Madan, Sadhana Sargam, Torana Kamble, Ayaan Faiz, Dr. Dhirendra Nath Thatoi

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY licence, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.