INTELLIGENT PERFORMANCE EVALUATION IN DANCE TRAINING
DOI: https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6752

Keywords: Intelligent Dance Evaluation, Computer Vision in Performing Arts, AI-Based Choreography Analysis, Digital Aesthetics, Pose Estimation, Affective Computing, Immersive Dance Training, Artistic Performance Assessment, Hybrid AI–Human Pedagogy, Expressive Movement Modeling

Abstract [English]
This paper examines how intelligent systems can transform dance training, focusing on how artificial intelligence, computer vision, affective computing, and immersive technologies can be used to analyze both the technical and artistic dimensions of a performance. Traditional dance pedagogy relies heavily on human interpretation, which, while rich in cultural and expressive understanding, remains subjective and constrained by the limits of human perception. The proposed intelligent assessment framework combines pose estimation, temporal feature extraction, stylistic modeling, and emotional analysis to produce multi-dimensional ratings that capture accuracy, expressiveness, musicality, and stylistic authenticity. Experimental results show marked improvements in rhythm alignment, movement smoothness, and expressive clarity, with strong correlations between AI-generated scores and expert human ratings. The findings point to the promise of hybrid AI–human pedagogy, in which intelligent systems provide real-time, data-driven feedback while the teacher supplies cultural and interpretive guidance. Although challenges remain around dataset diversity, artistic subtlety, and ethical implications, the analysis indicates that intelligent performance assessment can increase the accessibility, precision, and creative scope of dance education, marking a significant step toward digitally augmented performing arts.
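To make the kind of multi-dimensional rating described above concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes per-frame 2D pose keypoints (e.g., from an off-the-shelf pose estimator) and a list of musical beat times, computes two of the dimensions named in the abstract (movement smoothness and rhythm alignment), and combines them into a score dictionary. All function names, array layouts, tolerances, and scaling constants here are hypothetical choices made for the example.

```python
# Illustrative sketch only: hypothetical smoothness and rhythm-alignment metrics
# computed from pose keypoints, combined into a multi-dimensional rating.
import numpy as np


def movement_smoothness(keypoints: np.ndarray, fps: float) -> float:
    """Smoothness as an inverse, squashed mean jerk magnitude.

    keypoints: array of shape (frames, joints, 2) with pixel coordinates.
    Returns a value in (0, 1]; higher means smoother motion.
    """
    vel = np.diff(keypoints, axis=0) * fps        # joint velocities
    acc = np.diff(vel, axis=0) * fps              # joint accelerations
    jerk = np.diff(acc, axis=0) * fps             # joint jerk
    mean_jerk = np.linalg.norm(jerk, axis=-1).mean()
    return float(1.0 / (1.0 + mean_jerk / 1e4))   # 1e4 is an arbitrary scale


def rhythm_alignment(keypoints: np.ndarray, beat_times: np.ndarray, fps: float) -> float:
    """Fraction of musical beats that coincide with a local peak in motion energy."""
    vel = np.linalg.norm(np.diff(keypoints, axis=0) * fps, axis=-1).sum(axis=1)
    # Local maxima of the motion-energy curve act as crude "movement accents".
    peaks = np.where((vel[1:-1] > vel[:-2]) & (vel[1:-1] > vel[2:]))[0] + 1
    peak_times = peaks / fps
    if len(beat_times) == 0 or len(peak_times) == 0:
        return 0.0
    tol = 0.1  # seconds; assumed tolerance for counting a movement as "on the beat"
    hits = sum(np.min(np.abs(peak_times - b)) <= tol for b in beat_times)
    return float(hits) / len(beat_times)


def evaluate(keypoints: np.ndarray, beat_times, fps: float = 30.0) -> dict:
    """Combine the individual metrics into a multi-dimensional rating."""
    beats = np.asarray(beat_times, dtype=float)
    return {
        "smoothness": movement_smoothness(keypoints, fps),
        "rhythm_alignment": rhythm_alignment(keypoints, beats, fps),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_pose = np.cumsum(rng.normal(size=(300, 17, 2)), axis=0)  # 10 s of synthetic motion
    print(evaluate(demo_pose, beat_times=np.arange(0.5, 10.0, 0.5)))
```

In a fuller system of the kind the paper describes, additional dimensions such as expressiveness or stylistic authenticity would require learned models (e.g., affect classifiers or style embeddings) rather than the hand-crafted signal measures sketched here.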
License
Copyright (c) 2025 Eeshita Goyal, Likhith S R, Girish Kalele, Divya Sharma, Santosh Ku. Behera, Shreyas Dingankar

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution. The work must be properly attributed to its author.
It is not necessary to ask for further permission from the author or journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.