DEEP LEARNING FOR PERFORMANCE ASSESSMENT IN DANCE AND MUSIC

Authors

  • Mithhil Arora, Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, Solan, 174103, India
  • Samrat Bandyopadhyay, Assistant Professor, Department of Computer Science & IT, ARKA JAIN University, Jamshedpur, Jharkhand, India. Email: samrat.b@arkajainuniversity.ac.in, ORCID: 0009-0009-7595-5403
  • Mona Sharma, Assistant Professor, School of Business Management, Noida International University, 203201
  • Sakshi Sobti, Centre of Research Impact and Outcome, Chitkara University, Rajpura-140417, Punjab, India
  • Sanika Sahastra buddhae, Assistant Professor, Department of Interior Design, Parul Institute of Design, Parul University, Vadodara, Gujarat, India
  • Dr. Anil Hingmire, Department of Computer Engineering, Vidyavardhini's College of Engineering and Technology, Vasai, Mumbai University

DOI:

https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6751

Keywords:

Deep Learning, Performance Assessment, Music Analysis, Dance Evaluation, Feature Fusion, Explainable AI

Abstract [English]

The quality of dance and music performance has traditionally been judged by human evaluators, an approach prone to bias and inconsistency. Recent progress in artificial intelligence, and especially in deep learning, offers a strong alternative in the form of objective, data-driven assessment. This paper investigates the application of deep learning models, namely Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, and Gated Recurrent Units (GRUs), to evaluating artists' performance in these two domains. The proposed framework draws on multimodal data, combining motion capture, audio, and visual streams, to extract and fuse features that characterize rhythm, expression, synchronization, and technical accuracy. The methodology emphasizes rigorous training, validation, and testing protocols to ensure accuracy and generalization across performers and genres. Applications of this research include real-time feedback systems for music and dance learning, automated scoring in competitions, and intelligent tutoring systems that adapt to a learner's level of performance. The paper also identifies opportunities in growing cross-cultural datasets, mitigating bias, and applying explainable AI methods to make automated assessments transparent. Experimental results demonstrate that deep learning models capture subtle features of performance and outperform traditional and classical machine learning approaches. This study contributes to the emerging convergence of artificial intelligence and the performing arts, opening the door to more equitable, better informed, and more globally applicable evaluation mechanisms.
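
To make the kind of feature-fusion framework described above concrete, the sketch below shows a minimal multimodal model in PyTorch: a small CNN branch encodes an audio spectrogram, a GRU branch encodes a motion-capture pose sequence, and the two embeddings are concatenated and regressed to a single performance score. This is an illustrative assumption rather than the authors' implementation; the layer sizes, the 33-joint pose format (as produced by BlazePose-style trackers), and the single-score output are all hypothetical choices.

import torch
import torch.nn as nn

class PerformanceScorer(nn.Module):
    """Illustrative multimodal fusion model (not the paper's architecture):
    a CNN branch for audio spectrograms and a GRU branch for motion-capture
    sequences, fused into a single performance-score regression head."""

    def __init__(self, n_mels=64, n_joints=33, joint_dim=3, hidden=128):
        super().__init__()
        # Audio branch: 2D CNN over (batch, 1, n_mels, time) spectrograms.
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 32, 1, 1)
        )
        # Motion branch: GRU over (batch, time, n_joints * joint_dim) pose sequences.
        self.motion_gru = nn.GRU(n_joints * joint_dim, hidden, batch_first=True)
        # Fusion head: concatenate both embeddings and regress a score.
        self.head = nn.Sequential(
            nn.Linear(32 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, spectrogram, motion):
        a = self.audio_cnn(spectrogram).flatten(1)    # (batch, 32)
        _, h = self.motion_gru(motion)                # h: (1, batch, hidden)
        fused = torch.cat([a, h.squeeze(0)], dim=1)   # (batch, 32 + hidden)
        return self.head(fused).squeeze(1)            # (batch,) predicted scores

# Toy usage with random tensors standing in for real performance clips.
model = PerformanceScorer()
spec = torch.randn(4, 1, 64, 200)     # 4 clips, 64 mel bins, 200 audio frames
motion = torch.randn(4, 150, 33 * 3)  # 4 clips, 150 frames, 33 joints x (x, y, z)
scores = model(spec, motion)
print(scores.shape)                   # torch.Size([4])

In practice such a model would be trained against expert ratings, and a Transformer or attention layer could replace the GRU when longer temporal context matters; the sketch only illustrates how audio and motion features can be extracted and fused before scoring.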

References

Ahir, K., Govani, K., Gajera, R., and Shah, M. (2020). Application on Virtual Reality for Enhanced Education Learning, Military Training and Sports. Augmented Human Research, 5, 7. https://doi.org/10.1007/s41133-019-0025-2

Bazarevsky, V., Grishchenko, I., Raveendran, K., Zhu, T., Zhang, F., and Grundmann, M. (2020). BlazePose: On-Device Real-Time Body Pose Tracking (arXiv:2006.10204). arXiv.

Choi, J.-H., Lee, J.-J., and Nasridinov, A. (2021). Dance Self-Learning Application and Its Dance Pose Evaluations. In Proceedings of the 36th Annual ACM Symposium on Applied Computing (1037–1045). https://doi.org/10.1145/3412841.3441980

Davis, S., Thomson, K. M., Zonneveld, K. L. M., Vause, T. C., Passalent, M., Bajcar, N., and Sureshkumar, B. (2023). An Evaluation of Virtual Training for Teaching Dance Instructors to Implement a Behavioral Coaching Package. Behavior Analysis in Practice, 16, 1–13. https://doi.org/10.1007/s40617-023-00779-z

Desmarais, Y., Mottet, D., Slangen, P., and Montesinos, P. (2021). A Review of 3D Human Pose Estimation Algorithms for Markerless Motion Capture. Computer Vision and Image Understanding, 212, 103275. https://doi.org/10.1016/j.cviu.2021.103275

Dias Pereira Dos Santos, A., Loke, L., Yacef, K., and Martinez-Maldonado, R. (2022). Enriching Teachers’ Assessments of Rhythmic Forró Dance Skills by Modelling Motion Sensor Data. International Journal of Human–Computer Studies, 161, 102776. https://doi.org/10.1016/j.ijhcs.2022.102776

Grishchenko, I., Bazarevsky, V., Zanfir, A., Bazavan, E. G., Zanfir, M., Yee, R., Raveendran, K., Zhdanovich, M., Grundmann, M., and Sminchisescu, C. (2022). BlazePose GHUM holistic: Real-time 3D Human Landmarks and Pose Estimation (arXiv:2206.11678). arXiv.

Guo, H., Zou, S., Xu, Y., Yang, H., Wang, J., Zhang, H., and Chen, W. (2022). DanceVis: Toward Better Understanding of Online Cheer and Dance Training. Journal of Visualization, 25, 159–174. https://doi.org/10.1007/s12650-021-00783-x

Iqbal, J., and Sidhu, M. S. (2022). Acceptance of Dance Training System Based on Augmented Reality and Technology Acceptance Model (TAM). Virtual Reality, 26, 33–54. https://doi.org/10.1007/s10055-021-00529-y

Izard, S. G., Juanes, J. A., García-Peñalvo, F. J., Estella, J. M. G., Ledesma, M. J. S., and Ruisoto, P. (2018). Virtual Reality as an Educational and Training Tool for Medicine. Journal of Medical Systems, 42, 50. https://doi.org/10.1007/s10916-018-0900-2

Jin, Y., Suzuki, G., and Shioya, H. (2022). Detecting and Visualizing Stops in Dance Training by Neural Network Based on Velocity and Acceleration. Sensors, 22, 5402. https://doi.org/10.3390/s22145402

Kanko, R. M., Laende, E. K., Davis, E. M., Selbie, W. S., and Deluzio, K. J. (2021). Concurrent Assessment of Gait Kinematics Using Marker-Based and Markerless Motion Capture. Journal of Biomechanics, 127, 110665. https://doi.org/10.1016/j.jbiomech.2021.110665

Lei, Y., Li, X., and Chen, Y. J. (2022). Dance Evaluation Based on Movement and Neural Network. Journal of Mathematics, 2022, 1–7. https://doi.org/10.1155/2022/6968852

Li, D., Yi, C., and Gu, Y. (2021). Research on College Physical Education and Sports Training Based on Virtual Reality Technology. Mathematical Problems in Engineering, 2021, 6625529. https://doi.org/10.1155/2021/6625529

Lugaresi, C., Tang, J., Nash, H., McClanahan, C., Uboweja, E., Hays, M., Zhang, F., Chang, C.-L., Yong, M. G., Lee, J., et al. (2019). MediaPipe: A Framework for Building Perception Pipelines (arXiv:1906.08172). arXiv.

Xie, B., Liu, H., Alghofaili, R., Zhang, Y., Jiang, Y., Lobo, F. D., Li, C., Li, W., Huang, H., Akdere, M., et al. (2021). A Review on Virtual Reality Skill Training Applications. Frontiers in Virtual Reality, 2, 645153. https://doi.org/10.3389/frvir.2021.645153

Zhai, X. (2021). Dance Movement Recognition Based on Feature Expression and Attribute Mining. Complexity, 2021, 9935900. https://doi.org/10.1155/2021/9935900

Published

2025-12-16

How to Cite

Arora, M., Bandyopadhyay, S., Sharma, M., Sobti, S., buddhae, S. S., & Hingmire, A. (2025). DEEP LEARNING FOR PERFORMANCE ASSESSMENT IN DANCE AND MUSIC. ShodhKosh: Journal of Visual and Performing Arts, 6(2s), 230–240. https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6751