DANCING INTO THE DIGITAL AGE: PRESERVING INDIAN CLASSICAL DANCE NARRATIVES THROUGH VIRTUAL AVATARS
DOI: https://doi.org/10.29121/shodhkosh.v6.i2.2025.6273

Keywords: Dance, Animation, Cultural Translation, Narration, Storyboarding, Motion Capture, Pose and Posture, Emotion Depiction, Pure Dance Structure, Dance Ontology, Model Fitting

Abstract [English]
In this work, we explore methods for expressing dance narratives as sequences on a virtual avatar, particularly in the context of Indian classical dance styles. The study has several goals. First, it would aid the preservation of dance practices that are rapidly disappearing because too few learners remain to carry the traditions forward. Second, it may encourage younger audiences to engage with their cultural heritage by recasting it in a format that is accessible and valuable to them. Furthermore, this research facilitates the organization of the morphology of dance subtleties, as well as their translation, retrieval, and storage.
These requirements, which are ontological and resource-intensive, would help digitize traditional dance forms and ease subsequent navigation of the information, including comprehending meaning, translating it into other languages, working with rhythm, and other labour-intensive operations. This paper examines several approaches and strategies that are currently in use, and that could potentially be implemented in future work, for the translation, conservation, and preservation of dance narratives.
This research investigates the process of transferring dancer-choreographed stories onto inanimate objects and virtual characters. It explores animation techniques for converting Indian dance forms into animated narration, preserving cultural heritage, understanding emotions in animated sequences, and mapping body movements to capture the essence of dance structures.
Copyright (c) 2025 Suma Dawn, Monali Bhattacharya

This work is licensed under a Creative Commons Attribution 4.0 International License.