AI-GENERATED DANCE MOVEMENTS AND CREATIVE OWNERSHIP
DOI: https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6694

Keywords: AI-Generated Choreography, Creative Ownership, Computational Creativity, Generative Models, Motion Synthesis, Intellectual Property, Dance Technology, Human–AI Collaboration, Artistic Authorship, Digital Performance

Abstract [English]
The convergence of artificial intelligence and dance choreography has begun to shift the debate about creativity, authorship, and ownership in digital art. AI-generated dance works, produced with generative models such as Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and motion-capture-driven synthesis, show machines recreating and innovating within a traditionally human art form. These systems decompose temporal motion patterns, spatial pathways, and expressive gestures in order to generate new choreographic sequences independently, raising complicated questions about creative authorship. Conventional models of intellectual property presuppose human creativity, whereas AI output results from algorithmic recombination rather than intent. It remains unclear whether rights to such AI-generated movements belong to the choreographer who trained the model, to the creators of the algorithm, or to the public domain. Moreover, the assimilation of AI technologies challenges aesthetic and ethical assumptions, prompting a redefinition of artistic identity and of joint authorship between humans and machines. This paper critically analyzes the philosophical, legal, and technical aspects of AI-based choreography, arguing for new legal provisions and ethical standards suited to this paradigm of hybrid creativity. By addressing both the creative opportunities and the ownership questions of AI-generated dance, the study extends the discussion of cultural production, human–machine co-creation, and the changing sense of originality in the era of artificial intelligence.
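The generative process the abstract describes, learning temporal motion patterns and rolling them forward into new sequences, can be illustrated with a toy autoregressive model. Everything below is a hypothetical sketch: the pose layout (17 joints in 2D), the hidden size, and the weights (random, not trained on any motion-capture data) are all illustrative assumptions, not any system discussed in the paper.

```python
import numpy as np

# Illustrative sketch of autoregressive motion synthesis.
# A pose is a flat vector of joint coordinates (hypothetically 17 joints x 2D = 34 dims);
# an RNN cell carries temporal context and emits the predicted next pose each frame.

POSE_DIM = 34   # assumed: 17 body joints in 2D
HIDDEN = 64     # assumed hidden-state size

rng = np.random.default_rng(0)
# Untrained demo weights; a real system would learn these from motion-capture corpora.
W_xh = rng.normal(0, 0.1, (HIDDEN, POSE_DIM))
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_hy = rng.normal(0, 0.1, (POSE_DIM, HIDDEN))

def step(pose, h):
    """One RNN step: fold the current pose into the hidden state, emit the next pose."""
    h = np.tanh(W_xh @ pose + W_hh @ h)
    return W_hy @ h, h

def generate(seed_pose, n_frames):
    """Roll the model forward autoregressively to produce a movement sequence."""
    h = np.zeros(HIDDEN)
    pose, frames = seed_pose, []
    for _ in range(n_frames):
        pose, h = step(pose, h)
        frames.append(pose)
    return np.stack(frames)

# Generate 120 frames (about 4 seconds at 30 fps) from a random seed pose.
sequence = generate(rng.normal(size=POSE_DIM), n_frames=120)
print(sequence.shape)  # (120, 34)
```

The point of the sketch is the mechanism, not the output: each frame depends on the hidden state accumulated from all previous frames, which is why such models can reproduce temporal motion patterns, and why authorship of the result is entangled with whoever supplied the training movement data.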
License
Copyright (c) 2025 Raman Verma, Mohan Garg, Ashutosh Roy, Abhijeet Panigra, Dr. Gayatri Nayak

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.