HYBRID AI-HUMAN MUSIC COMPOSITION FOR PEDAGOGY

Authors

  • Paramjit Baxi Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, Solan, 174103, India
  • Dr. Susmita Panda Associate Professor, Department of Computer Science and Engineering, Institute of Technical Education and Research, Siksha 'O' Anusandhan (Deemed to be University) Bhubaneswar, Odisha, India
  • Sachin Mittal Centre of Research Impact and Outcome, Chitkara University, Rajpura- 140417, Punjab, India
  • Nidhi Tewatia Assistant Professor, School of Business Management, Noida International University, 203201, India
  • Battula Bhavya Assistant Professor, Department of Computer Science and Engineering, Presidency University, Bangalore, Karnataka, India
  • Dr. Fariyah Saiyad Associate Professor, Bath Spa University Academic Center RAK, UAE

DOI:

https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6695

Keywords:

Hybrid Music Composition, Generative AI, Pedagogy, Creative Learning, Music Education Technology, Human-AI Collaboration, Composition Scaffolding, Educational Creativity

Abstract [English]

This paper examines how a hybrid AI-human music composition environment shapes the creative process and musical knowledge of undergraduate learners. A mixed-methods design was used, combining expert analysis of student compositions, system interaction logs, and reflective learner feedback. Students working with the hybrid system showed substantially greater gains in harmonic coherence, melodic structure, rhythmic variation, and overall creative expression than a control group using traditional tools. Behavioral analysis indicates that AI-generated recommendations were most helpful during initial ideation, with students progressively shifting toward self-refinement in the later stages of composition. Creativity and self-efficacy scores in the experimental group improved significantly, demonstrating the system's value in stimulating idea generation, reducing creative anxiety, and strengthening critical engagement. The paper adds to a growing body of evidence that hybrid AI-assisted learning offers useful new directions for music education, fostering deeper musical perception and more accessible creative exploration.

References

Agostinelli, A., et al. (2023). MusicLM: Generating Music from Text (arXiv:2301.11325). arXiv. https://arxiv.org/abs/2301.11325

Briot, J.-P., Hadjeres, G., and Pachet, F.-D. (2017). Deep Learning Techniques for Music Generation: A Survey. arXiv. https://arxiv.org/abs/1709.01620

Brunner, G., Konrad, A., Wang, Y., and Wattenhofer, R. (2018). MIDI-VAE: Modeling Dynamics and Instrumentation of Music with Applications to Style Transfer. arXiv. https://arxiv.org/abs/1809.07600

Chen, J., Tan, X., Luan, J., Qin, T., and Liu, T.-Y. (2020). HiFiSinger: Towards High-Fidelity Neural Singing Voice Synthesis. arXiv. https://arxiv.org/abs/2009.01776

Chu, H., et al. (2022). An Empirical Study on How People Perceive AI-Generated Music. In Proceedings of the ACM International Conference on Information and Knowledge Management (pp. 304–314). ACM. https://doi.org/10.1145/3511808.3557235

Conklin, D. (2003). Music Generation from Statistical Models. In Proceedings of the AISB Symposium on Artificial Intelligence, Creativity, Arts and Science (pp. 30–35). Society for the Study of Artificial Intelligence and Simulation of Behaviour.

Copet, J., Kreuk, F., Gat, I., Remez, T., Kant, D., Synnaeve, G., Adi, Y., and Defossez, A. (2024). Simple and Controllable Music Generation (arXiv:2306.05284). arXiv. https://arxiv.org/abs/2306.05284

Cross, I. (2023). Music in the Digital Age: Commodity, Community, Communion. AI and Society, 38, 2387–2400. https://doi.org/10.1007/s00146-023-01670-9

Cifka, O., Simsekli, U., and Richard, G. (2020). Groove2Groove: One-Shot Music Style Transfer with Supervision from Synthetic Data. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28, 2638–2650. https://doi.org/10.1109/TASLP.2020.3019642

Deruty, E., Grachten, M., Lattner, S., Nistal, J., and Aouameur, C. (2022). On the Development and Practice of AI Technology for Contemporary Popular Music Production. Transactions of the International Society for Music Information Retrieval, 5(1), 35–50. https://doi.org/10.5334/tismir.100

Herremans, D., Chuan, C.-H., and Chew, E. (2017). A Functional Taxonomy of Music Generation Systems. ACM Computing Surveys, 50(5), Article 69. https://doi.org/10.1145/3108242

Huang, Q., et al. (2023). Noise2Music: Text-Conditioned Music Generation with Diffusion Models (arXiv:2302.03917). arXiv. https://arxiv.org/abs/2302.03917

Pinski, M., Adam, M., and Benlian, A. (2023). AI Knowledge: Improving AI Delegation Through Human Enablement. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1–17). ACM. https://doi.org/10.1145/3544548.3580794

Wang, T., Diaz, D. V., Brown, C., and Chen, Y. (2023). Exploring the Role of AI Assistants in Computer Science Education: Methods, Implications, and Instructor Perspectives. In Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) (pp. 92–102). IEEE. https://doi.org/10.1109/VL-HCC57772.2023.00018

Zhao, Y., Yang, M., Lin, Y., Zhang, X., Shi, F., Wang, Z., Ding, J., and Ning, H. (2025). AI-Enabled Text-to-Music Generation: A Comprehensive Review of Methods, Frameworks, and Future Directions. Electronics, 14(6), 1197. https://doi.org/10.3390/electronics14061197

Published

2025-12-16

How to Cite

Baxi, P., Panda, S., Mittal, S., Tewatia, N., Bhavya, B., & Saiyad, F. (2025). HYBRID AI-HUMAN MUSIC COMPOSITION FOR PEDAGOGY. ShodhKosh: Journal of Visual and Performing Arts, 6(2s), 481–490. https://doi.org/10.29121/shodhkosh.v6.i2s.2025.6695