EMOTION-AWARE ADAPTIVE MUSIC RECOMMENDATION SYSTEM USING REAL-TIME AFFECTIVE STATE ANALYSIS

Authors

  • Dr. Harish Barapatre, Associate Professor, Department of Computer Engineering, Yadavrao Tasgaonkar Institute of Engineering and Technology, Bhivpuri Road, Karjat, Maharashtra 410201, India
  • Vishal Santosh Barguje, Student, Department of Computer Engineering, Yadavrao Tasgaonkar Institute of Engineering and Technology, Bhivpuri Road, Karjat, Maharashtra 410201, India
  • Pratiksha Shreesahil Vacche, Student, Department of Computer Engineering, Yadavrao Tasgaonkar Institute of Engineering and Technology, Bhivpuri Road, Karjat, Maharashtra 410201, India
  • Abhishek Vilas Chaudhari, Student, Department of Computer Engineering, Yadavrao Tasgaonkar Institute of Engineering and Technology, Bhivpuri Road, Karjat, Maharashtra 410201, India

DOI:

https://doi.org/10.29121/ijoest.v10.i2.2026.755

Keywords:

Emotion Recognition, Music Recommendation System, Affective Computing, Machine Learning, Human-Computer Interaction, Adaptive Systems

Abstract

Emotion plays a critical role in human–music interaction, influencing listening behavior, mood regulation, and cognitive engagement. Existing music recommendation systems, such as those used in Spotify and Apple Music, primarily rely on historical user preferences, collaborative filtering, or genre-based classification, which fail to capture the dynamic and real-time emotional states of users. This limitation results in suboptimal personalization and reduced user satisfaction.

This paper proposes an Emotion-Aware Adaptive Music Recommendation System that integrates real-time affective state detection with intelligent music mapping. The framework utilizes multimodal inputs such as facial expressions, textual sentiment, or physiological cues to infer user emotions and dynamically adjust music recommendations. A structured pipeline is designed to process emotional signals, compute emotion intensity scores, and map them to suitable music features such as tempo, genre, and energy levels.
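The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each modality (face, text, physiological) produces a probability vector over a small emotion set, fuses them by a weighted average (the weights, emotion labels, and emotion-to-feature mapping are all hypothetical placeholders), and uses the dominant emotion's fused probability as the intensity score.

```python
# Illustrative sketch of the emotion-to-music pipeline.
# Emotion labels, fusion weights, and feature targets are assumptions.

EMOTIONS = ["happy", "sad", "angry", "calm"]

# Hypothetical mapping from dominant emotion to target music features
# (tempo, energy), standing in for the paper's music-mapping layer.
EMOTION_TO_FEATURES = {
    "happy": {"tempo_bpm": 125, "energy": 0.8},
    "sad":   {"tempo_bpm": 70,  "energy": 0.3},
    "angry": {"tempo_bpm": 140, "energy": 0.9},
    "calm":  {"tempo_bpm": 90,  "energy": 0.4},
}

def fuse(face, text, physio, weights=(0.5, 0.3, 0.2)):
    """Late fusion: weighted average of per-modality probability vectors."""
    fused = [
        weights[0] * f + weights[1] * t + weights[2] * p
        for f, t, p in zip(face, text, physio)
    ]
    total = sum(fused)
    return [v / total for v in fused]  # renormalise to a probability vector

def recommend_targets(face, text, physio):
    """Return (dominant emotion, intensity score, target music features)."""
    probs = fuse(face, text, physio)
    idx = max(range(len(probs)), key=probs.__getitem__)
    emotion = EMOTIONS[idx]
    intensity = probs[idx]  # fused confidence doubles as the intensity score
    return emotion, intensity, EMOTION_TO_FEATURES[emotion]

emotion, intensity, targets = recommend_targets(
    face=[0.7, 0.1, 0.1, 0.1],
    text=[0.4, 0.2, 0.2, 0.2],
    physio=[0.5, 0.2, 0.1, 0.2],
)
print(emotion, round(intensity, 2), targets)
```

Running the pipeline continuously on fresh modality readings is what allows the playlist to track a changing emotional state rather than a static profile.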

Unlike traditional systems, the proposed approach emphasizes context-aware personalization, enabling continuous adaptation to changing user emotions. The system is conceptualized with a mathematically grounded scoring mechanism and an interpretable decision layer to ensure transparency and robustness. The proposed framework contributes to the advancement of affective computing in entertainment systems and provides a foundation for next-generation intelligent media platforms.
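One way such a scoring mechanism could be grounded, as a hedged sketch rather than the paper's actual formulation, is in a valence-arousal space in the spirit of circumplex models of affect: place the detected emotion and each candidate track at coordinates in [-1, 1]², and score tracks by their normalized closeness to the emotion, scaled by the intensity score. The coordinates and catalogue below are invented for illustration; the score is interpretable because it decomposes into a distance term and an intensity term.

```python
import math

# Illustrative valence-arousal coordinates, each axis in [-1, 1].
# These placements are assumptions, not empirically fitted values.
EMOTION_VA = {
    "happy": (0.8, 0.5),
    "sad":   (-0.7, -0.4),
    "angry": (-0.6, 0.7),
    "calm":  (0.4, -0.6),
}

def track_score(emotion, intensity, track_va):
    """Closeness in valence-arousal space, scaled by emotion intensity.

    The maximum possible distance in [-1, 1]^2 is 2*sqrt(2), so the
    score is normalised into [0, intensity].
    """
    ev, ea = EMOTION_VA[emotion]
    tv, ta = track_va
    dist = math.hypot(ev - tv, ea - ta)
    return intensity * (1.0 - dist / (2 * math.sqrt(2)))

# Hypothetical catalogue: track name -> (valence, arousal).
catalogue = {
    "upbeat_pop":  (0.9, 0.6),
    "slow_ballad": (-0.5, -0.5),
    "ambient":     (0.3, -0.7),
}

best = max(catalogue, key=lambda t: track_score("happy", 0.57, catalogue[t]))
print(best)  # "upbeat_pop" lies closest to the "happy" coordinates
```

Because both terms of the score are directly inspectable, a decision layer built on it can report why a track was chosen ("high arousal match, moderate intensity"), which is the kind of transparency the framework calls for.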



Published

2026-04-30

How to Cite

Barapatre, H., Barguje, V. S., Vacche, P. S., & Chaudhari, A. V. (2026). Emotion-Aware Adaptive Music Recommendation System Using Real-Time Affective State Analysis. International Journal of Engineering Science Technologies, 10(2), 82–96. https://doi.org/10.29121/ijoest.v10.i2.2026.755