CULTURAL ENTREPRENEURSHIP AND THE DIGITAL ECONOMY: EMERGING BUSINESS MODELS IN PERFORMING ARTS
DOI: https://doi.org/10.29121/shodhkosh.v7.i2s.2026.7361

Keywords: Cultural Entrepreneurship, Digital Economy, Performing Arts, Business Model, Innovation, Audience Engagement, Hybrid Models, Digital Platforms, Adversarial System

Abstract [English]
The rapid growth of the digital economy has given rise to new forms of cultural entrepreneurship and new business models in the performing arts industry. This paper examines how digital technologies contribute to business model development in the performing arts, particularly with respect to sustainability, audience interaction, and value creation. Drawing on theories of cultural entrepreneurship, digital entrepreneurship, and the platform economy, the paper presents a conceptual model that links cultural creativity with technological adoption and business model innovation. The study employs a mixed-method design, combining quantitative survey analysis with qualitative insights from interviews and case studies of practitioners and organizations in the performing arts sector. The results indicate that digital technology adoption is a significant mediator of the emergence of new business processes such as hybrid performances, subscription-based platforms, crowdfunding, and blockchain-enabled monetization. These models expand global reach, improve revenue diversification, and deepen audience interaction. The paper also demonstrates the moderating role of audience participation in strengthening the relationship between innovation and sustainability. Digital and hybrid models are found to outperform traditional models in scalability, accessibility, and long-term viability, although challenges of platform dependency, digital inequality, and intellectual property remain. The findings underscore the need to strategically combine digital tools with artistic practice to achieve both economic and cultural sustainability. The research contributes to the literature by providing a comprehensive framework for understanding the interaction of cultural entrepreneurship and the digital economy in the performing arts.
It offers practical lessons for artists, cultural organizations, and policymakers seeking to build and sustain viable creative enterprises in an increasingly digitalized world.
License
Copyright (c) 2026 Shopita khurana, Dr. Kamaljeet Kaur, Sukant Tiwari, Dr. Vikash Kumar, Dr. Ashok Bairagi, Dr. Jyotsana Thakur

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.