ADAPTIVE INTERFACES IN AI-POWERED ART GALLERIES
DOI: https://doi.org/10.29121/shodhkosh.v6.i3s.2025.6820

Keywords: Adaptive Interfaces, Artificial Intelligence, Art Galleries, Personalization, Human–Computer Interaction, Interactive Art

Abstract
The introduction of artificial intelligence (AI) into art galleries is transforming how visitors interact with artworks, enabling adaptive, personalized experiences that are more context-relevant. This paper discusses the design and deployment of adaptive interfaces within AI-assisted art spaces, examining how dynamic systems can tailor exhibition content to a specific user's profile, behavior, and emotional response. Through an in-depth review of the literature on interactive museum technologies, user experience design, and machine learning-based personalization, the paper identifies current gaps in visitor interaction and interface flexibility. Data were gathered with a mixed-method research design combining user observation, interviews, and analytics to understand interaction patterns and cognitive responses to adaptive systems. The proposed system architecture is based on user profiling, behavior tracking, and real-time content adjustment algorithms implemented with AI frameworks such as TensorFlow and OpenCV. Examples of existing AI-enhanced galleries and a prototype application deployed at a local exhibition show that adaptive interfaces hold significant potential to deepen viewer engagement and foster meaningful interactions with art. Evaluation measures such as dwell time, emotional resonance, and interaction diversity demonstrate a significant increase in user satisfaction and learning retention. The results underscore the role of AI-based adaptive interfaces as intelligent mediators between art and audience, which can be scaled to future digital curation activities and can contribute to inclusivity and accessibility in contemporary art environments.
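To make the described pipeline more concrete, the short sketch below illustrates one plausible building block of such an architecture: an OpenCV-based dwell-time tracker that selects between exhibit content variants for a gallery display. This is a minimal, hypothetical example rather than the authors' implementation; the variant names, dwell-time thresholds, and the select_variant helper are illustrative assumptions, and a production system would likely add TensorFlow-based emotion or profile models on top of this loop, as the abstract suggests.

# Minimal sketch (not the authors' implementation): OpenCV face detection used to
# estimate visitor dwell time and pick a content variant for an adaptive display.
# Variant names and thresholds below are illustrative assumptions.

import time
import cv2

# Haar cascade shipped with opencv-python; a richer system might substitute a
# TensorFlow model for visitor profiling or emotion recognition.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def select_variant(dwell_seconds: float) -> str:
    """Map observed dwell time to a content depth level (thresholds are illustrative)."""
    if dwell_seconds < 5:
        return "teaser"      # short caption for passers-by
    if dwell_seconds < 20:
        return "context"     # artist and period background
    return "deep_dive"       # curator commentary and related works

cap = cv2.VideoCapture(0)    # gallery-facing camera
dwell_start = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        dwell_start = dwell_start or time.time()   # start timing on first detection
        variant = select_variant(time.time() - dwell_start)
        print(f"showing content variant: {variant}")  # stand-in for the display update
    else:
        dwell_start = None   # visitor left; reset the dwell timer

    cv2.imshow("gallery-camera", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

In a deployed gallery, the print call would be replaced by the screen or projector controller, and the detection loop would also feed the dwell-time and interaction-diversity metrics used in the evaluation described above.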
Copyright (c) 2025 Dr. Tripti Sharma, Aakash Sharma, Syed Rashid Anwar, Kanika Seth, Tanya Singh, Devanand Choudhary

This work is licensed under a Creative Commons Attribution 4.0 International License.
With the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author.























