VISUAL STORYTELLING AND EXPLAINABLE INTELLIGENCE IN ORGANIZATIONAL CHANGE COMMUNICATION
DOI: https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6965

Keywords: Visual Storytelling, Explainable Artificial Intelligence, Organizational Change, Communication, Decision Transparency, Stakeholder Engagement, Human-Centered Intelligence

Abstract
Organizational change initiatives commonly fail because communication, trust, and stakeholders' understanding of complex strategic decisions are not aligned. This study examines the use of visual storytelling in conjunction with explainable artificial intelligence to strengthen organizational change communication through better sensemaking, transparency, and engagement across stakeholder groups. Visual storytelling structures abstract change narratives, data trends, and strategic rationales into coherent visual forms, including timelines, process maps, and narrative dashboards, that support intuitive understanding and emotional resonance. Explainable intelligence complements this approach by exposing the logic, assumptions, and data dependencies underlying AI-based recommendations in change planning, risk analysis, and performance projection. By combining human-oriented visuals with interpretable AI outputs, the proposed framework bridges the gap between analytical decision-making systems and human cognition. The research adopts a multi-layered communicative approach in which visual narratives situate explainable knowledge, enabling leaders to convey both what is being done and why the decisions are defensible. This strategy builds trust, reduces resistance, and keeps people informed throughout transformation processes. The proposed framework highlights essential dimensions such as interpretability, narrative coherence, cognitive-load reduction, and ethical transparency. The study contributes to organizational communication theory by positioning explainable intelligence as a communicative resource rather than a strictly technical characteristic. In practical terms, it offers design principles for implementing AI-aided visual narratives in change management and related leadership-communication applications, enabling enduring alignment, accountability, and shared organizational understanding during change in complex institutional settings.
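To make the framework's central idea concrete, the minimal sketch below (not from the paper) illustrates how the per-feature contributions of an interpretable model could be rendered as a plain-language caption on a narrative change dashboard. The data, feature names, the "change readiness" outcome, and the logistic-regression model are all illustrative assumptions introduced here for the example.

# Minimal sketch: turning an explainable model output into a narrative
# dashboard caption. All data, feature names, and the readiness framing
# are hypothetical assumptions, not taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical survey-derived indicators (standardized) for business units.
feature_names = ["leadership_trust", "workload_pressure",
                 "prior_change_fatigue", "training_coverage"]

# Synthetic training data: rows = business units, columns = indicators.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Synthetic label: readiness rises with trust and training, falls with fatigue.
y = (1.2 * X[:, 0] - 0.8 * X[:, 2] + 0.9 * X[:, 3]
     + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)

def explain_unit(x):
    """Per-feature contributions (coefficient * value) for one unit, largest first."""
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(contributions[i])) for i in order]

def narrative_caption(x):
    """Compose the top contributions into a plain-language dashboard caption."""
    top = explain_unit(x)[:2]
    parts = [f"{name.replace('_', ' ')} is "
             f"{'raising' if c > 0 else 'lowering'} projected readiness"
             for name, c in top]
    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    return f"Projected readiness: {prob:.0%}. Mainly because " + " and ".join(parts) + "."

unit = np.array([1.1, 0.2, -0.4, 0.8])  # one hypothetical unit's scores
print(narrative_caption(unit))

In this sketch the model's explanation is treated as communicative material rather than a technical by-product: the attribution values are ranked, translated into everyday wording, and attached to the visual element a stakeholder actually reads, which is the role the framework assigns to explainable intelligence.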
License
Copyright (c) 2025 Shikha Verma Kashyap, Shradha Purohit, Dr. Arvind Kumar, Farah Iqbal Mohd Jawaid, Dr. Jambi Ratna Raja Kumar, Dr. Samir N. Ajani

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.