UNMASKING DECEPTION IN THE AGE OF ARTIFICIAL INTELLIGENCE: A COMPREHENSIVE ANALYSIS OF INDIAN CELEBRITIES' DEEPFAKE NEWS
DOI: https://doi.org/10.29121/shodhkosh.v4.i2.2023.2268

Keywords: Deepfakes, Artificial Intelligence, Misinformation, Celebrities, Cybersecurity, Digital Literacy

Abstract [English]
The rapid advancement of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) has ushered in a new era of digital disruption, particularly in the domain of disinformation and content manipulation. Among the applications emerging from this progress, deepfakes have become a formidable challenge. Deepfakes are synthetic media productions, crafted through AI algorithms, that can seamlessly replace a person's likeness in videos or images. Their consequences are profound, encompassing the propagation of misinformation, damage to reputations, and erosion of trust in digital content. The rising number of deepfake news cases underscores a significant threat in the field of artificial intelligence. Mitigating this threat requires a comprehensive strategy built on awareness, education, technological advancement, and strong legal frameworks to safeguard identities and curtail the misuse of deepfakes. Key steps include the development of detection technologies, the establishment of clear legal guidelines, heightened public awareness, the empowerment of individuals, and the promotion of responsible AI use.
This paper conducts an in-depth analysis of three case studies involving prominent Indian celebrities affected by deepfake news: Rashmika Mandanna, Kajol Devgan, and Katrina Kaif. The primary objective of the research is to understand the key factors that determine the authenticity of such deepfake content, and thereby to combat the spread of misinformation by promoting responsible AI usage and fostering a culture of digital literacy. Through concerted efforts encompassing technological innovation, legal reform, public awareness, and individual empowerment, the researchers seek to counter the threat posed by deepfakes and uphold the integrity of digital discourse in the age of AI.
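To make the abstract's reference to "detection technologies" a little more concrete, the sketch below shows one weak, frame-level cue that automated detection pipelines may combine with many others: checking whether the face region's sharpness fluctuates abnormally across a clip. This is a minimal illustration only, assuming OpenCV and NumPy are available and using a hypothetical file name; it is not the detection method analysed or proposed in this paper.

```python
# Toy heuristic (illustrative only, not the authors' method): score the sharpness
# of the largest detected face in each frame and report how much it fluctuates.
import cv2
import numpy as np

def face_sharpness_scores(video_path: str, max_frames: int = 200) -> list[float]:
    """Return a Laplacian-variance (sharpness) score for the largest face per frame."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    scores: list[float] = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
        face = gray[y:y + h, x:x + w]
        scores.append(float(cv2.Laplacian(face, cv2.CV_64F).var()))
    cap.release()
    return scores

if __name__ == "__main__":
    s = face_sharpness_scores("suspect_clip.mp4")  # hypothetical file name
    if s:
        variation = np.std(s) / (np.mean(s) + 1e-9)
        print(f"frames scored: {len(s)}, relative sharpness variation: {variation:.2f}")
        # High variation is only a weak hint of manipulation, never proof on its own.
```

Real systems rely on far stronger signals (trained classifiers, physiological cues such as blink patterns, noise and compression analysis, provenance watermarking); the point of the sketch is simply that detection combines measurable, frame-level evidence rather than subjective impressions.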
License
Copyright (c) 2023 Dr. Jayanta Kumar Panda, Rajnandini Panigrahy

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY licence, authors retain copyright, allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution. The work must be properly attributed to its author.
It is not necessary to ask for further permission from the author or journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.