EXPLORING GANS IN DIGITAL MEDIA EDUCATION

Authors

  • Rahul Thakur Centre of Research Impact and Outcome, Chitkara University, Rajpura-140417, Punjab, India
  • Dr. Sonia Munjal Professor, Department of Master of Business Administration, Noida Institute of Engineering and Technology, Greater Noida, Uttar Pradesh, India
  • Abhinav Mishra Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, Solan, 174103, India
  • Dr. Ankita Gandhi Assistant Professor, Department of Computer Science and Engineering, Faculty of Engineering and Technology, Parul Institute of Engineering and Technology, Parul University, Vadodara, Gujarat, India
  • Mohit Malik Assistant Professor, School of Business Management, Noida International University, 203201, India
  • Dr. Zuleika Homavazir Professor, ISME - School of Management & Entrepreneurship, ATLAS SkillTech University, Mumbai, Maharashtra, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i1s.2025.6620

Keywords:

Generative Adversarial Networks, Digital Media Education, Computational Creativity, AI Literacy, Human–AI Collaboration, Ethical AI, Creative Pedagogy, Deep Learning in Education

Abstract [English]

The introduction of Generative Adversarial Networks (GANs) into digital media education marks a significant shift at the intersection of creativity, computation, and ethics. This paper examines GANs both as creative tools and as an educational model, focusing on their pedagogical and cognitive implications. A mixed-methods study was conducted across three institutions spanning design, technical, and vocational settings to assess the effects of GAN-based instruction on student creativity, technical aptitude, and ethical awareness. Quantitative findings showed considerable gains in students' understanding of neural architectures, model training, and data interpretation, although a substantial disparity remained between mean proficiency scores following exposure to the GAN-based modules. Qualitative findings indicated heightened learner engagement and the emergence of iterative, co-creative workflows that blend human and machine creativity. The modules also fostered ethical reasoning: students became more aware of deepfakes, data bias, and intellectual property concerns, developing into more responsible digital creators. GAN-based education thus promotes the creative autonomy and reflective judgment demanded of future professionals in creative industries increasingly shaped by AI automation, positioning them as co-creators with AI rather than passive users of it. The article recommends a systematic pedagogical framework that integrates GAN tools, project-based learning, and ethics coursework to build sustainable and inclusive creative curricula.
Finally, the paper positions GANs not only as a transformative technology in their own right but as a catalyst for reshaping creative teaching in the 21st century, capable of producing a generation of ethically aware, AI-literate creators.
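The adversarial setup that such GAN-based modules teach can be illustrated with a minimal sketch. The following toy example is a hypothetical illustration (not code from the study): a one-parameter generator competes against a logistic discriminator on one-dimensional data, and the generator's parameter is driven toward the mean of the real data, mirroring the generator/discriminator game of a full GAN.

```python
import math
import random

def train_toy_gan(mu=3.0, steps=3000, lr_d=0.05, lr_g=0.05, seed=0):
    """Minimal 1-D GAN sketch.

    Generator: a single parameter theta, emitted as the 'fake' sample.
    Discriminator: D(x) = sigmoid(w*x + b), trained to tell real samples
    (drawn near mu) from the generated value.
    """
    rng = random.Random(seed)
    sigmoid = lambda u: 1.0 / (1.0 + math.exp(-u))
    theta, w, b = 0.0, 0.0, 0.0  # generator parameter; discriminator weights
    for _ in range(steps):
        x_real = rng.gauss(mu, 0.1)  # sample from the "real" distribution
        x_fake = theta               # generator output
        # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
        d_real = sigmoid(w * x_real + b)
        d_fake = sigmoid(w * x_fake + b)
        w += lr_d * ((1 - d_real) * x_real - d_fake * x_fake)
        b += lr_d * ((1 - d_real) - d_fake)
        # Generator: gradient ascent on the non-saturating objective log D(fake)
        d_fake = sigmoid(w * theta + b)
        theta += lr_g * (1 - d_fake) * w
    return theta

print(train_toy_gan())  # theta should finish near mu
```

In classroom terms, the loop makes the "forger" and the "critic" take turns improving, which is the same iterative, co-creative dynamic the paper observes in students' GAN workflows, reduced to two scalar players.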

References

Ali, S., Payne, B. H., Williams, R., Park, H. W., and Breazeal, C. (2019). Constructionism, Ethics, and Creativity: Developing Primary and Middle School Artificial Intelligence Education. In Proceedings of the International Workshop on Education in Artificial Intelligence K–12

Chen, F. J., Zhu, F., Wu, Q. X., Hao, Y. M., Wang, E. D., and Cui, Y. G. (2021). A Review of Generative Adversarial Networks and their Applications in Image Generation. Journal of Computer Science, 44(2), 347–369.

Dorodchi, M., Al-Hossami, E., Benedict, A., and Demeter, E. (2019). Using Synthetic Data Generators to Promote Open Science in Higher Education Learning Analytics. In 2019 IEEE International Conference on Big Data (Big Data) (pp. 4672–4675). IEEE. https://doi.org/10.1109/BigData47090.2019.9006475

Grebo, B., Krstulović-Opara, L., and Domazet, Z. (2023). Thermal to Digital Image Correlation Image to Image Translation with CycleGAN and Pix2Pix. Materials Today: Proceedings, 93, 752–760. https://doi.org/10.1016/j.matpr.2023.06.219

Ha, D., and Eck, D. (2017). A Neural Representation of Sketch Drawings. arXiv.

Hatwar, L. R., Pohane, R. B., Bhoyar, S., and Padole, S. P. (2025). Mathematical Modeling on Decay of Radioactive Material Affects Cancer Treatment. International Journal of Research and Development Management Review, 14(1), 180–182. https://doi.org/10.65521/ijrdmr.v14i1.501

Johnson, J., Gupta, A., and Fei-Fei, L. (2018). Image Generation from Scene Graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1219–1228). https://doi.org/10.1109/CVPR.2018.00133

Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. (2020). Analyzing and Improving the Image Quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR42600.2020.00813

Kaur, D., Sobiesk, M., Patil, S., Liu, J., Bhagat, P., Gupta, A., and Markuzon, N. (2021). Application of Bayesian Networks to Generate Synthetic Health Data. Journal of the American Medical Informatics Association, 28(4), 801–811. https://doi.org/10.1093/jamia/ocaa303

Kim, H., Garrido, P., Tewari, A., Xu, W., Thies, J., Niessner, M., Perez, P., Richardt, C., Zollhöfer, M., and Theobalt, C. (2018). Deep Video Portraits. ACM Transactions on Graphics, 37(4), Article 163. https://doi.org/10.1145/3197517.3201283

Li, B., Qi, X., Lukasiewicz, T., and Torr, P. H. S. (2019). Controllable Text-To-Image Generation. In Advances in Neural Information Processing Systems (Vol. 32).

Lin, E. (2023). Comparative Analysis of Pix2Pix and CycleGAN for Image-To-Image Translation. Highlights in Science, Engineering and Technology, 39, 915–925. https://doi.org/10.54097/hset.v39i.6676

Lomas, N. (2020, August 17). Deepfake Video App Reface is Just Getting Started on Shapeshifting Selfie Culture. TechCrunch.

Machine Learning for Artists. (2016). Machine Learning for Artists.

Oppenlaender, J. (2022). The Creativity of Text-To-Image Generation. In Proceedings of the International Academic Mindtrek Conference (pp. 192–202). https://doi.org/10.1145/3569219.3569352

Wang, Z., She, Q., and Ward, T. E. (2021). Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy. ACM Computing Surveys, 54(2), Article 42. https://doi.org/10.1145/3439723

Published

2025-12-10

How to Cite

Thakur, R., Munjal, S., Mishra, A., Gandhi, A., Malik, M., & Homavazir, Z. (2025). EXPLORING GANS IN DIGITAL MEDIA EDUCATION. ShodhKosh: Journal of Visual and Performing Arts, 6(1s), 42–52. https://doi.org/10.29121/shodhkosh.v6.i1s.2025.6620