NEURAL STYLE TRANSFER AS DIGITAL REPRESENTATION THROUGH PAINTING USING DEEP LEARNING
DOI: https://doi.org/10.29121/granthaalayah.v14.i2SCE.2026.6704

Keywords: Neural Style Transfer (NST), Painting, Convolutional Neural Networks (CNNs)

Abstract
The intersection of art and technology, particularly the development of deep learning and artificial intelligence, has opened new avenues for creative expression. Neural Style Transfer (NST) is a deep-learning technique that applies an artistic style to digital images: it overlays visual elements such as texture, color patterns, and brushstrokes from one image onto the content of another, producing images that remain structurally recognizable yet are visually transformed.
This study explores the fundamental concepts, methodology, and artistic significance of neural style transfer. The technique is built on convolutional neural networks (CNNs), which can separate an image's content representation, captured by the activations of higher network layers, from its style representation, captured by correlations among feature maps. Through an optimization process that separates and then recombines these two representations, NST creates images that retain the structural content of the original image while adopting the artistic style of the reference artwork.
This research highlights the interdisciplinary impact of neural style transfer, demonstrating its relevance not only in digital art and graphic design, but also in fields such as animation, multimedia, and visual communication. Furthermore, NST serves as a bridge between traditional artistic methods and modern computational techniques, allowing artists, designers, and technologists to collaborate and explore new creative possibilities. The results show that neural style transfer is an important step toward integrating deep learning into creative practice.
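The optimization described in the abstract balances two objectives: a content loss that compares feature activations directly (preserving spatial structure), and a style loss that compares Gram matrices of feature correlations (matching texture statistics while discarding layout). The following minimal sketch illustrates only these two loss terms; it uses random arrays as stand-ins for the activations of a pretrained CNN such as VGG-19, and all function names are illustrative, not taken from any particular NST implementation.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-to-channel correlations that encode style (texture)
    while discarding spatial arrangement."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(gen_feat, content_feat):
    """Mean squared difference of raw activations: preserves layout."""
    return float(np.mean((gen_feat - content_feat) ** 2))

def style_loss(gen_feat, style_feat):
    """Mean squared difference of Gram matrices: matches texture statistics."""
    return float(np.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2))

# Toy feature maps standing in for CNN activations. In the full algorithm
# these would come from fixed layers of a pretrained network, and the
# generated image would be updated by gradient descent on the total loss.
rng = np.random.default_rng(0)
content_feat = rng.standard_normal((8, 16, 16))
style_feat = rng.standard_normal((8, 16, 16))
gen_feat = content_feat.copy()  # common initialization: start from the content

alpha, beta = 1.0, 1e3  # relative weighting of content vs. style
total = alpha * content_loss(gen_feat, content_feat) \
      + beta * style_loss(gen_feat, style_feat)
print(total)
```

In the full method, the pixels of the generated image (not these fixed arrays) are the optimization variables, and the weighted sum is minimized iteratively, typically with L-BFGS or Adam.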
License
Copyright (c) 2026 Mahesh Vishvakarma

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.