ENHANCING AESTHETIC APPEAL OF PRINTS USING NEURAL NETWORKS

Authors

  • Dr. Parag Amin, Professor, ISME - School of Management & Entrepreneurship, ATLAS SkillTech University, Mumbai, Maharashtra, India
  • Paramjit Baxi, Chitkara Centre for Research and Development, Chitkara University, Solan 174103, Himachal Pradesh, India
  • Jaspreet Sidhu, Centre of Research Impact and Outcome, Chitkara University, Rajpura 140417, Punjab, India
  • Dr. Suresh Kumar Lokhande, Department of Informatics, Osmania University, Hyderabad, Telangana, India
  • Dr. Ritesh Rastogi, Professor, Department of Information Technology, Noida Institute of Engineering and Technology, Greater Noida, Uttar Pradesh, India
  • Subhash Kumar Verma, Professor, School of Business Management, Noida International University, 203201, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i1s.2025.6647

Keywords:

Neural Networks, Image Aesthetics, Print Enhancement, Deep Learning, Digital Printing

Abstract [English]

The appearance of printed images is central to visual communication, product presentation, and art. Conventional print-enhancement methods, such as colour restoration, contrast adjustment, and texture refinement, typically rely on manual tuning or predefined filters, so they adapt poorly to different image types and printed media. The emergence of deep learning, and neural networks in particular, opens up automated and effective ways to improve the appearance of printed material. This study proposes a neural network-based approach that makes printed photographs more visually appealing by learning complex, non-linear enhancement mappings from large datasets. The process involves selecting appropriate datasets, preprocessing to balance variations in texture, tone, and colour, and designing a convolutional neural network (CNN) framework tailored to aesthetic evaluation. The model is trained with perceptual loss functions and compared against standard image-enhancement algorithms such as histogram equalisation. Experiments show better results than the standard approaches in terms of colour balance, pattern accuracy, and overall visual appeal. In addition, the proposed model performs well across a wide range of print media, including digital, letterpress, and textile printing.
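
Illustrative sketch. The pipeline described above (a CNN enhancement model trained with a perceptual loss and benchmarked against classical baselines such as histogram equalisation) can be outlined in code. The following is a minimal PyTorch sketch under stated assumptions: the residual CNN architecture, the VGG16 feature loss, the loss weights, and the names EnhancementCNN, PerceptualLoss, and train_step are illustrative choices, not the configuration reported in the paper.

# Minimal sketch of the kind of pipeline the abstract describes; architecture,
# layer widths, and loss weights are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights


class EnhancementCNN(nn.Module):
    """Small fully convolutional network mapping a print scan to an enhanced
    RGB image of the same resolution."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction so tone and colour are adjusted
        # without destroying the original print structure.
        return torch.clamp(x + self.net(x), 0.0, 1.0)


class PerceptualLoss(nn.Module):
    """VGG16 feature-space L1 loss, a common stand-in for 'perceptual loss'
    (ImageNet normalisation omitted for brevity)."""
    def __init__(self):
        super().__init__()
        features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
        for p in features.parameters():
            p.requires_grad_(False)
        self.features = features
        self.l1 = nn.L1Loss()

    def forward(self, pred, target):
        return self.l1(self.features(pred), self.features(target))


def train_step(model, batch, perceptual, optimizer, pixel_w=1.0, perc_w=0.1):
    # batch = (degraded print scan, aesthetically graded reference image)
    degraded, reference = batch
    optimizer.zero_grad()
    enhanced = model(degraded)
    loss = (pixel_w * nn.functional.l1_loss(enhanced, reference)
            + perc_w * perceptual(enhanced, reference))
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = EnhancementCNN()
    perceptual = PerceptualLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy tensors standing in for preprocessed print/reference pairs.
    batch = (torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128))
    print("loss:", train_step(model, batch, perceptual, optimizer))

Predicting a residual correction rather than a full image is one common design choice for enhancement networks: it keeps the print's structure intact while the network learns the tone and colour adjustments.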

Published

2025-12-10

How to Cite

Amin, P., Baxi, P., Sidhu, J., Lokhande, S. K., Rastogi, R., & Verma, S. K. (2025). ENHANCING AESTHETIC APPEAL OF PRINTS USING NEURAL NETWORKS. ShodhKosh: Journal of Visual and Performing Arts, 6(1s), 195–205. https://doi.org/10.29121/shodhkosh.v6.i1s.2025.6647