MACHINE LEARNING ALGORITHMS FOR CLASSIFYING VISUAL ARTISTIC STYLES ACROSS HISTORICAL PERIODS
DOI: https://doi.org/10.29121/shodhkosh.v7.i4s.2026.7464

Keywords: Artistic Style Classification, Machine Learning, Deep Learning, Convolutional Neural Networks, Image Feature Extraction, Visual Art Analysis

Abstract [English]
Grouping visual artistic styles across historical periods raises a broad spectrum of interdisciplinary research problems at the intersection of machine learning and art history. This paper introduces a unified methodology for automatic detection of artistic styles using both traditional machine learning algorithms and current deep learning architectures. First, handcrafted local feature extraction methods such as the Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), and Local Binary Patterns (LBP) capture low-level visual properties of the image, including texture, edges, and structural composition. These features are supplemented with color histograms and spatial composition descriptors to improve representational power. Multi-class style categorization is then performed with classical classifiers, including Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Random Forest. In parallel, deep learning models such as Convolutional Neural Networks (CNN), ResNet, VGG, and EfficientNet are trained to extract high-level abstract features directly from image data. The paper also examines hybrid methods that combine handcrafted and deep features to enhance classification accuracy. Empirical evidence shows that the deep learning models classify more effectively, achieving over 90 percent accuracy on standard art datasets. The proposed framework is effective and scalable for automated artistic style recognition, and it supports the preservation, curation, and analysis of digital art.
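The handcrafted-feature pipeline the abstract describes (low-level visual descriptors plus a classical classifier) can be illustrated with a minimal sketch. This is not the paper's implementation: as an assumption, a per-channel color histogram and a global HOG-style gradient-orientation histogram stand in for the full SIFT/HOG/LBP feature set, and a k-nearest-neighbor vote stands in for the SVM/KNN/Random Forest stage, using only NumPy:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel color histogram, concatenated and L1-normalized.
    img: H x W x 3 float array with values in [0, 1]."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def orientation_histogram(img, bins=9):
    """Global HOG-style descriptor: magnitude-weighted histogram of
    unsigned gradient orientations over the grayscale image."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold orientation into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def extract_features(img):
    """Concatenate color and texture/edge descriptors into one vector."""
    return np.concatenate([color_histogram(img), orientation_histogram(img)])

def knn_predict(train_feats, train_labels, feat, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - feat, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

A production system would replace these toy descriptors with SIFT or LBP features from a library such as OpenCV and swap the voting rule for an SVM or Random Forest, as the abstract outlines; the overall extract-then-classify structure stays the same.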
License
Copyright (c) 2026 Dr. Pratibha V. Kashid, Ganesh Korwar, Kiran Shyam Khandare, Baoxin Le, Pramod Rahate, Nikita P. Katariya, Suresh Arumugam

This work is licensed under a Creative Commons Attribution 4.0 International License.
With the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author.
It is not necessary to ask for further permission from the author or journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.