SALIENT AREA DISCERNMENT VIA HIGH DIMENSIONAL COLOR TRANSFORM AND LOCAL SPECIAL PLATFORM

  • P. Santhiya, PG & Research Department of Computer Science, Tiruppur Kumaran College for Women, India
  • S. Selvi, Department of Computer Applications, Tiruppur Kumaran College for Women, India
Keywords: Salient Region Detection, Superpixel, Saliency Map, Trimap, High-Dimensional Color Space, Fourier Transform

Abstract

Detecting visually salient regions in images is a fundamental problem, useful for applications such as image segmentation, adaptive compression, and object recognition. A salient object region is a soft decomposition of an image into foreground and background elements. We detect salient regions in an image in terms of saliency maps, where each saliency map is created as a linear combination of colors in a high-dimensional color space. To improve the performance of saliency estimation, we utilize the relative location and color contrast between superpixels, and we resolve the saliency estimation from a trimap using a learning-based algorithm. This approach is based on the observation that salient regions frequently have distinct colors compared with the background in human perception; human perception, however, is complicated and highly nonlinear. Experimental results on three benchmark datasets show that our approach is competitive with prior state-of-the-art saliency estimation methods. Finally, our salient region detection outputs a full-resolution saliency map with well-defined boundaries of the salient object.
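The pipeline the abstract describes (high-dimensional color features, a trimap of definite foreground/background/unknown pixels, and a learned linear combination that scores every pixel) can be sketched as below. This is a minimal toy version in NumPy, not the paper's implementation: the feature stack here is just the RGB channels raised to a few powers, whereas the actual high-dimensional color transform also stacks channels such as CIELab, hue/saturation, and gradients, and the function and parameter names (`hd_color_features`, `gammas`) are illustrative.

```python
import numpy as np

def hd_color_features(img, gammas=(0.5, 1.0, 2.0)):
    """Per-pixel color features: each RGB channel raised to several
    powers -- a toy stand-in for the high-dimensional color transform."""
    h, w, _ = img.shape
    feats = [img.reshape(h * w, 3) ** g for g in gammas]
    return np.hstack(feats)                 # shape: (h*w, 3 * len(gammas))

def saliency_from_trimap(img, trimap):
    """Fit a linear combination of color features on the trimap's
    definite foreground (1) / background (0) pixels, then apply it to
    all pixels; unknown pixels are marked -1 and excluded from fitting."""
    feats = hd_color_features(img)
    labels = trimap.reshape(-1)
    known = labels >= 0
    # least-squares fit: feats[known] @ w ~= labels[known]
    w, *_ = np.linalg.lstsq(feats[known], labels[known].astype(float),
                            rcond=None)
    sal = np.clip(feats @ w, 0.0, 1.0)      # saliency in [0, 1]
    return sal.reshape(img.shape[:2])       # full-resolution saliency map

# Toy image: bright red square (salient) on a black background.
img = np.zeros((8, 8, 3))
img[2:6, 2:6, 0] = 1.0
trimap = np.full((8, 8), -1)                # -1 = unknown
trimap[3:5, 3:5] = 1                        # definite foreground seed
trimap[0, :] = 0                            # definite background seed
sal = saliency_from_trimap(img, trimap)
```

Because the linear model is fit only on the labeled seeds but evaluated everywhere, the saliency map generalizes to the "unknown" red pixels outside the trimap's foreground seed, which is the point of resolving the trimap with a learned combination.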


References

G. Li and Y. Yu, “Deep contrast learning for salient object detection,” in Proc. CVPR, 2016. DOI: https://doi.org/10.1109/CVPR.2016.58

M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, “Global contrast based salient region detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2011, pp. 409–416. DOI: https://doi.org/10.1109/CVPR.2011.5995344

L. Wang, H. Lu, X. Ruan, and M.-H. Yang, “Deep networks for saliency detection via local estimation and global search,” in Proc. CVPR, 2015, pp. 3183–3192. DOI: https://doi.org/10.1109/CVPR.2015.7298938

R. Zhao, W. Ouyang, H. Li, and X. Wang, “Saliency detection by multi-context deep learning,” in Proc. CVPR, 2015, pp. 1265–1274. DOI: https://doi.org/10.1109/CVPR.2015.7298731

Z. Yan, H. Zhang, R. Piramuthu, V. Jagadeesh, D. DeCoste, W. Di, and Y. Yu, “HD-CNN: Hierarchical deep convolutional neural networks for large scale visual recognition,” in Proc. ICCV, 2015, pp. 2740–2748. DOI: https://doi.org/10.1109/ICCV.2015.314

S. Xie and Z. Tu, “Holistically-nested edge detection,” arXiv preprint arXiv:1504.06375, 2015.

K. Wang, L. Lin, J. Lu, C. Li, and K. Shi, “PISA: Pixelwise image saliency by aggregating complementary appearance contrast measures with edge-preserving coherence,” IEEE Trans. Image Process., vol. 24, no. 10, pp. 3019–3033, Oct. 2015. DOI: https://doi.org/10.1109/TIP.2015.2432712

W. Zhu, S. Liang, Y. Wei, and J. Sun, “Saliency optimization from robust background detection,” in Proc. CVPR, 2014, pp. 2814–2821. DOI: https://doi.org/10.1109/CVPR.2014.360

Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille, “The secrets of salient object segmentation,” in Proc. CVPR, 2014, pp. 280–287. DOI: https://doi.org/10.1109/CVPR.2014.43

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.

J. Mairal, F. Bach, and J. Ponce, “Sparse modeling for image and vision processing,” arXiv preprint arXiv:1411.3230, 2014. DOI: https://doi.org/10.1561/9781680830095

J. Kim, D. Han, Y.-W. Tai, and J. Kim, “Salient region detection via high-dimensional color transform,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2014, pp. 883–890. DOI: https://doi.org/10.1109/CVPR.2014.118

C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, “Saliency detection via graph-based manifold ranking,” in Proc. CVPR, 2013, pp. 3166–3173. DOI: https://doi.org/10.1109/CVPR.2013.407

P. Wang, G. Zeng, R. Gan, J. Wang, and H. Zha, “Structure-sensitive superpixels via geodesic distance,” Int. J. Comput. Vis., vol. 103, no. 1, pp. 1–21, 2013. DOI: https://doi.org/10.1007/s11263-012-0588-6

R. Wu, Y. Yu, and W. Wang, “SCALE: Supervised and cascaded Laplacian eigenmaps for visual object recognition based on nearest neighbors,” in Proc. CVPR, 2013. DOI: https://doi.org/10.1109/CVPR.2013.117

X. Ren and D. Ramanan, “Histograms of sparse codes for object detection,” in Proc. CVPR, 2013, pp. 3246–3253.

P. Siva, C. Russell, T. Xiang, and L. Agapito, “Looking beyond the image: Unsupervised learning for object saliency and detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2013, pp. 3238–3245. DOI: https://doi.org/10.1109/CVPR.2013.416

X. Bai, B. Shi, C. Zhang, X. Cai, and L. Qi, “Text/non-text image classification in the wild with convolutional neural networks,” Pattern Recognition, vol. 66, pp. 437–446, 2017. DOI: https://doi.org/10.1016/j.patcog.2016.12.005

Published
2018-10-31
How to Cite
P. Santhiya, & S. Selvi. (2018). SALIENT AREA DISCERNMENT VIA HIGH DIMENSIONAL COLOR TRANSFORM AND LOCAL SPECIAL PLATFORM. International Journal of Engineering Technologies and Management Research, 5(10), 17-24. https://doi.org/10.29121/ijetmr.v5.i10.2018.298