SALIENT AREA DISCERNMENT VIA HIGH DIMENSIONAL COLOR TRANSFORM AND LOCAL SPATIAL PLATFORM
DOI:
https://doi.org/10.29121/ijetmr.v5.i10.2018.298

Keywords:
Salient Region Detection, Superpixel, Saliency Map, Trimap, High-Dimensional Color Space, Fourier Transform

Abstract
Detecting visually salient regions in images is a fundamental problem, useful for applications such as image segmentation, adaptive compression, and object recognition. A salient object region is a soft decomposition of an image into foreground and background elements. We detect salient regions in an image in the form of saliency maps, creating a saliency map as a linear combination of colors in a high-dimensional color space. To improve the performance of saliency estimation, we exploit the relative location and color contrast between superpixels, and we resolve the saliency estimation from a trimap using a learning-based algorithm. The approach is based on the observation that salient regions frequently have distinctive colors compared with their backgrounds in human perception; however, human perception is complicated and highly nonlinear. Experimental results on three benchmark datasets show that our approach compares favorably with prior state-of-the-art saliency estimation methods. Finally, our salient region detection outputs a full-resolution saliency map with well-defined boundaries around the salient object.
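The core idea described above — estimating per-pixel saliency as a linear combination of features in a high-dimensional color space, with the combination weights fit from the definite-foreground and definite-background pixels of a trimap — can be sketched in a few lines of NumPy. The feature set, the least-squares fitting step, and the toy image below are illustrative stand-ins, not the paper's exact transform or learning algorithm:

```python
import numpy as np

def color_features(img):
    """Lift an RGB image (H, W, 3) with values in [0, 1] into a
    higher-dimensional color feature space: raw channels plus simple
    nonlinear transforms (a stand-in for the full color transform)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    feats = [r, g, b, r * g, g * b, r * b,
             np.sqrt(r), np.sqrt(g), np.sqrt(b),
             np.ones_like(r)]           # bias term
    return np.stack(feats, axis=-1)     # shape (H, W, D)

def saliency_from_trimap(img, trimap):
    """trimap: 1 = definite foreground, 0 = definite background,
    -1 = unknown. Fit linear weights on the known pixels by least
    squares, then apply them to every pixel of the image."""
    feats = color_features(img)
    known = trimap >= 0
    X = feats[known]                    # (N_known, D)
    y = trimap[known].astype(float)     # (N_known,)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    sal = feats @ w                     # (H, W) linear combination
    return np.clip(sal, 0.0, 1.0)       # full-resolution saliency map

# Toy example: a red square (foreground) on a blue background,
# with one seed pixel of each class known in the trimap.
img = np.zeros((8, 8, 3))
img[..., 2] = 1.0             # blue background
img[2:6, 2:6] = [1.0, 0, 0]   # red foreground square
trimap = -np.ones((8, 8))     # everything unknown ...
trimap[0, 0] = 0              # ... except one background seed
trimap[3, 3] = 1              # ... and one foreground seed
sal = saliency_from_trimap(img, trimap)
```

With only two labeled seeds the system is underdetermined, so `lstsq` returns the minimum-norm weight vector; on this toy image that is enough to push the red square toward 1 and the blue background toward 0, illustrating how trimap labels propagate to unknown pixels that share the foreground's color.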
License and Copyright Agreement
In submitting the manuscript to the journal, the authors certify that:
- They are authorized by their co-authors to enter into these arrangements.
- The work described has not been formally published before, except in the form of an abstract or as part of a published lecture, review, thesis, or overlay journal.
- That it is not under consideration for publication elsewhere.
- That its release has been approved by all the author(s) and by the responsible authorities – tacitly or explicitly – of the institutes where the work has been carried out.
- They secure the right to reproduce any material that has already been published or copyrighted elsewhere.
- They agree to the following license and copyright agreement.
Copyright
Authors who publish with International Journal of Engineering Technologies and Management Research agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0) that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors can enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) before and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.