Research Article

Inverse Bilateral Filter for Saliency

by Dao Nam Anh
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 118 - Number 10
Year of Publication: 2015
Authors: Dao Nam Anh
DOI: 10.5120/20780-3343

Dao Nam Anh. Inverse Bilateral Filter for Saliency. International Journal of Computer Applications 118, 10 (May 2015), 11-19. DOI=10.5120/20780-3343

@article{10.5120/20780-3343,
  author     = {Dao Nam Anh},
  title      = {Inverse Bilateral Filter for Saliency},
  journal    = {International Journal of Computer Applications},
  issue_date = {May 2015},
  volume     = {118},
  number     = {10},
  month      = {May},
  year       = {2015},
  issn       = {0975-8887},
  pages      = {11-19},
  numpages   = {9},
  url        = {https://ijcaonline.org/archives/volume118/number10/20780-3343/},
  doi        = {10.5120/20780-3343},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Dao Nam Anh
%T Inverse Bilateral Filter for Saliency
%J International Journal of Computer Applications
%@ 0975-8887
%V 118
%N 10
%P 11-19
%D 2015
%I Foundation of Computer Science (FCS), NY, USA
Abstract

The analysis and automatic detection of visually salient image regions have been the subject of considerable research, with applications in object segmentation, adaptive compression, and re-targeting. However, the nature of the essential mechanisms underlying human visual saliency remains elusive, and some form of prior model is consistently required to assess the validity of salient regions. This paper proposes a new model using an inverse bilateral filter that allows the system to output saliency maps with salient objects in their context. The filter is first applied to automatically learn the local contrast distribution and accurately predict salient image regions. In addition to this contrast distribution check, local opposition is analyzed by a second application of the inverse bilateral filter to establish a fuzzy boundary of the salient regions in the form of a trimap. This approach is shown to increase the reliability of identifying visually salient objects. The output of this research has potential applications in object detection and recognition.
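
The abstract does not spell out the filter's formulation, so the sketch below is only one plausible reading, not the paper's method: a bilateral-style window whose range kernel is inverted (1 - exp(...)), so that strongly dissimilar neighbours contribute most and the normalized response acts as a local-contrast saliency score. The second filter pass described in the abstract is replaced here by simple double thresholding to carve the map into a foreground/unknown/background trimap; the function names, parameters, and the lo/hi thresholds are all illustrative assumptions.

```python
# Minimal sketch of an "inverse" bilateral filter used as a saliency score,
# assuming the range kernel is inverted so dissimilar neighbours weigh more.
# NOT the paper's exact formulation.
import numpy as np

def inverse_bilateral_saliency(img, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Local-contrast saliency for a grayscale image with values in [0, 1]."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    # Spatial Gaussian weights over the (2*radius+1)^2 window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    sal = np.zeros_like(img)
    norm = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            # Inverted range kernel: large when the neighbour differs strongly.
            range_k = 1.0 - np.exp(-((shifted - img) ** 2) / (2.0 * sigma_r ** 2))
            w_s = spatial[dy + radius, dx + radius]
            sal += w_s * range_k
            norm += w_s
    return sal / norm

def trimap_from_saliency(sal, lo=0.3, hi=0.6):
    """Fuzzy boundary as a trimap: 0 = background, 0.5 = unknown, 1 = salient.
    The lo/hi thresholds are arbitrary choices for this sketch."""
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)
    tri = np.full(sal.shape, 0.5)
    tri[sal < lo] = 0.0
    tri[sal > hi] = 1.0
    return tri

if __name__ == "__main__":
    gen = np.random.default_rng(0)
    img = 0.2 * gen.random((64, 64))
    img[20:44, 20:44] += 0.6 * gen.random((24, 24))  # textured, high-contrast patch
    sal = inverse_bilateral_saliency(np.clip(img, 0.0, 1.0))
    tri = trimap_from_saliency(sal)
    print("definitely salient:", int((tri == 1.0).sum()),
          "| unknown (fuzzy boundary):", int((tri == 0.5).sum()))
```

Inverting the range kernel is the key design choice in this reading: where the standard bilateral filter suppresses contributions from dissimilar pixels to smooth an image, the inverted kernel amplifies them, so the same windowed machinery measures local contrast instead of removing it.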

Index Terms

Computer Science
Information Sciences

Keywords

Inverse bilateral filter, saliency, contrast, trimap.