Research Article

Single Image Haze Removal Algorithm using Color Attenuation Prior and Multi-Scale Fusion

by Krati Katiyar, Neha Verma
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 141 - Number 10
Year of Publication: 2016
10.5120/ijca2016909827

Krati Katiyar, Neha Verma. Single Image Haze Removal Algorithm using Color Attenuation Prior and Multi-Scale Fusion. International Journal of Computer Applications. 141, 10 (May 2016), 37-42. DOI=10.5120/ijca2016909827

@article{ 10.5120/ijca2016909827,
author = { Krati Katiyar, Neha Verma },
title = { Single Image Haze Removal Algorithm using Color Attenuation Prior and Multi-Scale Fusion },
journal = { International Journal of Computer Applications },
issue_date = { May 2016 },
volume = { 141 },
number = { 10 },
month = { May },
year = { 2016 },
issn = { 0975-8887 },
pages = { 37-42 },
numpages = { 6 },
url = { https://ijcaonline.org/archives/volume141/number10/24823-2016909827/ },
doi = { 10.5120/ijca2016909827 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Krati Katiyar
%A Neha Verma
%T Single Image Haze Removal Algorithm using Color Attenuation Prior and Multi-Scale Fusion
%J International Journal of Computer Applications
%@ 0975-8887
%V 141
%N 10
%P 37-42
%D 2016
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This paper compares two single image haze removal methods: Fast Single Image Haze Removal (FSIHR) using the Color Attenuation Prior (CAP), and Multi-Scale Fusion (MSF). Single image haze removal is a challenging problem because it is ill-posed. FSIHR relies on a simple but powerful color attenuation prior to remove haze from a single hazy input image. The MSF method is a fusion-based approach that derives two inputs from the original hazy image by applying a white-balance and a contrast-enhancing process. To merge the information of the derived inputs effectively and preserve the regions with good visibility, it weights their important features by computing three measures (weight maps): luminance (Y), chromaticity (C), and saliency (S). FSIHR with CAP builds a linear model of scene depth for the hazy image and learns its parameters with a supervised learning method, so the depth information can be recovered well. Given the depth map of the hazy image, the transmission can be estimated and the scene radiance restored via the atmospheric scattering model, thereby efficiently removing the haze from a single image. The MSF method is faster than existing single image dehazing strategies and yields precise results.
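The CAP pipeline the abstract describes (linear depth model, exponential transmission, inversion of the atmospheric scattering model I = J·t + A·(1 − t)) can be sketched as below. This is an illustrative simplification, not the paper's implementation: the `theta` coefficients are the values reported by Zhu et al. for their learned linear model, while `beta` (scattering coefficient), `t_min`, and the 0.1% depth-percentile estimate of atmospheric light are assumed defaults.

```python
import numpy as np

def cap_dehaze(img, theta=(0.121779, 0.959710, -0.780245), beta=1.0, t_min=0.1):
    """Sketch of CAP dehazing. img: float RGB array in [0, 1], shape HxWx3."""
    # Brightness (HSV value) and saturation of each pixel.
    v = img.max(axis=2)
    mn = img.min(axis=2)
    s = np.where(v > 0, (v - mn) / np.maximum(v, 1e-6), 0.0)
    # Color attenuation prior: depth is modeled as a linear function of
    # brightness and saturation, d(x) = theta0 + theta1*v(x) + theta2*s(x).
    d = theta[0] + theta[1] * v + theta[2] * s
    # Transmission from the scattering model: t(x) = exp(-beta * d(x)),
    # clamped below to avoid over-amplifying noise in dense haze.
    t = np.clip(np.exp(-beta * d), t_min, 1.0)
    # Atmospheric light A: mean color of the most distant (deepest) pixels.
    k = max(1, int(0.001 * d.size))
    idx = np.argpartition(d.ravel(), -k)[-k:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Invert I = J*t + A*(1 - t) to recover the scene radiance J.
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

The published method additionally smooths the raw depth map (e.g. with guided filtering) before computing the transmission; that refinement is omitted here for brevity.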

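The three MSF weight maps (luminance, chromaticity, saliency) and the weighted blend can likewise be sketched. The formulas below are illustrative simplifications of the fusion-based approach: the saliency term is a crude luminance-deviation stand-in for the saliency detector used in the literature, `sigma` is an assumed constant, and the blend shown is single-scale, whereas the actual method fuses with a multi-scale Laplacian pyramid to avoid halo artifacts.

```python
import numpy as np

def msf_weight_maps(inp, sigma=0.3):
    """Weight maps for one derived input (float RGB in [0, 1], HxWx3)."""
    lum = inp.mean(axis=2)
    # Luminance weight: std. deviation of the channels around the luminance,
    # favoring pixels with well-balanced, visible detail.
    w_lum = np.sqrt(((inp - lum[..., None]) ** 2).mean(axis=2))
    # Chromaticity weight: closeness of saturation to its maximum (1.0).
    v = inp.max(axis=2)
    sat = np.where(v > 0, (v - inp.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
    w_chr = np.exp(-((sat - 1.0) ** 2) / (2 * sigma ** 2))
    # Saliency weight: deviation of luminance from the global mean
    # (a simple stand-in for a dedicated saliency detector).
    w_sal = np.abs(lum - lum.mean())
    return w_lum, w_chr, w_sal

def fuse(inputs, eps=1e-6):
    """Blend the derived inputs with their normalized per-pixel weights."""
    ws = [np.prod(msf_weight_maps(i), axis=0) + eps for i in inputs]
    total = np.sum(ws, axis=0)
    return sum(w[..., None] * i for w, i in zip(ws, inputs)) / total[..., None]
```

In the full method the two inputs passed to `fuse` would be the white-balanced and contrast-enhanced versions of the hazy image mentioned in the abstract.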
References
  1. G. A. Woodell, D. J. Jobson, Z.-U. Rahman, and G. Hines, “Advanced image processing of aerial imagery,” Proc. SPIE, vol. 6246, p. 62460E, May 2006.
  2. L. Shao, L. Liu, and X. Li, “Feature learning for image classification via multiobjective genetic programming,” IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1359–1371, Jul. 2014.
  3. F. Zhu and L. Shao, “Weakly-supervised cross-domain dictionary learning for visual recognition,” Int. J. Comput. Vis., vol. 109, nos. 1–2, pp. 42–59, Aug. 2014.
  4. Y. Luo, T. Liu, D. Tao, and C. Xu, “Decomposition-based transfer distance metric learning for image classification,” IEEE Trans. Image Process., vol. 23, no. 9, pp. 3789–3801, Sep. 2014.
  5. D. Tao, X. Li, X. Wu, and S. J. Maybank, “Geometric mean for subspace selection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 260–274, Feb. 2009.
  6. J. Han et al., “Representing and retrieving video shots in human-centric brain imaging space,” IEEE Trans. Image Process., vol. 22, no. 7, pp. 2723–2736, Jul. 2013.
  7. J. Han, K. Ngan, M. Li, and H.-J. Zhang, “A memory learning framework for effective image retrieval,” IEEE Trans. Image Process., vol. 14, no. 4, pp. 511–524, Apr. 2005.
  8. D. Tao, X. Tang, X. Li, and X. Wu, “Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 7, pp. 1088–1099, Jul. 2006.
  9. J. Han, D. Zhang, G. Cheng, L. Guo, and J. Ren, “Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning,” IEEE Trans. Geosci. Remote Sens., vol. 53, no. 6, pp. 3325–3337, Jun. 2015.
  10. G. Cheng et al., “Object detection in remote sensing imagery using a discriminatively trained mixture model,” ISPRS J. Photogramm. Remote Sens., vol. 85, pp. 32–43, Nov. 2013.
  11. J. Han et al., “Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding,” ISPRS J. Photogramm. Remote Sens., vol. 89, pp. 37–48, Mar. 2014.
  12. L. Liu and L. Shao, “Learning discriminative representations from RGB-D video data,” in Proc. Int. Joint Conf. Artif. Intell., Beijing, China, 2013, pp. 1493–1500.
  13. D. Tao, X. Li, X. Wu, and S. J. Maybank, “General tensor discriminant analysis and Gabor features for gait recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 10, pp. 1700–1715, Oct. 2007.
  14. Z. Zhang and D. Tao, “Slow feature analysis for human action recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 3, pp. 436–450, Mar. 2012.
  15. R. Fattal, “Single image dehazing,” ACM Trans. Graph., vol. 27, no. 3, p. 72, Aug. 2008.
  16. P. S. Chavez, Jr., “An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data,” Remote Sens. Environ., vol. 24, no. 3, pp. 459–479, Apr. 1988.
  17. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
  18. L. Breiman, “Random forests,” Mach. Learn., vol. 45, no. 1, pp. 5–32, Oct. 2001.
  19. Q. Zhu, J. Mai, and L. Shao, “Single image dehazing using color attenuation prior,” in Proc. Brit. Mach. Vis. Conf. (BMVC), Nottingham, U.K., 2014, pp. 1–10.
  20. E. J. McCartney, Optics of the Atmosphere: Scattering by Molecules and Particles. New York, NY, USA: Wiley, 1976.
  21. S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), vol. 2. Sep. 1999, pp. 820–827.
  22. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713–724, Jun. 2003.
  23. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis., vol. 48, no. 3, pp. 233–254, Jul. 2002.
  24. S. G. Narasimhan and S. K. Nayar, “Removing weather effects from monochrome images,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2001, pp. II-186–II-193.
  25. J.-P. Tarel, N. Hautière, L. Caraffa, A. Cord, H. Halmaoui, and D. Gruyer, “Vision enhancement in homogeneous and heterogeneous fog,” IEEE Intell. Transp. Syst. Mag., vol. 4, no. 2, pp. 6–20, Apr. 2012.
  26. C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A fast semi-inverse approach to detect and remove the haze from a single image,” in Proc. Asian Conf. Comput. Vis. (ACCV), 2010, pp. 501–514.
  27. K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2014, pp. 2995–3002.
  28. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
  29. A. J. Preetham, P. Shirley, and B. Smits, “A practical analytic model for daylight,” in Proc. ACM Special Interest Group Comput. Graph. (SIGGRAPH), 1999, pp. 91–100.
Index Terms

Computer Science
Information Sciences

Keywords

Dehazing, image defogging, image restoration, depth estimation.