International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 138 - Number 5
Year of Publication: 2016
Authors: Kiranjeet Kaur, Gurinder Singh
DOI: 10.5120/ijca2016908826
Kiranjeet Kaur, Gurinder Singh. Multi-Focus Image Fusion in Transform Domain using Steerable Pyramids. International Journal of Computer Applications. 138, 5 (March 2016), 14-20. DOI=10.5120/ijca2016908826
In the optical lenses of conventional cameras, depth of field is restricted to a particular range. Therefore, only objects at a particular distance from the camera are captured clearly and in proper focus, whereas objects in front of or behind the focal plane remain defocused and blurred. However, to interpret and analyze images accurately, it is desirable to obtain images with every object in focus. Multi-focus image fusion is an effective technique for solving this problem: two or more images of the same scene, taken with different focus settings, are combined into a single all-in-focus image with extended depth of field, which is very useful for human or machine perception. The main drawbacks of pixel-based fusion methods are misalignment of the decision map with the boundaries of focused objects and wrong decisions in sub-regions of the focused or defocused regions, which produce undesirable artifacts in the final fused image. Frequency-domain methods are therefore preferred over spatial-domain methods. In previous years, many kinds of multi-scale transforms have been proposed and adopted for image fusion, such as pyramid decomposition, the discrete wavelet transform (DWT), the dual-tree complex wavelet transform (DTCWT), and the discrete cosine harmonic wavelet transform (DCHWT). These transforms are widely used, as wavelets have become the dominant filters in most techniques, but they still have drawbacks: wavelets lack translation invariance, especially for two-dimensional (2D) signals, and have poor orientation selectivity. The steerable pyramid transform overcomes these limitations, since the orientation can be chosen before the filters are applied. We have applied this method to achieve multi-focus fusion of images that were captured with different focus areas.
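The transform-domain fusion described above can be sketched in a few lines of NumPy. For brevity this sketch substitutes a simple Laplacian-style pyramid for the steerable pyramid (the steerable pyramid adds oriented band-pass sub-bands, but the fusion rule is the same: keep the coefficient with the larger magnitude in each detail band and average the low-pass residual). All function names here are ours, not from the paper.

```python
import numpy as np

def blur(img, k=np.array([1., 4., 6., 4., 1.]) / 16):
    """Separable binomial low-pass filter with edge replication."""
    pad = len(k) // 2
    tmp = np.pad(img, ((pad, pad), (0, 0)), mode="edge")
    tmp = sum(k[i] * tmp[i:i + img.shape[0], :] for i in range(len(k)))
    tmp = np.pad(tmp, ((0, 0), (pad, pad)), mode="edge")
    return sum(k[i] * tmp[:, i:i + img.shape[1]] for i in range(len(k)))

def pyramid(img, levels):
    """Laplacian-style decomposition: detail bands + low-pass residual."""
    bands, cur = [], img.astype(float)
    for _ in range(levels):
        low = blur(cur)
        bands.append(cur - low)   # band-pass detail at this scale
        cur = low[::2, ::2]       # downsample the low-pass image
    bands.append(cur)             # coarsest low-pass residual
    return bands

def fuse(a, b, levels=2):
    """Fuse two registered source images: choose-max detail, average residual."""
    pa, pb = pyramid(a, levels), pyramid(b, levels)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y)
             for x, y in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)
    out = fused[-1]
    for band in reversed(fused[:-1]):      # reconstruct coarse-to-fine
        up = np.zeros_like(band)
        up[::2, ::2] = out                 # zero-insertion upsampling
        out = band + 4 * blur(up)          # interpolate and add detail
    return out
```

A steerable-pyramid version would replace `pyramid` with an oriented decomposition (e.g. via the `pyrtools` package) while keeping the same choose-max rule on each oriented sub-band.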
Experimental results show that the proposed method effectively carries out the fusion process; its performance has been evaluated with several parameters, namely mutual information, the QABF factor (which measures edge preservation), and entropy.
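Two of the evaluation metrics named above, entropy and mutual information, can be computed directly from grey-level histograms. A minimal NumPy sketch (function names are ours; the paper does not specify an implementation):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """MI between two images: H(A) + H(B) - H(A,B) over the joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pj = joint / joint.sum()
    pa, pb = pj.sum(axis=1), pj.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return h(pa) + h(pb) - h(pj)
```

For fusion quality, MI is typically evaluated between each source image and the fused result and then summed, so that a higher total indicates more source information carried into the fused image.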