Abstract

A novel image fusion algorithm based on bidimensional empirical mode decomposition (BEMD), applied to multi-focus color microscopic images, is proposed in this paper. The fusion scheme is implemented in the YIQ color model, aiming at a balanced result between local feature enhancement and global tonality rendition. In the proposed algorithm, the luminance component is decomposed by BEMD, which performs a fully two-dimensional decomposition adaptively, without an a priori basis. Each IMF component is fused with the local significance principle fusion rule, and the Residue component is fused with the principal component analysis method. Thanks to the superior ability of BEMD to extract salient features, the proposed algorithm achieves better fusion results both in extracting in-focus information and in eliminating blur. Experimental results demonstrate that the proposed algorithm outperforms the popular fusion algorithm based on the wavelet transform. The use of different color models in the proposed algorithm is also discussed, and the YIQ color model is shown to be the most suitable.

©2010 Optical Society of America

1. Introduction

In microscopy, high magnifications are achievable for investigating micro-objects, but the higher the required magnification, the shallower the depth of focus. In practice, the depth-of-field of a microscope, depending on the conditions of use, is often insufficient to capture a single image in which the whole longitudinal volume of the object is in focus. If an accurate analysis of the whole object has to be performed, it is necessary to have a single sharp image in which all details of the object, even if they are located at different planes along the longitudinal direction, are in focus [1,2]. Image fusion is an effective technique for extending the depth-of-field and overcoming this problem [3,4].

A variety of image fusion algorithms have been presented in recent years, most of them applied to grayscale images. The core idea of image fusion is to discard the defocused regions of a sequence of images, extract the in-focus regions, and merge them into a single all-in-focus image [4]. Image fusion algorithms fall into two major categories: spatial domain algorithms and multiresolution (MR) algorithms such as the pyramid algorithm [5] and the discrete wavelet transform (DWT) algorithm [3,6]. MR algorithms are more robust and very useful for image fusion because real-world objects usually consist of structures at different scales and the human visual system also processes information in an MR fashion [7]. The DWT-based fusion algorithm is currently a popular research topic and performs well [4]. However, it has two shortcomings: a wavelet function and several parameters must be selected in advance, and it may introduce blurring into the fused image.

In the color domain, fusion has often been treated as a direct extension of the grayscale technique, whereby the fusion operation applied to one image channel is simply replicated on the other channels [8]. Color image fusion algorithms can be divided into two categories: processing in the RGB color model and processing in a transformed color model such as YUV, YIQ or HSI [8,9]. Processing in the RGB model means that the fusion scheme is applied to each of the R, G and B channels [3], while processing in a transformed model usually separates luminance information from chrominance data and treats each channel accordingly [9].

In this paper, a novel color microscopic image fusion algorithm based on bidimensional empirical mode decomposition (BEMD), implemented in the YIQ color model, is proposed, aiming at performing the fusion adaptively and obtaining results with more of the original in-focus information and better chromatic rendition. In the proposed algorithm, BEMD is applied to fuse the Y components of the original images, since the in-focus information of a microscopic image is mainly contained in the luminance component and BEMD excels at extracting salient features from luminance images. Because BEMD is a fully two-dimensional form of MR decomposition, the Y components are fused adaptively without an a priori basis, so the difficulty of selecting parameters such as a basis function is avoided, and the clarity of the fused image is better than with the DWT-based algorithm. Moreover, because the fusion scheme operates in the YIQ color model, the proposed algorithm provides better chromatic rendition than algorithms that process each RGB component separately, which may produce incorrect colors.

This paper is organized as follows. Section 2 presents a detailed description of the proposed BEMD-based fusion algorithm, Section 3 provides experimental results and discussions, and Section 4 gives concluding remarks.

2. Fusion of color microscopic images

2.1 The scheme of fusion

The fused image is generated by the following steps. Firstly, the source color microscopic images are transformed into the YIQ color model, so that different fusion operations can be applied to the luminance (Y) and chrominance (I and Q) components. Secondly, the BEMD-based fusion algorithm is used to fuse the Y components: each Y component is decomposed by BEMD into one Residue component and a series of IMF components; the local significance principle fusion rule is applied to fuse each IMF component; the principal component analysis (PCA) rule is adopted to fuse the Residue components; the focused Y component is recovered by the inverse BEMD. Thirdly, the I and Q chrominance components are fused using the PCA fusion algorithm. Finally, the fused Y, I, and Q component images are transformed back to the RGB color model for display by applying the inverse YIQ transform. A schematic flowchart of the proposed fusion algorithm is shown in Fig. 1, and a minimal code sketch of the pipeline is given below.
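As an illustration of the overall scheme, the following Python sketch wires the steps together. The helpers rgb2yiq, yiq2rgb, bemd, fuse_imfs, and pca_fuse are assumptions standing in for the operations detailed in the subsections below, not part of any published implementation (the authors' code was written in Matlab).

```python
import numpy as np

def fuse_color_microscopic(images, levels=3):
    """Fuse a list of registered RGB multi-focus images (H x W x 3, floats in [0, 1])."""
    yiq = [rgb2yiq(im) for im in images]                       # step 1: RGB -> YIQ
    # Step 2: BEMD-based fusion of the Y components.
    decomps = [bemd(im[..., 0], levels=levels) for im in yiq]  # each: (IMF list, residue)
    imfs_f = [fuse_imfs([imfs[j] for imfs, _ in decomps])      # local significance rule
              for j in range(levels)]
    residue_f = pca_fuse([res for _, res in decomps])          # PCA rule for the residues
    y_f = sum(imfs_f) + residue_f                              # inverse BEMD is a plain sum
    # Step 3: PCA fusion of the chrominance components.
    i_f = pca_fuse([im[..., 1] for im in yiq])
    q_f = pca_fuse([im[..., 2] for im in yiq])
    # Step 4: back to RGB for display.
    return yiq2rgb(np.dstack([y_f, i_f, q_f]))
```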

 

Fig. 1 Schematic flowchart of the proposed algorithm.


In the proposed algorithm, considering that the Y component contains more information from the source image and is more important than the I and Q components in color microscopic image fusion, the BEMD-based fusion algorithm is used to fuse the Y components. In other words, the Y luminance component is selected as the primary fusion variable from which the focus measure is computed. The more in-focus detail is extracted from the Y components, the better the fusion performance. There are three reasons for choosing the BEMD-based algorithm to fuse the Y components: first, BEMD, a new form of MR decomposition, excels at extracting high-frequency information such as edges from the Y component; second, unlike other MR analysis tools such as the DWT, which normally examines only horizontal, vertical and diagonal orthonormal details at each decomposed scale, BEMD produces a fully two-dimensional decomposition of the Y components; third, BEMD decomposes the Y components adaptively without an a priori basis, and it does so without complicated convolution processes. For these reasons, compared with the popular DWT-based fusion algorithm, the BEMD-based algorithm is better suited to fusing the Y components and leads to improved fusion performance.

The YIQ color model, however, contains two further chrominance components, I and Q. Here we briefly explain why these two components are fused using the PCA-based algorithm rather than the BEMD-based one. The I and Q components describe the hue and saturation attributes of the color image. Their coefficients are not suitable for the complicated computations of BEMD, which may introduce errors into the chromatic rendition and even cause color distortion. The PCA-based fusion algorithm, by contrast, combines the I and Q coefficients with adaptive fusion weights and preserves the good color rendition of the source images. Therefore, only the luminance component is fused using BEMD, while the chrominance components (I and Q) are fused using the PCA-based algorithm.

The choice of the YIQ color model, together with the different fusion rules applied to the different components, can thus be exploited to enhance the performance of the fusion process.

The choice of transform and of fusion rules is an important aspect of our image fusion algorithm and is discussed below.

2.2 Color model transformation

The RGB color model is composed of the primary colors R (red), G (green), and B (blue) and is the fundamental and most commonly used model for displaying color images. The RGB model is suitable for color display, but not for the fusion of color microscopic images, because of the high correlation among the R, G, and B components: all three components change together when the intensity changes [10]. Fusing color microscopic images by operating on the R, G and B components separately may therefore lead to color distortion and poor chromatic rendition; if the R, G and B components are fused independently, undesirable visual effects may occur [8]. To alleviate this problem, we can use other color models, such as YIQ, whose separation of color characteristics suits the fusion of color microscopic images.

The YIQ color model was formerly used in the National Television System Committee (NTSC) television standard [11], and can be derived from the corresponding RGB model by a linear transformation as follows:

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.311 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{1}$$
where the Y component is a measure of the luminance of the color, and the I and Q components jointly describe the hue and saturation attributes of the color image. Here the R, G and B components are each bounded within [0, 1].

The RGB values can be computed from the components in the YIQ color model using:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.000 & 0.956 & 0.621 \\ 1.000 & -0.272 & -0.647 \\ 1.000 & -1.106 & 1.703 \end{bmatrix} \begin{bmatrix} Y \\ I \\ Q \end{bmatrix} \tag{2}$$
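Both transforms are plain 3 x 3 matrix products and are straightforward to vectorize. The sketch below, assuming NumPy, applies Eq. (1) per pixel; inverting the matrix numerically rather than typing the rounded matrix of Eq. (2) avoids round-trip error.

```python
import numpy as np

# NTSC RGB -> YIQ matrix of Eq. (1).
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.311]])

def rgb2yiq(rgb):
    """RGB image (H x W x 3, values in [0, 1]) -> YIQ."""
    return rgb @ RGB2YIQ.T

def yiq2rgb(yiq):
    """YIQ image -> RGB, clipped back to the displayable range."""
    return np.clip(yiq @ np.linalg.inv(RGB2YIQ).T, 0.0, 1.0)
```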
Figure 2 displays an example of the color model transformation from RGB to YIQ. Figure 2(a) shows the Lena standard image in the RGB color model. The transformed Y, I and Q components are shown in Fig. 2(b), Fig. 2(c) and Fig. 2(d), respectively.

 

Fig. 2 Example of color model transformation from RGB to YIQ. (a) Lena standard image in RGB color model. (b) Y component. (c) I component. (d) Q component.


There are four reasons for choosing the YIQ color model in the proposed fusion algorithm: first, the YIQ color model is designed to take advantage of human color-response characteristics [11]; second, the YIQ model partly removes the correlation among the R, G and B components, and this decorrelation makes the Y, I, and Q component images complementary to each other [12]; third, the linear transformation needs less computation time than nonlinear ones, which makes the YIQ model preferable to nonlinear color models; fourth, fusion of color microscopic images requires the extraction of salient features such as edges, and the Y component is a natural candidate for edge detection in a color image [10].

Considering the different attributes of the luminance and chrominance components, we examine the importance of each component in the fusion. We calculated the information entropy of each component of the Lena standard image [shown in Fig. 2(b), Fig. 2(c) and Fig. 2(d)]. The results are: 7.45028 for the Y component, 5.93429 for the I component, and 4.81299 for the Q component. The experiment shows that the Y component contains more information and is more important than the I and Q components in color microscopic image fusion.

2.3 Bidimensional empirical mode decomposition

Empirical mode decomposition (EMD), developed in 1998 by Huang et al. [13], is an adaptive method for analyzing nonlinear and nonstationary data. The decomposition is carried out through a fully data-driven sifting process, so no basis functions need to be fixed in advance. The mode selection therefore corresponds to an automatic filtering, with no need for external intervention [14]. BEMD is the extension of one-dimensional EMD to two-dimensional images and has unique advantages in adaptively extracting image components that match human perception [15].

The BEMD process (shown in Fig. 3), which decomposes the image into a series of intrinsic mode functions (IMFs) and one Residue component, is summarized as follows [16–18]; a minimal code sketch is given after the list:

  • Step 1: Initialization: I = I_origin and j = 1 (index of the current IMF);
  • Step 2: Identify all local extrema (both maxima and minima) of the image I;
  • Step 3: Interpolate between the maxima and between the minima to generate the upper envelope E_u and the lower envelope E_l;
  • Step 4: Compute the envelope mean plane E_m by averaging the two envelopes;
  • Step 5: Extract the detail: d = I − E_m;
  • Step 6: Repeat steps 2–5 with d instead of I until d can be considered an IMF. An IMF is characterized by two specific properties: the numbers of zero crossings and of extrema are equal or differ at most by one, and it has a zero local mean. Usually, if the value of the standard deviation (SD) is in the range of 0.2 to 0.3, the stop criterion is met and d fulfills the conditions of an IMF. SD is calculated by Eq. (3):

    $$SD = \sum_{x=0}^{X} \sum_{y=0}^{Y} \frac{\left[ I_{j-1}(x,y) - I_{j}(x,y) \right]^{2}}{I_{j-1}^{2}(x,y)} \tag{3}$$

  • Step 7: IMF(j) = d, j = j + 1, Residue = E_m;
  • Step 8: Proceed with the mode extraction on the Residue by repeating steps 2–7 until the stopping condition is satisfied or all the set decomposition levels have been processed.
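The following Python sketch implements the sifting loop above under stated assumptions: find_extrema and spline_envelope are hypothetical helpers standing in for the morphological extrema detection and spline interpolation discussed later in this subsection, and the residue is updated as I − d, which is equivalent to accumulating the mean planes of step 7.

```python
import numpy as np

def bemd(image, levels=3, max_sift=10, sd_thresh=0.3):
    """Decompose `image` into `levels` IMFs plus one Residue (steps 1-8)."""
    residue = image.astype(float)
    imfs = []
    for _ in range(levels):                        # step 8: one pass per IMF
        d = residue.copy()
        for _ in range(max_sift):                  # steps 2-6: the sifting loop
            maxima, minima = find_extrema(d)       # step 2
            e_up = spline_envelope(d, maxima)      # step 3: upper envelope
            e_lo = spline_envelope(d, minima)      #         lower envelope
            e_mean = 0.5 * (e_up + e_lo)           # step 4: mean plane
            d = d - e_mean                         # step 5: extract the detail
            # Step 6, using the faster criterion of Eq. (4) below.
            if np.max(np.abs(e_mean)) / np.max(np.abs(residue)) < sd_thresh:
                break
        imfs.append(d)                             # step 7: record IMF(j)
        residue = residue - d                      # what is left becomes the new input
    return imfs, residue
```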

The first few IMFs contain the highest spatial frequencies, which correspond to salient features in the source image, while the Residue represents the low-frequency information. An example of the BEMD process is shown in Fig. 4, where the Lena standard gray level image, the first three IMFs and the Residue are shown respectively. The original image can be recovered by the inverse BEMD: $I_{origin} = \sum_{j} IMF(j) + Residue$.

 

Fig. 4 Example of the BEMD process. (a) Lena standard gray level image. (b) IMF(1). (c) IMF(2). (d) IMF(3). (e) Residue.


In the proposed algorithm, to avoid the boundary effect of BEMD, partial mirror extension is used to extend the images before the bidimensional sifting process is applied. A morphological operator is then used to detect the regional extrema. The next important step is the construction of the envelopes, which involves interpolation of the scattered data formed by the extrema of the image; in this paper the scattered data are interpolated with smooth cubic splines. Once the upper and lower envelopes are obtained, the mean plane is computed by averaging them. To speed up the decomposition, we use the standard deviation defined in Eq. (4), instead of that in Eq. (3), as the stop criterion (SD < 0.3) for each IMF:

$$SD = \frac{\max |E_{m}|}{\max |Residue|} \tag{4}$$

Since the magnitude of the mean plane decreases rapidly during the first several iterations and only slowly thereafter, a fixed number of sifting iterations can also be used as the stopping criterion, and this is the approach adopted in this paper. The sifting process ends once all the set decomposition levels have been processed.
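As one possible realization of the morphological extrema detection mentioned above, the sketch below uses SciPy's grey-scale morphology; the 3 x 3 neighborhood is an assumption, since the text does not specify a size.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def find_extrema(img, size=3):
    """Boolean masks of the regional maxima and minima of `img`."""
    maxima = img == maximum_filter(img, size=size)  # pixel equals its local max
    minima = img == minimum_filter(img, size=size)  # pixel equals its local min
    return maxima, minima
```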

2.4 PCA fusion rule

PCA algorithm is adopted for the fusion of chrominance (I and Q) components and the Residue components obtained from the BEMD process of Y component.

PCA is a linear mapping technique [19] that projects input data of high dimensionality onto a lower-dimensional space while maximally preserving the intrinsic information in the input data vectors. It transforms correlated input data into a set of statistically independent features or components, usually ordered by decreasing information content. PCA helps adjust the contribution of each chrominance or Residue component to the final fused image, so adaptive fusion weights for the coefficients of the processed components are derived using PCA.

Consider the n component images to be fused as a data vector. Calculate and diagonalize the symmetric covariance matrix, computing its eigenvalues and eigenvectors. The eigenvectors are then normalized and ordered according to their eigenvalues, from high to low [19]. The elements of the normalized principal eigenvector are the adaptive fusion weights of the coefficients of the processed components.

Finally, the fused component image is obtained using the following equation:

$$Im_{F} = \sum_{k=1}^{n} p_{k} \, Im_{k} \tag{5}$$

where p_k (k = 1, 2, …, n) is the k-th element of the normalized principal eigenvector; Im_k (k = 1, 2, …, n) is the coefficient of the corresponding input component image at each point; Im_F denotes the fused component image (corresponding to I_F, Q_F or Residue_F).
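A minimal sketch of this rule, assuming NumPy: the principal eigenvector of the covariance matrix of the flattened component images supplies the weights of Eq. (5); normalizing the weights to sum to one is a common convention and an assumption here.

```python
import numpy as np

def pca_fuse(components):
    """Fuse n same-sized component images (list of 2-D arrays) by PCA weights."""
    data = np.stack([c.ravel() for c in components])    # n x (M*N) observation matrix
    cov = np.cov(data)                                  # n x n symmetric covariance
    eigvals, eigvecs = np.linalg.eigh(cov)              # eigh: ascending eigenvalues
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])  # principal eigenvector
    weights = principal / principal.sum()               # normalize to unit sum
    return sum(w * c for w, c in zip(weights, components))  # Eq. (5)
```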

When the input chrominance (I and Q) component images are fused using this PCA rule, good color rendition could be seen in the fused chrominance image. When the input Residue component images are fused using this PCA rule, many useful features from the Residue components are preserved in the fused image.

2.5 Local significance principle fusion rule

In a multi-focus microscopic image, in-focus regions have high energy at high frequencies, while out-of-focus regions obviously lack high-frequency features. The BEMD decomposition yields bidimensional IMFs, which are the high-frequency components of the Y component images. IMF images contain important information such as edges and region boundaries, and these details need to be preserved as much as possible for visual perception and understanding of the image. In fact, the large absolute values among the IMF coefficients correspond to high-frequency features. On the other hand, we human beings interpret images at the region or object level rather than at the pixel level. Region-based algorithms have several advantages over pixel-based ones: they are less sensitive to noise, yield better contrast, and are less affected by misregistration. A region is also a more meaningful structure in a multi-focus microscopic image [7]. Thus, for the fusion of each IMF component of the Y luminance images, the local significance principle fusion rule is used.

The local significance principle fusion rule selects, at each pixel location and each decomposition level, the coefficient whose surrounding local region is strongest among the IMF images. The rule is defined as follows:

$$IMF_{F}^{j}(x,y) = \max_{1 \le i \le n} \left\{ \sum_{a=-l}^{l} \sum_{b=-l}^{l} \left| IMF_{i}^{j}(x+a,\, y+b) \right| \right\} \tag{6}$$
where j denotes the level of the decomposition; IMF_F^j(x,y) denotes the fused image's IMF coefficient of the j-th decomposition level at position (x,y); n denotes the number of source images; IMF_i^j denotes the IMF coefficients of source image X_i at the j-th decomposition level; and the square region centered at (x,y) has size (2l+1) × (2l+1) (e.g., 3 × 3 for l = 1).
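A minimal sketch of this rule follows, interpreting Eq. (6), as the prose suggests, as selecting the coefficient of the source whose windowed energy is largest. It assumes SciPy; uniform_filter returns the windowed mean of |IMF|, which is proportional to the windowed sum, so the argmax is unchanged.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_imfs(imfs, window=3):
    """Fuse same-level IMFs from n source images by local significance."""
    # Windowed mean of |IMF| around each pixel (proportional to the sum in Eq. (6)).
    energy = np.stack([uniform_filter(np.abs(f), size=window) for f in imfs])
    winner = np.argmax(energy, axis=0)          # most significant source per pixel
    stacked = np.stack(imfs)
    # Keep the winning source's coefficient at every pixel.
    return np.take_along_axis(stacked, winner[None], axis=0)[0]
```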

3. Experimental results and discussions

3.1 Algorithm validation

The proposed algorithm has been implemented in Matlab 7 and tested on many sets of pre-registered multi-focus color microscopic images. In this section, we present three sets of fusion results, shown in Fig. 5, Fig. 6 and Fig. 7. The DWT-based fusion algorithm is generally the most popular and effective of the various fusion algorithms. To demonstrate the effectiveness of the proposed BEMD-based fusion algorithm, the three sets of test images were processed with both the DWT-based and the BEMD-based fusion algorithms, and the results are presented for comparison. Both algorithms are implemented in the YIQ color model. In the DWT-based fusion algorithm, the adopted fusion operations are as follows: the I and Q chrominance components are fused using the PCA-based rule; the Y components are fused using the DWT algorithm with bior4.4 as the a priori wavelet basis; the approximation coefficients are fused by the common averaging rule; and the detail coefficients are fused by the common absolute-maximum selection rule.
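For reference, the baseline just described can be sketched with PyWavelets as below (two inputs shown for brevity); this is an assumption about one way to realize the stated rules, not the authors' Matlab code.

```python
import numpy as np
import pywt

def dwt_fuse(y1, y2, wavelet="bior4.4", level=3):
    """Fuse two Y components: averaged approximation, absolute-max details."""
    c1 = pywt.wavedec2(y1, wavelet, level=level)
    c2 = pywt.wavedec2(y2, wavelet, level=level)
    fused = [0.5 * (c1[0] + c2[0])]                 # mean rule for approximation
    for band1, band2 in zip(c1[1:], c2[1:]):        # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(band1, band2)))
    return pywt.waverec2(fused, wavelet)
```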

 

Fig. 5 The first set of experimental images and fusion results. (a) Source image focused on left portion. (b) Source image focused on right portion. (c) Fused image using DWT. (d) Fused image using BEMD.


 

Fig. 6 The second set of experimental images and fusion results. (a) Source image focused on the left two cells. (b) Source image focused on the middle cell. (c) Source image focused on the right two cells. (d) Fused image using DWT. (e) Fused image using BEMD.


 

Fig. 7 The third set of experimental images and fusion results. (a) Source image focused on inner portion. (b) Source image focused on outer portion. (c) Fused image using DWT. (d) Fused image using BEMD.


The first set of test images of size 768 × 576 are shown in Fig. 5(a) and Fig. 5(b).

The left portion of the image in Fig. 5(a) is distinct, whereas the right portion is blurred. Conversely, the left portion of the image in Fig. 5(b) is blurred, whereas the right portion is distinct. Figure 5(c) shows the fused image using the DWT-based fusion algorithm (decomposition level = 3), and Fig. 5(d) shows the fused image using the proposed BEMD-based fusion algorithm (decomposition level = 3). In terms of visual quality, both portions of the fused image generated by the proposed algorithm are distinct and the chromatic rendition is satisfactory. The details in Fig. 5(d) are clearer than those in Fig. 5(c). This experiment demonstrates that the proposed algorithm fuses the test color microscopic images successfully and outperforms the traditional DWT-based fusion algorithm.

The second set of test images was captured from a biological specimen consisting of algae cells, using an optical microscope at 40 × magnification. The three color microscopic images are presented in Fig. 6(a), Fig. 6(b) and Fig. 6(c). Figure 6(d) shows the fused image using the DWT-based fusion algorithm (decomposition level = 2), and Fig. 6(e) shows the fused image using the BEMD-based fusion algorithm (decomposition level = 2). The image fused by DWT in Fig. 6(d) is blurry. In the fused image in Fig. 6(e), by contrast, all algal cells are distinct: the image contains the complementary information of the three inputs and has good color rendition. Compared with the DWT-based algorithm, the proposed algorithm achieves better blur elimination. The third example is shown in Fig. 7.

The two test images [shown in Fig. 7(a) and Fig. 7(b)] were taken from the wing specimen of an insect at 20 × magnification. The fused image using the DWT-based algorithm (decomposition level = 1) is presented in Fig. 7(c), and the fused image using the BEMD-based algorithm (decomposition level = 1) is presented in Fig. 7(d). From the fused image in Fig. 7(d) it can be seen that the features of both source images, as well as the color information, are well preserved. These experimental results show that the DWT-based algorithm may cause blurring in the fused image, while the proposed BEMD-based algorithm is able to eliminate this blurring.

Any image fusion algorithm can be assessed using two categories of performance parameters: subjective indices and objective indices. Subjective indices rely on human comprehension, whereas objective indices remove the influence of human vision, mentality and knowledge, and enable machines to evaluate the effectiveness of image fusion automatically [20]. Four objective evaluation criteria are used in this section to evaluate the results of the proposed fusion algorithm and to compare them with those of the DWT-based algorithm: (1) Entropy; (2) Standard Difference (STD); (3) Average Gradient (AG); (4) Spatial Frequency (SF). Here, each objective criterion of a color image in the RGB model is defined as the sum of the criterion values of its three components (R, G and B). For an M × N image Z, the four objective evaluation criteria are described as follows.

  • (1) Entropy

Entropy is an important index of the information richness of an image. Broadly, the greater the entropy of the fused image, the more information it contains and the better the fusion quality [4]. Entropy is defined as:

$$Entropy = \sum_{k} \left( -\sum_{i=0}^{L-1} (p_{k})_{i} \log_{2} (p_{k})_{i} \right), \quad k = R, G, B \tag{7}$$
where (p_k)_i is the probability (relative frequency) of grey level i in component k, and L is the number of grey levels.

  • (2) Standard Difference (STD)

STD can be applied to evaluate the contrast of the image [21]. A larger STD indicates a more scattered grey-level distribution and higher image contrast, so that more information is visible [4]. It is described as follows:

$$STD = \sum_{k} \sqrt{ \frac{ \sum_{i=1}^{M} \sum_{j=1}^{N} \left( Z_{k}(i,j) - \mu_{k} \right)^{2} }{ M \times N } }, \quad k = R, G, B \tag{8}$$
where μ_k is the mean of the corresponding component image.

  • (3) Average Gradient (AG)

AG reflects fine-detail contrast and the texture character of the image, and can be used to assess its sharpness [19]. AG is calculated as:

$$AG = \sum_{k} \frac{ \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{ \left[ \left( \frac{\partial Z_{k}(x_{i},y_{j})}{\partial x_{i}} \right)^{2} + \left( \frac{\partial Z_{k}(x_{i},y_{j})}{\partial y_{j}} \right)^{2} \right] / 2 } }{ (M-1) \times (N-1) }, \quad k = R, G, B \tag{9}$$
  • (4) Spatial Frequency (SF)

SF is widely used in image fusion to measure the overall clarity of an image [22] and is an important indicator of the quality of its details. The formula is as follows:

$$SF = \sum_{k} \sqrt{ RF_{k}^{2} + CF_{k}^{2} }, \quad k = R, G, B \tag{10}$$

$$RF_{k} = \sqrt{ \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=2}^{N} \left[ Z_{k}(x_{i},y_{j}) - Z_{k}(x_{i},y_{j-1}) \right]^{2} }, \quad CF_{k} = \sqrt{ \frac{1}{M \times N} \sum_{i=2}^{M} \sum_{j=1}^{N} \left[ Z_{k}(x_{i},y_{j}) - Z_{k}(x_{i-1},y_{j}) \right]^{2} } \tag{11}$$
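The four criteria are simple to compute per channel; a NumPy sketch follows (the color-image score is the sum over the R, G and B channels). Note that np.mean divides by the number of difference terms rather than exactly M × N as in Eq. (11); for large images the discrepancy is negligible.

```python
import numpy as np

def entropy(z, levels=256):
    """Eq. (7) for one channel, grey values in [0, levels)."""
    hist, _ = np.histogram(z, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                                    # skip empty bins in the log
    return -np.sum(p * np.log2(p))

def std_criterion(z):
    """Eq. (8) for one channel."""
    return np.sqrt(np.mean((z - z.mean()) ** 2))

def average_gradient(z):
    """Eq. (9): forward differences trimmed to a common (M-1) x (N-1) grid."""
    gx = np.diff(z.astype(float), axis=1)[:-1, :]
    gy = np.diff(z.astype(float), axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(z):
    """Eqs. (10)-(11): row and column frequencies combined."""
    rf = np.sqrt(np.mean(np.diff(z.astype(float), axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(z.astype(float), axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)
```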

Table 1 provides a comparison of the objective criteria of the DWT and BEMD fusion algorithms. From Table 1, it is clear that for all three sets of images, the values of Entropy, STD, AG and SF are higher for the proposed algorithm at every decomposition level. The information-extraction ability of the proposed algorithm is reflected in the obvious increase in Entropy, and its clarity is reflected in the significant rise in STD, AG and SF. It can also be seen that a larger number of decomposition levels yields better fusion results.

Table 1. Comparison of objective criteria of DWT and BEMD fusion algorithms

The quantitative evaluation is thus consistent with the qualitative analysis above. Combining the subjective qualitative analysis and the objective quantitative evaluation, we conclude that, compared with the DWT-based algorithm at the same decomposition level, the proposed algorithm not only distinctly improves the spatial detail but also preserves more of the in-focus information of the microscopic image.

3.2 Experiment using different color models

In this section, the impact of different color models on the proposed fusion algorithm is examined experimentally, using the insect test images of Fig. 7(a) and Fig. 7(b). The proposed fusion algorithm is run in the RGB, HSI, YUV, YCbCr and YIQ color models. When fusing in the RGB color model, the BEMD-based fusion algorithm (decomposition level = 2) is applied to each of the R, G and B components. When fusing in the HSI, YUV, YCbCr and YIQ color models, for comparison with the proposed algorithm, only the luminance components are fused with the BEMD-based algorithm (decomposition level = 2), while the two chrominance components are fused with the PCA-based algorithm. The experimental results are shown in Fig. 8.

 

Fig. 8 Comparison of different color models. (a) Entropy. (b) Standard Difference (STD). (c) Average Gradient (AG). (d) Spatial Frequency (SF).


Figure 8(a) shows that the image fused in the YIQ color model has the greatest Entropy and thus contains the most information. Figures 8(b), 8(c) and 8(d) show that the image fused in the YIQ model also has the greatest STD, AG and SF, which means that the proposed algorithm implemented in the YIQ model attains the best image clarity. These results demonstrate that the YIQ color model is the most suitable for the proposed algorithm.

4. Conclusion

In this paper, a novel fusion algorithm for color microscopic images based on bidimensional empirical mode decomposition has been proposed. BEMD is a new form of multiresolution decomposition that represents the details and the smooth part of an image in a fully two-dimensional manner, in contrast to the DWT, which normally examines only horizontal, vertical and diagonal orthonormal details at each decomposed scale. Furthermore, BEMD decomposes images into their components adaptively, without an a priori basis and without complicated convolution processes. Thanks to the superior ability of BEMD to extract high-frequency information, the proposed algorithm achieves better fusion results both in extracting in-focus information and in eliminating blur. The fusion of the luminance components is performed on the decomposed components, and decompositions with different frequency ranges are processed differently: adaptive fusion weights for the Residue coefficients are derived using PCA, while the IMF coefficients are fused by the local significance principle rule. Moreover, the YIQ color model is used for the fusion of the color microscopic images, achieving a balanced result between local feature enhancement and global tonality rendition. In conclusion, the proposed algorithm improves the visual quality of the fused images and achieves better fusion results than the popular DWT-based fusion algorithm.

Acknowledgments

This work was supported by the Research Program of CSSAR and the National Natural Science Foundation of China (No. 40804032).

References and links

1. P. Ferraro, S. Grilli, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, “Extended focused image in microscopy by digital Holography,” Opt. Express 13(18), 6738–6749 (2005), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-18-6738.

2. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16(12), 8670–8677 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-12-8670.

3. P. V. Alfonso, T. A. Irwing, T. Q. Carina, and C. Santiago-Tepantlan, “Multifocus microscope color image fusion based on Daub(2) and Daub(4) kernels of the Daubechies Wavelet family,” Proc. SPIE 7443, 744327 (2009).

4. L. Li, J. Le, and J. Yang, “Improved Method of Multi-focal Plane Micro-image Fusion,” in Proceedings of IEEE 9th International Conference on Electronic Measurement & Instruments (Institute of Electrical and Electronics Engineers, Beijing, China, 2009), pp. 4-417–4-421.

5. P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Trans. Commun. 31(4), 532–540 (1983).

6. Q. Guihong, Z. Dali, and Y. Pingfan, “Medical image fusion by wavelet transform modulus maxima,” Opt. Express 9(4), 184–190 (2001), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-9-4-184.

7. T. Zaveri, M. Zaveri, V. Shah, and N. Patel, “A Novel Region Based Multifocus Image Fusion Method,” in Proceedings of IEEE International Conference on Digital Image Processing (Institute of Electrical and Electronics Engineers, Bangkok, Thailand, 2009), pp. 50–54.

8. L. Bogoni and M. Hansen, “Pattern-selective color image fusion,” Pattern Recognit. 34(8), 1515–1526 (2001).

9. H. Zhao, Q. Li, and H. Feng, “Multi-focus color image fusion in the HSI space using the sum-modified-laplacian and a coarse edge map,” Image Vis. Comput. 26(9), 1285–1295 (2008).

10. H. D. Cheng, X. H. Jiang, Y. Sun, and J. Wang, “Color image segmentation: advances and prospects,” Pattern Recognit. 34(12), 2259–2281 (2001).

11. J. Yang, C. Liu, and L. Zhang, “Color space normalization: Enhancing the discriminating power of color spaces for face recognition,” Pattern Recognit. 43(4), 1454–1466 (2010).

12. Z. Liu and C. Liu, “Fusion of the complementary Discrete Cosine Features in the YIQ color space for face recognition,” Comput. Vis. Image Underst. 111(3), 249–262 (2008).

13. N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, “The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis,” Proc. R. Soc. Lond. A 454(1971), 903–995 (1998).

14. M. B. Bernini, A. Federico, and G. H. Kaufmann, “Denoising of digital speckle pattern interferometry fringes by means of Bidimensional Empirical Mode Decomposition,” Proc. SPIE 7063, 70630D (2008).

15. Z. X. Liu and S. L. Peng, “Directional EMD and its application to texture segmentation,” Sci. China Ser. F, Inf. Sci. 48(3), 354–365 (2005).

16. J. C. Nunes, Y. Bouaoune, E. Delechelle, O. Niang, and P. Bunel, “Image analysis by bidimensional empirical mode decomposition,” Image Vis. Comput. 21(12), 1019–1026 (2003).

17. S. Equis and P. Jacquot, “The empirical mode decomposition: a must-have tool in speckle interferometry?” Opt. Express 17(2), 611–623 (2009), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-17-2-611.

18. Q. Yin, L. Shen, J. N. Kim, and Y. J. Jeong, “Scale-invariant pattern recognition using a combined Mellin radial harmonic function and the bidimensional empirical mode decomposition,” Opt. Express 17(19), 16581–16589 (2009), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-17-19-16581.

19. W. Liu, J. Huang, and Y. Zhao, “Image Fusion Based on PCA and Undecimated Discrete Wavelet Transform,” in Proceedings of 13th International Conference on Neural Information Processing, ICONIP, I. King et al., eds. (Academic, Hong Kong, China, 2006), pp. 481–488.

20. T. Zaveri and M. Zaveri, “A Novel Two Step Region Based Multifocus Image Fusion Method,” Int. J. Comput. Electr. Eng. 2, 86–91 (2010).

21. B. Li and H. Lv, “Pixel level image fusion scheme based on accumulated gradient and PCA transform,” J. Commun. Comput. 6, 49–54 (2009).

22. S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis. Comput. 26(7), 971–979 (2008).
