Optica Publishing Group

Speckle noise reduction of multi-frame optical coherence tomography data using multi-linear principal component analysis

Open Access

Abstract

Optical coherence tomography (OCT) is an important interferometric diagnostic technique extensively applied in the medical sciences. However, OCT images inevitably suffer from speckle noise, which reduces the accuracy of the diagnosis of ocular diseases. To deal with this problem, a speckle noise reduction method based on multi-linear principal component analysis (MPCA) is presented to denoise multi-frame OCT data. To preserve local image features well, nonlocal similar 3D blocks extracted from the data are first grouped using the k-means++ clustering method. The MPCA transform is then performed on each group and the transform coefficients are shrunk to remove speckle noise. Finally, the filtered OCT volume is obtained by the inverse MPCA transform and aggregation. Experimental results show that the proposed method outperforms other compared approaches in terms of both speckle noise reduction and fine detail preservation.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is an emerging and important imaging technique that provides high-resolution tomographic imaging of the cross-sectional microstructure of biological tissues [1]. The micrometer-scale resolution and the layered structure of the eye revealed by OCT are valuable for disease detection. Thus, OCT has a variety of clinical applications and is widely used to assist in the diagnosis of vision-related diseases. Because OCT relies on low-coherence interferometry, OCT images are always corrupted by speckle noise [2], which severely degrades the quality of OCT imaging and consequently makes it particularly challenging to identify the fine features of objects for clinical examination. Hence, suppressing speckle noise in OCT images is important to improve image quality and to further increase the accuracy of clinical diagnosis. However, removing such noise while preserving fine image features is a challenging task.

In the past few decades, great efforts have been made to reduce speckle noise in OCT images, and a large number of approaches have been reported. Generally speaking, these methods can be categorized as hardware-based approaches and software-based approaches [3]. The hardware-based approaches, such as frequency compounding [4,5] and spatial compounding [6–9], are effective for speckle noise suppression because the speckle patterns are uncorrelated across different illumination angles and wavelengths [1]. However, they inevitably require modifying the imaging system hardware, which is not easily adapted to standard commercial OCT equipment. An efficient and economical alternative is the software-based approaches, which postprocess OCT images. These methods rely on filtering or numerical algorithms in the spatial domain or in a certain transform domain, such as the local-statistics based filtering methods [10,11], the diffusion based methods [12–16], the Bayesian estimation method [17] and the wavelet based methods [18–21]. The local-statistics based filtering methods are fast, but blur detailed structures. The diffusion based methods provide better speckle noise reduction, yet over-smoothing remains an intrinsic problem because the diffusion process inherently loses information. Wavelet based filters can achieve good speckle noise reduction; however, they may introduce unwanted visual artifacts that degrade the OCT images. The aforementioned approaches either do not take the spatial redundancy of images into account or use a fixed basis to represent OCT images, whereas the rich fine structures contained in OCT images cannot be well represented by a single fixed wavelet basis. In contrast, self-similarity prior based nonlocal means methods [22–24] and adaptive-basis based sparse representation methods [25–27] achieve better speckle noise reduction.
Several recent works have also applied low-rank representation to speckle noise suppression [28–30]. However, these methods vectorize each local image patch, which breaks the local correlation between pixels and destroys the fine structures of objects [31].

The development of OCT imaging systems has increased the correlation between adjacent frames. Therefore, multi-frame OCT images are increasingly used to reduce speckle noise, and many multi-frame based methods have been reported. For example, Jian et al. [32] propose a 3D curvelet transform method: the coefficients of the 3D curvelet transform are thresholded and the inverse transform is used to obtain the final despeckled images. In [19], Mayer et al. first apply a wavelet transform to each frame, then threshold the coefficients, and finally reconstruct the despeckled image from the weighted wavelet coefficients. Despite their impressive performance, these approaches lose many fine details of OCT images because they directly truncate the transform coefficients. Recently, low-rank matrix decomposition [33] and weighted nuclear norm minimization [34] have been used to denoise multi-frame OCT images. However, each 3D block must be vectorized, which destroys the structure of the image cubic blocks.

To address the above-mentioned issues, a multi-linear principal component analysis (MPCA) based method is proposed to despeckle multi-frame OCT images. The proposed method takes advantage of the inter-frame and intra-frame redundancy of OCT volume data and operates directly on nonlocal similar cubic blocks without vectorization. To preserve local image features well, nonlocal similar 3D blocks are first extracted in an overlapping manner and grouped using the k-means++ clustering method, which guarantees that only blocks with similar content are used to learn a nonlocal MPCA basis. This grouping procedure is beneficial for extracting the dominant information from the speckled OCT images and removing the noise. Once the nonlocal MPCA basis for a given group is obtained, the MPCA transform is performed on the corresponding group and the transform coefficients are shrunk to remove speckle noise. The filtered cubic blocks are then obtained by the inverse MPCA transform. Applying this procedure to each group, the despeckled OCT images are finally obtained by aggregating all filtered cubic blocks, since each voxel may belong to several groups. Experimental results show that the proposed method can effectively reduce speckle noise while preserving fine details, and outperforms other popular approaches in terms of both objective metrics and visual inspection.

The remainder of the paper is structured as follows: In section 2, we introduce some necessary notation and preliminaries. Section 3 briefly reviews the procedure of PCA denoising. Section 4 presents the proposed speckle noise reduction method in detail. In section 5, we apply our method to denoise 3D OCT data, and compare it with several state-of-the-art methods from the perspectives of quantitative evaluation and visual quality. Section 6 concludes the paper.

2. Notation and preliminaries

In this section, we briefly introduce some notation and basic multilinear algebra. Throughout the paper, we denote scalars by non-bold letters, e.g., $a$; vectors by bold lower-case letters, e.g., $\mathbf{x}$; matrices by bold upper-case letters, e.g., $\mathbf{U}$; and tensors by calligraphic letters, e.g., $\mathcal{A}$. For an $N$th-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, we denote its elements as $a_{i_1 \cdots i_n \cdots i_N}$, where $1 \le i_n \le I_n$. The mode-$n$ vectors of an $N$th-order tensor $\mathcal{A}$ are the $I_n$-dimensional vectors obtained from $\mathcal{A}$ by varying the index $i_n$ while keeping the others fixed. The mode-$n$ unfolding matrix of tensor $\mathcal{A}$ is denoted as $\mathbf{A}_{(n)} \in \mathbb{R}^{I_n \times (I_1 \times \cdots \times I_{n-1} \times I_{n+1} \times \cdots \times I_N)}$, which is composed by taking the mode-$n$ vectors of $\mathcal{A}$ as its columns.

The mode-$n$ product of a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ by a matrix $\mathbf{U} \in \mathbb{R}^{J_n \times I_n}$, denoted by $\mathcal{A} \times_n \mathbf{U}$, is an $N$th-order tensor $\mathcal{C} \in \mathbb{R}^{I_1 \times \cdots \times J_n \times \cdots \times I_N}$ whose entries are computed by $c_{i_1 \cdots i_{n-1} j_n i_{n+1} \cdots i_N} = \sum_{i_n} a_{i_1 \cdots i_{n-1} i_n i_{n+1} \cdots i_N} u_{j_n i_n}$. The mode-$n$ product $\mathcal{C} = \mathcal{A} \times_n \mathbf{U}$ can also be calculated by the matrix multiplication $\mathbf{C}_{(n)} = \mathbf{U}\mathbf{A}_{(n)}$, followed by a re-tensorization undoing the mode-$n$ unfolding. The scalar product of two tensors $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is defined as $\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1}\sum_{i_2}\cdots\sum_{i_N} a_{i_1 i_2 \cdots i_N} b_{i_1 i_2 \cdots i_N}$, and the Frobenius norm of a tensor $\mathcal{A}$ is defined as $\|\mathcal{A}\|_F = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle} = \big(\sum_{i_1,\ldots,i_N} |a_{i_1 \cdots i_N}|^2\big)^{1/2}$.

Following multilinear algebra, tensor 𝒜 can be expressed as the product

$$\mathcal{A} = \mathcal{S} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times \cdots \times_N \mathbf{U}^{(N)} \tag{1}$$
where $\mathcal{S} = \mathcal{A} \times_1 \mathbf{U}^{(1)T} \times_2 \mathbf{U}^{(2)T} \times \cdots \times_N \mathbf{U}^{(N)T}$ is called the score tensor, and $\mathbf{U}^{(n)} \in \mathbb{R}^{I_n \times I_n}$ is an orthogonal matrix. A matrix representation of this decomposition can be obtained by unfolding $\mathcal{A}$ and $\mathcal{S}$ as
$$\mathbf{A}_{(n)} = \mathbf{U}^{(n)} \mathbf{S}_{(n)} \big(\mathbf{U}^{(n+1)} \otimes \mathbf{U}^{(n+2)} \otimes \cdots \otimes \mathbf{U}^{(N)} \otimes \mathbf{U}^{(1)} \otimes \mathbf{U}^{(2)} \otimes \cdots \otimes \mathbf{U}^{(n-1)}\big)^T \tag{2}$$
where ⊗ is the Kronecker product.
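As a numerical sanity check of the definitions above, the mode-$n$ unfolding, the mode-$n$ product, and the orthogonal decomposition can be sketched in NumPy (`unfold`, `fold` and `mode_n_product` are helper names of our own, not part of any library):

```python
import numpy as np

def unfold(A, n):
    """Mode-n unfolding: the mode-n fibres of A become the columns of A_(n)."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: re-tensorize a mode-n unfolded matrix."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(A, U, n):
    """Compute A x_n U via the identity C_(n) = U @ A_(n)."""
    shape = list(A.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(A, n), n, tuple(shape))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
# Random orthogonal factors from QR decompositions
U = [np.linalg.qr(rng.standard_normal((s, s)))[0] for s in A.shape]
S = A
for n, Un in enumerate(U):
    S = mode_n_product(S, Un.T, n)          # score tensor: S = A x_n U^(n)T
A_rec = S
for n, Un in enumerate(U):
    A_rec = mode_n_product(A_rec, n=n, U=Un)  # A = S x_1 U^(1) ... x_N U^(N)
print(np.allclose(A, A_rec))                # → True: the decomposition is exact
```

Because each $\mathbf{U}^{(n)}$ is square and orthogonal, the reconstruction is exact; truncation or shrinkage of the score tensor is what introduces (and removes) energy.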

3. PCA denoising

Many studies have shown that PCA based denoising methods [35, 36] can learn an adaptive transform basis and achieve impressive results. In order to better describe the proposed method, this section briefly reviews the procedure of the PCA denoising method of [35].

For a vectorized local image patch $\mathbf{y}_1$, we can find the $n - 1$ most similar nonlocal image patches $\mathbf{y}_i$ ($i = 2, 3, \cdots, n$) in a search window, and group these vectors into a matrix $\mathbf{Y}$ denoted as

$$\mathbf{Y} = \begin{bmatrix} y_1^1 & y_2^1 & \cdots & y_n^1 \\ y_1^2 & y_2^2 & \cdots & y_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ y_1^m & y_2^m & \cdots & y_n^m \end{bmatrix}$$
where $y_i^j$ denotes the $j$th entry of the vectorized image patch $\mathbf{y}_i$. The mean value of the $\mathbf{y}_i$ is calculated as $\boldsymbol{\mu} = (\sum_i \mathbf{y}_i)/n$. Each image vector is then centralized as $\bar{\mathbf{y}}_i = \mathbf{y}_i - \boldsymbol{\mu}$. Accordingly, the centralized matrix of $\mathbf{Y}$ is $\bar{\mathbf{Y}} = [\bar{\mathbf{y}}_1, \bar{\mathbf{y}}_2, \cdots, \bar{\mathbf{y}}_n]$. The PCA transform is defined as $\mathbf{B} = \mathbf{U}^T \bar{\mathbf{Y}}$, where $\mathbf{U}$ is composed of all eigenvectors of the covariance matrix $\bar{\mathbf{Y}}\bar{\mathbf{Y}}^T/n$.

The PCA transform can fully de-correlate the centralized dataset $\bar{\mathbf{Y}}$. In the transform domain, the energy of the latent clean signal concentrates on a small subset of coefficients, while the energy of the noise spreads uniformly over the whole dataset. Therefore, noise and signal can be better distinguished in the PCA domain [35]. Truncating the coefficients can be used to reconstruct the latent clean signal; unfortunately, in practice such a procedure leads to poor denoising results compared to alternatives such as soft thresholding of the coefficients, mainly because all projection directions are relevant for modeling image patches [36]. In [35], Zhang et al. shrink the transformed dataset to suppress noise using linear minimum mean square-error estimation (LMMSE). The LMMSE estimate of the $k$th row of $\mathbf{B}$ is defined as $\hat{\mathbf{B}}_k = w_k \mathbf{B}_k$ ($k = 1, 2, \cdots, m$), where $w_k = \max\Big(\big(\sum_{i=1}^{n} (b_i^k)^2/n - \sigma^2\big) \big/ \big(\sum_{i=1}^{n} (b_i^k)^2/n\big),\, 0\Big)$ and $b_i^k$ is the $(k, i)$th entry of matrix $\mathbf{B}$. By transforming back to the time domain, the denoised result of $\bar{\mathbf{Y}}$ is obtained as $\hat{\bar{\mathbf{Y}}} = \mathbf{U}\hat{\mathbf{B}}$. Adding the mean value back to $\hat{\bar{\mathbf{Y}}}$ gives the denoised dataset $\hat{\mathbf{Y}}$. The final denoised image is obtained by applying the above procedure to each image patch.
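The centralization, PCA transform, LMMSE shrinkage and reconstruction steps above can be condensed into a toy sketch (ours, not the code of [35]; it assumes additive Gaussian noise with a known standard deviation `sigma`):

```python
import numpy as np

def pca_denoise_group(Y, sigma):
    """Denoise a group of similar vectorized patches (columns of Y)
    with PCA + LMMSE shrinkage, following the reviewed procedure."""
    mu = Y.mean(axis=1, keepdims=True)        # patch-group mean
    Yc = Y - mu                               # centralize
    cov = Yc @ Yc.T / Y.shape[1]              # covariance Yc Yc^T / n
    _, U = np.linalg.eigh(cov)                # eigenvector basis
    B = U.T @ Yc                              # PCA transform coefficients
    # LMMSE weight per row: w_k = max((E[b_k^2] - sigma^2)/E[b_k^2], 0)
    energy = (B ** 2).mean(axis=1, keepdims=True)
    w = np.maximum((energy - sigma ** 2) / np.maximum(energy, 1e-12), 0.0)
    return U @ (w * B) + mu                   # inverse transform, add mean back

# Toy usage: a group whose clean columns are identical, plus Gaussian noise
rng = np.random.default_rng(1)
clean = np.outer(rng.standard_normal(16), np.ones(40))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
den = pca_denoise_group(noisy, 0.5)
print(np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean))
```

The small epsilon guarding the denominator is our own addition against degenerate all-zero rows.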

PCA has a wide range of applications in image processing; however, PCA based methods can only process vectorized multi-dimensional data, which may break the structural information of the data. To overcome this problem, many methods have been proposed. For example, Zhang et al. [37] proposed a 2D-PCA method, which calculates the bases in the column direction. Generalized 2D-PCA [38] calculates the bases in both the row and column directions, so that 2D images can be represented both accurately and efficiently. For multi-dimensional data, multi-linear principal component analysis (MPCA) [39,40] was proposed to calculate the bases in each mode subspace rather than in a 1D vector space. MPCA can represent multi-dimensional data more efficiently.

4. Proposed method

Methods based on the PCA transform achieve impressive image denoising performance. However, the image-to-vector manner of PCA destroys the structure of the image patches. To address this problem, a multi-linear principal component analysis (MPCA) based method is proposed to denoise multi-frame images. The proposed algorithm mainly consists of the following steps: cubic block grouping, MPCA transform, shrinkage of the transform coefficients, inverse MPCA transform and aggregation. Since the speckle noise in OCT images is multiplicative, a logarithmic transformation is performed prior to this procedure to convert the multiplicative speckle noise into additive noise. On a logarithmic scale, the observed OCT volume data $\mathcal{Y}$ is modeled as the latent clean data $\mathcal{X}$ plus additive noise $\mathcal{N}$: $\mathcal{Y} = \mathcal{X} + \mathcal{N}$.

4.1. Cubic blocks grouping

Image denoising is an ill-posed problem; image priors are thus usually used to regularize it. It is well known that many repetitive patterns can be found in the image domain, and many blocks are similar to a given local one. A large number of studies have shown that grouping these similar blocks together is beneficial for extracting the main signal and removing noise. Denote by H, W and S the height, width and number of 2D OCT images, so that the 3D OCT data can be expressed as $\mathcal{Y} \in \mathbb{R}^{H \times W \times S}$. By sweeping across the 3D OCT data with overlaps, we extract a set of cubic blocks $\Omega = \{\mathcal{Y}_i \in \mathbb{R}^{h \times w \times S}\}_{i=1}^{N}$, where $N = (H - h + 1) \times (W - w + 1)$ is the total number of cubic blocks extracted over the whole 3D volume. The k-means++ clustering method [41] is then used to group $\Omega$ into $K$ clusters denoted by $\{P_k\}_{k=1}^{K}$, where $P_k = \{\mathcal{Y}_i^k\}_{i=1}^{n_k}$, $\sum_{k=1}^{K} n_k = N$, and $\mathcal{Y}_i^k$ is the $i$th 3D block in the $k$th cluster. The aim of the grouping procedure is to obtain adaptive tensor bases so that all blocks in one group can be well approximated.
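As an illustration, the block extraction and k-means++ grouping could be sketched as follows (a simplification of our own: the sweep uses a stride larger than one pixel, and a plain NumPy k-means with k-means++ seeding stands in for a library implementation; all function names are ours):

```python
import numpy as np

def extract_blocks(Y, h, w, stride=4):
    """Overlapping h x w x S cubic blocks swept over the H x W plane."""
    H, W, S = Y.shape
    blocks, coords = [], []
    for i in range(0, H - h + 1, stride):
        for j in range(0, W - w + 1, stride):
            blocks.append(Y[i:i + h, j:j + w, :])
            coords.append((i, j))
    return np.stack(blocks), coords

def kmeans_pp(X, K, iters=10, rng=None):
    """Plain k-means with k-means++ seeding on flattened blocks."""
    if rng is None:
        rng = np.random.default_rng(0)
    C = [X[rng.integers(len(X))]]
    for _ in range(K - 1):                    # ++ seeding: far points more likely
        d2 = np.min([((X - c) ** 2).sum(1) for c in C], axis=0)
        C.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    C = np.stack(C)
    for _ in range(iters):                    # Lloyd iterations
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.stack([X[lab == k].mean(0) if (lab == k).any() else C[k]
                      for k in range(K)])
    return lab

Y = np.random.default_rng(2).standard_normal((32, 32, 3))
blocks, coords = extract_blocks(Y, 9, 9)
labels = kmeans_pp(blocks.reshape(len(blocks), -1), K=4)
print(len(blocks), labels.shape)
```

In the paper the sweep steps by one pixel, giving the full $N = (H-h+1)(W-w+1)$ blocks; the stride here only keeps the toy example small.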

4.2. MPCA transform

Suppose that all 3D blocks in a given group $P_k = \{\mathcal{Y}_i^k \in \mathbb{R}^{h \times w \times S}\}_{i=1}^{n_k}$ are centered by subtracting the average block $\bar{\mathcal{Y}}^k = \sum_{i=1}^{n_k} \mathcal{Y}_i^k / n_k$. Our aim is to compute $\mathbf{U}^{(1)}$, $\mathbf{U}^{(2)}$, $\mathbf{U}^{(3)}$ and $\mathcal{S}_i^k$ such that $\mathcal{S}_i^k \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}$ gives the best approximation of the tensor object $\mathcal{Y}_i^k$. This can be done by solving the following minimization problem

$$\min_{\mathbf{U}^{(1)},\mathbf{U}^{(2)},\mathbf{U}^{(3)},\{\mathcal{S}_i^k\}} \sum_{i=1}^{n_k} \big\|\mathcal{Y}_i^k - \mathcal{S}_i^k \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}\big\|_F^2 \quad \text{s.t.} \quad \mathbf{U}^{(1)T}\mathbf{U}^{(1)} = \mathbf{I}_h,\; \mathbf{U}^{(2)T}\mathbf{U}^{(2)} = \mathbf{I}_w,\; \mathbf{U}^{(3)T}\mathbf{U}^{(3)} = \mathbf{I}_S \tag{3}$$
Theorem 1 [39]: Given fixed matrices $\mathbf{U}^{(n)}$ ($n = 1, 2, 3$), the score tensors $\mathcal{S}_i^k$ that minimize Eq. (3) are given by $\mathcal{S}_i^k = \mathcal{Y}_i^k \times_1 \mathbf{U}^{(1)T} \times_2 \mathbf{U}^{(2)T} \times_3 \mathbf{U}^{(3)T}$.

Theorem 2 [39]: If the score tensors $\mathcal{S}_i^k$ are chosen as $\mathcal{S}_i^k = \mathcal{Y}_i^k \times_1 \mathbf{U}^{(1)T} \times_2 \mathbf{U}^{(2)T} \times_3 \mathbf{U}^{(3)T}$, minimization problem (3) is equivalent to the following maximization problem

$$\max_{\mathbf{U}^{(1)},\mathbf{U}^{(2)},\mathbf{U}^{(3)}} \sum_{i=1}^{n_k} \big\|\mathcal{Y}_i^k \times_1 \mathbf{U}^{(1)T} \times_2 \mathbf{U}^{(2)T} \times_3 \mathbf{U}^{(3)T}\big\|_F^2 \quad \text{s.t.} \quad \mathbf{U}^{(1)T}\mathbf{U}^{(1)} = \mathbf{I}_h,\; \mathbf{U}^{(2)T}\mathbf{U}^{(2)} = \mathbf{I}_w,\; \mathbf{U}^{(3)T}\mathbf{U}^{(3)} = \mathbf{I}_S \tag{4}$$
There is no closed-form solution to problem (4). However, according to the following theorem, an iterative algorithm can be used to solve this maximization problem.

Theorem 3 [40]: Let $\{\mathbf{U}^{(n)}, n = 1, \cdots, N\}$ be the solution to Eq. (4). Then, given all the other projection matrices $\mathbf{U}^{(1)}, \cdots, \mathbf{U}^{(n-1)}, \mathbf{U}^{(n+1)}, \cdots, \mathbf{U}^{(N)}$, the matrix $\mathbf{U}^{(n)}$ consists of the eigenvectors of the matrix

$$\boldsymbol{\Phi}^{(n)} = \sum_{i=1}^{n_k} \mathbf{Y}_{i(n)}^k \mathbf{U}_{\Phi^{(n)}} \mathbf{U}_{\Phi^{(n)}}^T \mathbf{Y}_{i(n)}^{kT} \tag{5}$$
where $\mathbf{Y}_{i(n)}^k$ is the mode-$n$ unfolding of tensor $\mathcal{Y}_i^k$ and
$$\mathbf{U}_{\Phi^{(n)}} = \big(\mathbf{U}^{(n+1)} \otimes \mathbf{U}^{(n+2)} \otimes \cdots \otimes \mathbf{U}^{(N)} \otimes \mathbf{U}^{(1)} \otimes \mathbf{U}^{(2)} \otimes \cdots \otimes \mathbf{U}^{(n-1)}\big) \tag{6}$$
Once we obtain the projection matrices $\{\mathbf{U}^{(n)}, n = 1, \cdots, 3\}$, the score tensor $\mathcal{S}_i^k$ can be computed by $\mathcal{S}_i^k = \mathcal{Y}_i^k \times_1 \mathbf{U}^{(1)T} \times_2 \mathbf{U}^{(2)T} \times_3 \mathbf{U}^{(3)T}$.

The pseudo-code is given in Algorithm 1. Theoretically, only locally optimal solutions for the matrices $\mathbf{U}^{(i)}$ ($i = 1, 2, 3$) are obtained, because the result depends on the initialization of $\mathbf{U}^{(2)}$ and $\mathbf{U}^{(3)}$. The relative reduction of the mean reconstruction error, i.e. $\xi = \sum_{i=1}^{n_k} \|\mathcal{Y}_i^k - \mathcal{S}_i^k \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}\|_F^2 / n_k$, is adopted to check the convergence of the algorithm. The algorithm terminates when $|\xi(j) - \xi(j-1)| \le \mu\,\xi(j-1)$ is satisfied, where $\xi(j)$ is the mean reconstruction error at the $j$th iteration and $\mu$ is a small positive number. In this paper, $\mathbf{U}^{(2)}$ and $\mathbf{U}^{(3)}$ are initialized to the identity matrix for simplicity, and the algorithm usually converges within two iterations in the following experiments.


Algorithm 1. MPCA transform
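For concreteness, Algorithm 1 might be transcribed as the following alternating eigen-solver (a sketch with our own helper names, not the authors' code; since the $\mathbf{U}^{(n)}$ are full square matrices, the reconstruction error $\xi$ is essentially zero and the loop exits quickly):

```python
import numpy as np

def unfold(A, n):
    """Mode-n unfolding of a single tensor."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_product(A, U, n):
    """Mode-n product A x_n U for a single 3D tensor."""
    return np.moveaxis(np.tensordot(U, A, axes=(1, n)), 0, n)

def mpca(blocks, tol=1e-3, max_iter=10):
    """Alternating computation of U^(1), U^(2), U^(3) for one group of blocks."""
    mean = blocks.mean(axis=0)
    Yc = blocks - mean                         # centre by the average block
    U = [np.eye(d) for d in Yc.shape[1:]]      # initialize U^(n) to identity
    prev = np.inf
    for _ in range(max_iter):
        for n in range(3):
            Phi = np.zeros((Yc.shape[1 + n],) * 2)
            for Yi in Yc:                      # accumulate Phi^(n), Eq. (5)
                P = Yi
                for m in range(3):
                    if m != n:
                        P = mode_product(P, U[m].T, m)
                Pn = unfold(P, n)
                Phi += Pn @ Pn.T
            _, vecs = np.linalg.eigh(Phi)      # eigenvectors of Phi^(n)
            U[n] = vecs[:, ::-1]               # descending eigenvalue order
        err = 0.0                              # mean reconstruction error xi
        for Yi in Yc:
            S = Yi
            for m in range(3):
                S = mode_product(S, U[m].T, m)
            R = S
            for m in range(3):
                R = mode_product(R, U[m], m)
            err += np.linalg.norm(Yi - R) ** 2
        err /= len(Yc)
        if np.isfinite(prev) and abs(prev - err) <= tol * max(prev, 1e-12):
            break
        prev = err
    return U, mean

blocks = np.random.default_rng(3).standard_normal((20, 9, 9, 3))
U, mean = mpca(blocks)
print([u.shape for u in U])   # three square projection matrices
```

The identity initialization of the factor matrices mirrors the choice made for $\mathbf{U}^{(2)}$ and $\mathbf{U}^{(3)}$ in the text.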

4.3. Shrinkage, inverse MPCA transform and aggregation

Given $\{\mathcal{S}_i^k\}_{i=1}^{n_k}$, $\mathbf{U}^{(1)}$, $\mathbf{U}^{(2)}$, $\mathbf{U}^{(3)}$ obtained by applying Algorithm 1 to the group $P_k = \{\mathcal{Y}_i^k\}_{i=1}^{n_k}$, we aim to shrink the transformed tensor data $\{\mathcal{S}_i^k\}_{i=1}^{n_k}$ and apply the inverse MPCA transform to obtain the filtered OCT volume data. In fact, the decomposition $\mathcal{Y}_i^k = \mathcal{S}_i^k \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}$ can be rewritten as

$$\mathcal{Y}_i^k = \sum_{i_1=1}^{h} \sum_{i_2=1}^{w} \sum_{i_3=1}^{S} \mathcal{S}_i^k(i_1, i_2, i_3)\, \mathbf{u}_{i_1}^{(1)} \circ \mathbf{u}_{i_2}^{(2)} \circ \mathbf{u}_{i_3}^{(3)} \tag{7}$$
where $\circ$ represents the vector outer product, and $\mathbf{U}^{(n)} = (\mathbf{u}_1^{(n)}, \mathbf{u}_2^{(n)}, \cdots, \mathbf{u}_{I_n}^{(n)})$ ($I_1 = h$, $I_2 = w$, $I_3 = S$). $\mathcal{S}_i^k(i_1, i_2, i_3)$ is the coefficient imposed on the Kronecker basis $\mathbf{u}_{i_1}^{(1)} \circ \mathbf{u}_{i_2}^{(2)} \circ \mathbf{u}_{i_3}^{(3)}$. In the MPCA transform domain, the energy of the true signal concentrates on a small subset of large coefficients, while the energy of the noise spreads uniformly over the whole dataset as small coefficients.

Inspired by [35], we define the following shrinkage operator

$$\hat{\mathcal{S}}_i^k = \mathcal{W}^k \odot \mathcal{S}_i^k \tag{8}$$
where $\odot$ is the Hadamard product (i.e. the entry-wise product of two tensors) and the shrinkage coefficient $\mathcal{W}^k$ is defined as
$$\mathcal{W}^k(i_1, i_2, i_3) = \max\left(\frac{\sum_{i=1}^{n_k} \big(\mathcal{S}_i^k(i_1, i_2, i_3)\big)^2 / n_k - \sigma^2}{\sum_{i=1}^{n_k} \big(\mathcal{S}_i^k(i_1, i_2, i_3)\big)^2 / n_k},\; 0\right) \tag{9}$$
where $1 \le i_1 \le h$, $1 \le i_2 \le w$, $1 \le i_3 \le S$. In flat zones, $\sum_{i=1}^{n_k} \big(\mathcal{S}_i^k(i_1, i_2, i_3)\big)^2 / n_k$ is much smaller than $\sigma^2$, so $\mathcal{W}^k(i_1, i_2, i_3)$ is close to 0. Hence, most of the noise is suppressed by the operator $\hat{\mathcal{S}}_i^k = \mathcal{W}^k \odot \mathcal{S}_i^k$.
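With the group's score tensors stacked along a leading axis, the shrinkage of Eqs. (8)–(9) is a few lines (our sketch; the small epsilon guarding the denominator is our own addition):

```python
import numpy as np

def shrink_scores(S, sigma):
    """Apply the group shrinkage of Eqs. (8)-(9) to stacked score tensors.
    S has shape (n_k, h, w, n_frames): one score tensor per block."""
    energy = (S ** 2).mean(axis=0)                       # per-coefficient mean energy
    W = np.maximum((energy - sigma ** 2) / np.maximum(energy, 1e-12), 0.0)
    return W[None] * S                                   # broadcast W^k over the group

rng = np.random.default_rng(4)
scores = rng.standard_normal((30, 9, 9, 3))              # pure-noise coefficients
shrunk = shrink_scores(scores, sigma=1.0)
print(bool(np.abs(shrunk).mean() < np.abs(scores).mean()))  # → True
```

Since every weight lies in $[0, 1)$, coefficients whose group energy is at or below the noise level are zeroed, while strong signal coefficients pass nearly unchanged.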

By transforming $\hat{\mathcal{S}}_i^k$ back to the time domain, we obtain the denoised result of $\mathcal{Y}_i^k$ as

$$\hat{\mathcal{X}}_i^k = \hat{\mathcal{S}}_i^k \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}, \quad i = 1, 2, \cdots, n_k$$

The final denoised results of the clusters $\{P_k\}_{k=1}^{K}$ are obtained by applying the above procedure to each cluster. All of the estimated cubic blocks are then aggregated to form the estimate $\hat{\mathcal{X}}$ in the logarithmic domain.
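The aggregation step, in which each voxel's estimates from all cubic blocks containing it are averaged, might be sketched as (our own function and variable names; `coords` holds each block's top-left position):

```python
import numpy as np

def aggregate(blocks, coords, shape):
    """Average overlapping filtered cubic blocks back into a volume."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    h, w = blocks.shape[1], blocks.shape[2]
    for B, (i, j) in zip(blocks, coords):
        acc[i:i + h, j:j + w, :] += B          # sum contributions per voxel
        cnt[i:i + h, j:j + w, :] += 1.0
    return acc / np.maximum(cnt, 1.0)          # average; untouched voxels stay 0

# Two overlapping constant blocks reconstruct a constant region exactly
blocks = np.ones((2, 4, 4, 2))
vol = aggregate(blocks, [(0, 0), (2, 2)], (6, 6, 2))
print(vol[2, 2, 0], vol[0, 0, 0])   # → 1.0 1.0
```

Uniform averaging is the simplest aggregation rule; weighted variants are common but are not specified here.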

For N = 1, the input image patches are vectors: only one projection matrix is computed and MPCA reduces to PCA. In this sense, MPCA generalizes PCA. Moreover, PCA based methods cannot directly process multi-dimensional data, and the image-to-vector approach ignores the local correlation inside the image and destroys its structural information. In contrast, the image-to-tensor approach of MPCA better preserves the structure of the image cubic blocks, and the features of local OCT cubic blocks can be efficiently exploited through the three computed projection matrices.

4.4. Iterative regularization

In order to obtain better denoising performance, an iterative regularization method is used to update the noisy image by adding back some of the method noise at each iteration

$$\mathcal{Y}^t = \hat{\mathcal{X}}^{t-1} + \eta\,(\mathcal{Y} - \hat{\mathcal{X}}^{t-1}) \tag{10}$$
where $\eta$ is a relaxation parameter and $t$ denotes the $t$th iteration. The noise level $\sigma$ is updated as
$$\hat{\sigma}^t = \gamma \sqrt{\sigma^2 - \|\mathcal{Y} - \mathcal{Y}^t\|_2^2} \tag{11}$$
where $\gamma$ is a scaling factor controlling the re-estimation of the noise variance. The initial noise level $\sigma$ is estimated by the method in [33]. The pseudo-code of the proposed method is given in Algorithm 2.


Algorithm 2. Proposed multi-frame OCT images denoising method
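One outer iteration of this loop, i.e. the relaxation update of Eq. (10) followed by the noise re-estimation of Eq. (11), can be sketched as follows (our own function name; Eq. (11) leaves the normalization of the residual term unstated, and the per-voxel mean used below is our assumption; $\eta$ and $\gamma$ take the values chosen in Sec. 5):

```python
import numpy as np

def regularize_step(Y, X_hat, sigma0, eta=0.2, gamma=0.65):
    """One relaxation update (Eq. (10)) plus noise re-estimation (Eq. (11))."""
    Y_t = X_hat + eta * (Y - X_hat)            # add back some method noise
    resid = ((Y - Y_t) ** 2).mean()            # per-voxel residual energy (our choice)
    sigma_t = gamma * np.sqrt(max(sigma0 ** 2 - resid, 0.0))
    return Y_t, sigma_t

Y = np.ones((8, 8, 3))
X_hat = np.zeros_like(Y)
Y_t, sigma_t = regularize_step(Y, X_hat, sigma0=1.0)
print(round(float(sigma_t), 2))   # gamma * sqrt(1 - 0.8^2) → 0.39
```

The clamp at zero guards against a residual larger than the current noise estimate, in which case the iteration would simply stop shrinking.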

5. Experimental results

In this section, we test the proposed method on two OCT image datasets and compare it with several state-of-the-art methods: the probability-based non-local means method (PNLM) [24], the 2D adaptive complex diffusion method [15], the Bayesian method [17], the sparse representation method (MSBTD) [26] and the multi-frame wavelet method [19]. To obtain good denoising performance at a reduced computational cost, we keep w and h fixed at w = h = 9 for all experiments. The other parameters of the proposed method are tentatively chosen as the values that yield the best peak signal-to-noise ratio (PSNR). As shown in Fig. 1, the PSNR of the human retinal data is insensitive to the parameters. The parameters have a greater impact on the PSNR of the pig eye data, which reaches a maximum at η = 0.2 with the scaling factor fixed (γ = 0.65), and a maximum at γ = 0.65 with the relaxation parameter fixed (η = 0.2). Therefore, in this paper we set η = 0.2 and γ = 0.65.


Fig. 1 The PSNR of the proposed method under different parameters. (a) influence of parameter η. (b) influence of parameter γ.


5.1. Image quality metrics

In order to quantitatively evaluate and analyze the despeckling performance of the proposed method, several metrics are used to quantify the despeckled image quality. These metrics include the cross-correlation (XCOR), the PSNR, the average contrast-to-noise ratio (CNR) and the average equivalent number of looks (ENL). The XCOR depicts the similarity between the despeckled image and the reference one (e.g. Figs. 2(a) and 2(b)); a larger value implies that the recovered image is more similar to the reference image. The PSNR measures the intensity difference between two images, and is higher for high-quality despeckled images. The CNR measures the contrast between foreground objects (e.g. the areas in the blue rectangles in Fig. 3(a)) and the background region (e.g. the areas in the white rectangles in Fig. 3(a)). The ENL accounts for the smoothness in homogeneous areas (e.g. the areas in the green rectangles in Fig. 3(a)). These metrics are defined as follows:

$$\mathrm{XCOR} = \frac{\sum_{j=1}^{N} I_j \hat{I}_j}{\sqrt{\big[\sum_{j=1}^{N} I_j^2\big]\big[\sum_{j=1}^{N} \hat{I}_j^2\big]}} \tag{12}$$
$$\mathrm{PSNR} = 10 \times \log_{10}\left(\frac{\mathrm{MAX}^2}{\frac{1}{N}\sum_{j=1}^{N}\big(I_j - \hat{I}_j\big)^2}\right) \tag{13}$$
$$\mathrm{CNR} = \frac{|\mu_f - \mu_b|}{\sqrt{0.5\,(\sigma_f^2 + \sigma_b^2)}} \tag{14}$$
$$\mathrm{ENL} = \frac{\mu_f^2}{\sigma_f^2} \tag{15}$$
where $\hat{I}$ is the recovered image with respect to its ground truth image $I$. $N$ is the total number of pixels and MAX is the maximum intensity of the images. $\sigma_f$ and $\mu_f$ are the standard deviation and mean of the foreground regions; $\sigma_b$ and $\mu_b$ are the standard deviation and mean of the background region.
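These four metrics transcribe directly from the definitions above (a plain sketch of ours; `max_val` plays the role of MAX, and the region statistics are passed in precomputed):

```python
import numpy as np

def xcor(I, Ih):
    """Cross-correlation between reference I and despeckled Ih."""
    return (I * Ih).sum() / np.sqrt((I ** 2).sum() * (Ih ** 2).sum())

def psnr(I, Ih, max_val=255.0):
    """Peak signal-to-noise ratio; max_val plays the role of MAX."""
    mse = ((I - Ih) ** 2).mean()
    return 10.0 * np.log10(max_val ** 2 / mse)

def cnr(mu_f, sig_f, mu_b, sig_b):
    """Contrast-to-noise ratio from foreground/background statistics."""
    return abs(mu_f - mu_b) / np.sqrt(0.5 * (sig_f ** 2 + sig_b ** 2))

def enl(mu_f, sig_f):
    """Equivalent number of looks of a homogeneous region."""
    return mu_f ** 2 / sig_f ** 2

I = np.full((8, 8), 100.0)
print(round(float(xcor(I, I)), 3), round(float(psnr(I, I + 1.0)), 2))  # → 1.0 48.13
```

In the paper the CNR and ENL are averaged over several marked rectangles; these helpers would simply be evaluated per region and averaged.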


Fig. 2 Reference images. (a) pig eye data. (b) human retinal data.



Fig. 3 The despeckled results of different methods. (a) Noisy image. (b) Averaged image (three frames). (c)–(h) Results of the MSBTD method, PNLM method, Bayesian method, complex diffusion method, multi-frame wavelet method and the proposed method.


5.2. Results on the pig eye data

In the following experiments, we use the public pig eye OCT dataset [19], which was acquired by scanning a pig eye in high-speed mode with 768 A-scans, using a Spectralis HRA & OCT (Heidelberg Engineering). The dataset includes 35 sets at different eye positions, corresponding to a complete 0.384-mm shift in the transversal direction. Each set contains 13 frames (768 × 496 pixels) sharing the same imaging position. The pixel spacing of each frame is 14 μm in the transversal direction and 3.87 μm in the axial direction. To assess the quality of the despeckled OCT images, a reference image (Fig. 2(a)) is obtained by averaging all preregistered frames. For the multi-frame based methods, we take 3 frames, from “35_6” to “35_8”, as input. For the single-frame based methods, we take the average of the 3 frames (Fig. 3(b)) as input.

The despeckled images produced by the proposed and compared methods are shown in Fig. 3. It can be seen that the proposed algorithm achieves delicate edge preservation. A stronger comparison is presented in the closeups: the enlarged red rectangle regions are shown in Fig. 4. We can see that speckle noise is sufficiently suppressed and more details are preserved. The quantitative results of the different methods are listed in Table 1, with the best result marked in bold. As can be seen, our algorithm achieves the best overall results. From Fig. 3 and Table 1, we find that our algorithm is superior on these metrics while providing good speckle reduction and visual quality. However, as shown in Fig. 4(h), some foreground information is oversmoothed to a certain extent. The main reason for this phenomenon is that the noise level of the input image is high. The size of the cubic block must therefore be set relatively large, which reduces the number of similar blocks in the corresponding group. Consequently, most of the transform coefficients are shrunk to 0, so only the main structure can be preserved.


Fig. 4 Closeups of the red rectangle region in Fig. 3(a). (a) Noisy image. (b) Averaged image (three frames). (c)–(h) Results of the MSBTD method, PNLM method, Bayesian method, complex diffusion method, multi-frame wavelet method and the proposed method.



Table 1. Performance metrics for different methods operating on the pig eye data.

5.3. Results on the human retinal data

To further test the despeckling ability of the proposed method, we have applied the algorithm to human retinal OCT images from the public dataset [25]. Each image was acquired with an 840-nm wavelength spectral-domain OCT imaging system from Bioptigen, Inc., with an axial resolution of ∼4.5 μm. Each OCT volume contains 5 frames, and the averaged image (Fig. 5(b)) is the average of the 5 frames. More information about the image acquisition process can be found in [25].


Fig. 5 The despeckled results of different methods. (a) Noisy image. (b) Averaged image (five frames). (c)–(h) Results of the MSBTD method, PNLM method, Bayesian method, complex diffusion method, multi-frame wavelet method and the proposed method.


Figure 5 shows the results of the different methods. Comparing the despeckling results, we can see that the multi-frame wavelet method blurs the fine edges, whereas the proposed method preserves image details. To better observe the denoising effect, a further comparison is presented: the enlarged regions are shown in Fig. 6. It can be seen that the proposed method not only achieves better speckle noise suppression but also preserves thin structures. The quantitative results listed in Table 2 also show the superior performance of the proposed method on human retinal images compared with the other speckle removal methods.


Fig. 6 Closeups of the red rectangle region in Fig. 5(a). (a) Noisy image. (b) Averaged image (five frames). (c)–(h) Results of the MSBTD method, PNLM method, Bayesian method, complex diffusion method, multi-frame wavelet method and the proposed method.



Table 2. Performance metrics for different methods operating on the human retinal data.

6. Conclusion

In this paper, we propose a new method for denoising multi-frame OCT images. The good performance of the proposed method is mainly attributed to three factors: the nonlocal self-similarity prior is fully utilized by the proposed algorithm; the MPCA transform works directly on the image blocks in each mode subspace rather than in a 1D vector space, so image cubic blocks can be efficiently represented by the MPCA bases; and the proposed shrinkage operator is data-driven. The experimental results show that the proposed method outperforms other popular methods in terms of speckle noise reduction and fine detail preservation.

Funding

National Natural Science Foundation of China (61272239, 61671276); Science and Technology Development Project of Shandong Province of China (2014GGX101024).

Acknowledgments

The authors are grateful to L. Fang and M. A. Mayer for the release of the OCT datasets.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References and links

1. Y. Du, G. Liu, G. Feng, and Z. Chen, “Speckle reduction in optical coherence tomography images based on wave atoms,” J. Biomed. Opt. 19(5), 056009 (2014). [CrossRef]   [PubMed]  

2. G. Gong, H. Zhang, and M. Yao, “Speckle noise reduction algorithm with total variation regularization in optical coherence tomography,” Opt. Express 23(19), 24699–24712 (2015). [CrossRef]   [PubMed]  

3. J. Xu, H. Ou, C. Sun, P. C. Chui, V. X. Yang, E. Y. Lam, and K. K. Wong, “Wavelet domain compounding for speckle reduction in optical coherence tomography,” J. Biomed. Opt. 18(9), 096002 (2013). [CrossRef]   [PubMed]  

4. J. M. Schmitt, S. H. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–105 (1999). [CrossRef]   [PubMed]  

5. M. Pircher, E. Götzinger, R. A. Leitgeb, A. F. Fercher, and C. K. Hitzenberger, “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8(3), 565–569 (2003). [CrossRef]   [PubMed]  

6. A. E. Desjardins, B. J. Vakoc, W. Y. Oh, S. M. R. Motaghiannezam, G. J. Tearney, and B. E. Bouma, “Angle-resolved optical coherence tomography with sequential angular selectivity for speckle reduction,” Opt. Express 15(10), 6200–6209 (2007). [CrossRef]   [PubMed]  

7. T. Bajraszewski, M. Wojtkowski, M. Szkulmowski, A. Szkulmowska, R. Huber, and A. Kowalczyk, “Improved spectral optical coherence tomography using optical frequency comb,” Opt. Express 16(6), 4163–4176 (2008). [CrossRef]   [PubMed]  

8. B. F. Kennedy, T. R. Hillman, A. Curatolo, and D. D. Sampson, “Speckle reduction in optical coherence tomography by strain compounding,” Opt. Lett. 35(14), 2445–2447 (2010). [CrossRef]   [PubMed]  

9. T. Klein, R. André, W. Wieser, T. Pfeiffer, and R. Huber, “Joint aperture detection for speckle reduction and increased collection efficiency in ophthalmic MHz OCT,” Biomed. Opt. Express 4(4), 619–634 (2013). [CrossRef]   [PubMed]  

10. J. Rogowska and M. E. Brezinski, “Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images,” Phys. Med. Biol. 47(4), 641–655 (2002). [CrossRef]   [PubMed]  

11. A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24(7), 1901–1910 (2007). [CrossRef]  

12. Y. Yu and S. T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Trans. Image Process. 11(11), 1260–1270 (2002). [CrossRef]  

13. S. Aja-Fernández and C. Alberola-López, “On the estimation of the coefficient of variation for anisotropic diffusion speckle filtering,” IEEE Trans. Image Process. 15(9), 2694–2701 (2006). [CrossRef]   [PubMed]  

14. P. Puvanathasan and K. Bizheva, “Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images,” Opt. Express 17(2), 733–746 (2009). [CrossRef]   [PubMed]  

15. R. Bernardes, C. Maduro, P. Serranho, A. Araújo, S. Barbeiro, and J. Cunha-Vaz, “Improved adaptive complex diffusion despeckling filter,” Opt. Express 18(23), 24084–24059 (2010). [CrossRef]  

16. H. M. Salinas and D. C. Fernández, “Comparison of PDE-Based nonlinear diffusion approaches for image enhancement and denoising in optical coherence tomography,” IEEE Trans. Med. Imaging 26(6), 761–771 (2007). [CrossRef]   [PubMed]  

17. A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi, “General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery,” Opt. Express 18(8), 8338–8352 (2010). [CrossRef]   [PubMed]  

18. D. C. Adler, T. H. Ko, and J. G. Fujimoto, “Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter,” Opt. Lett. 29(24), 2878–2880 (2004). [CrossRef]  

19. M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and P. R. Tornow, “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express 3(3), 572–589 (2012). [CrossRef]   [PubMed]  

20. Z. Jian, Z. Yu, L. Yu, B. Rao, Z. Chen, and B. J. Tromberg, “Speckle attenuation in optical coherence tomography by curvelet shrinkage,” Opt. Lett. 34(10), 1516–1518 (2009). [CrossRef]   [PubMed]  

21. J. Xu, H. Ou, E. Y. Lam, P. C. Chui, and K. K. Wong, “Speckle reduction of retinal optical coherence tomography based on contourlet shrinkage,” Opt. Lett. 38(15), 2900–2903 (2013). [CrossRef]   [PubMed]  

22. P. Coupé, P. Hellier, C. Kervrann, and C. Barillot, “Nonlocal means-based speckle filtering for ultrasound images,” IEEE Trans. Image Process. 18(10), 2221–2229 (2009). [CrossRef]   [PubMed]  

23. J. Aum, J. H. Kim, and J. Jeong, “Effective speckle noise suppression in optical coherence tomography images using nonlocal means denoising filter with double Gaussian anisotropic kernels,” Appl. Opt. 54(13), D43–D50 (2015). [CrossRef]  

24. H. Yu, J. Gao, and A. Li, “Probability-based non-local means filter for speckle noise suppression in optical coherence tomography images,” Opt. Lett. 41(5), 994–997 (2016). [CrossRef]   [PubMed]  

25. L. Fang, S. Li, R. P. McNabb, Q. Nie, A. N. Kuo, C. A. Toth, J. A. Izatt, and S. Farsiu, “Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation,” IEEE Trans. Med. Imaging 32(11), 2034–2049 (2013). [CrossRef]   [PubMed]  

26. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]   [PubMed]  

27. R. Kafieh, H. Rabbani, and I. Selesnick, “Three dimensional data-driven multi scale atomic representation of optical coherence tomography,” IEEE Trans. Med. Imaging 34(5), 1042–1062 (2015). [CrossRef]   [PubMed]  

28. H. Chen, S. Fu, H. Wang, H. Lv, and C. Zhang, “Speckle attenuation by adaptive singular value shrinking with generalized likelihood matching in optical coherence tomography,” J. Biomed. Opt. 23(3), 036014 (2018). [CrossRef]  

29. C. Tang, L. Cao, J. Chen, and X. Zheng, “Speckle noise reduction for optical coherence tomography images via non-local weighted group low-rank representation,” Laser Phys. Lett. 14(5), 056002 (2017). [CrossRef]  

30. A. Baghaie, R. M. D’souza, and Z. Yu, “Sparse and low rank decomposition based batch image alignment for speckle reduction of retinal OCT images,” in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2015), pp. 226–230.

31. H. Lv, S. Fu, C. Zhang, and L. Zhai, “Speckle noise reduction for optical coherence tomography based on adaptive 2D dictionary,” Laser Phys. Lett. 15(5), 055401 (2018). [CrossRef]  

32. Z. Jian, L. Yu, B. Rao, B. J. Tromberg, and Z. Chen, “Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform,” Opt. Express 18(2), 1024–1032 (2010). [CrossRef]   [PubMed]  

33. L. Bian, J. Suo, F. Chen, and Q. Dai, “Multi-frame denoising of high speed optical coherence tomography data using inter-frame and intra-frame priors,” J. Biomed. Opt. 20(3), 036006 (2015). [CrossRef]   [PubMed]  

34. D. Thapa, K. Raahemifar, and V. Lakshminarayanan, “Reduction of speckle noise from optical coherence tomography images using multi-frame weighted nuclear norm minimization method,” J. Mod. Opt. 62(21), 1856–1864 (2015). [CrossRef]  

35. L. Zhang, W. Dong, D. Zhang, and G. Shi, “Two-stage image denoising by principal component analysis with local pixel grouping,” Pattern Recognit. 43(4), 1531–1549 (2010). [CrossRef]  

36. C. A. Deledalle, J. Salmon, and A. S. Dalalyan, “Image denoising with patch-based PCA: local versus global,” in Proceedings of British Machine Vision Conference (BMVA Press, 2011), pp. 25.1–25.10.

37. J. Yang, D. Zhang, A. F. Frangi, and J. Y. Yang, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 26(1), 131–137 (2004). [CrossRef]   [PubMed]  

38. H. Kong, L. Wang, E. K. Teoh, X. Li, J. G. Wang, and R. Venkateswarlu, “Generalized 2D principal component analysis for face image representation and recognition,” Neural Networks 18(5–6), 585–594 (2005). [CrossRef]   [PubMed]  

39. R. Xu and Y. W. Chen, “Generalized N-dimensional principal component analysis (GND-PCA) and its application on construction of statistical appearance models for medical volumes with fewer samples,” Neurocomputing 72(10–12), 2276–2287 (2009). [CrossRef]  

40. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: Multilinear principal component analysis of tensor objects,” IEEE Trans. Neural Netw. 19(1), 18–39 (2008). [CrossRef]   [PubMed]  

41. D. Arthur and S. Vassilvitskii, “k-means++: The advantages of careful seeding,” in Proceedings of ACM-SIAM symposium on Discrete algorithms (SIAM, 2007), pp. 1027–1035.



Figures (6)

Fig. 1 The PSNR of the proposed method under different parameters. (a) Influence of parameter η. (b) Influence of parameter γ.
Fig. 2 Reference images. (a) Pig eye data. (b) Human retinal data.
Fig. 3 The despeckled results of different methods. (a) Noisy image. (b) Averaged image (three frames). (c)–(h) Results of the MSBTD, PNLM, Bayesian, complex diffusion, multi-frame wavelet methods and the proposed method.
Fig. 4 Close-ups of the red rectangle region in Fig. 3(a). (a) Noisy image. (b) Averaged image (three frames). (c)–(h) Results of the MSBTD, PNLM, Bayesian, complex diffusion, multi-frame wavelet methods and the proposed method.
Fig. 5 The despeckled results of different methods. (a) Noisy image. (b) Averaged image (five frames). (c)–(h) Results of the MSBTD, PNLM, Bayesian, complex diffusion, multi-frame wavelet methods and the proposed method.
Fig. 6 Close-ups of the red rectangle region in Fig. 5(a). (a) Noisy image. (b) Averaged image (five frames). (c)–(h) Results of the MSBTD, PNLM, Bayesian, complex diffusion, multi-frame wavelet methods and the proposed method.

Tables (4)

Algorithm 1 MPCA transform
Algorithm 2 Proposed multi-frame OCT image denoising method
Table 1 Performance metrics for different methods operating on the pig eye data.
Table 2 Performance metrics for different methods operating on the human retinal data.

Equations (17)


$$\mathcal{A} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \times \cdots \times_N U^{(N)}$$

$$A_{(n)} = U^{(n)} S_{(n)} \left( U^{(n+1)} \otimes U^{(n+2)} \otimes \cdots \otimes U^{(N)} \otimes U^{(1)} \otimes U^{(2)} \otimes \cdots \otimes U^{(n-1)} \right)^T$$

$$Y = \begin{bmatrix} y_1^1 & y_2^1 & \cdots & y_n^1 \\ y_1^2 & y_2^2 & \cdots & y_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ y_1^m & y_2^m & \cdots & y_n^m \end{bmatrix}$$

$$\min_{U^{(1)},U^{(2)},U^{(3)},\mathcal{S}_i^k} \sum_{i=1}^{n_k} \left\| \mathcal{Y}_i^k - \mathcal{S}_i^k \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)} \right\|_F^2 \quad \text{s.t.}\ U^{(1)T}U^{(1)} = I_h,\ U^{(2)T}U^{(2)} = I_w,\ U^{(3)T}U^{(3)} = I_S$$

$$\max_{U^{(1)},U^{(2)},U^{(3)}} \sum_{i=1}^{n_k} \left\| \mathcal{Y}_i^k \times_1 U^{(1)T} \times_2 U^{(2)T} \times_3 U^{(3)T} \right\|_F^2 \quad \text{s.t.}\ U^{(1)T}U^{(1)} = I_h,\ U^{(2)T}U^{(2)} = I_w,\ U^{(3)T}U^{(3)} = I_S$$

$$\Phi^{(n)} = \sum_{i=1}^{n_k} Y_{i(n)}^{k}\, U_\Phi^{(n)} U_\Phi^{(n)T}\, Y_{i(n)}^{kT}$$

$$U_\Phi^{(n)} = \left( U^{(n+1)} \otimes U^{(n+2)} \otimes \cdots \otimes U^{(N)} \otimes U^{(1)} \otimes U^{(2)} \otimes \cdots \otimes U^{(n-1)} \right)$$

$$\mathcal{Y}_i^k = \sum_{i_1=1}^{h} \sum_{i_2=1}^{w} \sum_{i_3=1}^{S} \mathcal{S}_i^k(i_1,i_2,i_3)\, u_{i_1}^{(1)} \circ u_{i_2}^{(2)} \circ u_{i_3}^{(3)}$$
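The Tucker/MPCA transform in the equations above reduces to repeated mode-n products with the factor matrices. Below is a minimal numpy sketch; the helper name `mode_n_product` and the toy block sizes are illustrative choices, not from the paper:

```python
import numpy as np

def mode_n_product(tensor, matrix, n):
    """Mode-n product: contract axis n of `tensor` with the columns of `matrix`."""
    # tensordot removes axis n and appends the new axis (matrix rows) at the end,
    # so move that axis back to position n.
    return np.moveaxis(np.tensordot(tensor, matrix, axes=(n, 1)), -1, n)

# Toy 3D block of size h x w x S, e.g. an 8x8 patch across 3 frames
h, w, S = 8, 8, 3
A = np.random.rand(h, w, S)

# Orthonormal factor matrices U^(1), U^(2), U^(3) (here from QR of random matrices)
U1 = np.linalg.qr(np.random.rand(h, h))[0]
U2 = np.linalg.qr(np.random.rand(w, w))[0]
U3 = np.linalg.qr(np.random.rand(S, S))[0]

# Forward transform: core = A x_1 U1^T x_2 U2^T x_3 U3^T
core = mode_n_product(mode_n_product(mode_n_product(A, U1.T, 0), U2.T, 1), U3.T, 2)

# Inverse transform: A = core x_1 U1 x_2 U2 x_3 U3 (exact when the U's are orthonormal)
A_rec = mode_n_product(mode_n_product(mode_n_product(core, U1, 0), U2, 1), U3, 2)
print(np.allclose(A, A_rec))  # True
```

Because each factor matrix is orthonormal, the inverse transform recovers the block exactly, which is why denoising can operate purely on the core coefficients.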
$$\hat{\mathcal{S}}_i^k = \mathcal{W}^k \odot \mathcal{S}_i^k$$

$$\mathcal{W}^k(i_1,i_2,i_3) = \max\left( \frac{\sum_{i=1}^{n_k} \left( \mathcal{S}_i^k(i_1,i_2,i_3) \right)^2 / n_k - \sigma^2}{\sum_{i=1}^{n_k} \left( \mathcal{S}_i^k(i_1,i_2,i_3) \right)^2 / n_k},\ 0 \right)$$

$$\hat{\mathcal{X}}_i^k = \hat{\mathcal{S}}_i^k \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)}, \quad i = 1, 2, \ldots, n_k$$
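The Wiener-like weight attenuates each coefficient position by the estimated signal fraction of its group energy. A short numpy sketch, assuming the group of cores is stacked along a leading axis (the function name `shrink_cores` and the shapes are illustrative):

```python
import numpy as np

def shrink_cores(cores, sigma):
    """Shrink grouped MPCA coefficients with a Wiener-like weight.

    cores : array of shape (n_k, h, w, S), transform coefficients of the
            n_k similar blocks in one cluster.
    sigma : noise standard deviation.
    """
    # Empirical signal-plus-noise energy per coefficient position,
    # averaged over the n_k blocks of the group
    energy = np.mean(cores ** 2, axis=0)
    # Weight in [0, 1): estimated fraction of the energy due to signal
    weight = np.maximum((energy - sigma ** 2) / energy, 0.0)
    # Same weight applied to every block in the group (broadcast over axis 0)
    return cores * weight

cores = np.random.randn(10, 8, 8, 3) * 2.0   # toy grouped coefficients
filtered = shrink_cores(cores, sigma=1.0)
```

Positions whose average energy falls below the noise variance get weight zero, so coefficients dominated by speckle are discarded rather than merely damped.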
$$\mathcal{Y}^t = \hat{\mathcal{X}}^{t-1} + \eta \left( \mathcal{Y} - \hat{\mathcal{X}}^{t-1} \right)$$

$$\hat{\sigma}_t = \gamma \sqrt{\sigma^2 - \left\| \mathcal{Y} - \mathcal{Y}^t \right\|_2^2}$$
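The iterative regularization step feeds a fraction η of the residual back into the estimate and then re-estimates the remaining noise level with factor γ before the next denoising pass. The sketch below uses a simple box filter as a stand-in for the full MPCA denoiser, and the per-pixel mean inside the square root (clipped to stay non-negative) is an assumption for illustration:

```python
import numpy as np

def box_denoise(img, sigma):
    """Toy 3x3 averaging 'denoiser' standing in for the MPCA filtering step."""
    pad = np.pad(img, 1, mode='edge')
    return sum(np.roll(np.roll(pad, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1))[1:-1, 1:-1] / 9.0

def iterate_regularization(Y, denoise, eta=0.1, gamma=0.9, sigma=0.1, n_iter=3):
    """Repeatedly denoise while feeding part of the residual back in."""
    X_hat = denoise(Y, sigma)
    for _ in range(n_iter - 1):
        # Add back a fraction eta of the method noise (residual)
        Y_t = X_hat + eta * (Y - X_hat)
        # Re-estimate the remaining noise level (clipped so the sqrt stays real)
        var_left = max(sigma ** 2 - np.mean((Y - Y_t) ** 2), 0.0)
        sigma_t = gamma * np.sqrt(var_left)
        X_hat = denoise(Y_t, sigma_t)
    return X_hat

noisy = np.random.rand(32, 32)
result = iterate_regularization(noisy, box_denoise, sigma=0.1)
```

Feeding back the residual lets details over-smoothed in the previous pass re-enter the next one, while the shrinking σ̂_t keeps later passes progressively gentler.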
$$\mathrm{XCOR} = \frac{\sum_{j=1}^{N} I_j \hat{I}_j}{\sqrt{\left[ \sum_{j=1}^{N} I_j^2 \right] \left[ \sum_{j=1}^{N} \hat{I}_j^2 \right]}}$$

$$\mathrm{PSNR} = 10 \times \log_{10} \left( \frac{\mathrm{MAX}^2}{\frac{1}{N} \sum_{j=1}^{N} \left( I_j - \hat{I}_j \right)^2} \right)$$

$$\mathrm{CNR} = \frac{\left| \mu_f - \mu_b \right|}{\sqrt{0.5 \left( \sigma_f^2 + \sigma_b^2 \right)}}$$

$$\mathrm{ENL} = \frac{\mu_f^2}{\sigma_f^2}$$
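The four evaluation metrics translate directly into numpy; a minimal sketch (function names are illustrative, and selecting the foreground/background ROIs for CNR and ENL is left to the caller):

```python
import numpy as np

def xcor(I, I_hat):
    """Cross-correlation between reference I and despeckled result I_hat."""
    return np.sum(I * I_hat) / np.sqrt(np.sum(I ** 2) * np.sum(I_hat ** 2))

def psnr(I, I_hat, max_val=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((I - I_hat) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def cnr(fg, bg):
    """Contrast-to-noise ratio between a feature ROI and a background ROI."""
    return abs(fg.mean() - bg.mean()) / np.sqrt(0.5 * (fg.var() + bg.var()))

def enl(fg):
    """Equivalent number of looks of a homogeneous ROI (higher = smoother)."""
    return fg.mean() ** 2 / fg.var()
```

XCOR and PSNR need a reference image (here the averaged multi-frame image), whereas CNR and ENL are computed on manually chosen regions of the despeckled image alone.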