Optica Publishing Group

High quality of an absolute phase reconstruction for coherent digital holography with an enhanced anti-speckle deep neural unwrapping network

Open Access

Abstract

Overcoming speckle-noise interference in phase reconstruction remains a long-standing challenge for coherent digital holography (CDH) and its applications. In this paper, we propose an enhanced anti-speckle deep neural unwrapping network (E-ASDNUN) approach to achieve high-quality absolute phase reconstruction for CDH. The method designs a special network-based noise filter and embeds it into a deep neural unwrapping network to enhance the anti-noise capacity of the image feature recognition and extraction process. Numerical simulations and experimental tests of phase unwrapping reconstruction and image-quality evaluation under noisy conditions show that the E-ASDNUN approach is very effective against speckle noise in realizing high-quality absolute phase reconstruction. It also demonstrates much better robustness than the typical U-net neural network and traditional phase unwrapping algorithms when reconstructing phase images with high wrapping densities and high noise levels. The E-ASDNUN approach is further validated by measuring the same phase object with a commercial white-light interferometer as a reference; the two results are in excellent agreement.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Digital holography uses a charge-coupled device (CCD) instead of traditional film to record the hologram and reconstructs the three-dimensional image of an object rapidly and quantitatively through a computer; it has therefore been a very attractive technique, widely investigated for applications ranging from biomedicine to industrial measurement for decades [1–5]. Recent developments show that digital holography can be classified into coherent and incoherent approaches [6–9]. In coherent digital holography (CDH), a laser is usually used as the light source because its excellent coherence enables holographic imaging in more expansive domains, e.g. a larger phase space. However, it also brings a negative effect: laser speckle is incorporated into the holographic imaging process, which can lead to image distortion and failure of the hologram reconstruction [10–13]. To overcome speckle interference in hologram reconstruction, a number of anti-noise phase unwrapping algorithms have been proposed [14–17]. These include path-dependent and path-independent approaches, of which two well-known methods are the phase unwrapping max-flow (PUMA) and the discrete cosine transform-based least squares (DCT-LS) algorithms; both have demonstrated a certain anti-noise capacity in phase unwrapping reconstruction [15,16]. Although the traditional algorithms can reduce speckle interference to some extent, they are only applicable to relatively easy cases, such as phase reconstruction at low noise levels or low wrapping densities. Recently, with the development of deep learning, network-based phase reconstruction approaches have begun to be applied in this area as well [18–23].
Compared with the traditional anti-noise unwrapping algorithms, network-based approaches achieve better noise suppression and phase reconstruction owing to their excellent learning and feature-extraction capacity. A typical deep-learning approach is the U-net neural network. However, these earlier network approaches still struggle to achieve high-quality phase unwrapping reconstruction for phase images with high noise levels and high wrapping densities, as their architectures are not specifically designed for anti-speckle digital holography.

To overcome these problems and develop a more powerful phase reconstruction approach, we propose an enhanced anti-speckle deep neural unwrapping network (E-ASDNUN). The method designs a special network-based speckle-noise filter and combines it with a deep neural unwrapping network, which remarkably enhances the anti-speckle capacity of the network during phase unwrapping reconstruction. This is reflected in the high reconstruction quality, free of speckle-induced image distortions, even for phase images with high noise levels and high wrapping densities. The network architecture, numerical simulation and experimental tests are described below.

2. Principle of the E-ASDNUN approach

As speckle is generally regarded as a sort of multiplicative noise in CDH, the complex amplitude of the object wave-field with speckle noise can be written theoretically as [24,25],

$$\tilde U(x,y) = A(x,y)\,e^{j[\varphi(x,y) + 2\pi n(x,y)]} = A_0(x,y)\,e^{j[\varphi_0(x,y) + 2\pi n(x,y)]}\cdot[1 + \tilde N(x,y)] \tag{1}$$
$$\tilde N(x,y) = A_n(x,y)\,e^{j\varphi_n(x,y)} \tag{2}$$
where $A(x,y)$ and $\varphi(x,y)\in[-\pi,\pi]$ are the real amplitude and wrapped phase of the noisy object wave-field, respectively, and $A_0(x,y)$ and $\varphi_0(x,y)\in[-\pi,\pi]$ are those of the noise-free wave-field. $n(x,y)$ is the wrapping number of the phase object, which depends on the pixel location. $\tilde N(x,y)$ is the complex-valued noise term, in which $A_n(x,y)$ and $\varphi_n(x,y)$ are the real noise amplitude and the noise phase, respectively. For random white Gaussian noise they are written as $A_n = \sigma n_R(x,y)$ and $\varphi_n = \sigma n_I(x,y)$, where $\sigma$ denotes the noise standard deviation (NSD). With Eqs. (1) and (2), the wrapped phase $\varphi(x,y)$ with multiplicative noise takes the following form
$$\varphi(x,y) = \arctan\left\{\frac{\sin\varphi_0(x,y) + \sigma n_R(x,y)\sin[\varphi_0(x,y) + \sigma n_I(x,y)]}{\cos\varphi_0(x,y) + \sigma n_R(x,y)\cos[\varphi_0(x,y) + \sigma n_I(x,y)]}\right\} \tag{3}$$
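As a quick numerical sanity check, Eq. (3) can be simulated directly. The following is a minimal numpy sketch, assuming independent white Gaussian fields for n_R and n_I and a toy Gaussian phase bump as the object; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def noisy_wrapped_phase(phi0, sigma, rng=None):
    """Wrapped phase with multiplicative noise, following Eq. (3).

    phi0  : noise-free wrapped phase in [-pi, pi]
    sigma : noise standard deviation (NSD)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n_R = rng.standard_normal(phi0.shape)  # real white Gaussian noise field
    n_I = rng.standard_normal(phi0.shape)  # imaginary white Gaussian noise field
    num = np.sin(phi0) + sigma * n_R * np.sin(phi0 + sigma * n_I)
    den = np.cos(phi0) + sigma * n_R * np.cos(phi0 + sigma * n_I)
    return np.arctan2(num, den)  # arctan2 keeps the result in (-pi, pi]

# Toy object: a smooth Gaussian phase bump, wrapped and corrupted at sigma = 0.5
x = np.linspace(-3, 3, 256)
X, Y = np.meshgrid(x, x)
psi = 8 * np.pi * np.exp(-(X**2 + Y**2))   # absolute (unwrapped) phase
phi0 = np.angle(np.exp(1j * psi))          # noise-free wrapped phase
phi = noisy_wrapped_phase(phi0, sigma=0.5)
```

Setting sigma to zero reduces Eq. (3) to the noise-free wrapped phase, which is a convenient consistency check.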

Therefore, the absolute phase $\psi ({x,y} )= \varphi ({x,y} )+ 2\pi n({x,y} )$ can be obtained as long as the wrapped phase $\varphi(x,y)$ and the wrapping number $n(x,y)$ are known. For a phase object with a finite wrapping number (usually the case in digital holographic microscopy), solving for the absolute phase can be regarded as a dense multi-classification problem in deep learning: the pixels of the phase image are spatially classified according to their wrapping numbers $n(x,y)$ to predict the absolute phase of the object, similar to semantic segmentation [26]. The wrapping number $n(x,y)$, and hence the absolute phase, can therefore be solved with a convolutional neural network (CNN), such as the typical U-net or residual neural networks [27,28]. Our experiments show that typical CNNs have a certain anti-noise capacity in absolute phase reconstruction when the wrapping density of the phase object is not high. However, their results deteriorate for phase objects with high wrapping densities and high noise levels, because it becomes more difficult to learn and extract the object features from a complex background. The E-ASDNUN approach overcomes these shortcomings and ensures high-quality absolute phase reconstruction for CDH. Its architecture is described below.
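The classification view of unwrapping described above can be stated in a few lines: given the absolute phase, the per-pixel wrapping number n(x, y) is an integer class label, and predicting it exactly recovers the absolute phase from the wrapped one. A toy 1-D numpy illustration follows; the network's actual job is to predict `n` without seeing `psi`.

```python
import numpy as np

# Toy 1-D absolute phase and its wrapped version
psi = 8 * np.pi * np.exp(-np.linspace(-3, 3, 256) ** 2)  # absolute phase
phi = np.angle(np.exp(1j * psi))                         # wrapped phase in (-pi, pi]

# Per-pixel wrapping number: the integer class label a segmentation-style
# network is trained to predict
n = np.round((psi - phi) / (2 * np.pi)).astype(int)

# A correct label map recovers the absolute phase exactly
psi_rec = phi + 2 * np.pi * n
```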

The E-ASDNUN network consists of a special encoder and a decoder. In the encoder, two 3 × 3 convolution layers (orange blocks in Fig. 1), each with BN and ReLU treatments, are first used to preserve as much of the original object information as possible. They are followed by four feature extraction (FE) modules (green, pink and blue blocks in Fig. 1) and a network-based noise filter (red block in Fig. 1) applied to the input wrapped phase image. Each FE module comprises two layers of 3 × 3 depthwise (DW) separable convolutions (green blocks in Fig. 1), one 3 × 3 max-pooling layer (pink block in Fig. 1) and one ordinary 1 × 1 convolution (blue block in Fig. 1), with BN and ReLU treatments between the layers. The 3 × 3 max-pooling layer with a stride of two compresses the image size and speeds up the network calculation. Because the stride is smaller than the pooling kernel size, neighboring pooling windows overlap, which enriches the extracted image features and reduces loss of image information. A special anti-noise module (the noise filter, red block in Fig. 1) is designed and embedded between the third and fourth FE modules to remove the noise influence during image feature extraction. It consists of eight identical anti-noise units connected end to end; each unit involves three layers of 3 × 3 DW separable convolutions, with BN and ReLU treatments between them, and a shortcut connection. The repeated convolution operations in the anti-noise module filter the noise and yield the features of a clean image. Eight identical units are used to balance the de-noising effect against computational complexity, since a very deep filter structure would incur a higher computational cost even though it might improve the anti-noise effect [29,30]. The location of the anti-noise module within the network was determined by experimental optimization.
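The building blocks of the anti-noise unit can be sketched in plain numpy: a hypothetical forward pass of one unit, with 3 × 3 depthwise convolutions, 1 × 1 pointwise convolutions and a residual shortcut. BN is omitted for brevity, and all shapes and weights here are illustrative, not the paper's trained values.

```python
import numpy as np

def depthwise_conv3x3(x, k):
    """x: (H, W, C), k: (3, 3, C). One 3x3 filter per channel, 'same' padding."""
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + H, j:j + W, :] * k[i, j, :]
    return out

def pointwise_conv1x1(x, w):
    """x: (H, W, C_in), w: (C_in, C_out). Mixes channels only."""
    return x @ w

def anti_noise_unit(x, dw_kernels, pw_weights):
    """One anti-noise unit (illustrative): three DW separable convs + shortcut.
    The residual shortcut lets the stack learn a noise-removal correction on
    top of the identity mapping."""
    y = x
    for k, w in zip(dw_kernels, pw_weights):
        y = np.maximum(pointwise_conv1x1(depthwise_conv3x3(y, k), w), 0)  # ReLU
    return x + y  # shortcut connection

rng = np.random.default_rng(0)
C = 8
x = rng.standard_normal((16, 16, C))
ks = [rng.standard_normal((3, 3, C)) * 0.1 for _ in range(3)]
ws = [rng.standard_normal((C, C)) * 0.1 for _ in range(3)]
y = anti_noise_unit(x, ks, ws)
```

The depthwise/pointwise split is what makes these layers cheaper than ordinary 3 × 3 convolutions: spatial filtering and channel mixing are factorized into two small operations.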


Fig. 1. A diagram of the E-ASDNUN network architecture, in which Conv, BN and ReLU denote the convolution, the batch normalization and the rectified linear unit, respectively.


To understand the effect of the anti-noise module, we randomly select a wrapped phase image corrupted by multiplicative Gaussian noise with zero mean and standard deviation σ = 0.5 as the network input and test the module's performance within the network structure, as shown in Fig. 2. The test wrapped-phase image is roughly divided into four areas, A, B, C and D; areas A, B and C show larger phase fluctuations, while area D shows less fluctuation. We then observe the feature maps of the input image before and after the anti-noise module when the network runs at Epoch 1, Epoch 50 and Epoch 100, respectively. The network extracts 512 feature maps in total, from which we select five (numbered 0, 5, 10, 15 and 20), i.e. every fifth map. As Fig. 2 shows, after the anti-noise module the color changes in areas A, B and C of each feature map still exhibit obvious fluctuation, while the color variation in area D is gentler, i.e. relatively smooth. This implies that the overall shape of each feature map after the noise filter is highly similar in its features to the test phase image. In contrast, the color distributions of the feature maps before the anti-noise module are chaotic, with obvious noise points showing color jumps. These results indicate that the anti-noise module filters the noise and smooths the image, since a noisy image exhibits larger and more irregular phase fluctuations than a noise-free one.


Fig. 2. A test wrapped-phase image with σ = 0.5 and its feature maps before and after undergoing the anti-noise module at Epoch 1, Epoch 50 and Epoch 100 of network training.


Furthermore, we calculate the variance of the five feature maps before they undergo the noise filter at Epoch 1, Epoch 50 and Epoch 100 of network training. The results are shown in Table 1. In most cases the variance is largest at Epoch 1: since the feature maps have not yet undergone the anti-noise module at Epoch 1, their quality is poor. The feature map quality gradually improves at Epoch 50 and Epoch 100, and the variance decreases accordingly, as the feature maps have passed through the anti-noise module 49 and 99 times, respectively (each pass applies the eight filtering stages of the eight anti-noise units). The variance data at Epoch 50 and Epoch 100 are already very close to each other, indicating that the feature maps tend to a stable status with training. Although the variance of the feature map numbered 20 shows a slight fluctuation at Epoch 50, it does not affect the overall trend. This demonstrates that the anti-noise module incorporated in the network is effective for noise suppression.

The decoder is composed of five image expansion modules and one 1 × 1 convolution kernel that outputs the reconstructed absolute phase image. Each expansion module includes a 2 × 2 upsampling layer for image size restoration, followed by two 3 × 3 convolution layers with BN processing between them to enhance and integrate the extracted image features. The final 1 × 1 convolution ensures that the output has the same number of channels as the input.


Table 1. The variance of feature maps before undergoing the anti-noise module at Epoch 1, Epoch 50 and Epoch 100.

3. Numerical simulation and experiment test

A numerical simulation of filtering the multiplicative noise and reconstructing the absolute phase of the object with the E-ASDNUN network is first conducted, where random wrapped phase images incorporating Gaussian noise according to Eq. (3) are used for absolute phase reconstruction. The network is trained on datasets consisting of 8000 pairs of wrapped and unwrapped phase image samples of size 256 × 256, of which 7840 pairs are used for training and the rest for validation. The training samples are built from different, randomly superposed two-dimensional Gaussian functions, which serve as the target outputs of the network; their wrapped phase images, obtained with Eq. (3), serve as the network inputs. Since laser speckle exists in the wrapped phase image in actual CDH, noise with standard deviations of 0.1–0.5 is randomly applied to all training samples, and the samples are constructed with different wrapping densities of n = 2–10 to obtain multiple groups of training sets. The network parameters obtained from these simulated training sets are used for both the numerical and the experimental phase reconstructions, and all results are consistent with the predictions. This avoids the difficulty of collecting a large number of experimental samples to build the training dataset. The mean square error (MSE) loss function and the SGDM optimizer [31] are used in the E-ASDNUN network, with an initial learning rate of 0.01. The network runs for at most 100 epochs with a mini-batch size of 4, and is implemented with TensorFlow 1.13.1 and Keras 2.1.3. The total training time is approximately 12 h on a workstation with an Intel Xeon E5-2620 v4 CPU and two NVIDIA Tesla K80 GPUs.
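The optimizer choice can be illustrated on a toy problem. Below is a minimal sketch of the SGD-with-momentum update (the rule behind the SGDM optimizer) minimizing an MSE loss, using the paper's initial learning rate of 0.01; the momentum value of 0.9 and the toy regression target are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def mse_loss(pred, target):
    """Mean square error, the loss function used for training."""
    return np.mean((pred - target) ** 2)

def sgdm_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update step."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy problem: fit a scalar slope w so that w * x matches y = 3 * x
x = np.linspace(-1, 1, 50)
y = 3.0 * x
w, v = 0.0, 0.0
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # d(MSE)/dw
    w, v = sgdm_step(w, grad, v)
```

The velocity term accumulates past gradients, which damps oscillations and speeds convergence relative to plain SGD at the same learning rate.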

Figure 3 shows comparative phase reconstruction results for the random phase object in Fig. 3(a), obtained in numerical simulation with the E-ASDNUN and U-net networks as well as the PUMA and DCT-LS algorithms; multiplicative noise of σ = 0.1 is incorporated into the wrapped phase image of the object in Fig. 3(b), and the maximum wrapping number of the object is nM = 4. Figure 3(c) shows the phase distribution within one cross-section located at 80 pixels. The four groups of reconstruction results in Figs. 3(d)–3(f), 3(g)–3(i), 3(j)–3(l) and 3(m)–3(o) are obtained by the E-ASDNUN, PUMA, DCT-LS and U-net approaches, respectively, under the same imaging conditions. Figures 3(e), 3(h), 3(k) and 3(n) are the error maps between the original object of Fig. 3(a) and the reconstructed phase images of Figs. 3(d), 3(g), 3(j) and 3(m), respectively. Figures 3(f), 3(i), 3(l) and 3(o) show the corresponding phase distributions within the cross-sections at the same location of 80 pixels, in which the blue curves are the object phases and the red curves are the phase data from the reconstructed images. The results in Fig. 3 clearly indicate that, compared with the other approaches used here, the E-ASDNUN network not only removes the noise interference very well but also reconstructs the phase image almost perfectly. This initially demonstrates the powerful capacity of the E-ASDNUN approach for noise control and high-quality absolute phase reconstruction, and also shows that the typical U-net network is limited in this respect under the same training conditions. The subsequent analysis will make it clearer that the U-net approach is only suitable for reconstructing phase objects with low wrapping densities and low noise levels, while the superiority of the E-ASDNUN approach becomes more remarkable as the wrapping density and noise level increase.


Fig. 3. Numerical simulation of the phase reconstruction with the E-ASDNUN, the U-net, the PUMA and the DCT-LS approaches. (a), (b), (c) are a random phase object, its wrapped phase image and the phase distribution within one cross-section located at 80 pixels, respectively. The max wrapping number of the object is nM = 4 and the noise level is σ = 0.1. (d)-(f), (g)-(i), (j)-(l), (m)-(o) are the phase reconstruction results with the E-ASDNUN, the PUMA, the DCT-LS and the U-net approaches, respectively. (e), (h), (k), (n) are the error maps between the reconstructed phase images and the object. The blue and red curves in (f), (i), (l), (o) are the phase distributions within one fixed cross-section at 80 pixels from the object and the reconstructed images, respectively.


To evaluate the anti-noise and phase reconstruction capacity of E-ASDNUN in more depth, we first introduce two image-quality assessment indices, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [32,33],

$$PSNR = 10\log_{10}\left(\frac{MAX_{\varphi_{orig}}^2}{MSE}\right) \tag{4}$$
and
$$SSIM(\varphi_{Re},\varphi_{orig}) = \frac{(2\mu_{Re}\mu_{orig} + C_1)(2\sigma_{Re\,orig} + C_2)}{(\mu_{Re}^2 + \mu_{orig}^2 + C_1)(\sigma_{Re}^2 + \sigma_{orig}^2 + C_2)} \tag{5}$$
where $MAX_{{\varphi _{orig}}}^2$ is the squared maximum of the original phase values and MSE is the mean square error between the original object and the reconstructed image. $\varphi_{orig}$ and $\varphi_{Re}$ denote the phases of the original object and the reconstructed image, respectively; $\mu_{orig}$ and $\mu_{Re}$ are their empirical means, and $\sigma_{orig}$ and $\sigma_{Re}$ their standard deviations. $\sigma_{Re\,orig}$ is the empirical covariance between the original and reconstructed phases. $C_1 = (K_1 L)^2$ and $C_2 = (K_2 L)^2$ are constants, generally determined by taking $K_1 = 0.01$, $K_2 = 0.03$ and $L = 255$ (the dynamic range of the pixel values), according to Refs. [32,33].
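Both indices are straightforward to compute from Eqs. (4) and (5). The numpy sketch below evaluates the SSIM statistic globally over the whole image, whereas the standard SSIM averages it over local windows; the function names are illustrative.

```python
import numpy as np

def psnr(rec, orig):
    """Eq. (4): peak signal-to-noise ratio in dB."""
    mse = np.mean((rec - orig) ** 2)
    return 10 * np.log10(orig.max() ** 2 / mse)

def ssim_global(rec, orig, K1=0.01, K2=0.03, L=255):
    """Eq. (5), evaluated globally over the whole image (the standard SSIM
    averages this statistic over local windows)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_r, mu_o = rec.mean(), orig.mean()
    s_r, s_o = rec.std(), orig.std()
    s_ro = np.mean((rec - mu_r) * (orig - mu_o))  # empirical covariance
    return ((2 * mu_r * mu_o + C1) * (2 * s_ro + C2)) / \
           ((mu_r ** 2 + mu_o ** 2 + C1) * (s_r ** 2 + s_o ** 2 + C2))
```

A perfect reconstruction gives SSIM = 1 and an unbounded PSNR, so higher values of both indices indicate better reconstruction quality.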

Then we perform a large number of phase reconstructions in numerical simulation for randomly generated phase objects and their wrapped phase images with higher wrapping densities (larger wrapping numbers) and higher noise levels, using the E-ASDNUN and U-net networks as well as the PUMA and DCT-LS algorithms, where the different levels of multiplicative noise are incorporated into the wrapped phase images with Eq. (3). All reconstructed phase images are then evaluated by calculating their PSNR and SSIM values with Eqs. (4) and (5). The results are shown in Fig. 4, in which the wrapping number of the phase image varies from n = 2 to 10 and the noise level from σ = 0.1 to 0.5.


Fig. 4. PSNRs and SSIMs of the reconstructed phase images for different noise levels of σ = 0.1-0.5 and different wrapping densities of n = 2-10 of the object, where the red, black, blue and green curves correspond to the reconstruction approaches of the E-ASDNUN network, the U-net network, the PUMA and the DCT-LS algorithms, respectively.


Figures 4(a) and 4(b) show the comparative PSNR and SSIM data of the reconstructed phase images obtained by the E-ASDNUN (red curves) and U-net (black curves) approaches, respectively. Figures 4(c) and 4(d) compare the E-ASDNUN network (red curves) with the PUMA algorithm (blue curves), and Figs. 4(e) and 4(f) compare the E-ASDNUN network (red curves) with the DCT-LS algorithm (green curves).

The data in Fig. 4 clearly indicate that the E-ASDNUN network is far more capable than the other approaches of suppressing noise and achieving high-quality absolute phase reconstruction for phase objects with high noise levels and high wrapping densities. The E-ASDNUN network is therefore a very promising phase reconstruction approach for speckle-corrupted CDH.

To further verify the general capability of the E-ASDNUN network against severe noise, we test it on a complicated stem-cell image used as the ground truth (the first image on the left in Fig. 5). In the test, multiplicative noise with σ = 0.5, 0.7 and 0.9 (the maximum noise level being σ = 1.0) is added to the cell image on top of its inherent noise. Figures 5(a)–5(c) show the wrapped cell images with noise of σ = 0.5, 0.7 and 0.9. These images are unwrapped with the E-ASDNUN approach, still based on the same simulated training datasets and operating conditions; the results are shown in Figs. 5(d)–5(f), and the error plots of the reconstructed cell images in Figs. 5(g)–5(i). The cell can still be reconstructed well when the noise level is up to σ = 0.7. For the extreme case of σ = 0.9, the reconstructed image shows a certain distortion. This is understandable, as it reflects the limitations of any network technique, including the training design, the number of training samples and the noise levels applied to them; in the case of Fig. 5, only noise levels up to σ = 0.5 were considered in network training. The reconstruction quality for the stem cell is evaluated by calculating the PSNR and SSIM values, given in Table 2. The data in Table 2 indicate that the image reconstruction quality remains good for noise levels of σ ≤ 0.7.


Fig. 5. The noise suppression and image reconstruction of a stem cell with severe noise interference using the E-ASDNUN network approach based on the same network training sets and conditions. (a)-(c) show the wrapped cell images with noise standard deviation of σ = 0.5, 0.7 and 0.9, respectively. (d)-(f) show the reconstructed cell images. (g)-(i) show the error plots between the reconstructed cell image and the ground truth.



Table 2. The PSNR and SSIM of the reconstructed cell images

In the experimental demonstration, we use a commercial micro-lens array (Thorlabs, model MLA150-7AR) as the object. The holographic imaging system is aligned in an off-axis, Mach-Zehnder geometry, in which a laser beam with λ = 632.8 nm provides the object and reference waves that generate the hologram. The laser beam is expanded through a 4× microscope objective, and the hologram is recorded with a CCD camera (MV-1310FM, 1280 × 1024 pixels). Since the system is a digital holographic microscope, the object wave-field is reconstructed by the diffraction angular spectrum integral (ASI) approach. The complex amplitude of the reconstructed wave-field in discrete form is expressed as [34,35]

$$\tilde U(m,n) = IFFT\{T(\xi,\eta)\,FFT[H(k,l)]\} \tag{6}$$
$$H(k,l) = H(x,y)\,\mathrm{rect}\left(\frac{x}{M\Delta x},\frac{y}{N\Delta y}\right) \times \sum\nolimits_k^M \sum\nolimits_l^N \delta(x - k\Delta x,\, y - l\Delta y) \tag{7}$$
$$T(\xi,\eta) = \exp\left\{j\frac{2\pi}{\lambda}d\,{\left[1 - (\lambda(\xi - M/2))^2 - (\lambda(\eta - N/2))^2\right]}^{1/2}\right\} \tag{8}$$
where FFT and IFFT are the fast Fourier transform and inverse fast Fourier transform operators, respectively. H(x, y) and H(k, l) denote the hologram and its discrete expression in the spatial domain. T(ξ, η) is the discrete frequency-transfer function of free-space propagation of the wave-field. λ denotes the wavelength and d the reconstruction distance. The integers M and N denote the pixel numbers of the CCD sensor in the horizontal and vertical directions, respectively, and Δx and Δy the pixel sizes. (k, l) and (m, n) are the discrete spatial coordinates in the hologram and reconstructed image planes, respectively, and (ξ, η) are the discrete frequency coordinates in the reconstructed image plane. The wrapped phase of the object is then obtained by
$$\varphi(m,n) = \arctan\left\{\frac{\mathrm{Im}[\tilde U(m,n)]}{\mathrm{Re}[\tilde U(m,n)]}\right\} \tag{9}$$
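Equations (6)–(9) amount to a standard angular-spectrum propagation followed by a phase extraction, and can be sketched with numpy FFTs. In this sketch the paper's centered frequency indices (ξ − M/2, η − N/2) are replaced by the equivalent physical spatial frequencies in FFT ordering, and the parameter values in the usage example are illustrative, not the experimental ones.

```python
import numpy as np

def angular_spectrum(H, wavelength, d, dx, dy):
    """Reconstruct the wave-field from hologram H via Eqs. (6)-(8)."""
    M, N = H.shape
    fx = np.fft.fftfreq(M, dx)               # spatial frequencies, FFT ordering
    fy = np.fft.fftfreq(N, dy)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    T = np.exp(1j * 2 * np.pi / wavelength * d * np.sqrt(np.maximum(arg, 0)))
    T[arg < 0] = 0                           # drop evanescent components
    return np.fft.ifft2(T * np.fft.fft2(H))

def wrapped_phase(U):
    """Eq. (9), using the four-quadrant arctan2 of Im and Re."""
    return np.arctan2(U.imag, U.real)

# Illustrative check: a unit plane wave stays a plane wave under propagation
H = np.ones((64, 64), dtype=complex)
U = angular_spectrum(H, wavelength=632.8e-9, d=0.01, dx=5.2e-6, dy=5.2e-6)
```

For a uniform input field, propagation only adds a constant phase, so the magnitude of the reconstructed field remains unity everywhere.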

Thus, the wrapped phase image of the micro-lens array is obtained by combining Eqs. (6), (8) and (9) with the experimental hologram. The absolute phase of the micro-lens array is then reconstructed by feeding the wrapped phase data to the E-ASDNUN network, based on the same simulated training datasets and operating conditions. The result is shown in Figs. 6(g) and 6(h), in which the red curve in Fig. 6(h) is the truth height plot of a phase cross-section of the image reconstructed with the E-ASDNUN approach and the black curve is the measurement from a commercial white-light interferometer. As a comparative reference, we also measure the same micro-lens array with a commercial white-light interferometer (model: SmartWLI-prime), and reconstruct the object with the PUMA and DCT-LS algorithms as well. The measurement and reconstruction results, together with the truth height plots of the phase cross-sections, are shown in Figs. 6(a)–6(f): the SmartWLI-prime measurement in Figs. 6(a) and 6(b), and the DCT-LS and PUMA reconstructions in Figs. 6(c)–6(d) and 6(e)–6(f), respectively. In Figs. 6(d) and 6(f), the red curves denote the truth height plots of the reconstructed images and the black curves the SmartWLI-prime measurement. These comparative results share very similar micro-lens profiles. However, magnifying the images reconstructed with the traditional PUMA and DCT-LS algorithms reveals some noise-induced defects on the image surface, and careful observation of the SmartWLI-prime measurement also shows residual noise at the bottom of the image. By comparison, the image reconstructed by the E-ASDNUN network looks cleaner and clearer.


Fig. 6. Experimental demonstration of phase reconstruction and speckle noise suppression in laser digital holography. The sample is a micro-lens array product (Thorlabs, model MLA150-7AR) and the laser wavelength is 632.8 nm. (a)-(b) are the reconstructed image and its truth height plot of phase cross-section using the white light interferometer (model: SmartWLI-prime). (c)-(d), (e)-(f) and (g)-(h) are the reconstructed images and their cross-section phase plots with the DCT-LS, PUMA and E-ASDNUN approaches, respectively, where the red curves in (d), (f), (h) are the truth height plots of a phase cross-section (marked by black dashed lines). The black curves in (b), (d), (f), (h) are the truth height plots of the same cross-section phase of the image measured with the SmartWLI-prime.


Moreover, the height and width parameters of the micro-lens arrays reconstructed with the different approaches are calculated and listed in Table 3 for a careful comparison. Both the height and width of the image, as well as their ratio, obtained with the E-ASDNUN approach match the original parameters of the micro-lens array product very well. Meanwhile, the reconstructed image profile is highly consistent with the SmartWLI-prime measurement, as shown in Fig. 6. These results confirm that the micro-lens array is accurately reconstructed by the E-ASDNUN approach for laser digital holography, and once more verify the effectiveness of the E-ASDNUN network in removing speckle noise and realizing high-quality absolute phase reconstruction of the object.


Table 3. Height and width parameters of the reconstructed micro-lens array with different methods

4. Conclusions

An enhanced anti-speckle deep neural unwrapping network (E-ASDNUN) approach for high-quality absolute phase reconstruction in CDH is proposed in this paper. The method designs a special network-based noise filter, with a proper depth of eight repeated units each containing three layers of 3 × 3 DW separable convolutions, and embeds it into the network encoder to enhance the anti-noise capacity of the image feature recognition and extraction process. Numerical simulations and experiments on phase unwrapping reconstruction and image-quality evaluation under noisy conditions show that the E-ASDNUN approach is very effective against speckle noise for high-quality absolute phase reconstruction. The method also demonstrates much better phase reconstruction quality than the typical U-net neural network and traditional phase unwrapping algorithms for phase images with high wrapping densities and high noise levels. In the CDH experiment, the phase reconstruction quality is examined by measuring the same object with a commercial white-light interferometer as a reference; the measurement result is highly consistent with that obtained by the E-ASDNUN approach, once more verifying the effectiveness of the E-ASDNUN network in suppressing speckle noise and realizing high-quality absolute phase reconstruction of the object.

Funding

National Natural Science Foundation of China (61874117).

Acknowledgments

The authors gratefully thank Mr. Zhaobo Mei for his helpful advice on the network architecture design and programming, and acknowledge the financial support of the National Natural Science Foundation of China (Grant number: 61874117) for this work.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982). [CrossRef]

2. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

3. D. Voit, L. Tautz, J. Frahm, A. Joseph, and M. Untenberger, “Spatiotemporal phase unwrapping for real-time phase-contrast flow MRI,” Magn. Reson. Med. 74(4), 964–970 (2015). [CrossRef]

4. H. Liu, M. Xing, and B. Zheng, “A cluster-analysis-based noise-robust phase-unwrapping algorithm for multibaseline interferograms,” IEEE Geosci. Remote Sensing Lett. 11(2), 494–498 (2014). [CrossRef]  

5. F. M. Epple, S. Ehn, P. Thibault, T. Koehler, G. Potdevin, J. Herzen, D. Pennicard, H. Graafsma, P. B. Noël, and F. Pfeiffer, “Phase unwrapping in spectral X-ray differential phase-contrast imaging with an energy-resolving photon-counting pixel detector,” IEEE Trans. Med. Imaging 34(3), 816–823 (2015). [CrossRef]

6. D. G. Abdelsalam and D. Kim, “Coherent noise suppression in digital holography based on flat fielding with apodized apertures,” Opt. Express 19(19), 17951–17959 (2011). [CrossRef]  

7. P. Feng, X. Wen, S. Liu, F. J. Wang, and R. Li, “Coherent noise reduction in digital holographic phase contrast microscopy by slightly shifting object,” Opt. Express 19(5), 3862 (2011). [CrossRef]  

8. A. Vijayakumar, Y. Kashter, R. Kelner, and J. Rosen, “Coded aperture correlation holography–a new type of incoherent digital holograms,” Opt. Express 24(11), 12430–12441 (2016). [CrossRef]

9. J. P. Liu, S. Y. Wang, P. W. M. Tsang, and T. C. Poon, “Nonlinearity compensation and complex-to-phase conversion of complex incoherent digital holograms for optical reconstruction,” Opt. Express 24(13), 14582–14588 (2016). [CrossRef]  

10. D. Claus, M. Fritzsche, D. Iliescu, B. Timmerman, and P. Bryanston-Cross, “High-resolution digital holography utilized by the subpixel sampling method,” Appl. Opt. 50(24), 4711–4719 (2011). [CrossRef]  

11. Y. M. Wang, D. Huang, Y. Su, and X. S. Yao, “Two-dimensional phase unwrapping in Doppler Fourier domain optical coherence tomography,” Opt. Express 24(23), 26129–26145 (2016). [CrossRef]  

12. H. Wang and K. M. Qian, “Local orientation coherence based segmentation and boundary-aware diffusion for discontinuous fringe patterns,” Opt. Express 24(14), 15609–15619 (2016). [CrossRef]  

13. J. C. Estrada, M. Servin, and J. A. Quiroga, “Noise robust linear dynamic system for phase unwrapping and smoothing,” Opt. Express 19(6), 5126–5133 (2011). [CrossRef]  

14. M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and Q. Kemao, “Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies,” Appl. Opt. 50(33), 6214–6224 (2011). [CrossRef]  

15. J. M. Bioucas-Dias and G. Valadao, “Phase Unwrapping via Graph Cuts,” IEEE Trans. on Image Process. 16(3), 698–709 (2007). [CrossRef]  

16. D. C. Ghiglia and L. A. Romero, “Minimum LP-norm two-dimensional phase unwrapping,” J. Opt. Soc. Am. A 13(10), 1999–2013 (1996). [CrossRef]  

17. C. W. Chen and H. A. Zebker, “Two-dimensional phase unwrapping with use of statistical models for cost functions in nonlinear optimization,” J. Opt. Soc. Am. A 18(2), 338–351 (2001). [CrossRef]  

18. K. Wang, Y. Li, K. Qian, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27(10), 15100 (2019). [CrossRef]  

19. J. Zhang, X. Tian, J. Shao, H. Luo, and R. Liang, “Phase unwrapping in optical metrology via denoised and convolutional segmentation networks,” Opt. Express 27(10), 14903 (2019). [CrossRef]  

20. G. E. Spoorthi, S. Gorthi, and R. K. S. S. Gorthi, “PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping,” IEEE Signal Process. Lett. 26(1), 54–58 (2019). [CrossRef]

21. G. E. Spoorthi, R. K. S. S. Gorthi, and S. Gorthi, “PhaseNet 2.0: Phase Unwrapping of Noisy Data Based on Deep Learning Approach,” IEEE Trans. on Image Process. 29, 4862–4872 (2020). [CrossRef]

22. “The PHU-NET: A robust phase unwrapping method for MRI based on deep learning,” Magn. Reson. Med. (2021).

23. C. Bai, T. Peng, J. W. Min, R. Z. Li, Y. Zhou, and B. L. Yao, “Dual-wavelength in-line digital holography with untrained deep neural networks,” Photonics Res. 9(12), 2501–2510 (2021). [CrossRef]  

24. L. Rudin, P. L. Lions, and S. Osher, “Multiplicative Denoising and Deblurring: Theory and Algorithms,” in Geometric Level Set Methods in Imaging, Vision, and Graphics (Springer, New York, 2003), pp. 103–119.

25. W. Lu, Y. Shi, J. H. Yue, M. Zheng, M. Q. Wang, and J. Wu, “Complex-valued speckle effect and its suppression for high quality of phase unwrapping reconstruction in coherent digital holographic microscopy,” Opt. Commun. 472, 125837 (2020). [CrossRef]

26. R. Bai, S. Jiang, H. Sun, Y. Yang, and G. Li, “Deep neural network-based semantic segmentation of microvascular decompression images,” Sensors 21(4), 1167 (2021). [CrossRef]  

27. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

28. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (IEEE, 2016), pp. 770–778.

29. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]

30. Y. Li, Z. Miao, R. Zhang, and J. B. Wang, “DenoisingNet: An Efficient Convolutional Neural Network for Image Denoising,” in 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD) (IEEE, 2019).

31. H. Yuan and T. Ma, “Federated Accelerated Stochastic Gradient Descent,” arXiv e-prints (2020).

32. Q. Huynh-Thu and M. Ghanbari, “Scope of validity of PSNR in image/video quality assessment,” Electron. Lett. 44(13), 800–801 (2008). [CrossRef]  

33. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

34. L. Yu and M. K. Kim, “Wavelength-scanning digital interference holography for tomographic three-dimensional imaging by use of the angular spectrum method,” Opt. Lett. 30(16), 2092–2094 (2005). [CrossRef]

35. M. León, R. Rodríguez-Vera, J. A. Rayas, and S. Calixto, “Amplitude and phase recovering from a micro-digital hologram using angular spectrum,” Revista Mexicana de Física 57(4), 315–321 (2011).




Figures (6)

Fig. 1. A diagram of the E-ASDNUN network architecture, in which Conv, BN and ReLU denote the convolution, the batch normalization and the rectified linear unit, respectively.

Fig. 2. A test wrapped-phase image with σ = 0.5 and its feature maps before and after undergoing the anti-noise module at Epoch 1, Epoch 50 and Epoch 100 of network training.

Fig. 3. Numerical simulation of the phase reconstruction with the E-ASDNUN, the U-net, the PUMA and the DCT-LS approaches. (a), (b), (c) are a random phase object, its wrapped-phase image and the phase distribution within one cross-section located at 80 pixels, respectively. The maximum wrapping number of the object is nM = 4 and the noise level is σ = 0.1. (d)-(f), (g)-(i), (j)-(l), (m)-(o) are the phase reconstruction results with the E-ASDNUN, the PUMA, the DCT-LS and the U-net approaches, respectively. (e), (h), (k), (n) are the error maps between the reconstructed phase images and the object. The blue and red curves in (f), (i), (l), (o) are the phase distributions within one fixed cross-section at 80 pixels from the object and the reconstructed images, respectively.

Fig. 4. PSNRs and SSIMs of the reconstructed phase images for different noise levels of σ = 0.1-0.5 and different wrapping densities of n = 2-10 of the object, where the red, black, blue and green curves correspond to the E-ASDNUN network, the U-net network, the PUMA and the DCT-LS algorithms, respectively.

Fig. 5. The noise suppression and image reconstruction of a stem cell with severe noise interference using the E-ASDNUN network approach based on the same network training sets and conditions. (a)-(c) show the wrapped cell images with noise standard deviations of σ = 0.5, 0.7 and 0.9, respectively. (d)-(f) show the reconstructed cell images. (g)-(i) show the error plots between the reconstructed cell images and the ground truth.

Fig. 6. Experimental demonstration of phase reconstruction and speckle noise suppression in laser digital holography. The sample is a micro-lens array product (Thorlabs, model MLA150-7AR) and the laser wavelength is 632.8 nm. (a)-(b) are the reconstructed image and the true height plot of a phase cross-section from the white-light interferometer (model: SmartWLI-prime). (c)-(d), (e)-(f) and (g)-(h) are the reconstructed images and their cross-section phase plots with the DCT-LS, PUMA and E-ASDNUN approaches, respectively, where the red curves in (d), (f), (h) are the true height plots of a phase cross-section (marked by black dashed lines). The black curves in (b), (d), (f), (h) are the true height plots of the same cross-section phase of the image measured with the SmartWLI-prime.

Tables (3)

Table 1. The variance of feature maps before undergoing the anti-noise module at Epoch 1, Epoch 50 and Epoch 100.

Table 2. The PSNR and SSIM of the reconstructed cell images.

Table 3. Height and width parameters of the reconstructed micro-lens array with different methods.

Equations (9)


$$\tilde{U}(x,y)=A(x,y)\,e^{j[\varphi(x,y)+2\pi n(x,y)]}=A_0(x,y)\,e^{j[\varphi_0(x,y)+2\pi n(x,y)]}\left[1+\tilde{N}(x,y)\right] \tag{1}$$

$$\tilde{N}(x,y)=A_n(x,y)\,e^{j\varphi_n(x,y)} \tag{2}$$

$$\varphi(x,y)=\arctan\left\{\frac{\sin\varphi_0(x,y)+\sigma n_R(x,y)\sin\left[\varphi_0(x,y)+\sigma n_I(x,y)\right]}{\cos\varphi_0(x,y)+\sigma n_R(x,y)\cos\left[\varphi_0(x,y)+\sigma n_I(x,y)\right]}\right\} \tag{3}$$
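The wrapped, speckle-corrupted phase model above can be simulated directly in NumPy. The sketch below is illustrative: the function name, the Gaussian draws for the noise terms, and the quadratic test object are our own choices, not the paper's; `arctan2` performs the wrapping into (−π, π].

```python
import numpy as np

def noisy_wrapped_phase(phi0, sigma, rng=None):
    """Simulate a speckle-corrupted wrapped phase map: the clean field
    exp(j*phi0) is perturbed by a complex noise term sigma*n_R*exp(j*sigma*n_I),
    and the wrapped phase is recovered with arctan2 of the imaginary and real
    parts, matching the sin/cos ratio in the text."""
    rng = np.random.default_rng() if rng is None else rng
    n_R = rng.standard_normal(phi0.shape)   # amplitude noise term
    n_I = rng.standard_normal(phi0.shape)   # phase noise term
    num = np.sin(phi0) + sigma * n_R * np.sin(phi0 + sigma * n_I)
    den = np.cos(phi0) + sigma * n_R * np.cos(phi0 + sigma * n_I)
    return np.arctan2(num, den)             # wrapped into (-pi, pi]

# Example: a smooth quadratic phase object with roughly 4 wrapping fringes
y, x = np.mgrid[-1:1:256j, -1:1:256j]
phi0 = 4 * 2 * np.pi * (1 - x**2 - y**2)
wrapped = noisy_wrapped_phase(phi0, sigma=0.1, rng=np.random.default_rng(0))
```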
$$PSNR=10\log_{10}\left(\frac{MAX^2_{\varphi_{orig}}}{MSE}\right) \tag{4}$$

$$SSIM(\varphi_{Re},\varphi_{orig})=\frac{\left(2\mu_{Re}\mu_{orig}+C_1\right)\left(2\sigma_{Re\,orig}+C_2\right)}{\left(\mu^2_{Re}+\mu^2_{orig}+C_1\right)\left(\sigma^2_{Re}+\sigma^2_{orig}+C_2\right)} \tag{5}$$
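The two quality metrics can be sketched in NumPy as follows. Note two assumptions of ours: `ssim_global` evaluates the SSIM statistics over the whole image in a single window, whereas standard SSIM implementations use local sliding windows, and the stabilizing constants `c1`, `c2` are illustrative defaults rather than values from the paper.

```python
import numpy as np

def psnr(phi_rec, phi_orig):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    maximum value of the original phase map."""
    mse = np.mean((phi_rec - phi_orig) ** 2)
    return 10 * np.log10(phi_orig.max() ** 2 / mse)

def ssim_global(phi_rec, phi_orig, c1=1e-4, c2=9e-4):
    """Structural similarity evaluated globally over the whole image
    (single window); identical images yield exactly 1."""
    mu_r, mu_o = phi_rec.mean(), phi_orig.mean()
    var_r, var_o = phi_rec.var(), phi_orig.var()
    cov = np.mean((phi_rec - mu_r) * (phi_orig - mu_o))
    return ((2 * mu_r * mu_o + c1) * (2 * cov + c2)) / \
           ((mu_r**2 + mu_o**2 + c1) * (var_r + var_o + c2))
```

For a quick check, adding a constant offset of 0.1 to an all-ones image gives an MSE of 0.01 and hence a PSNR of 20 dB.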
$$\tilde{U}(m,n)=\mathrm{IFFT}\left\{T(\xi,\eta)\,\mathrm{FFT}\left[H(k,l)\right]\right\} \tag{6}$$

$$H(k,l)=H(x,y)\,\mathrm{rect}\left(\frac{x}{M\Delta x},\frac{y}{N\Delta y}\right)\times\sum_{k}^{M}\sum_{l}^{N}\delta\left(x-k\Delta x,\,y-l\Delta y\right) \tag{7}$$

$$T(\xi,\eta)=\exp\left\{j\frac{2\pi}{\lambda}d\left[1-\left(\lambda\left(\xi-M/2\right)\right)^2-\left(\lambda\left(\eta-N/2\right)\right)^2\right]^{1/2}\right\} \tag{8}$$

$$\varphi(m,n)=\arctan\left\{\frac{\mathrm{Im}\left[\tilde{U}(m,n)\right]}{\mathrm{Re}\left[\tilde{U}(m,n)\right]}\right\} \tag{9}$$
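A minimal angular-spectrum reconstruction following the propagation and phase-extraction steps above might look like the sketch below. It is an illustrative variant, not the paper's exact implementation: it uses `np.fft.fftfreq` frequency centering instead of the explicit (ξ − M/2) indexing, assumes a square sampling pitch `dx`, and clamps evanescent components to zero.

```python
import numpy as np

def angular_spectrum_phase(hologram, wavelength, dx, d):
    """Propagate a sampled hologram by distance d with the angular-spectrum
    transfer function, then take the wrapped phase of the propagated field.
    hologram: (M, N) complex or real array; wavelength, dx, d in meters."""
    M, N = hologram.shape
    fx = np.fft.fftfreq(M, dx)[:, None]    # spatial frequencies (cycles/m)
    fy = np.fft.fftfreq(N, dx)[None, :]
    arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    # transfer function; evanescent components (arg < 0) clamped to zero phase
    T = np.exp(1j * 2 * np.pi / wavelength * d * np.sqrt(np.maximum(arg, 0.0)))
    U = np.fft.ifft2(T * np.fft.fft2(hologram))
    return np.arctan2(U.imag, U.real)      # wrapped phase in (-pi, pi]
```

Propagating a distance of zero reduces the transfer function to unity, so a real, positive hologram comes back with zero phase, which is a convenient sanity check.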