
Learning-based complex field recovery from digital hologram with various depth objects

Open Access

Abstract

In this paper, we investigate a learning-based technique for recovering the complex field of an object from its digital hologram. Most previous learning-based approaches first propagate the captured hologram to the object plane and then suppress the DC and conjugate noise in the reconstruction. In contrast, the proposed technique utilizes a deep learning network to extract the object complex field directly in the hologram plane, making it robust to object depth variations and well suited for three-dimensional objects. Unlike previous approaches, which concentrate on transparent biological samples of near-uniform amplitude, the proposed technique is applied to more general objects with large amplitude variations. The proposed technique is verified by numerical simulations and optical experiments, demonstrating its feasibility.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the recent development of display and semiconductor technologies, the need to measure micro-objects has been increasing. Digital holography (DH) records the interference pattern of the object and reference light instead of the direct intensity image of the object. The amplitude and phase of the object field contained in the interference pattern provide additional information over conventional intensity imaging, making DH versatile for various applications [1]. However, an interference pattern contains not only the complex field of the object but also DC and conjugate components. These DC and conjugate components act as artifacts which hinder the reconstruction of the object information in holographic imaging. To recover the complex field without the DC and conjugate components, additional techniques such as off-axis or phase-shifting holography have frequently been used [2]. However, the off-axis technique sacrifices the effective bandwidth of the hologram, reducing the maximum resolution of the object. The phase-shifting technique requires capturing multiple holograms, making it sensitive to system vibration and applicable only to static objects.

Recently, deep learning has been adopted in a wide variety of scientific fields [3]. Hologram reconstruction problems have also been tackled using deep learning techniques [4–23]. Some studies use deep learning to find the axial position of an object for automatic focusing [7–9]. For instance, Ren et al. utilized a convolutional neural network (CNN) to estimate the object distance from a hologram [7]. Pitkäaho et al. reported that two CNN-based architectures succeeded in finding the focal position of a cell cluster in off-axis digital holographic microscopy [8].

Another application of deep learning is the suppression of the DC and conjugate noise in holographic reconstructions [10–14]. Rivenson et al. proposed a CNN-based phase recovery [10]. For phase objects, they demonstrated that a deep learning network can eliminate the conjugate noise to recover the phase information from an intensity hologram. In [11], the effective depth-of-field (DOF) over which the phase information can be recovered without the conjugate noise was further extended using a deep learning network. Moon et al. proposed a conditional generative adversarial network (C-GAN) to eliminate the conjugate image noise from phase images captured by a Gabor holographic setup [12]. Li et al. proposed a deep learning method based on an auto-encoder [13]. Their method trains an auto-encoder network on a single hologram image by iteratively comparing the forward-propagated and captured holograms to reconstruct a conjugate-image-free hologram. An untrained deep neural network for dual-wavelength DH has also been proposed [14]. This method recovers the optical thickness distribution of an object directly, without prior training. Wu et al. presented bright-field holographic imaging by implementing a GAN which transforms digitally backpropagated holograms into the equivalent bright-field microscopy images of the object at the corresponding depth without the conjugate noise [15].

End-to-end learning methods that obtain the complex field information in the object plane from holograms have also been proposed [16–23]. Ren et al. proposed deep learning networks to reconstruct the intensity or phase directly from a hologram for amplitude and phase objects [16]. Wang et al. proposed a network with two up-sampling paths for the intensity and phase of an object wavefront [17]. The network was later extended to dual-wavelength digital holograms [18]. It was reported that a deep learning network with four output branches improves the reconstruction efficiency and accuracy by solving the spectral overlap problem in dual-wavelength DH.

In many of the approaches introduced above, holographic imaging focuses on the reconstruction of transparent biological objects such as cells. In this application, the useful information about the object is usually contained in the phase distribution in the object plane, which can be translated into the refractive index of the sample. It is also beneficial to increase the DOF to collect information over the entire sample volume, rather than discriminating individual depth slices by reducing the DOF. Therefore, many methods place importance on precise phase reconstruction with a long DOF on the object side. In contrast, in inspecting a semiconductor or a display, it is necessary to measure the topological shape of the object with a shallow DOF. The object is also not transparent, making both the amplitude and phase information of the object important.

To address these issues, we present a novel deep-learning-based technique for extracting the complex field from a single hologram. In our contribution, we focus on preserving the depth information of non-transparent objects. Unlike previous approaches, which extract the complex field directly in the object plane from an interference pattern in the hologram plane, the proposed method utilizes a deep learning network to extract the object complex field in the hologram plane. The extracted object complex field can then be propagated to any distance, reconstructing the object without the DC and conjugate terms. Note that in the proposed approach the preparation of the network training dataset does not require numerical propagation, which makes the trained network independent of the numerical propagation algorithm. After the object complex field is extracted in the hologram plane, the proposed approach allows any numerical propagation algorithm to be used for the object reconstruction at its depth. On the contrary, in previous approaches that extract the object complex field directly in the object plane, the training dataset must be prepared by numerically propagating the optically obtained object complex field to the object plane, which ties the trained network to the specific numerical propagation algorithm. Moreover, the complex field in the hologram plane has higher similarity to the interference pattern in the same plane than the complex field in the object plane does. This higher similarity usually gives the proposed approach higher complex field extraction quality than the conventional one.

We validate our approach using datasets generated by both numerical simulations and optical experiments with 4-phase-shifting digital holography. The simulation and experimental results show that the proposed network can reconstruct the complex field of objects at various depths from only a single hologram, without requiring the conventional four phase-shifted holograms.

2. Hologram imaging setup

We adopt a polarization-based lens-free Mach-Zehnder interferometer as the hologram capturing setup, as shown in Fig. 1 [1]. A 660 nm red laser is used as the light source. The laser light is first linearly polarized along 45° and divided into a 0°-polarized object beam and a 90°-polarized reference beam by a polarizing beam splitter (PBS). After the object beam passes through the object sample, the two beams are combined again by a second PBS and transformed into two orthogonal circular polarizations by a quarter wave plate (QWP). A neutral density (ND) filter and two linear polarizers are additionally used in the reference and object beam paths to adjust the intensity and to ensure the polarization states, as shown in Fig. 1(a). During the implementation of our experimental setup, the object beam was unintentionally tilted by around 0.7° with respect to the reference beam, as shown in Fig. 1(b), giving a slight off-axis effect. Note that this off-axis tilt is small and does not separate the object spectrum from the DC and conjugate terms. This tilt was considered in the dataset generation used for the network training. In some experiments, the object is intentionally tilted, as shown in Fig. 1(c), to test the performance of the proposed network for a continuous depth distribution of the object. The combined reference and object beams in the orthogonal circular polarizations are finally captured by a polarized image sensor (GO-5100MP-PGE, JAI). Each pixel of the polarized image sensor consists of 4 sub-pixels with linear micro-polarizers at 0°, 45°, 90°, and 135°, respectively. The linear micro-polarizers in the sub-pixels give different phase delays to the reference and object beams, enabling 4-phase-shifting digital holography in a single capture [24,25].

Fig. 1. Optical setup for capturing digital hologram. (a) Overall configuration, (b) off-axis angle, and (c) objects at the normal and tilted positions.

The captured raw image data are de-mosaiced according to the polarizer directions of the sub-pixels. The resulting 4 interference patterns, i.e., ${I_{0^\circ }}$, ${I_{45^\circ }}$, ${I_{90^\circ }}$, and ${I_{135^\circ }}$, respectively correspond to $0$, $\pi /2$, $\pi$, and $3\pi /2$ phase shifts between the reference and object beams. The object complex field U is obtained by

$$U = ({I_{0^\circ }} - {I_{90^\circ }}) + i({I_{45^\circ }} - {I_{135^\circ }}) \tag{1}$$

Figure 2 shows an example of the captured interference patterns (${I_{0^\circ }}$, ${I_{45^\circ }}$, ${I_{90^\circ }}$, ${I_{135^\circ }}$) and the extracted object complex field U in the hologram plane. The pixel pitch and the resolution of the interference patterns and the object complex field are 6.9 µm and 1232 × 1028, respectively.
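For concreteness, the de-mosaicing and Eq. (1) can be sketched in a few lines of Python (NumPy). The 2 × 2 sub-pixel ordering assumed below is illustrative only and must be matched to the actual sensor layout:

```python
import numpy as np

def demosaic_polarized(raw):
    """Split a polarized-sensor raw frame into the four phase-shifted
    interference patterns. Assumes a 2x2 micro-polarizer layout per
    super-pixel; the actual sub-pixel order depends on the sensor."""
    i0   = raw[0::2, 0::2].astype(np.float64)   # 0 deg  -> 0 phase shift
    i45  = raw[0::2, 1::2].astype(np.float64)   # 45 deg -> pi/2
    i90  = raw[1::2, 1::2].astype(np.float64)   # 90 deg -> pi
    i135 = raw[1::2, 0::2].astype(np.float64)   # 135 deg -> 3*pi/2
    return i0, i45, i90, i135

def extract_complex_field(i0, i45, i90, i135):
    """Eq. (1): 4-step phase-shifting recovery of the object field
    in the hologram plane."""
    return (i0 - i90) + 1j * (i45 - i135)
```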

Fig. 2. Example of the captured interference patterns and extracted complex field.

In the proposed method, the network is trained to predict the real and imaginary parts of the object complex field in the hologram plane, i.e., ${U_{real}}$ and ${U_{imaginary}}$, from a single interference pattern ${I_{0^\circ }}$. The complex field calculated by Eq. (1) is used as the ground truth for the training. In our experiment, the object distance from the hologram plane ranges from 117 mm to 142 mm. To reduce the memory requirement of the deep learning network, the 1232 × 1028 resolution interference pattern and complex field are divided into 128 × 128 resolution patches for training and prediction, as shown in Fig. 3. The total number of patches used in the training is 38,517. Note that end-to-end networks which reconstruct the object complex field from a digital hologram are usually sensitive to differences between the training and test data. The use of small patches instead of whole captured images helps reduce this data dependency: because each patch does not contain information about the entire object, this approach enhances robustness against object variations. The simulation and experimental results presented in Sections 4 and 5 show that the proposed network reconstructs the complex field of not only the trained objects but also different types of objects with high similarity to the ground truth.
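A tiling step of this kind could look as follows; the non-overlapping grid and the border handling are our assumptions, since only the 128 × 128 patch size is specified:

```python
import numpy as np

def tile_patches(field, patch=128):
    """Cut a full-resolution array into non-overlapping patch x patch
    tiles, discarding the incomplete border (a sketch; the paper does
    not specify how border pixels are handled)."""
    h, w = field.shape[:2]
    tiles = [field[r:r + patch, c:c + patch]
             for r in range(0, h - patch + 1, patch)
             for c in range(0, w - patch + 1, patch)]
    return np.stack(tiles)

# For the network, the complex ground truth is stored as two real
# channels (real and imaginary parts):
# target = np.stack([U.real, U.imag], axis=-1)
```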

Fig. 3. Data tiling for (a) training and (b) prediction.

3. Network architecture for complex field recovery

A deep learning network is an estimator that finds an unknown mathematical model through training with a large amount of data. In the proposed object complex field recovery, the mathematical model F that needs to be estimated by the network is given by

$$F({I_{0^\circ }}) = F(|U + R|^2) = U \tag{2}$$
where R is the complex field of a reference wave. The training of the deep learning network can be considered as an energy minimization problem for a corresponding cost
$$w = \arg \min_w \{ |F({I_{0^\circ }}) - U|^2 \} \tag{3}$$
where w represents the parameter set of the network. Such a minimization problem may be solved through a forward and backward propagation process of a deep learning network [3].
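In PyTorch-style pseudocode, one iteration of this minimization might look like the following sketch; the optimizer choice and batching details are not specified in the paper and are assumed here:

```python
import torch

def train_step(net, optimizer, i0_batch, u_batch):
    """One forward/backward pass of the minimization in Eq. (3).
    i0_batch: (N, 2, 128, 128) duplicated interference pattern patches.
    u_batch:  (N, 2, 128, 128) real/imaginary parts of the ground truth U."""
    optimizer.zero_grad()
    pred = net(i0_batch)                      # F(I_0deg)
    loss = torch.mean((pred - u_batch) ** 2)  # |F(I_0deg) - U|^2
    loss.backward()                           # backward propagation
    optimizer.step()                          # update parameter set w
    return loss.item()

# e.g., optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
# (a hypothetical choice; the paper does not specify the optimizer)
```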

In this work, we utilize a CNN based on the U-net architecture. U-net is a well-known deep learning network originally designed for the segmentation of biomedical images [26,27]. Due to its simple structure and fast learning speed, U-net has been used in many vision applications, including holographic ones.

The U-net architecture consists of encoding and decoding layers, as shown in Fig. 4. We implement the encoding part using convolution layers with max-pooling for down-sampling. For the decoding part, up-convolution layers are utilized to increase the resolution. A ReLU function is chosen for the non-linear activation of the convolution layers. To regularize the network, a 50% dropout is applied in the last convolution layer of the encoder.

Fig. 4. U-net based regression network architecture.

The energy function of the original U-net is defined by a pixel-wise soft-max with the cross-entropy loss over the output of the final convolution layers. In the proposed technique, the energy function is modified to the sum of the mean-square-error (MSE) and the mean-absolute-deviation (MAD) between the ground-truth and recovered complex fields. For its implementation, we replace the soft-max and segmentation layers after the last convolution layer of the original U-net with regression layers that compute the complex MSE and MAD. For the complex field representation, the number of output channels is set to 2, one for the real part and the other for the imaginary part. This makes it easy to calculate the complex MSE and MAD losses using conventional functions. The input layer is also set to have two channels, in which the input interference pattern is simply duplicated. The resolution of the input and output is 128 × 128, as explained in Section 2, and the depth of the network layers is set to four. The channel number and the size of each layer are indicated in Fig. 4.
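A minimal PyTorch sketch of such a depth-4 regression U-net with two input/output channels and the combined MSE + MAD loss is given below; the channel widths, kernel sizes, and exact dropout position are assumptions guided by Fig. 4 rather than the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with ReLU activations."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNetRegressor(nn.Module):
    """Depth-4 U-net with 2-channel input/output (real + imaginary).
    Channel widths are illustrative; the actual ones are in Fig. 4."""
    def __init__(self, widths=(64, 128, 256, 512)):
        super().__init__()
        self.encoders = nn.ModuleList()
        c_prev = 2
        for c in widths:
            self.encoders.append(conv_block(c_prev, c))
            c_prev = c
        self.bottleneck = nn.Sequential(
            conv_block(widths[-1], widths[-1] * 2),
            nn.Dropout2d(0.5))                       # 50% dropout
        self.upconvs, self.decoders = nn.ModuleList(), nn.ModuleList()
        c_prev = widths[-1] * 2
        for c in reversed(widths):
            self.upconvs.append(nn.ConvTranspose2d(c_prev, c, 2, stride=2))
            self.decoders.append(conv_block(2 * c, c))
            c_prev = c
        self.head = nn.Conv2d(c_prev, 2, 1)          # regression, no soft-max

    def forward(self, x):                            # x: (N, 2, 128, 128)
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)                          # skip connection
            x = F.max_pool2d(x, 2)                   # down-sampling
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.upconvs, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1)) # up-conv + skip
        return self.head(x)                          # (N, 2, 128, 128)

def regression_loss(pred, target):
    """Sum of MSE and MAD between predicted and ground-truth fields,
    computed on the stacked real/imaginary channels."""
    return F.mse_loss(pred, target) + F.l1_loss(pred, target)
```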

4. Numerical simulation

For the validation of the proposed network structure, we first train and test the network using simulation data generated by numerical propagation, as shown in Fig. 5. In the simulation, multiple object images at different distances from the hologram plane are used as a three-dimensional (3D) object scene. The light from the object scene is numerically propagated and interfered with a reference plane wave in the hologram plane. The object complex field in the hologram plane is used as the ground truth for the training of the network, and the interference pattern as the input of the network. For the numerical propagation, the angular spectrum method is used [28]. The simulation parameters are set to be similar to the experimental conditions explained in Section 2. The interference patterns and the object complex fields are first calculated at 1232 × 1028 resolution, and then divided into 128 × 128 patches for training and evaluation. The wavelength of the light, the off-axis angle of the setup, and the pixel pitch of the interference pattern and complex field are set to 660 nm, 0.7°, and 6.9 µm, respectively. In the training of the network, the object images are selected from the MNIST dataset and the distance is randomly set within a range from 110 mm to 140 mm. In the evaluation of the network, the Fashion-MNIST dataset is additionally used and the distance range is also expanded beyond the trained range. The quality of the predicted object complex field is evaluated both in the hologram plane and in the reconstructed object plane.
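A standard band-limited angular spectrum propagator, with the parameters of Section 2, can be sketched as follows (a generic formulation, not necessarily the authors' exact implementation):

```python
import numpy as np

def angular_spectrum_propagate(u, z, wavelength=660e-9, pitch=6.9e-6):
    """Propagate a complex field u by distance z (meters) with the
    angular spectrum method. Evanescent components are suppressed."""
    ny, nx = u.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    k2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2   # squared axial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(k2, 0.0))
    H = np.exp(1j * kz * z) * (k2 > 0)                 # transfer function
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Hologram-plane synthesis for one simulated object at distance z
# (R denotes the reference plane wave):
# U  = angular_spectrum_propagate(object_field, z)
# I0 = np.abs(U + R) ** 2
```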

Fig. 5. Generation flow of training datasets.

Figures 6 and 7 show examples of the complex field predicted by the proposed network. In Figs. 6 and 7, all objects in each scene are at the same distance. The interference pattern ${I_{0^\circ }}$ used as the input to the network is shown in the first row of Figs. 6 and 7. The object complex field predicted in the hologram plane, and the amplitude in the object plane reconstructed by numerically propagating the predicted complex field from the hologram plane to the object plane, are shown in the following rows. For the object complex field, the amplitude ${U_{amplitude}}$ and phase ${U_{phase}}$ calculated from the originally predicted real ${U_{real}}$ and imaginary ${U_{imaginary}}$ parts are also shown for better visual comparison of the results. In Fig. 6, the objects are from the MNIST dataset, which was used in the training. In Fig. 7, the objects are from a new dataset, Fashion-MNIST. In Figs. 6(a) and 7(a), the object distance is 120 mm, which is within the training range, i.e., from 110 mm to 140 mm. In Figs. 6(b) and 7(b), it is 170 mm, which is outside of the training range.

Fig. 6. Simulation results for MNIST images at the same distance. The object distance is (a) 120 mm, which is within the training range, and (b) 170 mm, outside of the training range.

Fig. 7. Simulation results for Fashion-MNIST images at the same distance. The object distance is (a) 120 mm, which is within the training range, and (b) 170 mm, outside of the training range.

The results in Figs. 6 and 7 demonstrate that the proposed network can recover the object complex field from the input interference pattern successfully. The recovered object complex field in the hologram plane and the reconstructed amplitude in the object plane are very close to the ground truth regardless of the object shape and distance. For a quantitative evaluation, the PSNRs and SSIMs are reported in Table 1. Four cases, distinguished by the source dataset and the distance range, are tested. For each case, the PSNR and SSIM values are averaged over 100 tests. Table 1 shows that the predicted complex field and the reconstructed amplitude exhibit high PSNR and SSIM values. Although the PSNRs of the Fashion-MNIST cases, which are not used in the training, are slightly lower, the overall PSNR is still higher than 50 dB in all cases. This quantitative evaluation demonstrates the high performance of the proposed network when all objects are at the same distance.
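For reference, the PSNR and SSIM of a predicted complex field could be computed as below; evaluating the real and imaginary channels with a shared data range is our assumption, as the paper does not state the exact averaging convention:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def field_metrics(u_pred, u_gt):
    """PSNR/SSIM of a predicted complex field against the ground truth,
    evaluated channel-wise on the real and imaginary parts."""
    pred = np.stack([u_pred.real, u_pred.imag])
    gt = np.stack([u_gt.real, u_gt.imag])
    rng = gt.max() - gt.min()        # assumed data-range convention
    psnr = peak_signal_noise_ratio(gt, pred, data_range=rng)
    ssim = np.mean([structural_similarity(g, p, data_range=rng)
                    for g, p in zip(gt, pred)])
    return psnr, ssim
```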

Table 1. Simulation results when the object images are at the same distance in each scene.

Figure 8 shows the result when the objects in a scene are distributed at different distances from the hologram plane. As shown in Figs. 8(a) and 8(b), 9 objects are located at 5 different distances. Figure 8(c) is the interference pattern input to the network. The object complex field predicted by the proposed network from the interference pattern in Fig. 8(c) is numerically propagated to different distances for the reconstruction. In Fig. 8(d), the numerical reconstructions of the predicted complex field ($|{Pro{p_z}({{U_{predicted}}} )} |$) are compared with those of the ground truth ($|{Pro{p_z}({{U_{GT}}} )} |$) and of the input interference pattern ($|{Pro{p_z}({{I_{0^\circ }}} )} |$) at different distances. Visualization 1 shows the numerical reconstructions continuously along the distance. As expected, the reconstructions of the input interference pattern are severely degraded by the DC and conjugate components. On the contrary, the reconstructions of the complex field predicted by the proposed network show the original objects clearly at the corresponding distances without the DC and conjugate noise. The simulation results confirm that the proposed network can recover the complex field of the objects without the DC and conjugate components even when the objects are distributed at different distances.
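A focal sweep like the one shown in Fig. 8(d) and Visualization 1 then reduces to a loop over candidate depths, reusing the angular-spectrum helper sketched above (variable names are hypothetical):

```python
import numpy as np

# Reconstruct the predicted field at a range of candidate depths;
# each object comes into focus at its own distance.
depths_mm = np.arange(110, 171, 5)
stack = [np.abs(angular_spectrum_propagate(u_predicted, z * 1e-3))
         for z in depths_mm]
```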

Fig. 8. Simulation results for Fashion-MNIST images at different distances. (a) Object amplitude image, (b) distance map, (c) interference pattern ${I_{0^\circ }}$ in the hologram plane, and (d) numerical reconstructions of the input interference pattern ${I_{0^\circ }}$, ground truth complex field ${U_{GT}}$, and the complex field predicted by the proposed network ${U_{predicted}}$ (Visualization 1).

In all the simulations so far, the objects are completely isolated and arranged regularly. Figure 9 and Visualization 2 show the test results for a hologram of irregularly overlapping objects. Nine objects are arranged irregularly and partially overlap each other. The depth of each object is different, within the range 110 mm ≤ z ≤ 170 mm. The reconstruction results show that the predicted complex field reconstructs the overlapping objects at their true depths, like the ground truth. Comparison with the direct reconstruction of the input interference pattern also verifies that our network eliminates the DC and conjugate components successfully in the overlapping-object case, as in the previous simulations. This shows that our network is robust against variations in the object arrangement.

Fig. 9. Simulation results for irregularly overlapped objects. (a) Object amplitude image, (b) distance map, (c) interference pattern ${I_{0^\circ }}$ in the hologram plane, and (d) numerical reconstructions of the input interference pattern ${I_{0^\circ }}$, ground truth complex field ${U_{GT}}$, and the complex field predicted by the proposed network ${U_{predicted}}$ (Visualization 2).

Finally, we compare the proposed approach with the conventional one. Unlike most conventional approaches, which reconstruct the object complex field directly in the object plane, the proposed approach extracts the object complex field in the hologram plane using the network and then numerically propagates the extracted field to the various object planes for reconstruction. Figure 10 and Table 2 show the comparison results. The objects are distributed in the depth range from z = 110 mm to z = 160 mm. In the conventional direct reconstruction method, the network is trained to reconstruct the object complex field at the z = 125 mm plane. The reconstructed complex field at z = 125 mm is then numerically propagated to the adjacent planes for the reconstruction of the individual objects. In the proposed method, the complex field is reconstructed in the hologram plane by the network and numerically propagated to the various object planes. In both approaches, the networks have the same structure and the same number of learnable parameters. As shown in Fig. 10 and Table 2, the proposed approach shows better performance, which originates from the fact that the network output of the proposed approach (the object complex field in the hologram plane) is more similar to the network input (the interference pattern in the hologram plane) than the output of the conventional approach (the complex field at z = 125 mm) is.

Fig. 10. Comparison of simulation results between our method and the direct reconstruction method. The output of each network is numerically propagated to each depth.

Table 2. Comparison of simulation results in Fig. 10 between our method and the direct reconstruction method.

5. Experimental results

For the experimental verification of the proposed network, the network is trained and tested using interference patterns obtained by optical experiments. As described in Section 2, the captured interference patterns and the extracted object complex fields are divided into 128 × 128 resolution patches for the training and test. For the training, a negative test target (R1L1S1N, Thorlabs) located at a distance ranging from 117 mm to 142 mm is used. In the test, a positive test target (R1L1S1P, Thorlabs) and a part of a micro-display circuit on glass (SLMoG) [29], which are significantly different from the trained objects, are also used. Various object distances are also tested to validate the performance of the proposed network. Pictures of the objects used in the training and test are shown in Fig. 11.

Fig. 11. Objects for training and test. (a) Negative test target, (b) positive test target, and (c) micro-display circuit.

In Fig. 12, we compare the reconstruction results of (a) the input interference pattern ${I_{0^\circ }}$, (b) the ground truth complex field ${U_{GT}}$, and (c) the predicted complex field ${U_{predicted}}$. When a single interference pattern is numerically propagated to the object distance, the object reconstruction is overlapped by the DC and conjugate terms, making the reconstruction blurred and noisy, as shown in Fig. 12(a). The ground truth object complex field calculated from the 4 interference patterns gives the clear reconstruction shown in Fig. 12(b). The proposed network predicts the object complex field from a single interference pattern. Its numerical propagation result in Fig. 12(c) shows that the proposed network successfully predicts the object complex field, such that its reconstruction at the object plane is clear and free of the DC and conjugate noise.

Fig. 12. Comparison of the reconstruction examples of (a) a single interference pattern, (b) ground truth object complex field, and (c) object complex field predicted by the proposed network.

Figure 13 shows the results for different object distances. In Figs. 13(a) and 13(b), and in the associated Visualization 3 and Visualization 4, the object distances are 119 mm and 138 mm, respectively, both within the training range. The predicted real and imaginary parts of the object complex field, i.e., ${U_{real}}$ and ${U_{imaginary}}$, and also its amplitude and phase, i.e., $|U |$ and $\angle U$, are close to the ground truth. The resultant reconstructions in the bottom row of Fig. 13 show the objects clearly without the DC and conjugate noise. In Fig. 13(c) and Visualization 5, the object distance is 171 mm, which is outside of the training range. The results in Fig. 13(c) show that the proposed network successfully predicts the object complex field even when the object distance deviates from the range used in the training. Figure 14 shows a cross-section of the reconstructed amplitude of Fig. 13(a), illustrating that the reconstructed object image of the proposed method has high quality, close to the reconstruction of the ground truth. Note that although the reconstructions in Figs. 13 and 14 show only the amplitudes of the objects, the phase distribution in the object plane is obtained simultaneously when the object complex field in the hologram plane is numerically propagated to the object plane. This object phase distribution is wrapped with a 2π interval, and any existing phase unwrapping technique can be applied to obtain the unwrapped phase distribution if required.
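As one off-the-shelf option for this unwrapping step (not necessarily the method the authors would choose), scikit-image provides a 2D phase unwrapper:

```python
import numpy as np
from skimage.restoration import unwrap_phase

# The propagated field's phase is wrapped to a 2*pi interval;
# unwrap it after propagating to the object plane (119 mm example,
# using the angular-spectrum helper sketched in Section 4).
u_obj = angular_spectrum_propagate(u_predicted, 0.119)
phase_unwrapped = unwrap_phase(np.angle(u_obj))
```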

Fig. 13. Experimental results. The same negative test target object is used in the training and test. The object is located at (a) 119 mm (Visualization 3) and (b) 138 mm (Visualization 4), which are within the distance range used in the training, and at (c) 171 mm (Visualization 5), which is outside of the training range.

Fig. 14. Cross-section of the reconstructed amplitude images of Fig. 13(a). (a) Ground truth, (b) predicted output.

Figure 15, together with Visualization 6 and Visualization 7, shows the results for different objects: the positive test target in Fig. 15(a) and a part of the micro-display circuit in Fig. 15(b). Although these objects are considerably different from the transmissive negative test target used in the network training, the prediction results in Fig. 15 demonstrate that the proposed network successfully estimates the real and imaginary parts of the object complex field in the hologram plane. Table 3 lists the PSNR and SSIM values calculated from Figs. 13 and 15. In our experiments, the SSIMs of the predicted complex fields are about 0.9 for all objects, and the PSNRs are higher than 23 dB in all cases. Although there is some degradation depending on the characteristics of the object and the capture distance, the overall PSNR and SSIM values show the feasibility of the proposed network.

Fig. 15. Experimental results. Different objects are used in the training and the test. (a) Transmissive positive test target (Visualization 6), and (b) a part of micro-display circuit (Visualization 7).

Table 3. Experimental results.

In the final experiment, we recover the complex field of a tilted object with continuous depth change, as illustrated in Fig. 1(c). Figure 16 shows the reconstructions of the tilted object at different object distances. Here, the distance difference between the left and right parts of the tilted object is 2 mm. The red squares in Fig. 16 indicate the focused part at each reconstruction distance. Figure 16 shows that the left and right parts of the single tilted object come into focus at the corresponding distances when the object complex field predicted by the proposed network is numerically propagated. This confirms that the proposed network works not only for a single-distance object but also for a tilted object with a continuous distance distribution.

Fig. 16. Experimental result for a tilted object.

6. Conclusion

In this paper, we proposed a learning-based technique for extracting the complex field from a single hologram. Unlike previous studies, which suppress the DC and conjugate terms in the object plane reconstructions, the proposed technique directly extracts the object complex field in the hologram plane from a single captured hologram. The direct extraction of the object complex field in the hologram plane makes the proposed technique robust to object depth variation, enabling the detection of the 3D structure of objects from a single hologram.

The proposed technique was demonstrated with simulation datasets and optically captured datasets. The simulation was conducted using the MNIST dataset in the training and the Fashion-MNIST dataset in the test. The experimental dataset was captured by a polarization-based lens-free Mach-Zehnder interferometer. It was shown that the proposed network can cover the object distance variations and be applied to different types of objects. The simulation and experimental results show that the complex field extracted by the proposed network can be used to reconstruct individual objects clearly at their corresponding depths, even when multiple objects at different depths are contained in a single captured hologram. The typical object depth used in our simulations and experiments is hundreds of millimeters, which means that objects which cannot be placed close to the image sensor can still be measured using a lens-free hologram capturing setup and the proposed network.

Using the proposed approach, it is possible to recover the complex field directly from a single hologram without requiring a conventional 4-phase-shifting system. The current implementation of our recovery system is based on a transmissive optical imaging system, which is limited in recovering the complex field of thick 3D volumetric objects. We plan to address this issue in future work by extending our optical setup to diffusive, reflective, and volumetric objects.

Funding

Institute of Information & Communications Technology Planning & Evaluation (2020-0-00981).

Acknowledgments

This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00981, Development of Digital Holographic Metrology Technology for Phase Retrieval).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. K. Kim, "Principles and techniques of digital holographic microscopy," J. Photon. Energy 1(1), 018005 (2010).

2. S. A. Benton and V. M. Bove Jr., Holographic Imaging (Wiley, 2008).

3. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature 521(7553), 436–444 (2015).

4. B. Lee, J. Lee, D. Yoo, and E. Lee, "Deep learning in holography," Proc. SPIE 11703, 29–62 (2021).

5. Y. Rivenson, Y. Wu, and A. Ozcan, "Deep learning in holography and coherent imaging," Light: Sci. Appl. 8(1), 85 (2019).

6. T. Zeng, Y. Zhu, and E. Y. Lam, "Deep learning for digital holography: a review," Opt. Express 29(24), 40572–40593 (2021).

7. Z. Ren, Z. Xu, and E. Y. Lam, "Learning-based nonparametric autofocusing for digital holography," Optica 5(4), 337–344 (2018).

8. T. Pitkäaho, A. Manninen, and T. J. Naughton, "Focus prediction in digital holographic microscopy using deep convolutional neural networks," Appl. Opt. 58(5), A202–A208 (2019).

9. L. Huang, T. Liu, X. Yang, Y. Luo, Y. Rivenson, and A. Ozcan, "Holographic image reconstruction with phase recovery and autofocusing using recurrent neural networks," ACS Photonics 8(6), 1763–1774 (2021).

10. Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, "Phase recovery and holographic image reconstruction using deep learning in neural networks," Light: Sci. Appl. 7(2), 17141 (2018).

11. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, "Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery," Optica 5(6), 704–710 (2018).

12. I. Moon, K. Jaferzadeh, Y. Kim, and B. Javidi, "Noise-free quantitative phase imaging in Gabor holography with conditional generative adversarial network," Opt. Express 28(18), 26284–26301 (2020).

13. H. Li, X. Chen, Z. Chi, C. Mann, and A. Razi, "Deep DIH: single-shot digital in-line holography reconstruction by deep learning," IEEE Access 8, 202648–202659 (2020).

14. C. Bai, T. Peng, J. Min, R. Li, Y. Zhou, and B. Yao, "Dual-wavelength in-line digital holography with untrained deep neural networks," Photon. Res. 9(12), 2501–2510 (2021).

15. Y. Wu, Y. Luo, G. Chaudhari, Y. Rivenson, A. Calis, K. de Haan, and A. Ozcan, "Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram," Light: Sci. Appl. 8(1), 25 (2019).

16. Z. Ren, Z. Xu, and E. Y. Lam, "End-to-end deep learning framework for digital holographic reconstruction," Adv. Photon. 1(01), 1–12 (2019).

17. K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, "Y-Net: a one-to-two deep learning framework for digital holographic reconstruction," Opt. Lett. 44(19), 4765–4768 (2019).

18. K. Wang, Q. Kemao, J. Di, and J. Zhao, "Y4-Net: a deep learning solution to one-shot dual-wavelength digital holographic reconstruction," Opt. Lett. 45(15), 4220–4223 (2020).

19. H. Wang, M. Lyu, and G. Situ, "eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction," Opt. Express 26(18), 22603–22614 (2018).

20. T. Zeng, H. K.-H. So, and E. Y. Lam, "RedCap: residual encoder-decoder capsule network for holographic image reconstruction," Opt. Express 28(4), 4876–4887 (2020).

21. T. Liu, Z. Wei, Y. Rivenson, K. de Haan, Y. Zhang, Y. Wu, and A. Ozcan, "Deep learning-based color holographic microscopy," J. Biophotonics 12(11), e201900107 (2019).

22. J. Li, Q. Zhang, L. Zhong, and X. Lu, "Hybrid-net: a two-to-one deep learning framework for three-wavelength phase-shifting interferometry," Opt. Express 29(21), 34656–34670 (2021).

23. G. Zhang, T. Guan, Z. Shen, X. Wang, T. Hu, D. Wang, Y. He, and N. Xie, "Fast phase retrieval in off-axis digital holographic microscopy through deep learning," Opt. Express 26(15), 19388–19405 (2018).

24. M. Tsuruta, T. Fukuyama, T. Tahara, and Y. Takaki, "Fast image reconstruction technique for parallel phase-shifting digital holography," Appl. Sci. 11(23), 11343 (2021).

25. J. Millerd, N. Brock, J. Hayes, M. North-Morris, M. Novak, and J. Wyant, "Pixelated phase-mask dynamic interferometer," Proc. SPIE 5531, 304–314 (2004).

26. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (2015), pp. 234–241.

27. S. Shao, K. Mallery, S. S. Kumar, and J. Hong, "Machine learning holography for 3D particle field imaging," Opt. Express 28(3), 2987–2999 (2020).

28. J.-H. Park, "Recent progress in computer-generated holography for three-dimensional scenes," J. Inf. Disp. 18(1), 1–12 (2017).

29. J. H. Choi, J.-E. Pi, C.-Y. Hwang, J.-H. Yang, Y.-H. Kim, G. H. Kim, H.-O. Kim, K. Choi, J. Kim, and C.-S. Hwang, "Evolution of spatial light modulator for high-definition digital holography," ETRI J. 41(1), 23–31 (2019).

Supplementary Material (7)

Visualization 1: Simulation results for Fashion-MNIST images at different distances.
Visualization 2: Simulation results for irregularly overlapped objects.
Visualization 3: Experimental results. The same negative test target object is used in the training and test. The object is located at 119 mm, which is within the distance range used in the training.
Visualization 4: Experimental results. The same negative test target object is used in the training and test. The object is located at 138 mm, which is within the distance range used in the training.
Visualization 5: Experimental results. The same negative test target object is used in the training and test. The object is located at 171 mm, which is outside of the training range.
Visualization 6: Experimental results. The negative test target object is used in the training, and the transmissive positive test target is used in the test.
Visualization 7: Experimental results. The negative test target object is used in the training, and a part of the micro-display circuit is used in the test.
