
Large depth-of-field fluorescence microscopy based on deep learning supported by Fresnel incoherent correlation holography

Open Access

Abstract

Fluorescence microscopy plays an irreplaceable role in biomedicine. However, the limited depth of field (DoF) of fluorescence microscopy is a persistent obstacle to image quality, especially when the sample has an uneven surface or is distributed across different depths. In this manuscript, we combine deep learning with Fresnel incoherent correlation holography to obtain fluorescence microscopy with a significantly enlarged DoF. First, the hologram recorded by double-spherical-wave Fresnel incoherent correlation holography is refocused from out-of-focus to in-focus by the Auto-ASP method. Then, a generative adversarial network eliminates the artifacts introduced by Auto-ASP and outputs the final high-quality image. Using fluorescent beads, a USAF target and mouse brain slices as samples, we demonstrate a DoF of more than 400µm, 13 times larger than that of traditional wide-field microscopy. Moreover, our method has a simple structure and can easily be combined with many existing fluorescence microscopy techniques.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Depth of field (DoF) is an important physical quantity of an optical imaging system: it represents the axial range over which the system can acquire sharp information. Imaging systems with a larger DoF better tolerate the image-quality degradation caused by uneven object surfaces or by positioning errors along the depth direction, which is especially valuable for thick biological samples. Systems with a larger DoF can also relax the working-distance constraints of compact imaging systems. However, in a traditional optical microscope, DoF and spatial resolution must always be traded off against each other [1-3]. Improving the spatial resolution of a microscope requires collecting optical signals of higher spatial frequency, that is, generating larger wave-vector angles in the light field. The increased angle reduces the waist radius of the Gaussian beam, which accelerates the divergence of the focal spot and shrinks the DoF of the system. Therefore, in a traditional optical microscope, high spatial resolution always comes with a small DoF.

Effectively extending the DoF without degrading spatial resolution has been addressed in previous studies, and a variety of methods have been proposed. These techniques fall into two general categories. The first optimizes the hardware of the traditional microscope to acquire sharp images at different axial depths, for example by coupling structured illumination with cubic-phase pupil encoding in light-sheet microscopy [4], using a thin needle-like beam in two-photon microscopy [5], or translating two pinhole-modulated images [6]. However, the hardware modifications can be complicated, such as adding a second camera to the same light path, and may constrain other aspects of the imaging system; a needle beam, for instance, is always accompanied by strong side lobes that reduce lateral resolution and imaging contrast. The second category relies mainly on computation with minor hardware modification, inferring the focus relationship between different layers and reconstructing the image by deconvolution: Wiener deconvolution corrects the blurred image [7,8], and total-variation-regularized deconvolution can restore transverse resolution [9]. These software-based methods may sacrifice temporal resolution and introduce serious artifacts.

Holographic imaging can recover the three-dimensional information of an object [10-13]. Its principle is the interference of two coherent paths: one carries the object wave from the sample, the other a reference plane wave from infinity. The intensity of the interference pattern is recorded by an imaging sensor; the recovered intensity and phase together constitute the complete object wave, from which defocused images at different depths can be reconstructed. Holography therefore inherently offers a large DoF. In recent years, learning-based holography with a large DoF has been developed for lensless imaging and phase-contrast microscopy [14-18]. However, these approaches target samples whose contrast arises from light scattering; they cannot image biological samples labeled with fluorescent contrast agents. Moreover, they normally require a transmission geometry to maintain a high signal-to-noise ratio, since the imaging sensor must be close to the object.

To realize fluorescence holographic imaging, several groups have proposed incoherent digital holography technologies for three-dimensional (3D) biological imaging in different applications. Ferraro's team used a Mach–Zehnder interferometer for digital holographic microscopy to track particles in 3D space [19]. Nobukawa's team further extended the DoF of this system [20]. Quan's team proposed off-axis incoherent digital holographic microscopy for single-shot 3D stimulation and imaging [21]. Rosen's team used a spatial light modulator (SLM) to build a non-scanning fluorescence holographic microscope named Fresnel incoherent correlation holography (FINCH) [22,23]. Although all of these techniques are regarded as incoherent digital holography, they rest on different principles. In addition, a series of optimized incoherent digital holography systems, such as COACH, I-COACH and non-linear correlation methods [24-26], have been developed recently. They adopt coded phase masks (CPM) and a library of point spread holograms (PSHs) for each axial plane, and images are reconstructed by correlating the object hologram with the PSHs. Although I-COACH and the non-linear correlation methods are simple and allow single-shot reconstruction, it is not clear whether they are suitable for fluorescence imaging of biological samples: biological fluorescence images are always noisy, and cross-correlating the object hologram with the PSH may amplify the noise. Moreover, the PSH at each depth must be acquired in advance for each objective, which makes these methods more complicated for biologists.

FINCH treats each fluorophore as a point light source whose emission is split by SLM modulation into two paths that self-interfere. Reconstruction follows three steps. First, the intensity image with interference fringes is recorded on the imaging sensor, and the phase information is either predicted from a single hologram by a deep learning algorithm or demodulated from three images with different phase shifts by physical calculation. Second, the distance between each defocused plane in the field of view and the focal plane is calculated with an automatic focusing method. Last, each image plane near the focal plane is reconstructed by automatic angular spectrum propagation (Auto-ASP) to recover the defocused images. FINCH itself offers a larger DoF than wide-field microscopy, as demonstrated by Brooker's team [27]. However, the reconstruction process introduces artifacts into the reconstructed images [28,29]. On one hand, artifacts come from the twin image, which in-line holography cannot separate in the frequency spectrum; on the other hand, they come from the Auto-ASP algorithm itself. Although the three-step phase-shift method removes most of the twin image, the artifacts from Auto-ASP itself grow increasingly severe with defocus distance, which seriously degrades the image quality and keeps the DoF of FINCH far from its theoretical limit.
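As a concrete illustration of the three-step demodulation, the sketch below combines three phase-shifted intensity recordings into one complex hologram. This is a minimal sketch of the standard in-line phase-shift combination (see Refs. [22,23]); the phase-shift values of 0, 2π/3 and 4π/3 are our assumption.

```python
import numpy as np

def complex_hologram(i1, i2, i3, thetas=(0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
    """Combine three phase-shifted intensity images into a complex hologram.

    In the weighted sum below, the bias (DC) term and one of the two
    conjugate (twin) fringe terms cancel, leaving a single complex
    hologram that can be propagated numerically to any plane.
    """
    t1, t2, t3 = (np.exp(1j * t) for t in thetas)
    return i1 * (t3 - t2) + i2 * (t1 - t3) + i3 * (t2 - t1)
```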

In this manuscript, based on our previously developed double-spherical-wave FINCH with high fluorescence collection efficiency [30], we propose a new method: deep depth-of-field fluorescence holography microscopy (DDFHM). We introduce deep learning into the image reconstruction of FINCH to suppress the artifacts and extend the DoF of the fluorescence microscope. We validate the method using fluorescent beads, a USAF target and mouse brain slices, and use the structural similarity (SSIM) of the reconstructed images to evaluate the artifacts.

2. FINCH system

We used the double-spherical-wave FINCH to obtain defocused holograms containing information from different depths of the sample. We then adopted Auto-ASP to acquire preliminary reconstructed images. Finally, a generative adversarial network (GAN) was trained to retrieve the final reconstructed images at different depths with a spatial resolution similar to that of in-focus images.

The DoF of a wide-field optical microscope is normally defined as:

$$DoF = \frac{{n\cdot \lambda }}{{N{A^2}}} + \frac{n}{{M\cdot NA}}e$$
where n represents the refractive index of the imaging medium, $\lambda $ is the imaging wavelength, NA is the numerical aperture of the objective, M is the magnification of the imaging system, and e is the pixel pitch of the imaging detector. Substituting the parameters of our system into Eq. (1), the DoF of the original wide-field microscope is calculated to be about 30µm.
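As a quick consistency check, Eq. (1) can be evaluated with our system parameters (a minimal sketch; n = 1 for imaging in air is our assumption, the other values are those given in Section 2):

```python
# Evaluating Eq. (1) with the system parameters reported in this paper.

def wide_field_dof(n, wavelength_um, na, magnification, pixel_pitch_um):
    """Depth of field of a wide-field microscope, Eq. (1), in micrometers."""
    return n * wavelength_um / na**2 + n * pixel_pitch_um / (magnification * na)

dof = wide_field_dof(n=1.0, wavelength_um=0.52, na=0.16,
                     magnification=4, pixel_pitch_um=6.5)
print(f"DoF = {dof:.1f} um")  # about 30.5 um, consistent with the ~30 um quoted above
```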

The DoF and spatial resolution of FINCH differ from those of wide-field imaging. They are related to the focal lengths of the two beams, which determine the maximum spatial overlap area. For a given system, the optimal hologram quality, and hence the maximum DoF and spatial resolution, is achieved when the two waves satisfy the condition of maximal spatial overlap. The calculation formulas are as follows:

$${Z_h} = \frac{{2{f_1}{f_2}}}{{{f_1} + {f_2}}} = ({1 + s} ){f_1} = ({1 - s} ){f_2}$$
$$\textrm{s} = |{({{f_1} - {f_2}} )/({{f_1} + {f_2}} )} |$$
where ${Z_h}$ is the axial position of the hologram, s is the spatial overlap factor, and ${f_1}$ and ${f_2}$ are the focal lengths of the two differently focused beams. As s increases, the point hologram at plane ${Z_h}$ grows in size, which makes it easier for the camera to resolve but decreases its contrast. The lateral resolution of FINCH is normally 1.35-1.5 times better than that of wide-field fluorescence microscopy. The axial resolution of FINCH improves as the object moves away from the working plane, but it is always worse than that of a wide-field imaging system [31]. The DoF of FINCH also differs from that of wide-field microscopy: in the latter the DoF is almost entirely determined by the axial resolution, whereas in FINCH it is more closely related to the lateral resolution.
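A short sketch of Eqs. (2) and (3) makes the relation between the overlap factor and the hologram plane explicit. The focal lengths below are illustrative values, not the exact effective focal lengths of the two beams in our system:

```python
def finch_overlap(f1_mm, f2_mm):
    """Spatial overlap factor s (Eq. 3) and hologram plane Z_h (Eq. 2)."""
    s = abs((f1_mm - f2_mm) / (f1_mm + f2_mm))
    z_h = 2 * f1_mm * f2_mm / (f1_mm + f2_mm)
    return s, z_h

s, z_h = finch_overlap(f1_mm=200.0, f2_mm=330.0)  # illustrative focal lengths
print(f"s = {s:.3f}, Z_h = {z_h:.1f} mm")
# The two equivalent forms in Eq. (2) agree:
assert abs(z_h - (1 + s) * 200.0) < 1e-6 and abs(z_h - (1 - s) * 330.0) < 1e-6
```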

Figure 1(b) shows the schematic of the holographic microscope setup. A laser (Cobolt) delivers an excitation beam with a wavelength of 488 nm. An objective lens (4x, Olympus) with a numerical aperture of 0.16 collects the excited fluorescence. A band-pass filter (520/10 nm, Semrock) with a central wavelength of 520 nm and a bandwidth of 10 nm is selected. The raw images are recorded by an sCMOS camera (Flash 4.0, Hamamatsu) with a pixel pitch of 6.5µm. To capture the fluorescence hologram, we introduce a subunit in front of the camera that can easily be embedded in a fluorescence microscope. The subunit consists of an SLM (GAEA-2, Holoeye), two polarizers and a beam splitter. The SLM separates the incident fluorescence into two spherical waves via the fast- and slow-axis phase modulation of the liquid crystal; the two waves propagate forward with different radii of curvature. The focal length of the tube lens is 200 mm, while the focal length of the Fresnel lens loaded on the SLM can be adjusted freely; we set it to 330 mm in our experiments. The two beams interfere at the imaging plane, and the generated hologram is recorded. The scheme of fluorescence hologram generation is shown in Fig. 1(c). Two polarizers are added to the light path to improve the coherent contrast of the hologram. To obtain the best reconstructed images, the angles between the polarization directions of the input/output polarizers and the active axis of the SLM were set within 45° to 60°, as suggested by previous research [32]; in our system, both were set to 45° relative to the x direction in Fig. 1(b). It is worth mentioning that when we acquire wide-field images, we set the SLM to a black screen so that it acts as a mirror and the whole system becomes a wide-field microscope; in that case, only one image is acquired.

Fig. 1. Schematic and principle of DDFHM. (a) Comparison of DoF between a traditional wide-field microscope and DDFHM. (b) DDFHM is based on a fluorescence microscope with an SLM and two polarizers. The active axis of the SLM is parallel to the x direction, and the input and output polarizers are both at 45° to the x direction. (c) The fluorescence signal from a defocused plane is incident on the SLM and divided into two spherical waves with different radii of curvature.


3. Large DoF hologram reconstruction methods

3.1 Hologram reconstruction using Auto-ASP

After data acquisition, the hologram is preliminarily reconstructed by the Auto-ASP method: information from different axial depths is refocused to recover in-focus images. Angular spectrum propagation is expressed as follows:

$${E_f}({x,y,z} )= C\mathop {\smallint\!\!\!\smallint }\nolimits_{{k_x},{k_y}} {E_{defocus}}({{k_x},{k_y}} )\frac{1}{{{k_z}}}{e^{j({{k_x}x + {k_y}y + {k_z}z} )}}d{k_x}d{k_y}$$
$$C = \frac{{jf{e^{ - jkf}}}}{{2\pi }}$$
where x and y are the Cartesian coordinates at the imaging plane, z is the beam propagation direction, and k and f denote the wave vector and the focal length, respectively. The defocus positions at different depths in the images are unknown parameters that cannot be acquired in advance. Various autofocusing methods have been used successfully to restore hologram images without depth information, including classical autofocusing [33-36] and deep learning autofocusing [37,38]. In our research, we use Auto-ASP for the reconstruction. First, the hologram is reconstructed for each candidate defocus position within a certain range. Then the Tamura coefficient is calculated as:
$${C_{Tamura}} = \sqrt {\sigma ({\boldsymbol I} )/\langle {\boldsymbol I} \rangle } $$
where $\sigma ({\boldsymbol I} )$ and $\langle {\boldsymbol I} \rangle$ represent the standard deviation and the mean of the image intensity, respectively. Finally, the series of Tamura coefficients is used as distance information to predict the true reconstruction distance z. Once z is obtained from the Tamura coefficient, the ASP method is used to reconstruct the in-focus image.
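The complete Auto-ASP loop can be summarized in the following sketch; the search range and step size are illustrative assumptions, and we assume the Tamura coefficient peaks at the focus, which holds for sparse fluorescent scenes:

```python
import numpy as np

def angular_spectrum(field, z_um, wavelength_um, pixel_um):
    """Propagate a complex field over distance z with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_um)
    fy = np.fft.fftfreq(ny, d=pixel_um)
    fxx, fyy = np.meshgrid(fx, fy)
    kz_sq = (1.0 / wavelength_um) ** 2 - fxx**2 - fyy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * z_um) * (kz_sq > 0)   # evanescent waves suppressed
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def tamura(intensity):
    """Tamura coefficient, Eq. (6)."""
    return np.sqrt(intensity.std() / intensity.mean())

def auto_asp(hologram, wavelength_um, pixel_um, z_range_um=(-220, 220), step_um=2):
    """Reconstruct at each candidate depth and keep the sharpest plane."""
    zs = np.arange(z_range_um[0], z_range_um[1], step_um)
    scores = [tamura(np.abs(angular_spectrum(hologram, z, wavelength_um, pixel_um)) ** 2)
              for z in zs]
    z_best = float(zs[int(np.argmax(scores))])
    return angular_spectrum(hologram, z_best, wavelength_um, pixel_um), z_best
```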

3.2 Reduce artifacts by generative adversarial network (GAN)

A GAN [39] architecture comprises two different networks trained in an adversarial mode. It avoids the difficulty of hand-selecting a loss function and enhances the reconstruction quality in image training. The specific structure of our GAN is shown in Fig. 2. Training a GAN differs from training a convolutional neural network (CNN): a CNN is trained by defining a specific loss function and using gradient descent (or an improved variant) to optimize the parameters, approximating the global optimum by a local one, whereas GAN training is a dynamic game between the generator G and the discriminator D that ideally converges to a Nash equilibrium. We use U-Net [40] as the generator G. U-Net is a symmetric network with down-sampling and up-sampling paths. Each down-sampling step contains two repeated 3×3 convolutions, each followed by a ReLU, and a max pooling with stride 2. Symmetrically, the expansion path contains an up-sampling step (2×2 up-convolution), then two 3×3 convolutions each followed by a ReLU, and finally a 1×1 convolution. Meanwhile, the feature map from each down-sampling layer is concatenated into the corresponding up-sampling layer to preserve the recovered information. Tanh is taken as the activation function for the predicted output. We use the MSE loss as the cost function for the generator G and backpropagate it to update the weights in each epoch. The result predicted by G is then treated as the fake dataset, while the in-focus image from an actual wide-field microscope is treated as the real dataset; both are input to the discriminator D for the game:

$$\mathop {\min }\limits_G \mathop {\max }\limits_D \; V({D,G} )= {\mathrm{\mathbb{E}}_{x\sim {p_{data}}(x )}}[{logD(x )} ]+ {\mathrm{\mathbb{E}}_{z\sim {p_z}(z )}}[{\log ({1 - D({G(z )} )} )} ]$$
where ${p_{data}}(x )$ and ${p_z}(z )$ represent the real-data and generated-data distributions, respectively. The discriminator D consists of five convolutional blocks and two linear layers; each convolutional block includes a convolutional layer, a ReLU and a dropout layer. In our study we omit batch normalization, because it made the results unstable in our GAN task; the "# of Conv block" in Fig. 2 denotes the number of filters. By alternately training G and D, the two networks eventually converge to a stable state in which the output of D is close to 0.5 whether the image comes from G or from the ground truth. At that point, D can no longer distinguish real data from fake data, and the predicted data is output as the final result.
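A schematic PyTorch training step for the game of Eq. (7) is sketched below. The assumption that the discriminator ends with a sigmoid, and the equal weighting of the MSE and adversarial terms of the generator loss, are ours:

```python
import torch
import torch.nn as nn

bce, mse = nn.BCELoss(), nn.MSELoss()

def train_step(generator, discriminator, opt_g, opt_d, blurry, sharp):
    """One adversarial update: `blurry` is an Auto-ASP reconstruction with
    artifacts, `sharp` the in-focus wide-field ground truth."""
    # Discriminator D: push real images toward 1, generated images toward 0.
    opt_d.zero_grad()
    fake = generator(blurry)
    d_real, d_fake = discriminator(sharp), discriminator(fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator G: stay close to the ground truth (MSE) while fooling D.
    opt_g.zero_grad()
    d_fake = discriminator(fake)
    loss_g = mse(fake, sharp) + bce(d_fake, torch.ones_like(d_fake))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```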

Fig. 2. Framework of the GAN used in our research. The network consists of two parts: a generator and a discriminator. The generator uses a U-Net with a down-sampling decomposition path (orange arrows) and a symmetric up-sampling expansion path (purple arrows). The discriminator is a binary classification network: images generated by the generator are taken as negative samples (output 0), and real images as positive samples (output 1). Auto-ASP recovers the in-focus image with artifacts; the GAN is then adopted to eliminate the artifacts from the reconstructed image.


In our manuscript, the down-sampling layers of the U-Net extract image features, including both artifacts and object signal, and the up-sampling layers restore the artifact-removed image to its original size. By adjusting the network weights, the artifacts are removed while the signal information is retained. It is worth mentioning that artifacts and object details both belong to the high-frequency components, so the U-Net sometimes treats detail information as artifact and removes it, which can degrade resolution. We therefore design an ancillary discriminator D to help judge whether a reconstructed image is correct. The GAN is implemented in PyTorch, an open-source deep learning package, and the weights and biases are updated iteratively with the adaptive moment estimation (Adam) optimizer [41] during training of the U-Net. For each image dataset, the ratio of training, validation and test data is 17:2:1. The PC used for network training and blind testing has an 8-core, 16-thread, 3.70 GHz CPU, 64 GB of RAM and an NVIDIA GeForce RTX 3090 GPU (24 GB RAM). Too large a depth range in the training data reduces the resolution of the reconstructed image, while too small a range fails to reach the extension limit of the DoF. We randomly acquired 41 holograms of 2048 × 2048 pixels at different depths (including one image at the focal plane) within an axial range of ±180µm. Because the biological samples and the USAF target differ in spatial distribution and contrast, we built two datasets: one with about 16000 pairs of 512 × 512 mouse brain images, whose training takes ∼36 h over ∼60 epochs on average, and another with about 1700 pairs of 256 × 256 USAF target images, whose training takes ∼1.8 h over ∼100 epochs on average. After training, the network inference time is less than 0.3 s for a single hologram of 2048 × 2048 pixels.
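For reference, a compact sketch of the U-Net generator described above is given below. The number of resolution levels and the channel widths are assumptions for illustration; the actual network follows the layer recipe in Section 3.2:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two repeated 3x3 convolutions, each followed by a ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    """Two-level U-Net sketch: max-pool downsampling, 2x2 up-convolutions,
    skip connections, 1x1 output convolution and Tanh activation."""
    def __init__(self, c1=64, c2=128, c3=256):
        super().__init__()
        self.down1, self.down2 = conv_block(1, c1), conv_block(c1, c2)
        self.bottom = conv_block(c2, c3)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(c3, c2, 2, stride=2)
        self.dec2 = conv_block(c2 + c2, c2)   # skip + upsampled features
        self.up1 = nn.ConvTranspose2d(c2, c1, 2, stride=2)
        self.dec1 = conv_block(c1 + c1, c1)
        self.out = nn.Sequential(nn.Conv2d(c1, 1, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)
        d2 = self.down2(self.pool(d1))
        b = self.bottom(self.pool(d2))
        u2 = self.dec2(torch.cat([self.up2(b), d2], dim=1))
        u1 = self.dec1(torch.cat([self.up1(u2), d1], dim=1))
        return self.out(u1)
```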

4. Experimental results

We selected fluorescent beads and a USAF target as standard samples to determine the DoF and the imaging artifacts. We then selected green fluorescent protein (GFP) labeled mouse brain slices [42] with a thickness of 50µm as practical biological samples to validate the advantages of the DDFHM method in DoF extension.

4.1 Determine the DoF using fluorescent beads

Fluorescent beads were used to quantify the extended range of DoF. We selected fluorescent beads with a diameter of 1µm (FSDG004, Bangs Laboratories, Inc.) and imaged them within a 400µm axial range. Images of the beads were acquired with traditional wide-field microscopy, Auto-ASP and DDFHM; for wide-field imaging, the SLM works as a mirror. No dedicated bead dataset was generated for DDFHM training: the network weights trained on brain slices were reused in this experiment. As shown in Fig. 3, seven representative depth positions between -200µm and 200µm (0µm, ±15µm, ±100µm and ±200µm) were chosen for the evaluation.

Fig. 3. Using fluorescent beads for DoF comparison. (a) Odd rows: images of beads acquired with the three methods within a depth range of 400µm. Even rows: profile of each bead image, normalized to the in-focus image; the red dotted curve is the raw data and the blue curve the Gaussian fit. The horizontal axis is the pixel number (pixel pitch 1.625µm). (b) Close-up profiles of the bead acquired with DDFHM at each depth. Calculated FWHM values: ground truth 3.54µm; DDFHM 4.39µm (100µm); DDFHM 6.05µm (200µm); wide-field 6.06µm. (c) Close-up profiles of the bead acquired with Auto-ASP at each depth. Calculated FWHM values: ground truth 3.54µm; Auto-ASP 5.76µm (100µm); Auto-ASP 6.83µm (200µm); wide-field 6.06µm.


We evaluated the three sets of imaging results to compare the DoF extension achieved by each method. The bead becomes blurred in the images as the object moves away from the focus of the objective, and the profile beneath each image shows the bead signal fading into the background. The blue curve in each profile is the Gaussian fit; we take the full width at half maximum (FWHM) of the fitted curve as the metric. In Fig. 3(b) and Fig. 3(c) we compare the FWHM of the bead measured with Auto-ASP and DDFHM at each depth. The FWHM from the wide-field microscope is 3.54µm at 0µm and grows to 6.06µm at the margin of its DoF (15µm). On the one hand, the FWHM from Auto-ASP at 100µm is 5.76µm, close to the value at the DoF margin of the wide-field microscope, but at 200µm it reaches 6.83µm, well beyond 6.06µm. On the other hand, the FWHM from DDFHM at 200µm is 6.05µm, still close to 6.06µm. We therefore regard the DoF of FINCH and DDFHM as slightly larger than 200µm and 400µm, respectively. This means DDFHM improves the DoF of the wide-field microscope by a factor of 13, better than the other methods. Furthermore, the contrast of the bead image at 200µm is higher than that acquired with the wide-field microscope at 15µm, as shown in Fig. 3(b). Although the artifacts induced by Auto-ASP are not as obvious here as in Fig. 4 and Fig. 5, DDFHM still strongly suppresses the artifacts that inherently prevent the holography from reaching a larger DoF.
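The FWHM metric used above is obtained from the Gaussian fit. A minimal sketch, assuming a 1D bead intensity profile sampled at 1.625µm per pixel as in Fig. 3:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

def fwhm_um(profile, pixel_um=1.625):
    """Fit a Gaussian to a bead profile and return its FWHM in micrometers."""
    x = np.arange(len(profile)) * pixel_um
    p0 = [profile.max() - profile.min(), x[np.argmax(profile)], 2.0, profile.min()]
    (amp, mu, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)  # FWHM = 2.355 * sigma
```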

Fig. 4. Using the USAF resolution target for evaluation of DoF. The rows show the hologram, wide-field microscopy, the Auto-ASP method and DDFHM, respectively; the columns correspond to axial depths from -200µm to 200µm. We select the 6th element of the 7th group, corresponding to 228 line pairs per millimeter, to compare the performance of each method in extending the DoF. The profile of the 7-6 element is plotted as a red curve.


Fig. 5. Comparison of DDFHM, Auto-ASP and wide-field microscopy results at different defocus distances using brain slices. The first two columns are the real and imaginary parts of the hologram, the third column is the wide-field microscopy image, the fourth column shows the Auto-ASP reconstruction, and the last column shows the DDFHM result. The test samples are mouse brain slices of (a) thalamus and (b) hippocampus. The wide-field image at Z = 0µm is used as ground truth. Two adjacent neurons are selected for comparison, framed by orange and purple squares, respectively.


4.2 Evaluate DoF using USAF target

Because the USAF target is a non-fluorescent sample, the filter was removed from the system and incoherent illumination was adopted during data acquisition. We imaged the USAF target at the same 7 axial positions as in Fig. 3. The in-focus image from the traditional wide-field microscope at each depth is used as the ground truth for all other methods. The images acquired with the different methods, together with the holograms, are compared in Fig. 4. In DDFHM, the image is preliminarily reconstructed by the Auto-ASP method, which is also used for the comparison in Fig. 3. The data processing pipeline is described in Section 3.2.

We selected the 6th element of the 7th group (228 line pairs per millimeter) of the target and plotted its profile for each method to evaluate the DoF. In our experiments, the image acquired by the traditional wide-field microscope at 0µm is used as the ground truth. The profile of the 7-6 element in each image (red curves) shows the modulation contrast at each depth. The image contrast of the three methods is similar at the focal plane. Consistent with our earlier DoF calculation, the wide-field image becomes blurred at a depth of ±15µm, and the defocus also lowers the image contrast. We can therefore take the wide-field profile at an imaging depth of ±15µm as the standard against which to evaluate the DoF of the other methods. The image reconstructed by Auto-ASP retains some high-frequency information at ±200µm, but inspection of the figure shows an obvious difference from our standard; moreover, noticeable artifacts appear even at ±100µm, an intrinsic limitation of holography. As seen in Fig. 4, DDFHM refines the Auto-ASP result through the GAN to eliminate the artifacts. The image reconstructed by DDFHM maintains good resolution over the depth range of ±200µm, and its contrast at ±200µm is almost the same as that of the wide-field microscope at 15µm.

4.3 Imaging of mouse brain slice at different depth

In the following experiments, we chose about 100 mouse brain slices to build the dataset and input it into the GAN for training. To show the results of DDFHM objectively, we took the thalamus and the hippocampus as examples [42] and ensured that no training data were reused in the reconstruction dataset. The data processing pipeline is described in Section 3.2.

We reconstructed the holograms obtained from five depth positions of these two samples, as shown in Fig. 5. It is worth mentioning that, according to Fig. 3 and Fig. 4, the reconstruction results at positive and negative defocus positions are strongly symmetric; we therefore selected only 0µm (in focus) and four positive axial positions: 15µm, 100µm, 200µm and 250µm. The results at 200µm and 250µm are beyond the depth range used to train the GAN. Figures 5(a) and 5(b) show the results from traditional wide-field microscopy, the Auto-ASP method and DDFHM at different depths of the thalamus and the hippocampus, respectively. The in-focus image from the traditional wide-field microscope is again used as the ground truth, and two adjacent neurons in the reconstructed images are selected for comparison (framed by orange and purple squares, respectively), with locally enlarged views at the lower left corner of each sub-image. Figure 5 shows that the image quality of the three methods is similar at 0µm. At 15µm the traditional wide-field image becomes blurred, while the images reconstructed by Auto-ASP and DDFHM are almost the same as the ground truth. At 100µm, the two neurons cannot be distinguished by the traditional wide-field microscope. Owing to the artifacts introduced by its own algorithm, the Auto-ASP reconstruction at 200µm has become blurred and cannot distinguish the two neurons either. With DDFHM, however, the two adjacent neurons can still be distinguished at 200µm, especially in Fig. 5(b). DDFHM can therefore be regarded as superior to the Auto-ASP reconstruction method in practical biological applications. Notably, the image quality at 250µm does not degrade seriously in DDFHM, even though this depth is far beyond the DoF limit used for GAN training. We further quantitatively compared the SSIM of images reconstructed at 200µm and 250µm with Auto-ASP and DDFHM, as shown in Fig. 6. The quality within the DoF limit (Z = 200µm) is better than that beyond it (Z = 250µm): some noticeable artifacts can be observed at Z = 250µm that are absent at Z = 200µm. However, compared with the two images reconstructed by Auto-ASP, the DDFHM reconstruction beyond the DoF limit still has fewer artifacts and a higher SSIM [43]. This implies that useful information can be extracted from even farther defocus positions with DDFHM when the image-quality requirement is not rigorous.

Fig. 6. Images of thalamus at axial positions of 200µm and 250µm reconstructed by (a) DDFHM and (b) Auto-ASP, with their calculated SSIM values. The in-focus wide-field image (c) is used as ground truth. Representative areas are enlarged and marked with different colors. Scale bar: 100µm.


To quantitatively evaluate imaging quality at different depths, we selected 100 depths from -200µm to +200µm. At each depth, the reconstructed mouse brain images (256 × 256 pixels) and element 6 of group 7 of the USAF target (the smallest bars on the target, 66 × 28 pixels) were used to calculate the SSIM against the in-focus wide-field image as ground truth; for the mouse brain, the scores were averaged over the images at each depth. We plot the SSIM curves of the images reconstructed by the DDFHM and Auto-ASP methods for comparison in Fig. 7, with the DoF of traditional microscopy marked by purple dotted lines. At the focal plane, the reconstructed images are similar in quality to the ground truth, and the two methods perform almost identically within ±30µm. However, as the defocus distance increases, the artifacts generated by Auto-ASP become more severe, and beyond ±30µm its image quality falls below that of DDFHM. In contrast, the SSIM of DDFHM stays around 0.8, apart from an initial dip that may be due to slight deviations in image intensity during reconstruction. The results indicate that DDFHM maintains better image quality than the other methods over the depth range from -200µm to +200µm, and the slow variation of its SSIM also suggests that the image quality may remain acceptable to some extent beyond ±200µm. It is worth mentioning that the difference between the SSIM of the USAF target with Auto-ASP and DDFHM is smaller than that of the biological samples, probably because the image of element 6 of group 7 has too few pixels, so few artifacts enter the SSIM calculation.
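The per-depth score behind Fig. 7 reduces to averaging SSIM values over image pairs. A minimal sketch, assuming scikit-image and reconstructions already registered to the ground truth:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(pairs):
    """Average SSIM for one depth; `pairs` is a list of
    (ground_truth, reconstruction) image arrays."""
    return float(np.mean([
        ssim(gt, rec, data_range=float(gt.max() - gt.min()))
        for gt, rec in pairs]))
```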

Fig. 7. SSIM values of the USAF target (red lines) and mouse brain (blue lines) at different axial positions. For the mouse brain, each point is averaged over 100 images of new regions. Comparing the results from DDFHM and Auto-ASP, DDFHM maintains clear, high-resolution images over a very deep defocus range and is superior to Auto-ASP.


4.4 Imaging of one mouse brain slice within a large depth range

Finally, we used a GFP-labeled mouse brain slice to demonstrate the ultra-large DoF of DDFHM on a biological sample with an inclined surface. The brain slice was sectioned by a vibratome, fixed on a slide and placed at an incline so that the imaging depth differed by about 400µm across the slice. Figure 8(a) shows the image acquired with wide-field microscopy, in which the central part lies at the focal plane. We selected one in-focus area and two defocused areas of the cerebral cortex, one with positive and one with negative defocus. Figures 8(b) and 8(c) show the results at the defocused positions reconstructed by Auto-ASP and DDFHM, respectively, and Fig. 8(d) shows the in-focus wide-field result used as ground truth. Representative neurons and nerve fibers are marked, and their profiles are shown in the last row. The differences between the methods are obvious away from the focus. The neurons in the area indicated by the dotted line are difficult to recognize in the defocused wide-field image, whereas they can be separated in the images reconstructed by Auto-ASP and by DDFHM. However, the Auto-ASP reconstruction shows strong artifacts around the axons and somata of the neurons, clearly visible in Fig. 8(b2) and Fig. 8(b4). The neurons distorted by defocus (white arrows) are reconstructed more clearly by DDFHM. Note that DDFHM appears blurrier than Auto-ASP when comparing Fig. 8(c2) and Fig. 8(c3), and in Fig. 8(c2) the artifacts of the soma and axons are less obvious than in Fig. 8(b2). Comparing the profiles in Fig. 8(c4), artifacts are present in the red curve but absent from the blue curve: although the image in (c2) looks sharper than that in (c3), the sharpness stems from the artifacts. With DDFHM, only a slight loss of resolution is observed in the soma profiles, which is the cost of the strong artifact suppression, and the image contrast is much better. This verifies that for a biological sample with an uneven surface, DDFHM can recover sharp images from different depth positions without axial scanning.

Fig. 8. GFP-labeled mouse brain sample at an inclined angle. (a) Image obtained by traditional wide-field microscopy, covering axial depths from positive to negative defocus; the maximum defocus distance at the upper and lower edges exceeds ±200µm. (b, c) Results at defocused positions reconstructed by the Auto-ASP and DDFHM methods, respectively. (d) Results at the in-focus position of the wide-field microscope, used as ground truth. The profiles of representative neurons and nerve fibers are mapped in the last row, with red dotted lines (Auto-ASP) and blue dotted lines (DDFHM) on the right side. Neurons distorted by defocus (white arrows) are reconstructed more clearly by DDFHM.


5. Conclusion

In this work, we developed an ultra-large-DoF fluorescence microscope called DDFHM. By introducing an SLM in front of the camera of a wide-field microscope, the local fluorescence signal is divided into two spherical waves with different radii of curvature, which interfere and are recorded by the camera. A high-quality image is then reconstructed with a GAN, extending the DoF toward the limit of the FINCH system. To validate the quality of the reconstructed images and the DoF, we selected fluorescent beads, a USAF target and mouse brain slices as samples. The results suggest that the DoF of fluorescence microscopy can be extended from 30µm to about 400µm with only a slight degradation of spatial resolution over this depth range. Furthermore, the errors of the reconstructed images remained low throughout the whole 400µm depth, similar to those within the DoF of wide-field microscopy. Finally, we showed that images at arbitrary depths within the DoF of DDFHM can be recovered when observing a brain slice with an uneven surface: the somata and axons were clearly imaged without noticeable artifacts at each depth.

It is worth noting that the image-contrast degradation of DDFHM from 100µm to 200µm is moderate, as demonstrated in Fig. 3; similar behavior is seen for the USAF target imaging in Fig. 4. In Fig. 7 we compare the SSIM of the mouse brain images (blue lines) and the USAF target (red lines) reconstructed with DDFHM and Auto-ASP, and the SSIM trends confirm this point. These results imply that beyond the 400µm depth range the image quality of DDFHM likely degrades only slowly with increasing depth, rather than rapidly as with the Auto-ASP method shown in Fig. 7. An even larger DoF may therefore be acceptable when the image-quality requirement is not too strict.

There are still limitations of our method that need to be addressed in future work. First, because hologram reconstruction requires three images, the imaging speed of DDFHM is limited; however, a new reconstruction method based on deep-learning phase recovery from intensity images could be developed to reduce the number of required images. Second, we will try to apply this method to microscopes with higher NA to obtain higher-resolution images, although obtaining the ground truth and reconstructing fine image details will then be a major challenge.

Funding

National Natural Science Foundation of China (81827901, 81871082).

Acknowledgments

The authors would like to thank Prof. Xiangning Li for providing the mouse brain samples.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. E. J. Botcherby, R. Juskaitis, M. J. Booth, and T. Wilson, “Aberration-free optical refocusing in high numerical aperture microscopy,” Opt. Lett. 32(14), 2007–2009 (2007). [CrossRef]  

2. M. Duocastella, B. Sun, and C. B. Arnold, “Simultaneous imaging of multiple focal planes for three-dimensional microscopy using ultra-high-speed adaptive optics,” J. Biomed. Opt. 17(5), 050505 (2012). [CrossRef]  

3. C. Zhang, W. B. Foster, R. D. Downey, C. L. Arrasmith, and D. L. Dickensheets, “Dynamic performance of microelectromechanical systems deformable mirrors for use in an active/adaptive two-photon microscope,” J. Biomed. Opt. 21(12), 121507 (2016). [CrossRef]  

4. S. Quirin, N. Vladimirov, C. T. Yang, D. S. Peterka, R. Yuste, and M. B. Ahrens, “Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy,” Opt. Lett. 41(5), 855–858 (2016). [CrossRef]  

5. H. He, C. H. Kong, K. Y. Chan, W. L. So, H. K. Fok, Y. X. Ren, S. W. Lai, K. K. Tsia, and K. Y. Wong, “Resolution enhancement in an extended depth of field for volumetric two-photon microscopy,” Opt. Lett. 45(11), 3054–3057 (2020). [CrossRef]  

6. K. Guo, J. Liao, Z. Bian, X. Heng, and G. Zheng, “InstantScope: a low-cost whole slide imaging system with instant focal plane detection,” Biomed. Opt. Express 6(9), 3210–3216 (2015). [CrossRef]  

7. W. J. Shain, N. A. Vickers, A. Negash, T. Bifano, A. Sentenac, and J. Mertz, “Dual fluorescence-absorption deconvolution applied to extended-depth-of-field microscopy,” Opt. Lett. 42(20), 4183–4186 (2017). [CrossRef]  

8. W. J. Shain, N. A. Vickers, B. B. Goldberg, T. Bifano, and J. Mertz, “Extended depth-of-field microscopy with a high-speed deformable mirror,” Opt. Lett. 42(5), 995–998 (2017). [CrossRef]  

9. R. N. Zahreddine and C. J. Cogswell, “Total variation regularized deconvolution for extended depth of field microscopy,” Appl. Opt. 54(9), 2244–2254 (2015). [CrossRef]  

10. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

11. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98(20), 11301–11305 (2001). [CrossRef]  

12. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company, 2005), Chap. 8.

13. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45(5), 836–850 (2006). [CrossRef]  

14. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

15. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

16. H. Pinkard, Z. Phillips, A. Babakhani, D. A. Fletcher, and L. Waller, “Deep learning for single-shot autofocus microscopy,” Optica 6(6), 794–797 (2019). [CrossRef]  

17. Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

18. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

19. P. Memmolo, L. Miccio, M. Paturzo, G. Di Caprio, G. Coppola, P. A. Netti, and P. Ferraro, “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photon. 7(4), 713–755 (2015). [CrossRef]  

20. T. Nobukawa, Y. Katano, T. Muroi, N. Kinoshita, and N. Ishii, “Bimodal Incoherent Digital Holography for Both Three-Dimensional Imaging and Quasi-Infinite–Depth-of-Field Imaging,” Sci. Rep. 9(1), 1–10 (2019). [CrossRef]  

21. X. Quan, M. Kumar, O. Matoba, Y. Awatsuji, Y. Hayasaki, S. Hasegawa, and H. Wake, “Three-dimensional stimulation and imaging-based functional optical microscopy of biological cells,” Opt. Lett. 43(21), 5447–5450 (2018). [CrossRef]  

22. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007). [CrossRef]  

23. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2(3), 190–195 (2008). [CrossRef]  

24. A. Vijayakumar, Y. Kashter, R. Kelner, and J. Rosen, “Coded aperture correlation holography—a new type of incoherent digital holograms,” Opt. Express 24(11), 12430–12441 (2016). [CrossRef]  

25. M. R. Rai, A. Vijayakumar, and J. Rosen, “Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH),” Opt. Express 26(14), 18143–18154 (2018). [CrossRef]  

26. A. Vijayakumar, T. Katkus, S. Lundgaard, D. P. Linklater, E. P. Ivanova, S. H. Ng, and S. Juodkazis, “Fresnel incoherent correlation holography with single camera shot,” Opto-Electron. Adv. 3(8), 200004 (2020). [CrossRef]  

27. N. Siegel, J. Rosen, and G. Brooker, “Reconstruction of objects above and below the objective focal plane with dimensional fidelity by FINCH fluorescence microscopy,” Opt. Express 20(18), 19822–19835 (2012). [CrossRef]  

28. M. K. Kim, “Incoherent digital holographic adaptive optics,” Appl. Opt. 52(1), A117–A130 (2013). [CrossRef]  

29. A. Greenbaum, W. Luo, T. W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9(9), 889–895 (2012). [CrossRef]  

30. X. M. Lai, Y. Zhao, X. H. Lv, Z. Q. Zhou, and S. Q. Zeng, “Fluorescence holography with improved signal-to-noise ratio by near image plane recording,” Opt. Lett. 37(13), 2445–2447 (2012). [CrossRef]  

31. J. Rosen and R. Kelner, “Modified Lagrange invariants and their role in determining transverse and axial imaging resolutions of self-interference incoherent holographic systems,” Opt. Express 22(23), 29048–29066 (2014). [CrossRef]  

32. G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011). [CrossRef]  

33. P. Memmolo, C. Distante, M. Paturzo, A. Finizio, P. Ferraro, and B. Javidi, “Automatic focusing in digital holography and its application to stretched holograms,” Opt. Lett. 36(10), 1945–1947 (2011). [CrossRef]  

34. P. Memmolo, M. Paturzo, B. Javidi, P. A. Netti, and P. Ferraro, “Refocusing criterion via sparsity measurements in digital holography,” Opt. Lett. 39(16), 4719–4722 (2014). [CrossRef]  

35. F. Dubois, C. Schockaert, N. Callens, and C. Yourassowsky, “Focus plane detection criteria in digital holography microscopy by amplitude analysis,” Opt. Express 14(13), 5895–5908 (2006). [CrossRef]  

36. P. Langehanenberg, B. Kemper, D. Dirksen, and G. von Bally, “Autofocusing in digital holographic phase contrast microscopy on pure phase objects for live cell imaging,” Appl. Opt. 47(19), D176–D182 (2008). [CrossRef]  

37. Z. Ren, Z. Xu, and E. Y. Lam, “Autofocusing in digital holography using deep learning,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXV (International Society for Optics and Photonics, 2018), Vol. 10499, 104991V.

38. T. Shimobaba, T. Kakue, and T. Ito, “Convolutional neural network-based regression for depth prediction in digital holography,” in 2018 IEEE 27th International Symposium on Industrial Electronics (ISIE) (IEEE, 2018), pp. 1323–1326.

39. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Adv. Neural Inf. Process. Syst. 27 (2014).

40. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

41. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

42. L. Silvestri, A. L. A. Mascaro, J. Lotti, L. Sacconi, and F. S. Pavone, “Advanced optical techniques to explore brain structure and function,” J. Innov. Opt. Health Sci. 06(01), 1230002 (2013). [CrossRef]  

43. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

