Optica Publishing Group

Multi-slice ptychographic imaging with multistage coarse-to-fine reconstruction

Open Access

Abstract

The ability to image 3D samples with optical sectioning is essential for the study of tomographic morphology in material and biological sciences. However, it is often hampered by limitations of acquisition speed and equipment complexity when performing 3D volumetric imaging. Here, we propose, to the best of our knowledge, a new method for 3D reconstruction from a minimum of four intensity-only measurements. The complementary structured patterns provided by the digital micromirror device (DMD) irradiate the outermost layer of the sample to generate the corresponding diffraction intensities for recording, which enables rapid scanning of loaded patterns for fast acquisition. Our multistage reconstruction algorithm first extracts the overall coarse-grained information, and then iteratively optimizes the information of different layers to obtain fine features, thereby achieving high-resolution 3D tomography. The high-fidelity reconstruction in experiments on two-slice resolution targets, unstained Polyrhachis vicina Roger and freely moving C. elegans proves the robustness of the method. Compared with traditional 3D reconstruction methods such as interferometry-based methods or Fourier ptychographic tomography (FPT), our method increases the reconstruction speed by at least 10 times and is suitable for label-free dynamic imaging in multiple-scattering samples. Such 3D reconstruction suggests potential applications in a wide range of fields.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Quantitative phase imaging (QPI) is a powerful technique for non-invasive and label-free observation, which reconstructs a quantitative image of the phase delay induced by the sample [1,2]. The phase map gives the total delay integrated along the propagation direction over a thin layer, but it cannot provide volumetric information about the internal structure. The fact that a single 2D integrated phase is a product of thickness and average refractive index (RI) may result in inaccurate interpretation of QPI data when imaging complex 3D structures. In order to gain more accurate morphological information and evaluate thicker samples, depth-resolved or tomographic versions of QPI are usually desired [3].

Optical diffraction tomography (ODT) is a further developed technique based on QPI for measuring the 3D complex field of a sample. It can be roughly categorized into two classes: interferometry-based and intensity-only methods [4]. As the most widely applied 3D QPI technology, interferometry-based methods use a rotating sample or a scanning laser beam to collect the angle-specific scattering arising from the sample [5-10]. The 2D complex scattered fields containing the absorption and phase information of the sample are directly recorded as digital interferograms under various illumination angles, and the tomographic reconstruction algorithm is applied to recover the 3D information of the sample [Fig. 1(a)]. However, interferometric systems are sensitive to phase instabilities and require complicated mechanical designs [11,12]. Furthermore, their reconstruction framework typically uses a weakly scattering approximation (first Born or Rytov [13]), which works well for thin samples with index matching, but causes errors for thick multiple-scattering samples [14-16].


Fig. 1. Ptychographic tomography from intensity-only measurements (d) compared to current optical diffraction tomography (ODT) techniques (a)-(c). (a) Interferometry-based methods capture the interference between the object beam and the reference beam under various illumination angles to reconstruct the 3D complex field of the sample. (b) Ptychographic iterative engine (PIE) translates the sample through a grid of positions and records the diffraction pattern at each position. (c) Fourier ptychographic tomography (FPT) allows 3D complex field recovery using angled illumination-only intensity measurements. (d) The proposed ptychographic tomography uses a sequence of illumination patterns and a single focusing lens. The 3D complex field can be retrieved from the intensity distribution measured in the focal plane of the lens. The same operation principle can be applied, if the light from the sample is modulated by a set of patterns generated on a spatial light modulator (SLM) instead of the configuration shown in the figure.


As an alternative, a second group of techniques has emerged to recover the tomographic information without the need for a reference beam. These non-interferometric approaches rely on several assumptions about the object beam and use computational algorithms to recover 3D information [17,18]. Examples of these types of techniques are the transport of intensity equation (TIE) [19-21], the ptychographic iterative engine (PIE) [22,23], and Fourier ptychographic tomography (FPT) [24,25]. TIE acquires through-focus intensity stacks to extract the 2D phase information at multiple focus planes, and then uses gradient phase information to reconstruct the 3D complex field. PIE is a recently developed coherent diffraction imaging (CDI) method that retrieves the 3D complex field of a sample from a set of diffraction patterns. The ptychographic experiment involves translating the sample through a grid of positions and recording the diffraction pattern at each position [Fig. 1(b)]. However, the above methods usually require mechanical scanning of the sample or objective lens to capture intensity-only images. This scanning limits the acquisition speed and introduces inherent mechanical vibrations into the system. Recently, other approaches such as FPT have been proposed, which allow 3D complex field recovery using angled-illumination intensity measurements without any mechanical scanning [Fig. 1(c)]. Iterative algorithms based on multiple scattering models using the multi-slice beam propagation method have also been developed to recover strongly scattering samples [14,26]. One disadvantage of these techniques is the requirement for hundreds of intensity measurements per volume [27,28], making dynamic observation infeasible. The iterative reconstruction algorithm for hundreds of measurements further prolongs the whole 3D imaging process and becomes computationally intensive.

In this paper, we realized 3D reconstruction from a minimum of four intensity-only measurements. The ptychographic tomography method acquires diffraction intensities from different illumination patterns provided by a digital micromirror device (DMD), and then solves a nonlinear optimization problem to recover the 3D complex field of the sample [Fig. 1(d)]. A multistage reconstruction algorithm is proposed to recover the 3D information of thick samples, which improves the ill-conditioning in phase retrieval. It first extracts the overall coarse-grained information, and then iteratively optimizes the information of different layers to obtain fine features, thereby achieving high-resolution 3D tomography. Importantly, this method allows us to reconstruct using several orders of magnitude fewer measurements than related previous work. Our algorithm retains a reasonable computation time that is essential for practical implementation of the iterative inverse algorithm. The proposed imaging method is label-free, motion-free, fast, and enables the high-fidelity quantitative reconstruction of 3D intrinsic structural features. The ability to image deep-lying layers of samples in a non-invasive way holds great promise for the research of materials and biology.

2. Methods

Advances in computational imaging have introduced techniques that utilize optimization for 3D reconstruction, which generally relies on two key components—a light scattering model and a phase retrieval algorithm. Here, the expected measurements at each iteration are calculated by the forward model that describes how light propagates through the sample; then the multistage coarse-to-fine reconstruction framework can be formulated, which uses digital refocusing to extract coarse-grained features and then iteratively refines the features in gradient descent steps. The following describes the multi-slice forward model and the inverse algorithm used to implement the reconstruction.

2.1 Optical path and acquisition process

Here, the experimental setup uses visible light illumination (532 nm) and a high-NA objective lens ($10 \times $/0.25 NA) to achieve optical sectioning capabilities suitable for high-resolution tomography. The illumination port is equipped with a DMD module that switches the loaded patterns at a rate of several kilohertz. A proof-of-concept setup was built following the scheme in Fig. 2(a). The laser (PGL-FS-532 nm-10 mW, CNI Laser) is expanded into a collimated beam with an aperture of 30 mm and directed onto the DMD at an incident angle of 24°. Micromirrors within a proper region are switched to the “on” state to reflect the laser beam onto the sample, forming an illumination pattern. The 4f lens configuration (L1: f = 180 mm and L2: infinity-corrected objective lens with $10 \times $ magnification and 0.25 NA) ensures that the DMD is conjugated to the outermost layer of the sample. Another 4f lens configuration (L3 and L4, symmetrical to the previous one) is placed behind the sample for magnification. The transmitted wavefront passes through a condenser (L5: f = 80 mm) and is recorded by a CMOS camera (Lumenera Lt1245R, active resolution $4112 \times 3008$, pixel size $3.45\mathrm{\;\ \mu m}$) located at the focal plane of the condenser. The Fourier intensity is thus measured by placing the pixelated detector at the focal plane of the condenser, with the condenser acting as the Fourier-transforming optical element.
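As a back-of-the-envelope check on this geometry (our own calculation, not stated in the paper): in the focal plane of a lens, the detector coordinate $x$ maps to spatial frequency $x/(\lambda f)$, so the camera pixel pitch fixes the Fourier-space sampling interval at the detector.

```python
# Fourier-plane sampling implied by the quoted hardware values. Note this is
# the frequency sampling at the detector plane; the effective sampling at
# the sample additionally depends on the magnification of the relay optics.
wavelength = 532e-9   # m, laser wavelength quoted in the text
f_condenser = 80e-3   # m, focal length of condenser L5
pixel = 3.45e-6       # m, camera pixel pitch
du = pixel / (wavelength * f_condenser)   # spatial-frequency step, cycles/m
print(f"{du:.1f} cycles/m per pixel")     # ~81 cycles/m
```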


Fig. 2. (a) Experimental verification of the proposed technique. The image sensor captures the corresponding Fourier intensities ${I_n}({\mathbf u} )$ under different illumination patterns ${s_n}({\mathbf r} )$ provided by the DMD. (b) The process of combining coarse-grained modeling and fine-grained reconstruction. The expressions $s({\mathbf r} )$, $f({{\mathbf r}, m\Delta z} )$, $u({{\mathbf r}, m\Delta z} )$ and $v({{\mathbf r}, m\Delta z} )$ in the figure are simplified to s, ${f_m}$, ${u_m}$ and ${v_m}$, respectively. (c) The simulated Fourier intensities collected with the structured pattern and the full-open pattern using the same maximum exposure (Note: there is an overexposed area in the middle due to the high dynamic range of the Fourier intensity).


The sample, the detector, and the light beam incident on the DMD remain stationary during the whole data collection process. The data acquisition only needs to switch “on/off” micromirrors within a proper region to form the illumination pattern ${s_n}({\mathbf r} )$ on the sample and record the corresponding Fourier intensity ${I_n}({\mathbf u} )$, which can be automatically and rapidly realized through a control program. Our reinterpretation of ptychography within the physical framework of diffraction tomography enables accurate reconstruction of 3D specimens from reduced intensity-only measurements.

2.2 Multi-slice forward model with complementary structured patterns

The forward model is critical to reconstruction quality and calculation time. Reconstructing from a small number of input-output pairs places certain requirements on the input patterns: complementary patterns are adopted to achieve complete coverage of the object space, and structured illumination is selected to provide sufficient constraint information for effective reconstruction of details. In this method, the DMD is placed at the conjugate of the outermost layer of the sample to provide complementary structured patterns $s({\mathbf r} )$. The main advantages of choosing a set of statistically independent, complementary, and orthogonal binary patterns are as follows. First, the intensity of low-frequency components in Fourier space can be substantially reduced and photons can be more evenly distributed on the detector (refer to Fig. 2(c) for the comparison of the Fourier intensities collected with the structured pattern and the full-open pattern using the same maximum exposure), thus reducing the dynamic range requirements of the detector. Each diffraction intensity is acquired in a single shot without multiple exposures or high-dynamic-range synthesis. Second, complementary structured patterns serve as support constraints, which greatly help to solve the subsequent inverse problem with fewer measurements [29]. Since the use of structured patterns effectively breaks the symmetry of the support domain and removes the twin-image and spatial-shift ambiguities, the uniqueness of the scheme for different types of samples can almost be guaranteed [30,31]. Most importantly, structured illumination generates different optical structures in each layer through the Fresnel propagation between them [see Fig. 9(a)], which improves not only the phase retrieval conditions but also the optical sectioning capability [17].
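One simple way to construct such a set (a sketch of our own, not necessarily the pattern design loaded on the DMD in the experiments) is to randomly assign each pixel to one of $N$ patterns; the resulting binary patterns then have disjoint supports (orthogonality) and sum to a fully open aperture (complementarity), with duty cycle $1/N$:

```python
import numpy as np

def complementary_patterns(shape, n=4, seed=0):
    """n binary patterns with disjoint supports that tile the plane."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n, size=shape)      # random pixel assignment
    return [(labels == k).astype(float) for k in range(n)]

pats = complementary_patterns((64, 64), n=4)
# complementary: the patterns sum to a fully "on" aperture everywhere ...
assert np.allclose(sum(pats), 1.0)
# ... and orthogonal: no two patterns ever overlap
assert np.count_nonzero(pats[0] * pats[1]) == 0
```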

For the inverse problem of recovering the complex field of thick samples, weak scattering models may lead to failure or underestimation, especially with a large index contrast [32]. A forward model that describes light propagation beyond the single-scattering regime is required for optically dense samples. The multi-slice beam propagation method models multiple scattering in a computationally efficient manner, providing a more realistic and versatile reconstruction. As shown in Fig. 2(b), it approximates the sample as a series of infinitely thin planar slices along the axial direction, each modeled as a 2D transmission layer separated by a uniform medium [33]. The wave field propagates through the sample from slice to slice, with each slice modulating the field. The forward model is written through the following recursive process. The field propagating through a thick sample is modeled by a multi-slice approximation that splits the 3D sample into a series of M thin slices, each having a complex transmittance function $f({{\mathbf r}, m\Delta z} )$ ($m = 1,2, \ldots ,M$). The interval between adjacent slices is modeled as a uniform medium of thickness $\Delta z$. The complementary structured illumination patterns $s({\mathbf r} )$ are projected onto the first axial layer of the sample to make $u({{\mathbf r}, \Delta z} )= s({\mathbf r} )$. As light passes through each slice, the field is first multiplied by the 2D transmittance function of that slice, and then propagated to the next slice. Therefore, the field exiting the sample can be calculated using a series of multiplication and propagation operations:

$$v({\mathbf r},m\Delta z) = u({\mathbf r},m\Delta z)f({\mathbf r},m\Delta z),$$
$$u({\mathbf r},(m + 1)\Delta z) = v({\mathbf r},m\Delta z) \otimes {h_{\Delta z}}({\mathbf r}),$$
$${h_{\Delta z}}({\mathbf r}) = {{\cal F}^{ - 1}}\{ \exp (i2\pi \Delta z\sqrt {1/{\lambda ^2} - {{|{\mathbf u} |}^2}} )\} ,$$
where u and v are the fields before and after each slice, ${\mathbf r} = ({x, y} )$ is the 2D vector in object space coordinates, ${h_{\Delta z}}({\mathbf r} )$ is the point spread function (PSF) of the angular spectrum propagation with the 2D Fourier space coordinates vector ${\mathbf u}$, ${\otimes} $ denotes convolution, ${\mathrm{{\cal F}}^{ - 1}}$ represents the 2D inverse Fourier transform, and $\lambda $ is the wavelength. After passing through the thick sample, the Fourier intensity of the exit field is then recorded by the detector:
$$I({\mathbf u}) = {|{{\cal F}\{ v({\mathbf r},M\Delta z)\} } |^2},$$
where $\mathrm{{\cal F}}$ represents the 2D Fourier transform. Thereby, the 3D complex field of the volumetric object is encoded in the intensity-only image $I({\mathbf u} )$.
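Equations (1)-(4) can be transcribed almost directly. The sketch below assumes a square grid with uniform pixel pitch and drops evanescent components; the function names are our own, not from the paper:

```python
# Minimal sketch of the multi-slice forward model of Eqs. (1)-(4).
import numpy as np

def angular_spectrum_kernel(n, dx, dz, wavelength):
    """Fourier-space transfer function exp(i 2 pi dz sqrt(1/lambda^2 - |u|^2))."""
    fx = np.fft.fftfreq(n, d=dx)
    u2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = 1.0 / wavelength**2 - u2
    H = np.exp(1j * 2 * np.pi * dz * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                        # assumption: drop evanescent waves
    return H

def forward_multislice(s, slices, dx, dz, wavelength):
    """Propagate illumination s through M slices; return the Fourier intensity."""
    H = angular_spectrum_kernel(s.shape[0], dx, dz, wavelength)
    u = s.astype(complex)
    for f in slices:                          # Eq. (1): v_m = u_m * f_m
        v = u * f
        u = np.fft.ifft2(np.fft.fft2(v) * H)  # Eq. (2): free-space step
    # the exit field of the last slice is v; its Fourier intensity, Eq. (4)
    return np.abs(np.fft.fft2(v)) ** 2
```

A convenient correctness check is that a stack of unit-transmittance slices reduces the model to free-space propagation, leaving all the energy at the DC frequency for a uniform illumination.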

2.3 Coarse-grained modeling through digital refocusing

An estimate of the 3D complex field of the sample is required to initialize the reconstruction algorithmic process. It is assumed that the error in the initial guess can be continuously improved through the iterative algorithm so that the final estimate approaches that of the actual complex field distribution. In practice, due to the ill-posed nature of the problem, the algorithm cannot converge to the global optimum under all initial conditions. Based on this, we propose a multistage reconstruction algorithm, which is inspired by the coarse-to-fine strategies of biological vision [34] and computer vision [35]. The reconstruction algorithm considers both coarse-grained information of digital refocusing and fine features optimized across multiple slices. The goal is to quickly perform coarse-grained modeling of the sample using fewer iterations and leverage the constraints it provides to perform faster and better fine optimization in subsequent stages.

Coarse-grained modeling makes it possible to obtain better generalization performance with few iterations on few measurements. This method does not need to consider the form of the forward model, nor does it need additional measurements for fine optimization, and directly uses refocusing to obtain the coarse-grained model. The simplest approach to refocusing is to acquire multiple images by axially translating the sample. However, this method involves mechanical vibration and multiple sequential image acquisitions, greatly reducing the volumetric imaging rate [36]. Considering the above limitations, a numerical propagation method [37] is applied to calculate the field distribution at different axial positions and bring out-of-focus sample features into focus. This makes it feasible to generate large focal series, permitting fast digital refocusing without acquiring multiple images.

Our previous work showed that the 2D complex field can be accurately recovered using four complementary structured patterns and the corresponding diffraction intensities [38]. Thus, the multi-slice complex field of the 3D sample can be synthesized into one layer for a rough solution. Due to the support constraints in object space [31], the features in the layer conjugated with the complementary structured pattern can be restored with high contrast, while other layers are affected by defocus blur. The patterns are projected onto the outermost layer of the sample to iteratively solve for the wave field information of the first layer:

$$\hat{v}({\mathbf r},\Delta z) = {{\cal F}^{ - 1}}\{ \sqrt {I({\mathbf u})} \frac{{{\cal F}\{ f({\mathbf r},\Delta z)s({\mathbf r})\} }}{{|{{\cal F}\{ f({\mathbf r},\Delta z)s({\mathbf r})\} } |}}\} ,$$
$$\hat{f}({\mathbf r},\Delta z) = \hat{v}({\mathbf r},\Delta z)s({\mathbf r}) + [f({\mathbf r},\Delta z) - \hat{v}({\mathbf r},\Delta z)][1 - s({\mathbf r})].$$

Equations (5) and (6) are repeated for up to 10 iterations to converge to the correct solution. When an optical field $f({{\mathbf r},\Delta z} )$ is known in a given plane $\Delta z$, the field $f({{\mathbf r},m\Delta z} )$ in a different parallel plane $m\Delta z$ can be calculated using the angular spectrum method. Mathematically, this is achieved by convolving the field with a term ${h_{({m - 1} )\Delta z}}({\mathbf r} )$ that characterizes the axial propagation:

$$f({\mathbf r},m\Delta z) = f({\mathbf r},\Delta z) \otimes {h_{(m - 1)\Delta z}}({\mathbf r}).$$

The numerical propagation of the recovered complex field effectively refocuses the sample features of a certain thickness, thereby facilitating further processing. However, refocusing cannot remove the light from areas above and below the plane of interest. The out-of-focus sample features become increasingly blurred due to the inherent trade-off between spatial resolution and depth of field (DOF). For thick samples, such out-of-focus blur produces a background that deteriorates the contrast and signal-to-noise ratio (SNR) of the in-focus feature. Therefore, the refocusing result needs to be further refined and used as an initial estimate for the upcoming optimization. Such a coarse-grained modeling strategy can increase the reconstruction speed by more than an order of magnitude while avoiding stagnation.
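The coarse-grained stage can be summarized in a few lines. The sketch below is our own transcription of Eqs. (5)-(7), with the names `coarse_layer` and `propagate` chosen by us and a small constant added to avoid division by zero where the spectrum vanishes:

```python
import numpy as np

def propagate(field, dz, dx, wavelength):
    """Angular-spectrum propagation of a complex field over a distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    arg = 1.0 / wavelength**2 - (fx[:, None]**2 + fx[None, :]**2)
    H = np.where(arg > 0, np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def coarse_layer(I, s, n_iter=10):
    """Eqs. (5)-(6): retrieve the first-layer field from one measurement I."""
    f = np.ones_like(s, dtype=complex)            # initial transmittance guess
    for _ in range(n_iter):
        G = np.fft.fft2(f * s)
        v = np.fft.ifft2(np.sqrt(I) * G / (np.abs(G) + 1e-12))   # Eq. (5)
        f = v * s + (f - v) * (1 - s)                            # Eq. (6)
    return f
```

Propagating the retrieved layer by $(m-1)\Delta z$ with `propagate` then gives the coarse estimate at slice $m$, as in Eq. (7).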

2.4 Fine-grained reconstruction through iterative optimization

By postponing the bulk of the processing until after coarse-grained modeling, our approach can leverage adequate constraints to effectively optimize coarse features in the later stage. Combining N illumination patterns ${s_n}({\mathbf r} )$ ($n = 1,2, \ldots ,N$) and the multi-slice forward model, an optimization procedure is used to recover 3D intensity and phase from the captured data. The intensity-only tomography is formulated as an optimization problem with an objective function:

$$\mathop {\min }\limits_{f({\mathbf r},m\Delta z)} \sum\nolimits_{n = 1}^N {{w_n}({\mathbf u}){{|{{I_n}({\mathbf u}) - {{|{{\cal F}\{ v({\mathbf r},M\Delta z)\} } |}^2}} |}^2}} ,$$
where ${w_n}({\mathbf u} )$ is a weighting function used to eliminate the adverse effects of saturated detector pixels. Equation (8) provides a metric of how well the current object estimate fits the measurements through the Euclidean distance between the estimated intensities and the measured intensities. Since the objective function cannot be minimized analytically, optimization-based iterative solvers are a promising alternative. Among them, due to the large-scale nature of 3D reconstruction, the gradient descent method is selected, which has relatively lower memory requirements and computational complexity compared with the alternating direction method of multipliers (ADMM) [39] or second-order methods [40].
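The objective of Eq. (8) is a weighted least-squares criterion and can be transcribed literally; here `I_est` stands in for the forward-model intensities $|{\cal F}\{ v({\mathbf r},M\Delta z)\} |^2$, and the function name is our own:

```python
import numpy as np

def tomography_loss(I_meas, I_est, weights):
    """Weighted sum of squared intensity residuals over all N patterns, Eq. (8)."""
    loss = 0.0
    for I_n, Ihat_n, w_n in zip(I_meas, I_est, weights):
        loss += np.sum(w_n * np.abs(I_n - Ihat_n) ** 2)
    return loss
```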

The iterative optimization process consists of initial estimation, the forward model, amplitude constraints in Fourier space, back-propagation, and support constraints in object space. The first layer of the 3D sample is multiplied by multiple structured illumination patterns ${s_n}({\mathbf r} )$ and the corresponding diffraction intensities are recorded, so that all layers of the 3D sample can be integrated into a 2D complex field for phase retrieval. Digital refocusing of the retrieved 2D complex field at different depths gives an initial guess of the complex transmittance function at the corresponding sample slice (Section 2.3). The coarse-grained modeling of the intensity and phase at each sample slice is then improved through the iterative phase retrieval process, combined with the multi-slice inversion procedure. At each iteration, the transmittance function of the sample is updated for each illumination pattern as follows:

  • (1) Starting from the current guess of the multi-slice transmittance function, the forward model (Section 2.2) is used to generate the estimate of the Fourier spectrum at the detector plane $\mathrm{{\cal F}}\{{{v_n}({{\mathbf r},M\Delta z} )} \}$, when illuminating with the $n$th illumination pattern.
  • (2) The exit field of the last layer is updated by replacing the estimated intensity with the actual measurement:
    $${\widehat v_n}({\mathbf r},M\Delta z) = {{\cal F}^{ - 1}}\{ {w_n}({\mathbf u})\sqrt {{I_n}({\mathbf u})} \frac{{{\cal F}\{ {v_n}({\mathbf r},M\Delta z)\} }}{{|{{\cal F}\{ {v_n}({\mathbf r},M\Delta z)\} } |}} + [1 - {w_n}({\mathbf u})]{\cal F}\{ {v_n}({\mathbf r},M\Delta z)\} \} .$$
  • (3) The wave field is back-propagated through the 3D sample, and the following steps are repeated until the first slice is reached. At the $m$th slice, the incident field and the transmittance function of this slice are updated using the regularized PIE (rPIE) [41]:
    $${\widehat u_n}({\mathbf r},m\Delta z) = {u_n}({\mathbf r},m\Delta z) + \frac{{{f^\ast }({\mathbf r},m\Delta z)[{{\widehat v}_n}({\mathbf r},m\Delta z) - {v_n}({\mathbf r},m\Delta z)]}}{{(1 - \alpha ){{|{f({\mathbf r},m\Delta z)} |}^2} + \alpha |{f({\mathbf r},m\Delta z)} |_{\max }^2}},$$
    $$\widehat f({\mathbf r},m\Delta z) = f({\mathbf r},m\Delta z) + \frac{{\widehat u_n^\ast ({\mathbf r},m\Delta z)[{{\widehat v}_n}({\mathbf r},m\Delta z) - {v_n}({\mathbf r},m\Delta z)]}}{{(1 - \beta ){{|{{{\widehat u}_n}({\mathbf r},m\Delta z)} |}^2} + \beta |{{{\widehat u}_n}({\mathbf r},m\Delta z)} |_{\max }^2}},$$
    where the superscript * denotes the complex conjugate, and $\alpha $ and $\beta $ are constant parameters for updating the incident field and the transmittance function, respectively. These parameters are selected as $\alpha = 0.3$ and $\beta = 0.6$ in the experiment. Since the two functions come as a product, the gradient descent procedure is used to separate the two updates. Then the updated exit field of the previous slice is obtained from the incident field of the current slice by
    $${\mathop{v}\limits^\frown_n}({\mathbf r},(m - 1)\Delta z) = {\widehat u_n}({\mathbf r},m\Delta z) \otimes {h_{ - \Delta z}}({\mathbf r}),$$
    where the convolution with ${h_{ - \Delta z}}({\mathbf r})$ denotes backward propagation over the distance $\Delta z$.
  • (4) At the first slice, the incident field is kept unchanged as the original illumination pattern with ${\hat{u}_n}({{\mathbf r},\Delta {z}} )= {s_n}({\mathbf r} )$ and the transmittance function is then updated by
    $$\widehat f({\mathbf r},\Delta z) = f({\mathbf r},\Delta z) + \frac{{s_n^\ast ({\mathbf r})[{{\widehat v}_n}({\mathbf r},\Delta z) - {v_n}({\mathbf r},\Delta z)]}}{{(1 - \beta ){{|{{s_n}({\mathbf r})} |}^2} + \beta |{{s_n}({\mathbf r})} |_{\max }^2}}.$$

The 3D complex transmittance function is iteratively updated for each illumination pattern, effectively implementing a nonlinear 3D deconvolution that reduces out-of-plane blur. The inverse problem is solved via a dual constrained optimization procedure with an effective initial assignment to make the iteration converge. The algorithm converges reliably within only a few iterations in all cases we tested.
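The per-slice updates of Eqs. (10)-(11) can be written as a small helper; the naming is ours, with $\alpha = 0.3$ and $\beta = 0.6$ as quoted above:

```python
import numpy as np

def rpie_update(u, f, v_new, v_old, alpha=0.3, beta=0.6):
    """One rPIE step: update incident field u and transmittance f at a slice."""
    dv = v_new - v_old                      # exit-field correction at this slice
    u_hat = u + np.conj(f) * dv / (
        (1 - alpha) * np.abs(f) ** 2 + alpha * np.max(np.abs(f)) ** 2)   # Eq. (10)
    f_hat = f + np.conj(u_hat) * dv / (
        (1 - beta) * np.abs(u_hat) ** 2 + beta * np.max(np.abs(u_hat)) ** 2)  # Eq. (11)
    return u_hat, f_hat
```

When the estimated and updated exit fields agree, both corrections vanish, which is the fixed point the iteration converges to.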

3. Simulation

The challenges for 3D reconstruction include the high sensitivity of reconstruction convergence to noise or missing experimental data, the limited ability to retrieve images of extended objects or complex-valued samples from few measurements, and the need to collect data with a high dynamic range [42]. Coded illumination with multiple modulations has been introduced as a feasible solution to the ambiguity challenge. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. For example, the single-shot complex-field tomography approach uses a diffuser in a ptychographic experiment to produce a randomly structured probe, and combines image priors to handle the ill-posed phase retrieval [17]. The diffuser provides diversity for diffraction measurement and reduces the dynamic range. However, a single measurement strongly depends on the sample prior, and different parameter settings (such as regularization coefficients) lead to diverse reconstruction results that may deviate from the ground truth. PIE, as multi-shot diffraction imaging, narrows the illumination beam and oversamples the diffraction pattern, which increases data redundancy for successful reconstruction. To the best of our knowledge, at least hundreds of patterns are required in existing methods for robust 3D reconstruction.

Our method adopts complementary structured illumination, which requires a minimum of four binary modulation patterns. For multi-slice ptychographic reconstruction, the data collection process is the same as for a single slice. This is advantageous because no additional sample rotation is needed to provide 3D information. It also extends the thickness limit that ptychography can deal with. To validate that four patterns are the minimum needed to recover high-fidelity results, different numbers of modulations were tested. The red rectangles of Fig. 3(a) and Fig. 3(b) show the reconstruction results and convergence performance under different modulation numbers, respectively. To ensure that all the signals in the field are multiplexed in the measurements, the duty cycle of each pattern is inversely proportional to the total modulation number. As the iteration number increases, the calculated absorption and RI errors for four or more modulation patterns become less than 0.05. More modulation patterns do not bring higher visual fidelity. When the modulation number is less than four, the measurements cannot provide enough constraints for algorithm convergence, resulting in unacceptable errors. In conclusion, four patterns are the minimum for high-fidelity 3D reconstruction in this case. The blue rectangle of Fig. 3(a) provides a comparison with the linear single-scattering model [43]. To investigate the performance of the models with highly scattering samples, reconstruction was performed using different numbers of illumination patterns. Compared with the linear single-scattering model, the results computed by our inverse solver show clearer and more accurate reconstructions. Our method recovers the most quantitative 3D information because of the high accuracy of the scattering model.


Fig. 3. (a) Reconstruction results of different numbers of illumination patterns using our method (red rectangle) and linear single-scattering model (blue rectangle). (b) The variation of error of retrieved absorption and RI (red rectangle) with iteration number, where the green, blue, and red lines represent the errors of using 10, 4, and 3 illumination patterns, respectively.


To evaluate different algorithms on a sample whose RI values are not homogeneous and which contains fine details, a synthetic cell phantom is generated for 3D RI recovery. Figure 4 shows the reconstruction results of a cell phantom using our method and the linear single-scattering model from 8 measurements. The differences in refractive index within this cell are sufficiently large that the cell is not a weakly scattering sample; thus, the reconstruction based on the linear single-scattering model is not accurate enough. Both the lateral and axial cross-sectional planes obtained with our method show excellent matching with the corresponding planes through the volume of the true simulated phantom.


Fig. 4. Reconstruction results of a cell phantom through our method and the linear single-scattering model. Left column: the $yz$ slice, with the $z$-positions of the $yx$ slices indicated by dashed lines. Second through fourth columns: $yx$ slices at the positions indicated in the left column.


4. Experimental results

In this section, experiments are conducted to prove the ability of the proposed technique to meet the above requirements under four modulation patterns. It is demonstrated experimentally with both resolution targets and biological samples that the required reconstruction quality and tomographic capability can be achieved with the proposed setup and algorithm.

4.1 Quantitative verification

We first demonstrate the improvement of coarse-grained modeling, which uses a two-slice test sample consisting of two resolution targets, one placed in conjugate with the DMD plane and the other axially spaced apart and rotated relative to the first one. The stacked resolution targets provide a convenient way to experimentally characterize lateral resolution at multiple depths. Figure 5(a) shows the comparison between coarse-to-fine recovery (third column) and only fine-grained recovery (first column). The enlarged images and the profiles along the blue and red line segments in the last row show the improvement in resolution using our proposed algorithm. Due to the rough approximation of integrating the multi-slice complex fields of the 3D sample into one layer, the 2D phase retrieval cannot accurately recover the sample. On this basis, the rough-grained modeling of digital refocusing cannot achieve high resolution (second column). The digitally refocused image cannot resolve small features and is used as an initial estimate for the energy distribution. Subsequent fine-grained reconstruction is required to use coarse-grained modeling as the initial assignment and iteratively update it to improve resolution and tomographic capabilities. Figure 5(b) compares the reconstruction of our multistage method with images captured from physical focus. The coarse-to-fine reconstruction using the two-layer model can restore line width of 1.23 $\mathrm{\mu}\mathrm{m}$ (group 8, element 5) at two depths over a field of view (FOV) of $770\mathrm{\mu}\mathrm{m} \times 510\mathrm{\mu}\mathrm{m}$. For high-resolution multi-slice reconstruction in a large FOV, the algorithm takes 2.83 minutes using Matlab on an Intel i7 3.6 GHz CPU computer with 16G RAM. The reconstructed resolution is better than the physically refocused image, because the resolution of the latter is inevitably affected by the content of other layers. 
Since our result represents the sample transmittance function, it differs from physical refocusing and largely eliminates out-of-plane blur from the other resolution target. In addition, the applied coherent transfer function (CTF) has a uniform frequency response within the passband, providing high frequency contrast [44].
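The digital refocusing step used in the coarse-grained stage can be sketched with the angular spectrum method. The Python (NumPy) snippet below is a minimal illustration rather than the paper's Matlab implementation; the grid size, pixel pitch `dx`, and the uniform test field are hypothetical.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex field over a distance dz with the angular spectrum
    method (dz, wavelength, and pixel pitch dx share the same length unit)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# hypothetical usage: refocus the single-layer retrieval result by 150 µm
field = np.ones((256, 256), dtype=complex)   # stand-in for the 2D recovery
refocused = angular_spectrum_propagate(field, dz=150.0, wavelength=0.532, dx=1.0)
```

A plane wave propagated this way keeps unit amplitude, which is a quick self-check that the transfer function is unit-modulus within the propagating band.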


Fig. 5. Absorption reconstruction of a two-slice sample consisting of two resolution targets placed at different depths (0 and 150 $\mathrm{\mu}\mathrm{m}$), with one rotated laterally with respect to the other. (a) Comparison with only fine-grained reconstruction (first column). The complex field obtained by approximating the 3D sample as one layer for 2D phase retrieval is digitally refocused for coarse-grained modeling (second column), and subsequent fine-grained reconstruction (third column) can recover high-resolution images with improved contrast, while also removing out-of-focus blur. Scale bar: 100 $\mathrm{\mu}\mathrm{m}$ (first row) and 10 $\mathrm{\mu}\mathrm{m}$ (third row). (b) Comparison with the result of physically changing the focus. (c) The reconstruction on a two-slice test sample consisting of two resolution targets at different intervals. Scale bar: 50 $\mathrm{\mu}\mathrm{m}$ (first row) and 10 $\mathrm{\mu}\mathrm{m}$ (third row).


The resolving power of the system is limited by its spectrum collection ability: the larger the collection angle of the diffracted field from the targeted sample structure, the higher the achievable resolution [1]. For this coherent imaging system, the diffraction-limited resolution is $\frac{\lambda }{{NA}} = \frac{{0.532\,\mathrm{\mu}\mathrm{m}}}{{0.25}} = 2.128\,\mathrm{\mu}\mathrm{m}$, corresponding to a line width of $1.064\,\mathrm{\mu}\mathrm{m}$, which is close to the experimental value reported above. The depth (z-axis) sectioning is theoretically predicted by the 3D coherent transfer function (CTF) of the imaging system [45,46]. The axial resolution can be calculated as $\frac{\lambda }{{{n_m} - \sqrt {n_m^2 - N{A^2}} }} = \frac{{0.532\,\mathrm{\mu}\mathrm{m}}}{{1 - \sqrt {1 - {{0.25}^2}} }} = 16.75\,\mathrm{\mu}\mathrm{m}$, where ${n_m}$ is the RI of the surrounding medium. This estimate sets a lower bound on the achievable axial resolution. The corresponding experimental verification, achieving an axial resolution of 20 $\mathrm{\mu}\mathrm{m}$ close to the theoretical value, can be found in Fig. 5(c). As the slice spacing drops below this theoretical limit, the influence of the other layer becomes more pronounced and the tomographic capability degrades significantly.
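These two diffraction-limit formulas can be checked numerically; a short script using the system parameters quoted above ($n_m = 1$ for air is assumed):

```python
import numpy as np

# System parameters from the text
wavelength = 0.532   # µm
NA = 0.25
n_m = 1.0            # RI of the surrounding medium (air)

lateral_res = wavelength / NA                              # lambda / NA
line_width = lateral_res / 2                               # one bar of a line pair
axial_res = wavelength / (n_m - np.sqrt(n_m**2 - NA**2))   # coherent CTF axial limit

print(f"lateral {lateral_res:.3f} µm, line width {line_width:.3f} µm, "
      f"axial {axial_res:.2f} µm")
# lateral 2.128 µm, line width 1.064 µm, axial 16.75 µm
```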

4.2 Biological experiments

In order to demonstrate the practical applications of the proposed technique, we first imaged an unstained Polyrhachis vicina Roger exhibiting both absorption and phase effects, and performed multi-slice reconstruction over a FOV of $430\,\mathrm{\mu}\mathrm{m} \times 430\,\mathrm{\mu}\mathrm{m} \times 150\,\mathrm{\mu}\mathrm{m}$. The phase (a property of the light field) can be translated into RI (a property of the sample) [47]. Our reconstruction framework can produce volumetric images of RI by stacking individual phase images at different planes and accounting for the effective thickness of each slice. The phase term can be related to the RI of the sample by taking the 2D screen of infinitesimally small thickness $\delta z$ to instead represent an average transmittance over a finite thickness $\Delta z$, which yields $\phi ({{\mathbf r},m\Delta z} )= \frac{{2\pi }}{\lambda }\Delta z[{n({{\mathbf r}, m\Delta z} )- {n_m}} ]$, where $\phi ({{\mathbf r},m\Delta z} )$ is the phase corresponding to $f({{\mathbf r},m\Delta z} )$, $n({{\mathbf r}, m\Delta z} )$ is the locally varying RI of the sample, and ${n_m}$ is the RI of the surrounding medium.
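Inverting this relation yields the slice-averaged RI from a reconstructed phase map. A minimal sketch, where the function name and the example numbers (water as the medium, a 10 µm slice, 0.5 rad of phase delay) are illustrative assumptions:

```python
import numpy as np

def phase_to_ri(phase, wavelength, dz, n_m):
    """Invert phi(r, m*dz) = (2*pi/lambda) * dz * (n(r, m*dz) - n_m)
    to recover the slice-averaged RI from a reconstructed phase map."""
    return n_m + phase * wavelength / (2.0 * np.pi * dz)

# hypothetical numbers: 0.5 rad of phase delay across a 10 µm slice in water
phase_map = np.full((2, 2), 0.5)
n = phase_to_ri(phase_map, wavelength=0.532, dz=10.0, n_m=1.33)
```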

The reconstruction shows true depth sectioning, allowing us to generate 3D renderings of the sample, as shown in the left part of Fig. 6. Three closely spaced z-slices of the reconstructed 3D sample are extracted for display. Each z-slice contains sample features that are absent from the adjacent z-slices. For example, the leg structure in the lower left of the $z = 0\,\mathrm{\mu}\mathrm{m}$ plane almost completely disappears in the $z = 50\,\mathrm{\mu}\mathrm{m}$ plane. Figure 6 also shows a reconstruction using only fine-grained recovery as a comparison on biological data: without coarse-grained modeling, the noise is more obvious and the SNR is worse. The depth of field (DOF) of coherent imaging is relatively large, so it cannot by itself provide a depth reference: out-of-focus information is integrated with in-focus content in the 2D image, making the two hard to distinguish. Therefore, the presence of these structures is confirmed by mechanical scanning with a standard microscope (incoherent imaging) [right part of Fig. 6]. The $10 \times $ standard microscope uses an LED as the incoherent illumination source for scanning and imaging, and yields distinct intensity variation with depth. The axial positions of the recovered structures are consistent with those of the scanned imaging structures. In addition, our results can segment each specific plane of interest and greatly reduce the influence of other layers.


Fig. 6. Experimental results of a continuous 3D sample (Polyrhachis vicina Roger). (Left) 3D rendering of the recovered RI. (Right) Some example slices of the complex transmittance function using coarse-to-fine reconstruction, compared with the results of only fine-grained recovery and physical focusing using the same objective lens. The 3D reconstruction of our method naturally displays the quantitative transmittance function at each depth and builds up high contrast contours that separate distinct structures.


One of the key objectives for the proposed reconstruction is to maintain rapid tomographic reconstruction of complex samples for imaging living, dynamic biological samples. To prove this capability, we applied it to time-series measurements of the Caenorhabditis elegans (C. elegans) worm. A common method of C. elegans imaging makes use of spinning-disk confocal microscopy [48,49]. Instead of resorting to mechanical scanning, we adopted ptychographic tomography with multistage reconstruction to resolve fine details of the freely moving worm and monitor it over time. Figure 7 highlights the wealth of information recovered from this complex, dynamic biological sample. Figure 7(a) shows a depth-coded projection of the volumetric reconstruction of the entire worm, where the total volume is $660\,\mathrm{\mu}\mathrm{m} \times 170\,\mathrm{\mu}\mathrm{m} \times 40\,\mathrm{\mu}\mathrm{m}$; reconstructing this 3D structure took 4.36 minutes. In Fig. 7(b), two lateral slices through the reconstruction volume are shown at axial positions of $z = 0$ and 20 $\mathrm{\mu}\mathrm{m}$, respectively. Compared with the results using the linear single-scattering model [43] in Fig. 7(c), our method shows significant noise suppression, enhanced depth sectioning, and improved feature recovery.


Fig. 7. Tomographic reconstruction of the freely moving C. elegans. (a) The depth-coded projection of volumetric reconstruction of the entire worm. (b) Lateral slices through the 3D reconstruction volume at axial positions $z = 0, 20\,\mathrm{\mu}\mathrm{m}$. (c) The comparison of (b) using linear single-scattering model. (d)-(f) The lateral slices of the white rectangle in (b) at times t = 0 s, 0.13 s, and 1.6 s, respectively. The orange rectangles in (d), (e) highlight the dynamics of the pharynx.


In addition, an important application enabled by our system is 3D tracking throughout the entire volume. Continuous tracking of a single worm is feasible, and our system can also be adapted to monitor whole-body posture and locomotion-related behaviors in 3D. These results are obtained at full frame rate without any active worm tracking. Results for specific time points are shown in Figs. 7(d)–7(f). The orange rectangles in Figs. 7(d), 7(e) highlight the dynamics of the pharynx, a muscular food pump in the head of C. elegans that is triangular in cross-section and grinds food before transporting it directly to the intestine. Figure 7(e) shows the movement of prominent structures relative to Fig. 7(d), while Fig. 7(f) exhibits larger-scale morphological changes of the head. This experiment opens applications in evaluating the temporal dynamics of biological samples with depth-resolved information. The high-fidelity recovery of 3D features in temporal data demonstrates its utility for general dynamic sample imaging.

5. Discussion

5.1 Effect of multistage reconstruction

The above experiments verify that the proposed multistage reconstruction method uses coarse-grained modeling to accelerate iterative optimization across multiple slices and produces satisfactory results from only four measurements.

Most tomography methods adopt direct recovery, choosing a simple initial estimate such as a single raw image padded along all three dimensions or a 3D array of constant values. These arbitrary initializers are effective for FPT or PIE, which exploit redundancy in ptychographic data to relax the stringent experimental conditions initially required. The general belief is that gradient-based optimization of high-capacity samples requires many iterative steps over many measurements to perform well [50,51]. Although traditional 3D reconstructions have shown great success in the large-data regime, they generally perform poorly with few measurements, for two reasons. First, gradient-based optimization cannot guarantee convergence within a few steps, especially for non-convex problems. Second, direct reconstruction from a random initialization considerably weakens the ability to converge to the global minimum even after many updates. A systematic way is therefore needed to ensure that the coarse-grained modeling provides an optimal starting point for the subsequent optimization in multistage reconstruction.

For experiments using only four measurements, it is necessary to compare coarse-to-fine reconstruction against only fine-grained reconstruction. Figure 8 shows the impact of energy matching and coarse-grained modeling on the reconstruction of Fig. 5. The accuracy of the experimental reconstruction can be quantified by the error of the retrieved Fourier intensity [Fig. 8(a)] [52,53], $E = \frac{1}{N}\sum_{n = 1}^{N} \frac{\sum_{\mathbf u} \big| |\mathcal{F}\{v_n({\mathbf r},M\Delta z)\}|^2 - I_n({\mathbf u}) \big|^2}{\sum_{\mathbf u} |I_n({\mathbf u})|^2}$. To evaluate the reconstruction quality more comprehensively, the root mean square error (RMSE) of the retrieved object absorption, computed against the known ground truth of the stacked resolution targets, is also given in Fig. 8(b). The 3D complex transmittance function of the sample, when input to the forward model, produces a wavefront at the detector plane whose energy matches the corresponding measured diffraction intensity. Therefore, a good initial estimate has an energy matching the input Fourier intensity and further fits the ground truth. The error value at iteration 0 is the degree of match between the initial estimate and the measured intensity. Energy matching is the more critical factor (blue lines versus red lines), because it determines the main energy distribution. Although a random estimate with matched energy benefits subsequent optimization, the optimization performance degrades significantly when the data deviate strongly from this estimate. On this basis, more accurate results can be reconstructed through coarse-grained modeling closer to the target value (blue dotted line). The energy-mismatched coarse-grained modeling uses $f({{\mathbf r}, m\Delta z} )$ in Eq. (7); the corresponding energy-matched result further improves this to $f{({{\mathbf r}, m\Delta z} )^{1/N}}$.
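The error metric $E$ can be written compactly in code. The sketch below (Python/NumPy) assumes square detector grids and uses an unscaled 2D FFT as the operator $\mathcal{F}$; the helper name is hypothetical:

```python
import numpy as np

def fourier_intensity_error(exit_fields, intensities):
    """Normalized mismatch, averaged over the N patterns, between predicted
    Fourier intensities |F{v_n}|^2 and measured diffraction intensities I_n."""
    errs = [np.sum(np.abs(np.abs(np.fft.fft2(v))**2 - I)**2) / np.sum(np.abs(I)**2)
            for v, I in zip(exit_fields, intensities)]
    return float(np.mean(errs))

# sanity check: the error vanishes when the prediction reproduces the data
rng = np.random.default_rng(0)
v = rng.random((32, 32)) + 1j * rng.random((32, 32))
I = np.abs(np.fft.fft2(v))**2
err = fourier_intensity_error([v], [I])
```

Because each summand is normalized by the measured energy, the metric is comparable across patterns with different total intensities.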


Fig. 8. The impact of matching energy and coarse-grained modeling on the reconstruction of Fig. 5, quantified by the variation of the error of the retrieved Fourier intensity (a) and retrieved object absorption (b) with iteration number. The curves with different colors (blue/red) indicate whether the initial energy matches the intensity of the captured image, while the two line types (dashed/solid) indicate whether coarse-grained modeling is used.


The coarse-grained initialization brings the initial value close to the global minimum, enabling global convergence. Only fine-grained reconstruction simply assigns the initial estimate, which in most cases is not close to the true value; the algorithm then alternately projects the function between the dual constraints until it gets stuck in a local minimum. In contrast, the initial assignment provided by coarse-grained modeling in multistage reconstruction lies near the global minimum, making its ability to escape local minima far superior to that of direct reconstruction. In short, although iterative methods often get stuck in local minima with few measurements, the multistage coarse-to-fine reconstruction algorithm helps to avoid this problem and converge to the global solution, and is thus superior to traditional direct reconstruction algorithms under few measurements.

5.2 Analysis of complementary structured illumination

The choice of illumination patterns and the quantitative evaluation of reconstruction fidelity in the experiment are further investigated. The size of each square in one of the four patterns is set to $4 \times 4$, $10 \times 10$, or $20 \times 20$ micromirrors to compare the field $v({{\mathbf r},m\Delta z} )$ after each slice. The exit-field amplitude of each layer in Fig. 9(a) shows how strongly the different layers modulate the propagating field. A smaller square produces stronger diffraction and hence more pronounced differences between the modulation patterns at different layers, enhancing the tomographic ability of the reconstruction. Conversely, a larger square deforms less as it propagates to other layers, creating ambiguity within the range of the multiplicative approximation and making the z-axis resolution impossible to guarantee. A large square also brings a benefit: approximating multiple layers as one layer in the coarse-grained modeling process becomes more effective, which raises the resolution of the subsequent iterative optimization. The reconstruction results for the different illumination patterns are shown in Fig. 9(b), where the lateral resolution in the top row and the axial resolution in the bottom row are relatively low. Consequently, the illumination patterns need to realize low axial correlations between the propagating fields at neighboring layers while meeting the approximation requirements of coarse-grained modeling. This leads us to set the size of each square in the patterns to $10 \times 10$ micromirrors, which balances reconstruction fidelity and sectioning capability.


Fig. 9. The field amplitude $|{v({{\mathbf r},m\Delta z} )} |$ after each slice (a) and recovered absorption images (b) of the experiment in Fig. 5(c) versus different illumination patterns. The size of each square in the pattern is set to $4 \times 4$, $10 \times 10$, and $20 \times 20$ micromirrors, respectively.


5.3 Guarantee of uniqueness

Uniqueness is an important aspect of the inverse problem: the constraints on the sample must be strong enough to make the solution unique. Generally, measuring only the diffraction intensity discards the crucial phase portion and results in an underdetermined reconstruction. For the modulus constraints provided by the Fourier intensity, there is an equivalence class of solutions related by translation, inversion, and a global phase factor [54]: $f({{\mathbf r} + {{\mathbf r}_0}} )$, ${f^\mathrm{\ast }}({ - {\mathbf r}} )$, and $f({\mathbf r} ){e^{i{\phi _0}}}$, where ${{\mathbf r}_0}$ is a real vector in object-space coordinates and ${\phi _0}$ is a real constant. In addition, noise and limited prior knowledge (loose constraints) increase the number of solutions. To eliminate these ambiguities, support constraints in object space are necessary to ensure the uniqueness of the solution. The dual constraints provide sufficient structural information to solve the inverse problem of 2D reconstruction, and the steps of modulation and wave propagation between the modulator and the detector plane yield rapid and reliable algorithmic convergence.
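The invariance of the Fourier modulus under this equivalence class is easy to verify numerically. In the small NumPy check below, a circular shift stands in for translation $f({\mathbf r} + {\mathbf r}_0)$ on the discrete grid, and array reversal (up to a grid shift) stands in for inversion:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((64, 64)) + 1j * rng.random((64, 64))
I = np.abs(np.fft.fft2(f))**2          # measured Fourier intensity

# global phase factor: f(r) * exp(i*phi0)
I_phase = np.abs(np.fft.fft2(f * np.exp(1j * 0.7)))**2

# translation: f(r + r0), realized as a circular shift on the grid
I_shift = np.abs(np.fft.fft2(np.roll(f, (5, -3), axis=(0, 1))))**2

# conjugate inversion: f*(-r), the "twin image"
I_twin = np.abs(np.fft.fft2(np.conj(f[::-1, ::-1])))**2
```

All three transformed objects produce exactly the same Fourier intensity as `f`, which is why intensity data alone cannot distinguish them.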

However, there is usually more than one solution for the recovered layers of a 3D object that satisfies these constraints. For instance, if the correct solution for multiple object slices is $f({{\mathbf r}, m\Delta z} )$, then ${c_m}f({{\mathbf r}, m\Delta z} )$ is also a valid solution, where ${c_m}$ is a complex constant and satisfies $\mathop \prod \limits_{m = 1}^M {c_m} = 1$. The complex transmittance of the reconstructed slices can be related to each other by a constant scale factor. This is similar to the case of 2D phase retrieval, where the scaling of the reconstructed image intensity is arbitrary and the reconstructed phase is relative. An additional constraint restricting the maximum modulus of each slice can be employed to alleviate this ambiguity.
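This slice-scaling ambiguity can be reproduced in a toy multiplicative multi-slice model. The sketch below is not the paper's forward model: it uses an arbitrary unit-modulus Fourier-domain kernel `H` as a stand-in for the actual propagator, and all arrays are hypothetical test data.

```python
import numpy as np

def exit_field(slices, illumination, H):
    """Multiplicative multi-slice model: multiply by each slice transmittance,
    propagating between slices via the Fourier-domain kernel H."""
    u = illumination
    for m, f in enumerate(slices):
        u = u * f
        if m < len(slices) - 1:
            u = np.fft.ifft2(np.fft.fft2(u) * H)
    return u

rng = np.random.default_rng(2)
shape = (64, 64)
f1 = np.exp(1j * rng.random(shape))     # slice transmittance functions
f2 = np.exp(1j * rng.random(shape))
s = rng.random(shape)                   # structured illumination pattern
H = np.exp(1j * rng.random(shape))      # placeholder unit-modulus propagator

c = 2.0 * np.exp(1j * 0.3)              # c_1 = c, c_2 = 1/c, so the product is 1
v_true = exit_field([f1, f2], s, H)
v_scaled = exit_field([c * f1, f2 / c], s, H)
# v_true and v_scaled coincide, so every measured intensity coincides as well
```

Because propagation is linear, scaling one slice by $c$ and a later slice by $1/c$ cancels exactly in the exit field, which is the ambiguity described above.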

A second ambiguity arises when two adjacent slices of the object, $f({{\mathbf r}, m\Delta z} )$ and $f({{\mathbf r}, ({m + 1} )\Delta z} )$, are in close proximity, well within the bounds of the multiplicative approximation. Then $a({\mathbf r} )f({{\mathbf r}, m\Delta z} )$ and ${a^{ - 1}}({\mathbf r} )f({{\mathbf r}, ({m + 1} )\Delta z} )$ also form a solution, where $a({\mathbf r} )$ is any complex-valued function. In this case, adjacent slices effectively lie in the same plane and cannot form a meaningful 3D distribution. To resolve the ambiguity caused by a small interval $\Delta z$, it is necessary to ensure that the deformation of the modulation patterns between layers is sufficient for tomographic reconstruction [Fig. 9]; the size of the pattern square is correspondingly reduced when the interval between adjacent layers is small.

6. Conclusion

In conclusion, we have proposed a label-free and motion-free intensity diffraction tomography technique to recover 3D samples. Our tomographic imaging technique offers several notable advantages over alternative strategies. Intensity-only acquisitions enable phase-stable and reference-free measurements with a simpler and cheaper optical system. Moreover, the sample is not required to be weakly scattering or sparse, which opens up 3D tomography to a more general class of samples.

Factors such as increasing the number of acquisitions (to increase the likelihood of correct convergence) or the resolution of the reconstruction volume further slow the computation. To address this, the 3D complex field of a sample is reconstructed from intensity-only images using just four modulation patterns provided by a DMD. Through a multistage algorithm combining coarse-grained modeling and fine-grained reconstruction, the physical capabilities of the imaging system can be extended and high-fidelity holographic and tomographic imaging realized.

The reconstructed wave field provides structural information that is extensively utilized to study morphological parameters and biological information. The technique is swiftly developing into a mainstream tool, with a growing list of applications across a range of imaging modalities. Owing to its simplicity, versatility, and excellent performance, we believe the proposed technique will become useful across a wide range of spectral regions, including electron beams, x-rays, infrared light, and terahertz radiation. Our present efforts can be advanced further: compressive sensing reconstruction strategies and deep learning techniques can be applied to improve running efficiency and reconstruction quality.

Funding

National Natural Science Foundation of China (61327902, 61927802).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. V. Micó, J. Zheng, J. Garcia, Z. Zalevsky, and P. Gao, “Resolution enhancement in quantitative phase microscopy,” Adv. Opt. Photonics 11(1), 135 (2019). [CrossRef]  

2. Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12(10), 578–589 (2018). [CrossRef]  

3. D. Jin, R. Zhou, Z. Yaqoob, and P. T. C. So, “Tomographic phase microscopy: principles and applications in bioimaging [Invited],” J. Opt. Soc. Am. B 34(5), B64–B77 (2017). [CrossRef]  

4. R. Ling, W. Tahir, H. Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9(5), 2130–2141 (2018). [CrossRef]  

5. K. Kim, J. Yoon, and Y. Park, “Simultaneous 3D visualization and position tracking of optically trapped particles using optical diffraction tomography,” Optica 2(4), 343 (2015). [CrossRef]  

6. K. Lee, K. Kim, G. Kim, S. Shin, and Y. Park, “Time-multiplexed structured illumination using a DMD for optical diffraction tomography,” Opt. Lett. 42(5), 999–1002 (2017). [CrossRef]  

7. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2(6), 517 (2015). [CrossRef]  

8. K. Kim, Z. Yaqoob, K. Lee, J. W. Kang, Y. Choi, P. Hosseini, P. T. So, and Y. Park, “Diffraction optical tomography using a quantitative phase imaging unit,” Opt. Lett. 39(24), 6935–6938 (2014). [CrossRef]  

9. M. Habaza, M. Kirschbaum, C. Guernth-Marschner, G. Dardikman, I. Barnea, R. Korenstein, C. Duschl, and N. T. Shaked, “Rapid 3D Refractive-Index Imaging of Live Cells in Suspension without Labeling Using Dielectrophoretic Cell Rotation,” Adv. Sci. 4(2), 1600205 (2017). [CrossRef]  

10. M. Habaza, B. Gilboa, Y. Roichman, and N. T. Shaked, “Tomographic phase microscopy with 180 degrees rotation of live cells in suspension by holographic optical tweezers,” Opt. Lett. 40(8), 1881–1884 (2015). [CrossRef]  

11. B. Bhaduri, C. Edwards, H. Pham, R. Zhou, T. H. Nguyen, L. L. Goddard, and G. Popescu, “Diffraction phase microscopy: principles and applications in materials and life sciences,” Adv. Opt. Photonics 6(1), 57 (2014). [CrossRef]  

12. D. N. Wadduwage, V. R. Singh, H. Choi, Z. Yaqoob, H. Heemskerk, P. Matsudaira, and P. T. C. So, “Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy,” Optica 4(5), 546 (2017). [CrossRef]  

13. T. E. Gureyev, T. J. Davis, A. Pogany, S. C. Mayo, and S. W. Wilkins, “Optical phase retrieval by use of first Born- and Rytov-type approximations,” Appl. Opt. 43(12), 2418–2430 (2004). [CrossRef]  

14. M. Chen, D. Ren, H.-Y. Liu, S. Chowdhury, and L. Waller, “Multi-layer Born multiple-scattering model for 3D phase microscopy,” Optica 7(5), 394 (2020). [CrossRef]  

15. T.-a. Pham, E. Soubies, A. Ayoub, J. Lim, D. Psaltis, and M. Unser, “Three-dimensional optical diffraction tomography with Lippmann-Schwinger model,” IEEE Trans. Comput. Imaging 6, 727–738 (2020). [CrossRef]  

16. J. Lim, A. B. Ayoub, E. E. Antoine, and D. Psaltis, “High-fidelity optical diffraction tomography of multiple scattering samples,” Light: Sci. Appl. 8(1), 82 (2019). [CrossRef]  

17. R. Horisaki, K. Fujii, and J. Tanida, “Diffusion-based single-shot diffraction tomography,” Opt. Lett. 44(8), 1964 (2019). [CrossRef]  

18. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1 (2018). [CrossRef]  

19. C. Zuo, J. Li, J. Sun, Y. Fan, J. Zhang, L. Lu, R. Zhang, B. Wang, L. Huang, and Q. Chen, “Transport of intensity equation: a tutorial,” Opt. Lasers Eng. 135, 106187 (2020). [CrossRef]  

20. S. K. Rajput, M. Kumar, X. Quan, M. Morita, T. Furuyashiki, Y. Awatsuji, E. Tajahuerce, and O. Matoba, “Three-dimensional fluorescence imaging using the transport of intensity equation,” J. Biomed. Opt. 25(03), 1–7 (2019). [CrossRef]  

21. J. A. Rodrigo, J. M. Soto, and T. Alieva, “Fast label-free microscopy technique for 3D dynamic quantitative imaging of living cells,” Biomed. Opt. Express 8(12), 5507–5517 (2017). [CrossRef]  

22. M. Kahnt, J. Becher, D. Brückner, Y. Fam, T. Sheppard, T. Weissenberger, F. Wittwer, J.-D. Grunwaldt, W. Schwieger, and C. G. Schroer, “Coupled ptychography and tomography algorithm improves reconstruction of experimental data,” Optica 6(10), 1282 (2019). [CrossRef]  

23. T. M. Godden, R. Suman, M. J. Humphry, J. M. Rodenburg, and A. M. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22(10), 12513–12523 (2014). [CrossRef]  

24. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

25. C. Zuo, J. Sun, J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020). [CrossRef]  

26. S. Chowdhury, M. Chen, R. Eckert, D. Ren, F. Wu, N. Repina, and L. Waller, “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6(9), 1211 (2019). [CrossRef]  

27. J. Li, A. Matlock, Y. Li, Q. Chen, L. Tian, and C. Zuo, “Resolution-enhanced intensity diffraction tomography in high numerical aperture label-free microscopy,” Photonics Res. 8(12), 1818 (2020). [CrossRef]  

28. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104 (2015). [CrossRef]  

29. F. Zhang, B. Chen, G. R. Morrison, J. Vila-Comamala, M. Guizar-Sicairos, and I. K. Robinson, “Phase retrieval by coherent modulation imaging,” Nat. Commun. 7(1), 13367 (2016). [CrossRef]  

30. T. R. Crimmins and J. R. Fienup, “Uniqueness of phase retrieval for functions with sufficiently disconnected support,” J. Opt. Soc. Am. 73(2), 218 (1983). [CrossRef]  

31. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4(1), 118 (1987). [CrossRef]  

32. A. Berdeu, F. Momey, B. Laperrousaz, T. Bordy, X. Gidrol, J. M. Dinten, N. Picollet-D’hahan, and C. Allier, “Comparative study of fully three-dimensional reconstruction algorithms for lens-free microscopy,” Appl. Opt. 56(13), 3939–3951 (2017). [CrossRef]  

33. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29(8), 1606–1614 (2012). [CrossRef]  

34. K. Petras, S. Ten Oever, C. Jacobs, and V. Goffaux, “Coarse-to-fine information integration in human vision,” NeuroImage 186, 103–112 (2019). [CrossRef]  

35. W. Jiang, K. Huang, J. Geng, and X. Deng, “Multi-Scale Metric Learning for Few-Shot Learning,” IEEE Trans. Circuits Syst. Video Technol. 31(3), 1091–1102 (2021). [CrossRef]  

36. S. Xiao, H. Gritton, H.-A. Tseng, D. Zemel, X. Han, and J. Mertz, “High-contrast multifocus microscopy with a single camera and z-splitter prism,” Optica 7(11), 1477 (2020). [CrossRef]  

37. R. Claveau, P. Manescu, M. Elmi, V. Pawar, M. Shaw, and D. Fernandez-Reyes, “Digital refocusing and extended depth of field reconstruction in Fourier ptychographic microscopy,” Biomed. Opt. Express 11(1), 215–226 (2020). [CrossRef]  

38. J. Hu, X. Xie, and Y. Shen, “Quantitative phase imaging based on wavefront correction of a digital micromirror device,” Opt. Lett. 45(18), 5036 (2020). [CrossRef]  

39. S. Ramani and J. A. Fessler, “A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction,” IEEE Trans. Med. Imaging 31(3), 677–688 (2012). [CrossRef]  

40. R. Battiti, “First- and Second-Order Methods for Learning: Between Steepest Descent and Newton’s Method,” Neural Computation 4(2), 141–166 (1992). [CrossRef]  

41. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736 (2017). [CrossRef]  

42. M. Li, L. Bian, and J. Zhang, “Coded coherent diffraction imaging with reduced binary modulations and low-dynamic-range detection,” Opt. Lett. 45(16), 4373–4376 (2020). [CrossRef]  

43. J. Li, A. Matlock, Y. Li, Q. Chen, C. Zuo, and L. Tian, “High-speed in vitro intensity diffraction tomography,” Adv. Photonics 1(06), 1 (2019). [CrossRef]  

44. Y. Deng and D. Chu, “Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays,” Sci. Rep. 7(1), 5893 (2017). [CrossRef]  

45. N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A 2(2), 121 (1985). [CrossRef]  

46. S. S. Kou and C. J. Sheppard, “Image formation in holographic tomography: high-aperture imaging conditions,” Appl. Opt. 48(34), H168–175 (2009). [CrossRef]  

47. P. Ledwig and F. E. Robles, “Quantitative 3D refractive index tomography of opaque samples in epi-mode,” Optica 8(1), 6–14 (2021). [CrossRef]  

48. V. Venkatachalam, N. Ji, X. Wang, C. Clark, J. K. Mitchell, M. Klein, C. J. Tabone, J. Florman, H. Ji, J. Greenwood, A. D. Chisholm, J. Srinivasan, M. Alkema, M. Zhen, and A. D. Samuel, “Pan-neuronal imaging in roaming Caenorhabditis elegans,” Proc. Natl. Acad. Sci. U. S. A. 113(8), E1082–1088 (2016). [CrossRef]  

49. J. P. Nguyen, F. B. Shipley, A. N. Linder, G. S. Plummer, M. Liu, S. U. Setru, J. W. Shaevitz, and A. M. Leifer, “Whole-brain calcium imaging with cellular resolution in freely behaving Caenorhabditis elegans,” Proc. Natl. Acad. Sci. U. S. A. 113(4), 1074–1079 (2016). [CrossRef]  

50. J. Nie, N. Xu, M. Zhou, G. Yan, and Z. Wei, “3D Model classification based on few-shot learning,” Neurocomputing 398, 539–546 (2020). [CrossRef]  

51. S. Ravi and H. Larochelle, “Optimization as a model for few-shot learning,” in ICLR 2017 (2017).

52. Y. H. Lo, L. Zhao, M. Gallagher-Jones, A. Rana, J. J. Lodi, W. Xiao, B. C. Regan, and J. Miao, “In situ coherent diffractive imaging,” Nat. Commun. 9(1), 1826 (2018). [CrossRef]  

53. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

54. E. Malm, E. Fohtung, and A. Mikkelsen, “Multi-wavelength phase retrieval for coherent diffractive imaging,” Opt. Lett. 46(1), 13 (2021). [CrossRef]  




Figures (9)

Fig. 1.
Fig. 1. Ptychographic tomography from intensity-only measurements (d) compared to current optical diffraction tomography (ODT) techniques (a)-(c). (a) Interferometry-based methods capture the interference between the object beam and the reference beam under various illumination angles to reconstruct the 3D complex field of the sample. (b) Ptychographic iterative engine (PIE) translates the sample through a grid of positions and records the diffraction pattern at each position. (c) Fourier ptychographic tomography (FPT) allows 3D complex field recovery using angled illumination-only intensity measurements. (d) The proposed ptychographic tomography uses a sequence of illumination patterns and a single focusing lens. The 3D complex field can be retrieved from the intensity distribution measured in the focal plane of the lens. The same operation principle can be applied, if the light from the sample is modulated by a set of patterns generated on a spatial light modulator (SLM) instead of the configuration shown in the figure.
Fig. 2. (a) Experimental verification of the proposed technique. The image sensor captures the corresponding Fourier intensities ${I_n}({\mathbf r} )$ under different illumination patterns ${s_n}({\mathbf r} )$ provided by the DMD. (b) The process of combining coarse-grained modeling and fine-grained reconstruction. The expressions $s({\mathbf r} )$, $f({{\mathbf r}, m\Delta z} )$, $u({{\mathbf r}, m\Delta z} )$ and $v({{\mathbf r}, m\Delta z} )$ in the figure are abbreviated as $s$, ${f_m}$, ${u_m}$ and ${v_m}$, respectively. (c) The simulated Fourier intensities collected with the structured pattern and the full-open pattern using the same maximum exposure (Note: there is an overexposed area in the middle due to the high dynamic range of the Fourier intensity).
Fig. 3. (a) Reconstruction results of different numbers of illumination patterns using our method (red rectangle) and linear single-scattering model (blue rectangle). (b) The variation of error of retrieved absorption and RI (red rectangle) with iteration number, where the green, blue, and red lines represent the errors of using 10, 4, and 3 illumination patterns, respectively.
Fig. 4. Reconstruction results of a cell phantom through our method and the linear single-scattering model. Left column: the $yz$ slice, with the $z$-positions of the $yx$ slices indicated by dashed lines. Second through fourth columns: $yx$ slices at the positions indicated in the left column.
Fig. 5. Absorption reconstruction of a two-slice sample consisting of two resolution targets placed at different depths (0 and 150 $\mathrm{\mu}\mathrm{m}$), with one rotated laterally with respect to the other. (a) Comparison with only fine-grained reconstruction (first column). The complex field obtained by approximating the 3D sample as one layer for 2D phase retrieval is digitally refocused for coarse-grained modeling (second column), and subsequent fine-grained reconstruction (third column) can recover high-resolution images with improved contrast, while also removing out-of-focus blur. Scale bar: 100 $\mathrm{\mu}\mathrm{m}$ (first row) and 10 $\mathrm{\mu}\mathrm{m}$ (third row). (b) Comparison with the result of physically changing the focus. (c) The reconstruction on a two-slice test sample consisting of two resolution targets at different intervals. Scale bar: 50 $\mathrm{\mu}\mathrm{m}$ (first row) and 10 $\mathrm{\mu}\mathrm{m}$ (third row).
Fig. 6. Experimental results of a continuous 3D sample (Polyrhachis vicina Roger). (Left) 3D rendering of the recovered RI. (Right) Some example slices of the complex transmittance function using coarse-to-fine reconstruction, compared with the results of only fine-grained recovery and physical focusing using the same objective lens. The 3D reconstruction of our method naturally displays the quantitative transmittance function at each depth and builds up high contrast contours that separate distinct structures.
Fig. 7. Tomographic reconstruction of the freely moving C. elegans. (a) The depth-coded projection of the volumetric reconstruction of the entire worm. (b) Lateral slices through the 3D reconstruction volume at axial positions $z = 0, 20\,\mathrm{\mu}\mathrm{m}$. (c) The same slices as (b) reconstructed with the linear single-scattering model for comparison. (d)-(f) The lateral slices of the white rectangle in (b) at times t = 0 s, 0.13 s, and 1.6 s, respectively. The orange rectangles in (d), (e) highlight the dynamics of the pharynx.
Fig. 8. The impact of matching energy and coarse-grained modeling on the reconstruction of Fig. 6, which is quantified by the variation of error of retrieved Fourier intensity (a) and retrieved object absorption (b) with iteration number. The curves with different colors (blue/red) indicate whether the initial energy matches the intensity of the captured image, while the curves with two line types (dashed/solid) indicate whether the coarse-grained modeling is used.
Fig. 9. The field amplitude $|v({\mathbf r}, m\Delta z)|$ after each slice (a) and recovered absorption images (b) of the experiment in Fig. 5(c) versus different illumination patterns. The size of each square in the pattern is set to $4 \times 4$, $10 \times 10$, and $20 \times 20$ micromirrors, respectively.

Equations (13)


$$v({\mathbf r}, m\Delta z) = u({\mathbf r}, m\Delta z)\, f({\mathbf r}, m\Delta z), \tag{1}$$
$$u({\mathbf r}, (m+1)\Delta z) = v({\mathbf r}, m\Delta z) \ast h_{\Delta z}({\mathbf r}), \tag{2}$$
$$h_{\Delta z}({\mathbf r}) = \mathcal{F}^{-1}\!\left\{ \exp\!\left( i 2\pi \Delta z \sqrt{1/\lambda^{2} - |{\mathbf u}|^{2}} \right) \right\}, \tag{3}$$
$$I({\mathbf u}) = \left| \mathcal{F}\{ v({\mathbf r}, M\Delta z) \} \right|^{2}, \tag{4}$$
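The multi-slice forward model of Eqs. (1)-(4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names are ours, square pixels of pitch `dx` are assumed, and the angular-spectrum kernel is built directly in Fourier space (the transform of $h_{\Delta z}$ in Eq. (3)), with evanescent components clipped.

```python
import numpy as np

def angular_spectrum_kernel(shape, dx, wavelength, dz):
    # Fourier-space transfer function of h_dz in Eq. (3); dx is the pixel
    # pitch (square pixels assumed -- our assumption, not from the paper).
    fy = np.fft.fftfreq(shape[0], d=dx)
    fx = np.fft.fftfreq(shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = np.sqrt(np.maximum(arg, 0.0))        # clip evanescent components
    return np.exp(1j * 2.0 * np.pi * dz * kz)

def forward_multislice(pattern, slices, H):
    # Eqs. (1)-(2): multiply by each slice transmittance, then propagate to
    # the next slice by convolution with h_dz (a product in Fourier space).
    u = pattern.astype(complex)               # first incident field: u = s(r)
    for f in slices:
        v = u * f                             # Eq. (1): exit field of the slice
        u = np.fft.ifft2(np.fft.fft2(v) * H)  # Eq. (2)
    # Eq. (4): the sensor records the Fourier intensity of the last exit field
    return np.abs(np.fft.fft2(v)) ** 2
```

For a fully open pattern and empty (all-ones) slices, the recorded intensity collapses to a single DC peak, as expected for an unscattered plane wave.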
$$\hat{v}({\mathbf r}, \Delta z) = \mathcal{F}^{-1}\!\left\{ \sqrt{I({\mathbf u})}\, \frac{\mathcal{F}\{ f({\mathbf r}, \Delta z)\, s({\mathbf r}) \}}{\left| \mathcal{F}\{ f({\mathbf r}, \Delta z)\, s({\mathbf r}) \} \right|} \right\}, \tag{5}$$
$$\hat{f}({\mathbf r}, \Delta z) = \hat{v}({\mathbf r}, \Delta z)\, s({\mathbf r}) + \left[ f({\mathbf r}, \Delta z) - \hat{v}({\mathbf r}, \Delta z) \right]\left[ 1 - s({\mathbf r}) \right]. \tag{6}$$
$$f({\mathbf r}, m\Delta z) = f({\mathbf r}, \Delta z) \ast h_{(m-1)\Delta z}({\mathbf r}). \tag{7}$$
$$\min_{f({\mathbf r}, m\Delta z)} \sum_{n=1}^{N} \sum_{{\mathbf u}} w_{n}({\mathbf u}) \left| \sqrt{I_{n}({\mathbf u})} - \left| \mathcal{F}\{ v_{n}({\mathbf r}, M\Delta z) \} \right| \right|^{2}, \tag{8}$$
$$\hat{v}_{n}({\mathbf r}, M\Delta z) = \mathcal{F}^{-1}\!\left\{ w_{n}({\mathbf u}) \sqrt{I_{n}({\mathbf u})}\, \frac{\mathcal{F}\{ v_{n}({\mathbf r}, M\Delta z) \}}{\left| \mathcal{F}\{ v_{n}({\mathbf r}, M\Delta z) \} \right|} + \left[ 1 - w_{n}({\mathbf u}) \right] \mathcal{F}\{ v_{n}({\mathbf r}, M\Delta z) \} \right\}. \tag{9}$$
$$\hat{u}_{n}({\mathbf r}, m\Delta z) = u_{n}({\mathbf r}, m\Delta z) + \frac{f^{\ast}({\mathbf r}, m\Delta z) \left[ \hat{v}_{n}({\mathbf r}, m\Delta z) - v_{n}({\mathbf r}, m\Delta z) \right]}{(1 - \alpha) \left| f({\mathbf r}, m\Delta z) \right|^{2} + \alpha \left| f({\mathbf r}, m\Delta z) \right|_{\max}^{2}}, \tag{10}$$
$$\hat{f}({\mathbf r}, m\Delta z) = f({\mathbf r}, m\Delta z) + \frac{\hat{u}_{n}^{\ast}({\mathbf r}, m\Delta z) \left[ \hat{v}_{n}({\mathbf r}, m\Delta z) - v_{n}({\mathbf r}, m\Delta z) \right]}{(1 - \beta) \left| \hat{u}_{n}({\mathbf r}, m\Delta z) \right|^{2} + \beta \left| \hat{u}_{n}({\mathbf r}, m\Delta z) \right|_{\max}^{2}}, \tag{11}$$
$$v_{n}({\mathbf r}, (m-1)\Delta z) = \hat{u}_{n}({\mathbf r}, m\Delta z) \ast h_{-\Delta z}({\mathbf r}), \tag{12}$$
$$\hat{f}({\mathbf r}, \Delta z) = f({\mathbf r}, \Delta z) + \frac{s_{n}^{\ast}({\mathbf r}) \left[ \hat{v}_{n}({\mathbf r}, \Delta z) - v_{n}({\mathbf r}, \Delta z) \right]}{(1 - \beta) \left| s_{n}({\mathbf r}) \right|^{2} + \beta \left| s_{n}({\mathbf r}) \right|_{\max}^{2}}. \tag{13}$$
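One fine-grained update step can likewise be sketched in NumPy: a weighted Fourier-magnitude correction in the spirit of Eq. (9) (shown here with a scalar weight for simplicity) followed by the regularized slice updates of Eqs. (10)-(11). The function names and the small epsilon guard against division by zero are our additions.

```python
import numpy as np

def fourier_constraint(v, I, w):
    # Eq. (9): blend the measured amplitude sqrt(I) into the Fourier
    # transform of the exit field; where w -> 0 the estimate is kept.
    V = np.fft.fft2(v)
    V_new = w * np.sqrt(I) * V / (np.abs(V) + 1e-12) + (1.0 - w) * V
    return np.fft.ifft2(V_new)

def rpie_update(u, f, v_hat, v, alpha=0.05, beta=0.5):
    # Eqs. (10)-(11): regularized-PIE-style updates of the incident field u
    # and the slice transmittance f from the corrected exit field v_hat.
    # alpha and beta trade update step size against stability.
    dv = v_hat - v
    u_new = u + np.conj(f) * dv / ((1 - alpha) * np.abs(f) ** 2
                                   + alpha * np.max(np.abs(f) ** 2))
    f_new = f + np.conj(u_new) * dv / ((1 - beta) * np.abs(u_new) ** 2
                                       + beta * np.max(np.abs(u_new) ** 2))
    return u_new, f_new
```

When the corrected exit field equals the current estimate, both updates are fixed points, which is the expected behavior of this family of update rules at convergence.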