Optica Publishing Group

Snapshot quantitative phase microscopy with a printed film

Open Access

Abstract

This paper proposes a low-cost snapshot quantitative phase imaging approach. The setup is simple, adding only a printed film to a conventional microscope. The phase of a sample is regarded as an additional aberration of the optical imaging system, and the image captured through a phase object is modeled as a distorted version of a projected pattern. An optimization algorithm recovers the phase information via distortion estimation. We demonstrate our method on various samples, including a micro-lens array, IMR90 cells and the dynamic evaporation process of a water drop, showing that our approach is capable of real-time phase imaging of highly dynamic phenomena using a traditional microscope.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Many label-free biological samples are transparent, which makes them hard to investigate with conventional microscopy. For a long time, Zernike phase contrast microscopy [1] and differential interference contrast (DIC) microscopy [2] have been the two most popular methods for visualizing transparent samples by qualitative phase imaging. However, the lack of quantitative phase measurement prevents further applications such as measuring the refractive index [3] and surface profiling [4]. In the past decade, much work has been done in the field of quantitative phase imaging (QPI) [5] to improve its resolution, speed and simplicity.

QPI techniques can be broadly categorized as interferometric and non-interferometric. Interferometric methods such as digital holography can estimate the optical path length distribution at sub-wavelength resolution [6]. To increase the capture speed, off-axis holography [7] and parallel quasi-phase-shifting holography [8] have been proposed to multiplex multiple holograms in a single shot. Since these methods usually need a temporally coherent source for interferometry, they are quite expensive and difficult to align. Spatial light interference microscopy further extends holography to white light and works as an add-on to phase contrast microscopy [9], but the use of a spatial light modulator (SLM) and a phase contrast objective lens in [9] still makes it complex and expensive for QPI.

The Transport of Intensity Equation (TIE) [10] and Differential Phase Contrast (DPC) [11] are two partially coherent QPI techniques that can be used as simple add-ons to a conventional microscope. With suitable boundary conditions, high-resolution optical path length can be estimated [12, 13]. Despite their low cost, conventional TIE and DPC approaches still require multiple images to measure the gradient-related phase information, so many modifications have been introduced to turn them into single-shot methods. Waller et al. make use of the chromatic aberration of the lenses to obtain three-plane imaging with an RGB camera for TIE-based reconstruction [14]. Wavelength multiplexing on the illumination side encodes the two-axis DPC information into a single RGB image and enables single-shot reconstruction of the complex field [15]. However, the use of an RGB camera reduces throughput and light efficiency due to the Bayer filter in front of the sensor. SLM-based [16] and multi-camera [17] setups achieve more flexibility within a single shot but lose the simplicity and low-cost advantage. Pavani et al. utilize the aberration of an additional amplitude mask to recover the phase information [18], but their method works only for thick phase objects and fails to image most biological samples such as cells. In short, a simple and low-cost approach is needed to quantitatively measure the phase of thin samples in a single shot at high precision.

Here, we report a new snapshot computational QPI method, with only a printed film added to a conventional microscope. Unlike TIE and DPC, we estimate the phase information by observing the distortion of a reference image instead of the intensity contrast introduced by defocusing or angular illumination. The reference image is provided by the printed film or a projector and can be pre-calibrated before the experiments. As shown in Fig. 1(a), we place the sample at the defocused plane instead of the image plane and regard the phase of the sample as an additional aberration of the optical system. The small phase change of the sample is then encoded into a clearly distorted image of the mask, magnified by the defocus distance. We can recover the phase information through distortion estimation, as shown in Fig. 1(b). To test the scheme, we print a mask with a binary pattern and insert it between the condenser and the light source, without any other hardware changes to a conventional microscope. The whole modification can be completed in 5 minutes and costs less than 1 US dollar. Experiments on a microlens array, IMR90 cells and the dynamic evaporation process of a water drop show performance comparable to TIE-based methods. We anticipate that researchers can use our algorithm as open-source software (see Code 1 [19]) to achieve quantitative phase results by simply putting a mask on a conventional microscope without careful alignment.


Fig. 1 The basic idea (a) and framework (b) of our method, as shown in Code 1 [19]. (a) The phase information of the sample is viewed as an aberration of the optical system, which can be estimated from the distortion of a reference image. (b) The input of our approach is a real-time distorted video captured through a phase sample and a pre-captured reference image. An optimization algorithm is applied to estimate the distortion of each frame. The quantitative phase video is then recovered by surface integration.


2. Theory

We treat the target transparent sample as an element that introduces additional aberration into the optical system. To infer this aberration quantitatively, we introduce a pattern with abundant texture to reveal the aberration cues.

The scheme of our model is illustrated in Fig. 2(a). The target transparent sample and the image of a textured pattern are located at the z = Δz and z = 0 planes, respectively. By placing a reference mask and removing the target sample, we first capture a reference image. Then, with the sample in place, the captured image is distorted by the phase of the sample. While capturing the reference and distorted images, we keep the camera focused on the z = 0 plane (i.e., the focus plane). Comparing the two images without and with the sample [Figs. 2(b) and 2(c)], we can see obvious distortion revealing the phase information of the sample.


Fig. 2 Illustration of our method. (a) The schematic with and without a sample. (b) The reference image of a binary mask captured without a sample. (c) A distorted image of (b) captured through a drop of water.


To obtain the relationship between the distortion and the phase of the sample, we need to analyze the relationship between the reference and distorted images, namely, the complex fields at the focus plane with and without the sample (i.e., U1(x, y, 0) and U0(x, y, 0)). As shown in Fig. 2(a), the derivation includes three steps: light propagation from the z = 0 plane to the z = Δz plane without the sample, placing the sample on the z = Δz plane, and light propagation from z = Δz back to z = 0 with the sample.

2.1. Light propagation without sample

For the case without a sample (i.e., step 1 in Fig. 2), we discretize the complex field U0(x, y, 0) at the z = 0 plane into small patches labeled by (m, n):

$$U_0(x,y,0)=\sum_{m,n}U_0(x,y,0;m,n)=\sum_{m,n}U_0(x,y,0)\,\mathrm{rect}\!\left(\frac{x}{d}-m,\ \frac{y}{d}-n\right), \tag{1}$$

where (x, y) denotes the 2D lateral coordinates, d is the patch size, U0(x, y, 0; m, n) is the complex patch image of U0(x, y, 0) centered at (md, nd) on the z = 0 plane, and rect(x, y) is the rectangular function.

The patch size d is determined by the image pixel size at the focus plane (z = 0), and the image patch U0(x, y, 0; m, n) can be approximated as

$$U_0(x,y,0;m,n)\approx A_0(md,nd,0)\,\exp[\,j\phi_0(md,nd,0)\,]\,\mathrm{rect}\!\left(\frac{x}{d}-m,\ \frac{y}{d}-n\right), \tag{2}$$

where A0(md, nd, 0) and ϕ0(md, nd, 0) are the amplitude and phase of U0(x, y, 0) at the central point (md, nd) on the z = 0 plane, respectively.

Then we can propagate the complex field U0(x, y, 0; m, n) from the z = 0 plane to the z = Δz plane:

$$F_0(f_x,f_y,\Delta z;m,n)=H(f_x,f_y;\Delta z)\,F_0(f_x,f_y,0;m,n), \tag{3}$$

where F0(fx, fy, Δz; m, n) and F0(fx, fy, 0; m, n) denote the Fourier transforms of the complex fields U0(x, y, Δz; m, n) and U0(x, y, 0; m, n), respectively, and H(fx, fy; Δz) denotes the transfer function of free-space propagation over a distance Δz.

Based on Eqs. (2) and (3) and the Fresnel diffraction theory, the complex field U0(x, y, Δz; m, n) at the z = Δz plane without a sample is the Fresnel diffraction pattern of a rectangular aperture. We can ignore the sidelobes of the diffraction pattern, whose amplitude is small, so that U0(x, y, Δz; m, n) is approximated by a patch image of patch size d′ centered at (md, nd), i.e., U0(x, y, Δz; m, n) ≈ U0(x, y, Δz; m, n) rect[(x − md)/d′, (y − nd)/d′]. The patch size d′ at the z = Δz plane is determined by the distance Δz (more details in Sec. 5).
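The free-space propagation step of Eq. (3) can be sketched numerically with the standard Fresnel transfer function. This is a generic sketch, not the authors' code; the grid size, wavelength and distances below are arbitrary illustration values:

```python
import numpy as np

def fresnel_transfer_propagate(u0, wavelength, dx, dz):
    """Propagate a complex field u0 over distance dz using the Fresnel
    transfer function H(fx, fy; dz) (a standard sketch, not the authors'
    exact implementation)."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Fresnel transfer function: exp(jk dz) * exp(-j pi lambda dz (fx^2 + fy^2))
    H = np.exp(1j * k * dz) * np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Propagating over +dz and then -dz should return the original field,
# since H(dz) * H(-dz) = 1 for every frequency.
u = np.zeros((64, 64), dtype=complex)
u[28:36, 28:36] = 1.0                      # a small rectangular aperture
u_fwd = fresnel_transfer_propagate(u, 0.5e-6, 1e-6, 100e-6)
u_back = fresnel_transfer_propagate(u_fwd, 0.5e-6, 1e-6, -100e-6)
print(np.allclose(u, u_back))              # → True
```

The same round trip (propagate by Δz, then by −Δz) is what makes the H(Δz) terms cancel between Eqs. (8) and (9) below.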

2.2. Placing sample on z = Δz plane

When the sample is placed on the z = Δz plane, the complex field U1(x, y, Δz; m, n) at that plane becomes (i.e., step 2 in Fig. 2)

$$U_1(x,y,\Delta z;m,n)=U_0(x,y,\Delta z;m,n)\,\exp[\,j\phi(x,y)\,], \tag{4}$$

where ϕ(x, y) is the phase of the transparent sample. The phase of the sample is approximately linear within each single patch, so Eq. (4) can be rewritten as

$$U_1(x,y,\Delta z;m,n)\approx U_0(x,y,\Delta z;m,n)\,\exp[\,j(Ax+By+C)\,], \tag{5}$$

in which Ax + By + C is the linear approximation of the sample's phase ϕ(x, y) in the (m, n) patch:

$$A=\left.\frac{\partial\phi(x,y)}{\partial x}\right|_{x=md,\,y=nd},\qquad B=\left.\frac{\partial\phi(x,y)}{\partial y}\right|_{x=md,\,y=nd}. \tag{6}$$

Based on Eqs. (3) and (5), we can derive the relationship between F0(fx, fy, 0; m, n) and F1(fx, fy, Δz; m, n) in the Fourier domain:

$$F_1(f_x,f_y,\Delta z;m,n)=\exp(jC)\,H\!\left(f_x-\frac{A}{2\pi},\,f_y-\frac{B}{2\pi};\Delta z\right)F_0\!\left(f_x-\frac{A}{2\pi},\,f_y-\frac{B}{2\pi},\,0;m,n\right), \tag{7}$$

where F1(fx, fy, Δz; m, n) denotes the Fourier transform of U1(x, y, Δz; m, n).

2.3. Light propagation with sample

For the case with a sample, we can further back-propagate the complex field F1(fx, fy, Δz; m, n) from the sample plane to the z = 0 plane (i.e., step 3 in Fig. 2):

$$F_1(f_x,f_y,0;m,n)=H(f_x,f_y;-\Delta z)\,F_1(f_x,f_y,\Delta z;m,n), \tag{8}$$

where F1(fx, fy, 0; m, n) denotes the Fourier transform of the complex field U1(x, y, 0; m, n) at the z = 0 plane with the sample. According to the Fresnel diffraction theory, the shifted transfer function in Eq. (7) can be expanded as:

$$H\!\left(f_x-\frac{A}{2\pi},\,f_y-\frac{B}{2\pi};\Delta z\right)=\exp\!\left[-j\frac{\Delta z}{2k}(A^2+B^2)\right]\exp\!\left[\,j\lambda\Delta z\,(Af_x+Bf_y)\,\right]H(f_x,f_y;\Delta z), \tag{9}$$

where k = 2π/λ is the wave number of the illumination light and λ is its wavelength. Substituting Eqs. (7) and (9) into Eq. (8), the H(fx, fy; Δz) terms cancel and we find the relationship between U0(x, y, 0; m, n) and U1(x, y, 0; m, n) in the Fourier domain:

$$F_1(f_x,f_y,0;m,n)=\exp\!\left\{j\!\left[C-\frac{\Delta z}{2k}(A^2+B^2)\right]\right\}\exp\!\left[\,j\lambda\Delta z\,(Af_x+Bf_y)\,\right]F_0\!\left(f_x-\frac{A}{2\pi},\,f_y-\frac{B}{2\pi},\,0;m,n\right). \tag{10}$$

Applying the inverse Fourier transform to Eq. (10), we can express the complex field U0(x, y, 0; m, n) without the sample as:

$$U_0(x,y,0;m,n)=U_1\!\left(x-\frac{\Delta z}{k}A,\,y-\frac{\Delta z}{k}B,\,0;m,n\right)\exp\!\left\{-j\!\left[Ax+By+C-\frac{\Delta z}{2k}(A^2+B^2)\right]\right\}. \tag{11}$$

This is a discrete expression of the relationship between the complex field U0(x, y, 0) without the sample and U1(x, y, 0) with the sample. The continuous formulation of Eq. (11) is:

$$U_0(x,y,0)\approx U_1\!\left(x-\frac{\Delta z}{k}\frac{\partial\phi}{\partial x},\,y-\frac{\Delta z}{k}\frac{\partial\phi}{\partial y},\,0\right)\exp\!\left(-j\left\{\phi(x,y)-\frac{\Delta z}{2k}\!\left[\left(\frac{\partial\phi}{\partial x}\right)^{\!2}+\left(\frac{\partial\phi}{\partial y}\right)^{\!2}\right]\right\}\right). \tag{12}$$

2.4. Distortion caused by the phase of sample

Based on Eq. (12), the intensity of the complex field at the z = 0 plane with the sample is a distorted version of the intensity without it:

$$I_0(x,y)=I_1\big(x+u(x,y),\,y+v(x,y)\big), \tag{13}$$

where I0(x, y) is the intensity of U0(x, y, 0), I1(x, y) is the intensity of U1(x, y, 0), and w(x, y) = (u(x, y), v(x, y)) denotes the distortion between the distorted image I1(x, y) and the reference image I0(x, y), which is determined by the phase ϕ of the sample, the defocus distance Δz and the wave number k of the illumination light:

$$\begin{cases}u(x,y)=-\dfrac{\Delta z}{k}\dfrac{\partial\phi(x,y)}{\partial x},\\[2mm] v(x,y)=-\dfrac{\Delta z}{k}\dfrac{\partial\phi(x,y)}{\partial y}.\end{cases} \tag{14}$$
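The forward model of this section, phase gradient to distortion field, can be sketched in a few lines. This is an illustrative sketch only (the function name and the hypothetical lens-phase numbers are ours, and the sign convention follows the derivation above):

```python
import numpy as np

def distortion_from_phase(phi, dz, wavelength, dx):
    """Sketch of the distortion model: map a phase map phi(x, y) [rad] to
    the lateral distortion field (u, v) [m] it induces at defocus dz.
    Sign convention follows the derivation in Sec. 2 (an assumption here)."""
    k = 2 * np.pi / wavelength
    dphi_dy, dphi_dx = np.gradient(phi, dx)   # finite-difference gradients
    u = -(dz / k) * dphi_dx                   # u = -(dz/k) * dphi/dx
    v = -(dz / k) * dphi_dy                   # v = -(dz/k) * dphi/dy
    return u, v

# A parabolic phase (a thin lens) gives a linearly varying distortion,
# i.e., a uniform magnification of the projected pattern.
dx = 1e-6                                     # 1 um grid spacing
x = np.arange(-32, 32) * dx
X, Y = np.meshgrid(x, x)
phi = 1e8 * (X**2 + Y**2)                     # rad, hypothetical lens phase
u, v = distortion_from_phase(phi, dz=100e-6, wavelength=0.5e-6, dx=dx)
```

Since the test phase is symmetric in x and y, the two distortion components are transposes of each other, which is a quick sanity check on the gradient ordering.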

3. Algorithm

Based on the above model, we propose an optimization framework to recover the dynamic phase video with a pre-calibrated binary reference image. Following the framework in Fig. 1, we first estimate the distortion w(x, t) = (u(x, t), v(x, t)) between each distorted video frame and the reference image. Based on Eq. (13), the estimation is conducted by minimizing an objective function J(w(x, t)):

$$\min_{w}J(w(\mathbf{x},t))=\min_{w}\big(E_d(w(\mathbf{x},t))+\alpha E_m(w(\mathbf{x},t))\big). \tag{15}$$

In this equation, Ed(w(x, t)) and Em(w(x, t)) are the data term and the regularization term, respectively. Here x = (x, y) denotes the 2D spatial coordinates of image pixels, and α > 0 is a regularization parameter that balances the two terms.

Specifically, based on Eq. (13), the data term can be formulated as:

$$E_d(w(\mathbf{x},t))=\sum_{t=1}^{T}\int_{\Omega}\psi\Big(\big|I_1(\mathbf{x}+w(\mathbf{x},t),t)-I_0(\mathbf{x})\big|^2+\gamma\,\big|\nabla I_1(\mathbf{x}+w(\mathbf{x},t),t)-\nabla I_0(\mathbf{x})\big|^2\Big)\,d\mathbf{x}, \tag{16}$$

where T is the number of video frames, Ω ⊂ ℝ² denotes the spatial range of valid image pixels, I0(x) denotes the reference image, and I1(x, t) denotes the distorted image captured through the sample at time t. To reduce the influence of slight changes in brightness, a gradient-constancy term is included, and γ is a weight that balances the image term and the gradient term. The robust penalty ψ(ξ²) = √(ξ² + ε²) reduces the influence of outliers on the distortion estimation and increases robustness [20, 21]; ε is set to a small positive constant (empirically 0.001, much smaller than |ξ|) so that ψ(ξ²) remains convex while approaching the L1 function ψ(ξ) = |ξ|.

The regularization term encodes a piecewise-smoothness assumption on the gradient field of the sample's phase (i.e., on the distortion w(x, t)). This constraint suppresses the influence of inaccurately estimated pixels and thus increases the robustness of our approach:

$$E_m(w(\mathbf{x},t))=\sum_{t=1}^{T}\int_{\Omega}\psi\Big(\|\nabla u(\mathbf{x},t)\|_2^2+\|\nabla v(\mathbf{x},t)\|_2^2\Big)\,d\mathbf{x}, \tag{17}$$

where u(x, t) and v(x, t) are the distortions between the distorted images I1(x, t) and the reference image I0(x) along the x and y directions, respectively. As this is very similar to the optical flow problem in computer vision, we use the algorithm in [20–22] to solve the optimization with a few modifications: (1) Before applying the algorithm, image brightness normalization is applied to the reference image and the distorted images. (2) To improve accuracy and correct the distortion caused by misalignment of the optical system and movement of the objective lens, we pre-shift the reference image by a fixed distortion to match the distorted images; this pre-shift can be calibrated before measurement, with no sample placed. (3) After the optimization, we remove the defocus aberration caused by the optical system itself; this aberration can likewise be calibrated before measurement, with no sample placed, according to the defocus distance.

Next, we calculate the phase gradient at each image pixel from the estimated distortion via Eq. (14). Finally, we recover the phase video of the dynamic object from its gradient field by solving the Poisson equation, a well-studied problem [23, 24].
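The surface-integration step can be sketched with a standard FFT-based least-squares Poisson solver, one common choice for integrating a gradient field [23, 24]. This sketch assumes periodic boundaries and unit pixel spacing and is not the authors' exact implementation:

```python
import numpy as np

def integrate_gradient_fft(gx, gy):
    """Recover a scalar field phi from its gradient (gx, gy) by solving the
    Poisson equation in the Fourier domain (least-squares integration,
    assuming periodic boundaries and unit pixel spacing)."""
    ny, nx = gx.shape
    fx = np.fft.fftfreq(nx)[None, :]
    fy = np.fft.fftfreq(ny)[:, None]
    denom = (2j * np.pi * fx)**2 + (2j * np.pi * fy)**2  # Laplacian symbol
    denom[0, 0] = 1                                      # avoid divide-by-zero at DC
    Phi = (2j * np.pi * fx * np.fft.fft2(gx)
           + 2j * np.pi * fy * np.fft.fft2(gy)) / denom
    Phi[0, 0] = 0                 # the mean of phi is unrecoverable; set it to 0
    return np.real(np.fft.ifft2(Phi))

# Check on a smooth periodic phase whose gradients we know analytically.
n = 64
y, x = np.mgrid[0:n, 0:n]
phi = np.sin(2 * np.pi * x / 16) * np.cos(2 * np.pi * y / 16)
gx = (2 * np.pi / 16) * np.cos(2 * np.pi * x / 16) * np.cos(2 * np.pi * y / 16)
gy = -(2 * np.pi / 16) * np.sin(2 * np.pi * x / 16) * np.sin(2 * np.pi * y / 16)
phi_rec = integrate_gradient_fft(gx, gy)   # recovers phi up to its mean
```

Because the gradient field only determines the phase up to an additive constant, the DC term is pinned to zero; in practice the recovered phase is reported relative to a flat background.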

4. Experimental results

To validate the proposed method and demonstrate its simplicity and convenience, we introduce only a printed binary mask into a conventional microscope to build a snapshot quantitative phase microscope, and show its capability by imaging a microlens array, IMR90 cells and the evaporation process of a water drop.

Figure 3 shows the schematic of our system and a photograph of our prototype setup. We print a film with a binary mask and insert it between the condenser and the light source of a conventional microscope, without any other hardware changes. We do not insert the mask between the condenser and the sample directly, to prevent the mask from touching the sample, and the binary mask serves as a prior that enhances the contrast of the projected pattern and improves robustness for semi-opaque samples. An Andor Zyla 5.5 sCMOS camera is used to capture images (6.5 μm pixel size, 2560 × 2160 pixels, up to 100 fps). During capture, we place the sensor and the mask on conjugate focus planes (i.e., the z = 0 plane) by adjusting the focusing knob of the condenser and the objective lens. The focus plane is slightly offset from the sample plane by a distance Δz, and the condenser aperture is set to a small size. The distance Δz is adjusted to achieve a suitable distortion; more details on the setting of Δz are given in Sec. 5. The whole modification can be completed in 5 minutes and costs less than 1 US dollar. All samples here are imaged in air.

As shown in Fig. 3(a), either before or after the capture, the reference image produced by the mask can be obtained by removing the sample (the dotted line in Fig. 3). When the sample is placed on the stage, the aberration caused by the phase sample shifts each point of the reference image on the sensor (the solid line in Fig. 3). We then use this distortion relative to the reference image to reconstruct the phase of the sample, following the framework illustrated in Fig. 1(b).


Fig. 3 Schematic of our system (a) and a photograph showing the prototype (b). The mask is projected to the focus plane of camera and distorted by the phase of sample.


To demonstrate the accuracy and robustness of our approach, we use a standard micro-lens array as the target sample (RPC Photonics MLA-S100-f8, 100 μm pitch, f/# = 7.8, refractive index 1.56). The pitch size of the binary pattern at the focus plane is around 6.5 μm, and the distance between the focus plane and the sample plane is 100 μm. The phase reconstruction results of our approach and of the TIE approach in [25] are shown in Figs. 4(c) and 4(e), respectively, captured with a Nikon Eclipse Ti microscope and a Nikon CFI Plan Apochromat VC 20× 0.75 NA objective. Here we display the reconstructed phase ϕ as the height h of the sample for better visualization, where h = ϕλ/(2πΔn) and Δn is the refractive index difference between the specimen and air. The reference image is shown in Fig. 4(a) and the distorted image in Fig. 4(b). The binary pattern in Figs. 4(a) and 4(b) loses some contrast due to the projection optics, and our method is robust to the dust and contrast loss in the reference image of Fig. 4(a). Figure 4(d) displays the two defocused images used for TIE reconstruction. The comparison of a microlens cross-section is shown in Fig. 4(f). Our approach achieves a better result at the edge of the image than the conventional TIE approach, despite using only a single shot.

Experiments on IMR90 cells are shown in Fig. 5. We use a 3.5 μm mask, and the distance between the focus plane and the sample plane is 80 μm. A Zeiss Axio Observer Z1 microscope with a Zeiss EC Plan-Neofluar 40× 0.75 NA objective is used to capture images. Before applying our algorithm, we remove the dark edge of the original image. Figure 5(a) shows the image distorted by the sample. The nucleus and F-actin are labeled by DAPI and Alexa Fluor 532, respectively; the fluorescence images of the same areas are shown in Fig. 5(b). Figure 5(d) displays the two defocused images used for TIE reconstruction (the defocus distance is ±60 μm). Figures 5(c) and 5(e) are the phase reconstruction results of our approach and of the TIE approach in [25]. Our result reveals details of the nucleus and cytoplasm distribution that correspond well to the fluorescence images and to the TIE result.


Fig. 4 The experimental results of a microlens array. (a) The reference image. (b) The distorted image captured through the microlens. (c) Reconstructed phase result by our approach. (d) The defocus image pair for TIE approach. (e) Reconstructed phase result by TIE approach in [25]. (f) Lens thickness cross-sections corresponding to the line profiles indicated in (c) and (e), respectively.


To further validate the proposed method for quantitative phase imaging of highly dynamic events, we use our setup to observe the evaporation of a water drop, as shown in Fig. 6. To demonstrate the robustness of our approach to a different pattern, here we use a binary pattern with less texture as the printed mask. The patch size of the binary pattern on the focus plane is 20 μm and the distance between the focus plane and the sample plane is 100 μm. We capture a distorted video through a drop on a slide [Fig. 6(a)] with a Zeiss Plan-Apochromat 10× 0.45 NA objective at 33.3 frames per second (fps). Here we also remove the dark edge of the original image before applying our method. Figure 6(b) and Visualization 1 show the reconstructed phase video of the drop at different stages of its evaporation. Our dynamic quantitative phase result accurately visualizes the high-speed evaporation of the water drop, demonstrating the advantage of our snapshot imaging method.


Fig. 5 Experimental results for IMR90 cells. (a) The distorted image captured through the sample. (b) The fluorescence image of sample. The nucleus labeled by DAPI and F-actin labeled by Alexafluor 532 are shown in blue and green, respectively. (c) Reconstructed phase image by our approach. (d) The defocus image pair for TIE approach. (e) Reconstructed phase result by TIE approach in [25].


5. Discussion

The key parameter of our approach is the distance Δz between the sample plane and the focus plane, which is adjusted during capture to achieve a suitable distortion. From Eq. (14), a smaller phase gradient ∂ϕ/∂x requires a larger Δz to reveal the distortion. However, too large a Δz degrades accuracy. Here we analyze the choice of this distance mathematically.

Based on Eqs. (2) and (3), when the aperture of the illumination is small, the complex field U0(x, y, Δz; m, n) at the z = Δz plane is

$$U_0(x,y,\Delta z;m,n)=U_0(md,nd,0)\,U_{\mathrm{rect}}(x-md,\,y-nd,\,\Delta z). \tag{18}$$

In this equation, the first term U0(md, nd, 0) is the complex field at the central point (md, nd) of the z = 0 plane, with d being the size of the rectangular aperture. The second term U_rect(x, y, Δz) is the Fresnel diffraction pattern of a rectangular aperture at propagation distance Δz:

$$U_{\mathrm{rect}}(x,y,\Delta z)=\frac{\exp(jk\Delta z)}{2j}\big\{[C(\xi_2)-C(\xi_1)]+j[S(\xi_2)-S(\xi_1)]\big\}\big\{[C(\eta_2)-C(\eta_1)]+j[S(\eta_2)-S(\eta_1)]\big\}. \tag{19}$$

Here C(·) and S(·) are the Fresnel integral functions, with ξ1, ξ2, η1, η2 given by

$$\begin{cases}\xi_1=-\sqrt{\dfrac{k}{\pi\Delta z}}\left(\dfrac{d}{2}+x\right),\quad \xi_2=\sqrt{\dfrac{k}{\pi\Delta z}}\left(\dfrac{d}{2}-x\right),\\[2mm] \eta_1=-\sqrt{\dfrac{k}{\pi\Delta z}}\left(\dfrac{d}{2}+y\right),\quad \eta_2=\sqrt{\dfrac{k}{\pi\Delta z}}\left(\dfrac{d}{2}-y\right).\end{cases} \tag{20}$$


Fig. 6 Experimental results for the evaporation process of a water-drop (see Visualization 1). (a) Distorted images of the evaporating water drop at different time points. (b) The corresponding reconstructed phase images at different time points.


Therefore, the complex field U0(x, y, Δz; m, n) at the z = Δz plane without a sample is the Fresnel diffraction pattern of a rectangular aperture with a constant amplitude and phase, and its amplitude is:

$$|U_0(x,y,\Delta z;m,n)|=\frac{1}{2}A_0(md,nd,0)\sqrt{[C(\xi_2)-C(\xi_1)]^2+[S(\xi_2)-S(\xi_1)]^2}\;\sqrt{[C(\eta_2)-C(\eta_1)]^2+[S(\eta_2)-S(\eta_1)]^2}. \tag{21}$$

For a suitable propagation distance Δz, the diffraction pattern is of limited extent and we can ignore the sidelobes, whose amplitude is small:

$$|U_0(x,y,\Delta z;m,n)|\le\varepsilon,\qquad \forall\,(x,y)\in\left\{(x,y)\,\Big|\,|x-md|\ge\frac{d'}{2},\ |y-nd|\ge\frac{d'}{2}\right\}, \tag{22}$$

where ε is a small constant threshold. Then we can regard the complex field U0(x, y, Δz; m, n) as a complex patch image with patch size d′ centered at (md, nd):

$$U_0(x,y,\Delta z;m,n)\approx U_0(x,y,\Delta z;m,n)\,\mathrm{rect}\!\left(\frac{x-md}{d'},\,\frac{y-nd}{d'}\right). \tag{23}$$

The patch size d′ is determined by the amplitude of the diffraction pattern, which depends on the distance Δz, the wave number k and the patch size d [i.e., Eq. (22)]. As Δz increases, the correspondingly larger patch size d′ on the sample plane decreases the accuracy of the final reconstruction.
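The spread of a patch, and hence an effective d′ for a chosen threshold ε, can be evaluated directly from the Fresnel integrals. The sketch below computes a 1D cross-section of the patch amplitude (up to a constant factor); the function name and the numeric values (6.5 μm patch, green light, 100 μm defocus, ε at 10% of the peak) are ours, for illustration:

```python
import numpy as np
from scipy.special import fresnel   # scipy's fresnel(z) returns (S(z), C(z))

def patch_amplitude_1d(x, d, wavelength, dz):
    """1D cross-section of the Fresnel-diffraction amplitude of a width-d
    aperture after propagating dz, up to a constant factor (A0 = 1 assumed);
    x is measured from the patch center."""
    k = 2 * np.pi / wavelength
    s = np.sqrt(k / (np.pi * dz))            # Fresnel-integral scale factor
    S1, C1 = fresnel(-s * (d / 2 + x))       # xi_1
    S2, C2 = fresnel(s * (d / 2 - x))        # xi_2
    return 0.5 * np.sqrt((C2 - C1)**2 + (S2 - S1)**2)

# Hypothetical numbers: a 6.5 um patch, green light, 100 um defocus.
x = np.linspace(-30e-6, 30e-6, 6001)
a = patch_amplitude_1d(x, d=6.5e-6, wavelength=0.53e-6, dz=100e-6)
eps = 0.1 * a.max()                          # one possible threshold epsilon
d_eff = np.ptp(x[a > eps])                   # effective patch size d'
```

For these numbers the effective patch size d′ comes out noticeably larger than the 6.5 μm aperture, illustrating how a larger Δz spreads each patch and degrades the reconstruction resolution.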

Furthermore, the maximum distortion can be neither too small nor too large for our algorithm. Thus in practice we adjust the defocus distance Δz so that the maximum distortion of the sample is around 10 pixels, which suits both our model and our algorithm. To improve accuracy and robustness for semi-opaque samples, we also use the binary mask as a prior and include a gradient image term in the optimization function.

In addition, the spatial resolution of our approach depends on the patch size d′ on the sample plane. Thus for samples with fine structures, such as cells, we need a mask with a small pitch size; for samples with fewer structures, such as water drops, a larger pattern can be used instead. Based on Eq. (14), the smallest phase gradient Δxϕ(x) that can be resolved by our system is

$$\Delta_x\phi(\mathbf{x})=\frac{k}{\Delta z}\,\beta w, \tag{24}$$

where βw is the smallest distortion that our algorithm can estimate. Thus the phase-gradient resolution of our approach is determined by the wave number k, the defocus distance Δz, the smallest distortion β (in pixels) that the algorithm can distinguish, and the pixel size w on the image plane.
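As a worked example of this resolution formula, the sketch below plugs in hypothetical numbers in the spirit of Sec. 4 (green light, 100 μm defocus, 6.5 μm camera pixels, and an assumed smallest detectable distortion of 0.1 pixel; none of these are claimed by the paper as its actual β):

```python
import numpy as np

# Hypothetical parameter values (assumptions, not the paper's measured ones).
wavelength = 0.532e-6             # illumination wavelength [m]
k = 2 * np.pi / wavelength        # wave number [rad/m]
dz = 100e-6                       # defocus distance Delta z [m]
w = 6.5e-6                        # pixel size on the image plane [m]
beta = 0.1                        # smallest resolvable distortion [pixels]

# Smallest resolvable phase gradient: (k / dz) * beta * w
grad_phi_min = (k / dz) * beta * w
print(f"smallest resolvable phase gradient: {grad_phi_min:.3e} rad/m")
```

With these numbers the result is on the order of 10^4–10^5 rad/m, showing how increasing Δz (or shrinking the detectable distortion β) refines the phase-gradient resolution.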

6. Conclusion

In this paper, we propose a novel single-shot quantitative phase imaging approach that is highly compatible with a conventional microscope, requiring only a printed film to be added. The phase of the sample is regarded as an additional aberration of the optical system, and a model is built to infer this aberration from the distortion with respect to a reference image. Based on this model, we develop an optimization algorithm to reconstruct the phase information via distortion analysis. We validate the effectiveness and accuracy of the approach in various experiments against TIE-based methods. The quantitative phase images can be acquired at the camera frame rate, providing a practical, low-cost and open-source solution for snapshot quantitative phase imaging.

Funding

Project of NSFC (No. 61327902, No. 61722110 and No. 61671265).

Acknowledgments

The authors thank Dr. Xu Zhang for providing the sample of IMR90 cells.

References and links

1. F. Zernike, "Das Phasenkontrastverfahren bei der mikroskopischen Beobachtung," Z. Techn. Phys. 16, 454–457 (1935).

2. G. Nomarski, "Nouveau dispositif pour l'observation en contraste de phase différentiel," J. Phys. Radium 16, S88 (1955).

3. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, "Tomographic phase microscopy," Nat. Methods 4, 717 (2007).

4. K. Stout and L. Blunt, Three-Dimensional Surface Topography (Elsevier, 2000).

5. G. Popescu, Quantitative Phase Imaging of Cells and Tissues (McGraw Hill Professional, 2011).

6. C. J. Mann, L. Yu, C.-M. Lo, and M. K. Kim, "High-resolution quantitative phase-contrast microscopy by digital holography," Opt. Express 13, 8693–8698 (2005).

7. S. Witte, A. Plauşka, M. C. Ridder, L. van Berge, H. D. Mansvelder, and M. L. Groot, "Short-coherence off-axis holographic phase microscopy of live cell dynamics," Biomed. Opt. Express 3, 2184–2189 (2012).

8. Y. Awatsuji, M. Sasada, and T. Kubota, "Parallel quasi-phase-shifting digital holography," Appl. Phys. Lett. 85, 1069–1071 (2004).

9. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, "Spatial light interference microscopy (SLIM)," Opt. Express 19, 1016–1026 (2011).

10. M. R. Teague, "Deterministic phase retrieval: a Green's function solution," J. Opt. Soc. Am. 73, 1434–1441 (1983).

11. S. B. Mehta and C. J. Sheppard, "Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast," Opt. Lett. 34, 1924–1926 (2009).

12. L. Tian, J. C. Petruccelli, and G. Barbastathis, "Nonlinear diffusion regularization for transport of intensity phase imaging," Opt. Lett. 37, 4131–4133 (2012).

13. C. Zuo, Q. Chen, and A. Asundi, "Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform," Opt. Express 22, 9220–9244 (2014).

14. L. Waller, S. S. Kou, C. J. Sheppard, and G. Barbastathis, "Phase from chromatic aberrations," Opt. Express 18, 22817–22825 (2010).

15. Z. F. Phillips, M. Chen, and L. Waller, "Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC)," PLoS ONE 12, 1–14 (2017).

16. C. Zuo, Q. Chen, W. Qu, and A. Asundi, "Noninterferometric single-shot quantitative phase microscopy," Opt. Lett. 38, 3538–3541 (2013).

17. J. Wu, X. Lin, Y. Liu, J. Suo, and Q. Dai, "Coded aperture pair for quantitative phase imaging," Opt. Lett. 39, 5776–5779 (2014).

18. S. R. P. Pavani, A. R. Libertun, S. V. King, and C. J. Cogswell, "Quantitative structured-illumination phase microscopy," Appl. Opt. 47, 15–24 (2008).

19. M. Zhang, "The code for snapshot quantitative phase microscopy with a printed film," GitHub (2018) [retrieved 28 Jun. 2018], https://github.com/zmj1203/Snapshot-quantitative-phase-microscopy-with-a-printed-film.

20. T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, "High accuracy optical flow estimation based on a theory for warping," in European Conference on Computer Vision (Springer, 2004), pp. 25–36.

21. T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Trans. Pattern Anal. Mach. Intell. 33, 500–513 (2011).

22. C. Liu, "Beyond pixels: exploring new representations and applications for motion analysis," Ph.D. thesis, MIT, Cambridge, MA, USA (2009).

23. A. Agrawal, R. Raskar, and R. Chellappa, "What is the range of surface reconstructions from a gradient field," in European Conference on Computer Vision (Springer, 2006), pp. 578–591.

24. A. Agrawal, R. Chellappa, and R. Raskar, "An algebraic approach to surface reconstruction from gradient fields," in IEEE International Conference on Computer Vision (IEEE, 2005), pp. 174–181.

25. C. Zuo, Q. Chen, and A. Asundi, "Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform," Opt. Express 22, 9220–9244 (2014).

Supplementary Material (2)

Code 1: The code of our approach.
Visualization 1: The movie shows the dynamic evaporation process of a water drop at 33.3 frames per second (fps) by our approach, demonstrating the performance of the proposed method for quantitative phase imaging of highly dynamic events.



