
Single-shot Fourier ptychographic microscopy with isotropic lateral resolution via polarization-multiplexed LED illumination


Abstract

Fourier ptychographic microscopy (FPM) has emerged in recent years as a wide-field, high-resolution computational imaging technique. To ensure the data redundancy needed for stable convergence, conventional FPM requires dozens or hundreds of raw images, increasing the time cost of both data collection and computation. Here, we propose single-shot Fourier ptychographic microscopy with isotropic lateral resolution via polarization-multiplexed LED illumination, termed SIFPM. Three LED elements covered with 0°/45°/135° polarization films, respectively, are used to provide numerical-aperture-matched illumination for the sample simultaneously. Meanwhile, a polarization camera records the light field transmitted through the sample. Based on the weak object transfer functions, we first obtain amplitude and phase estimates of the sample by deconvolution, and then use them as the initial guesses of the FPM algorithm to refine the reconstruction accuracy. We validate the complex-sample imaging performance of the proposed method on a quantitative phase target and on unstained and stained bio-samples. These results show that SIFPM realizes quantitative imaging of general samples at the resolution of the incoherent diffraction limit, permitting high-speed quantitative characterization of cells and tissues.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychographic microscopy (FPM) [1–4] has received much attention in recent years due to its high-resolution and wide-field imaging properties, with abundant biomedical applications such as digital pathology [5,6], high-throughput cytometry [7], and intensity diffraction tomography [8–10], among others.

In contrast to real-space ptychography [11,12], FPM avoids mechanical scanning devices and instead uses angle-varied illumination of the sample. The hardware for angle-scanning illumination is easy to implement: one only needs to replace the conventional microscope light source with a programmable LED array. Therefore, FPM is compatible with most microscopes. During data acquisition, each LED element is lit sequentially to provide a quasi-monochromatic plane wave at a certain illumination angle, and the corresponding low-resolution (LR) intensity image with a wide field of view (FOV) is recorded and used in the subsequent iterative recovery algorithm. Because the illumination angle alternates, the captured LR images carry different spatial-frequency components of the sample. By synthesizing these LR images, FPM generates a wide-band spectrum whose support region is extended to the sum of the objective and illumination numerical apertures (NAs). After performing an inverse Fourier transform on the spectrum, high-resolution quantitative amplitude and phase can be obtained, permitting subsequent characterization of the sample.

To recover the lost phase and high-frequency components of the sample, FPM adopts an alternating projection optimization strategy, which places a very high requirement on data redundancy. The requirement can be satisfied by turning on dozens or hundreds of LEDs and acquiring a large number of raw images. However, this comes at the expense of imaging efficiency because both acquisition and computation time are prolonged. Many methods have been proposed to overcome this problem by reducing the number of acquired images [13–19]. Illumination-multiplexed methods [15,16] turn on multiple LEDs and acquire multiple non-overlapping sub-spectra simultaneously in a single measurement. By taking full advantage of data redundancy, sparse sampling strategies [17–19] analyze the minimum spectrum overlapping ratio required for the FPM iteration to reduce the required data. However, more than 20 images are still needed for these methods, which is inadequate for high-speed imaging. Single-shot FPM, realized by either physical or computational means, is particularly attractive for further improving the imaging speed. Introducing light-splitting devices (a grating or lens array) into the FPM system can separate the light from different LEDs and enable the acquisition of multiple images in a single exposure [20,21]. Nevertheless, as all images are arranged on a single image sensor, the resultant FOV is inevitably traded away. The existing computational method for single-shot FPM is based on color-multiplexed LED illumination, where three-channel images corresponding to R/G/B LEDs are separated from a single measurement with a color image sensor and serve as the input of the following iterative recovery [22]. However, since the spectral passband is inversely proportional to the illumination wavelength, two frames (i.e., six images) are usually required for isotropic imaging. In addition, to obtain fully focused color images, the axial chromatic focal shift between wavelengths also requires a high-cost apochromatic objective or post-processing algorithms to correct [23].

In this paper, we propose single-shot Fourier ptychographic microscopy with isotropic lateral resolution, termed SIFPM. In its implementation, we introduce polarizers into the illumination and imaging modules, giving the LED array polarization-multiplexing capability and avoiding the anisotropic lateral resolution and chromatic aberration of color-multiplexed schemes. To obtain as many raw images as possible, we mathematically analyze the maximum number of LEDs that can be polarization-multiplexed, and finally choose three LEDs covered with 0°/45°/135° polarization films to provide illumination simultaneously. The intensity images corresponding to the three LEDs are obtained in a single-shot measurement by polarization decoupling from the corresponding channels of a polarization camera. The illumination NA is also carefully designed to match the objective NA for two important reasons. On the one hand, the low-frequency phase components are transferred into the acquired intensity image if and only if the matched illumination condition is met [24–26]. On the other hand, the recovered spectrum support can then cover the maximum region allowed by the system.

It is noted that three images are inadequate for the convergence of the FPM iterative algorithm because the overlapping ratio of adjacent sub-spectra (<10%) is much smaller than the minimum requirement (∼40%). Therefore, we make some modifications to the original algorithm. FPM is an intrinsically non-convex optimization problem, whose solution often gets stuck in local minima [27,28]. One can increase data redundancy to circumvent this problem and reach a decent solution. An alternative is to provide a good initial guess [28]. As such, we use the amplitude and phase estimates based on the weak object transfer functions, as modeled in quantitative differential phase contrast (qDPC) imaging, as the initial guess of the sample information [29–31]. An FPM-based iterative algorithm is then used to refine the initial guess beyond the limit of the weak object approximation, forming accurate quantitative amplitude and phase maps. Simulations on strong (strongly absorbing and large-phase) samples illustrate that SIFPM can effectively overcome the limit of the weak object approximation and realize high-accuracy reconstructions. Experimental results on a quantitative phase target and on unstained and stained slices show that SIFPM can quantitatively characterize general samples at the resolution of the incoherent diffraction limit. These results suggest that the developed method realizes single-shot FPM without sacrificing reconstruction quality, opening the possibility of real-time FPM imaging of cells and tissues.

2. Methods

2.1 Polarization-multiplexed LED illumination

FPM iteratively imposes constraints with the acquired images between the spatial and Fourier domain to reach a self-consistent solution. Mathematically, FPM is an intrinsic non-convex optimization problem whose solution can be formulated as

$$\mathop {\min }\limits_{O(u,v)} {\sum\limits_j {\sum\limits_{x,y} {\left|{\sqrt {{I_j}(x,y)} - |{{{\cal F}^{ - 1}}\{ O(u - {u_j},v - {v_j})P(u,v)\} } |} \right|} } ^2}, $$
where ${I_j}(x,y)$ is the jth acquired intensity image and (x, y) are the spatial coordinates. ${{\cal F}^{ - 1}}\{{} \}$ is the inverse Fourier transform operator. $P(u,v)$ represents the pupil function serving as the low-pass filter in the Fourier domain, and (u, v) denotes the spatial-frequency coordinates. $O(u,v) = {\cal F}\{{o(x,y)} \}$ is the Fourier transform of the complex transmittance of the sample, $o(x,y)$. $({u_j},{v_j})$ is the illumination wavevector of the jth LED. To ensure the stability of the solution, FPM has a minimum requirement on the overlapping ratio of adjacent sub-spectra. A dense sampling strategy is usually adopted to ensure data redundancy, in which dozens or hundreds of raw images are acquired. The strategy is effective but not efficient, because a large number of images must be acquired and long exposure times are needed for the dark-field images, blocking real-time imaging of cells and tissues.
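As a concrete illustration, the data-fidelity cost of Eq. (1) can be evaluated numerically. The sketch below is ours, not the authors' implementation; the FFT conventions and the integer-pixel spectrum shifts are simplifying assumptions:

```python
import numpy as np

def fpm_cost(O, P, images, shifts):
    """Evaluate the FPM cost of Eq. (1) (illustrative sketch).

    O      : (N, N) complex sample-spectrum estimate (centered Fourier domain)
    P      : (N, N) pupil function (low-pass support of the objective)
    images : list of (N, N) measured low-resolution intensity images I_j
    shifts : list of (du, dv) integer pixel shifts standing in for (u_j, v_j)
    """
    cost = 0.0
    for I_j, (du, dv) in zip(images, shifts):
        # center the j-th sub-spectrum, low-pass by the pupil, go to real space
        O_shift = np.roll(O, shift=(-du, -dv), axis=(0, 1))
        field = np.fft.ifft2(np.fft.ifftshift(O_shift * P))
        # amplitude mismatch against the square root of the measurement
        cost += np.sum((np.sqrt(I_j) - np.abs(field)) ** 2)
    return cost
```

With a perfect spectrum estimate and an all-pass pupil, the cost is zero up to floating-point error, which is the self-consistency FPM iterates toward.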

Here, we propose a polarization-multiplexed LED illumination strategy for single-shot FPM. We adopt the idea of multiplexing, but fuse another degree of freedom of light, polarization, into the LED array. As shown in Fig. 1(a), we divide the LED array into four regions and cover them with 0°/45°/90°/135° polarization films, respectively, to enable polarization modulation. Two key factors are carefully designed for the best imaging performance of SIFPM: the maximum accessible number of multiplexed LEDs and the optimal illumination NA.


Fig. 1. System diagram of polarization-multiplexed LED illumination. (a) Hardware modifications for implementing SIFPM. Three LEDs covered with 0°/45°/135° polarization films are turned on to illuminate the sample simultaneously. The distance between the sample and the LED array is tuned so that the illumination NA matches the objective NA. (b)-(d) Imaging results of an unlabeled pure-phase sample under different illumination NAs.


The number of LEDs for polarization multiplexing is crucial for imaging performance. More LEDs offer more spectrum information, but the number cannot grow without bound because the polarization channels are finite. Intuitively, because both the illumination and the sensor have four channels, the same number of LEDs could be multiplexed if the resultant polarization multiplexing could be decoupled. In the system, the polarization films covering the LED array and those in front of the sensor pixels can be treated as the polarizer and analyzer, respectively. According to Malus's law, the ratio of the light intensity after the analyzer to that before it equals the squared cosine of the angle between polarizer and analyzer. Therefore, the intensity transformation from the illumination to the acquired images can be formulated as

$$\left[ {\begin{array}{c} {{I_{0^\circ }}}\\ {{I_{45^\circ }}}\\ {{I_{90^\circ }}}\\ {{I_{135^\circ }}} \end{array}} \right] = \mathbf{T}\left[ {\begin{array}{c} {{S_{0^\circ }}}\\ {{S_{45^\circ }}}\\ {{S_{90^\circ }}}\\ {{S_{135^\circ }}} \end{array}} \right] = \left[ {\begin{array}{cccc} 1&{{1 / 2}}&0&{1/2}\\ {{1 / 2}}&1&{1/2}&0\\ 0&{1/2}&1&{{1 / 2}}\\ {1/2}&0&{{1 / 2}}&1 \end{array}} \right]\left[ {\begin{array}{c} {{S_{0^\circ }}}\\ {{S_{45^\circ }}}\\ {{S_{90^\circ }}}\\ {{S_{135^\circ }}} \end{array}} \right], $$
where S0°, S45°, S90°, and S135° represent the intensity images under the illumination of the LED covered with the polarizer of the corresponding angle, respectively. I0°, I45°, I90°, and I135° are the intensity images extracted from the four channels of the polarization sensor, respectively. T is the intensity transform matrix describing the coupling between the polarization channels. If T were invertible, four images under different LED illuminations could be separated from a single acquired image.

However, T is a noninvertible matrix because its rank is only 3, which is less than the number of its rows. The sum of rows 1 and 3 of T equals that of rows 2 and 4. This can be interpreted as follows: decomposing linearly polarized light into 0° and 90° components is equivalent to decomposing it into 45° and 135° components. Therefore, the maximum number of raw images that can be decoupled is three. In this paper, we choose the 0°, 45°, and 135° channels, so Eq. (2) is modified to

$$\left[ {\begin{array}{c} {{I_{0^\circ }}}\\ {{I_{45^\circ }}}\\ {{I_{135^\circ }}} \end{array}} \right] = \mathbf{T}\left[ {\begin{array}{c} {{S_{0^\circ }}}\\ {{S_{45^\circ }}}\\ {{S_{135^\circ }}} \end{array}} \right] = \left[ {\begin{array}{ccc} 1&{{1 / 2}}&{{1 / 2}}\\ {1/2}&1&0\\ {1/2}&0&1 \end{array}} \right]\left[ {\begin{array}{c} {{I_1}}\\ {{I_2}}\\ {{I_3}} \end{array}} \right], $$
where Ij (j = 1, 2, 3) are used to replace S0°, S45°, and S135° for symbol uniformity.
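The rank argument and the decoupling can be checked numerically; a short sketch (variable names are ours):

```python
import numpy as np

# Full 4x4 intensity transform matrix from Malus's law: entries are
# cos^2 of the angle difference between polarizer and analyzer, as in Eq. (2)
angles = np.deg2rad([0, 45, 90, 135])
T4 = np.cos(angles[:, None] - angles[None, :]) ** 2

# Rank is 3, not 4: rows (1 + 3) and rows (2 + 4) sum to the same vector,
# so only three LEDs can be decoupled from the four sensor channels
print(np.linalg.matrix_rank(T4))  # 3

# Dropping the 90-degree channel leaves the invertible 3x3 system of Eq. (3)
idx = [0, 1, 3]                        # keep the 0, 45, 135 degree channels
T3 = T4[np.ix_(idx, idx)]
I = np.array([1.5, 1.0, 1.2])          # toy measured channel intensities
S = np.linalg.solve(T3, I)             # recovered per-LED images I_1, I_2, I_3
```

The 3×3 sub-matrix has determinant 0.5, so the three chosen channels decouple uniquely.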

The illumination NA is the second key factor that needs to be designed carefully. We determine the illumination NA (NAillu) from two aspects. First, as NAillu increases, the sub-spectrum of each LR image shifts further and covers a higher-frequency region. Matching NAillu to the objective NA (NAobj) provides the maximum spectrum support. In addition, NAillu also significantly affects the passband of the phase information. According to the weak object approximation (see details in Section 2.2), the phase transfer function is formed by two anti-symmetric pupils (positive and negative) centered at coordinates set by NAillu. For unlabeled biological samples, where the phase component dominates the intensity contrast, NAillu is one of the most important factors affecting imaging quality. Under normal illumination (NAillu = 0), the two anti-symmetric pupils cancel each other out, so the phase information cannot produce any intensity variation, as shown in Fig. 1(b). Increasing the illumination NA makes the two pupils no longer overlap completely ($0 < N{A_{illu}} < N{A_{obj}}$), rendering phase information visible in the intensity image. However, because the low-frequency (near-zero-frequency) regions still overlap, only high-frequency information is transferred into intensity variation, as shown in Fig. 1(c). To obtain the total phase information, the matched illumination condition ($N{A_{illu}} = N{A_{obj}}$) is necessary: the two pupils are then separated completely, so all phase components are encoded into intensity variation, as shown in Fig. 1(d). For the following simulations and experiments, the distance between the LED array and the sample is tuned to achieve the matched illumination condition.

2.2 qDPC imaging based on weak object transfer functions

By adopting the above polarization-multiplexed illumination strategy, we can separate three intensity images as the raw data of FPM. However, three images cannot satisfy the minimum requirement for data redundancy. To obtain accurate reconstruction results, we make some modifications to the original FPM algorithm. For a non-convex optimization problem, a good initial guess helps reduce the data requirement and ensure convergence. Fortunately, qDPC imaging based on the weak object transfer functions can be implemented with the same system as SIFPM. The recovered intensity and phase maps of qDPC provide a good starting point for the FPM iteration.

Obviously, the relation between the acquired intensity image and the complex transmittance of the sample is nonlinear [32]. But for a weak sample [33], the relation can be linearized by neglecting high-order cross terms of absorption and phase, so that

$$o(x,y) = {e^{\mu (x,y) + i\varphi (x,y)}} \approx 1 + \mu (x,y) + i\varphi (x,y), $$
where $\mu (x,y)$ is the absorption term, $\varphi (x,y)$ is the phase term, and i is the imaginary unit. The Fourier spectrum of the acquired intensity images can be expressed as
$${\hat{I}_j}(u,v) = {B_j}\delta (u,v) + {H_{\mu ,j}}(u,v)\hat{\mu }(u,v) + {H_{\varphi ,j}}(u,v)\hat{\varphi }(u,v), $$
where ${B_j}$ is the background term; $\hat{\mu }(u,v)$ and $\hat{\varphi }(u,v)$ are the Fourier spectra of the absorption and phase parts of the sample; ${H_{\mu ,j}}(u,v)$ and ${H_{\varphi ,j}}(u,v)$ are the amplitude transfer function (ATF) and phase transfer function (PTF), respectively:
$$\begin{array}{l} {H_{\mu ,j}}(u,v) = P(u + {u_j},v + {v_j}) + {P^\ast }(u - {u_j},v - {v_j})\\ {H_{\varphi ,j}}(u,v) = i[P(u + {u_j},v + {v_j}) - {P^\ast }(u - {u_j},v - {v_j})] \end{array}. $$
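Under these definitions, the ATF and PTF of Eq. (6) can be built on a discrete frequency grid. The sketch below is ours and assumes an ideal aberration-free circular pupil, so $P^\ast = P$ and the conjugate in Eq. (6) drops out; function and parameter names are illustrative:

```python
import numpy as np

def weak_object_tfs(n, dx, wavelength, na_obj, na_illu_xy):
    """Build the ATF and PTF of Eq. (6) for one tilted plane-wave illumination.

    n          : grid size in pixels
    dx         : sample-plane pixel size (same length unit as wavelength)
    wavelength : illumination wavelength
    na_obj     : objective NA defining the circular pupil P
    na_illu_xy : (NAx, NAy) components of the illumination direction
    """
    f = np.fft.fftshift(np.fft.fftfreq(n, d=dx))      # spatial-frequency axis
    fx, fy = np.meshgrid(f, f)
    # ideal binary circular pupil of radius NA_obj / wavelength
    P = lambda u, v: ((u**2 + v**2) <= (na_obj / wavelength)**2).astype(float)
    uj, vj = (na / wavelength for na in na_illu_xy)   # illumination wavevector
    H_mu = P(fx + uj, fy + vj) + P(fx - uj, fy - vj)           # ATF, Eq. (6)
    H_phi = 1j * (P(fx + uj, fy + vj) - P(fx - uj, fy - vj))   # PTF, Eq. (6)
    return H_mu, H_phi
```

For matched illumination (NAillu = NAobj) the two shifted pupils touch at zero frequency and cancel only there, which is exactly the behavior discussed for Fig. 1(d).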

The linear formula, Eq. (5), can be solved through deconvolution. First, the background of each intensity image is removed. Next, both the amplitude and phase maps are obtained by Tikhonov-regularized deconvolution [34]. The closed-form solutions are

$$\begin{array}{l} \mu = {{\cal F}^{ - 1}}\left\{ {\frac{1}{A}\left[ {\left( {\sum\limits_j {{{|{{H_{\varphi ,j}}} |}^2} + {\gamma_\varphi }} } \right) \cdot \left( {\sum\limits_j {H_{\mu ,j}^\ast{\cdot} {{\hat{I}}_{B,j}}} } \right) - \left( {\sum\limits_j {{H_{\varphi ,j}} \cdot H_{\mu ,j}^\ast } } \right) \cdot \left( {\sum\limits_j {H_{\varphi ,j}^\ast{\cdot} {{\hat{I}}_{B,j}}} } \right)} \right]} \right\}\\ \varphi = {{\cal F}^{ - 1}}\left\{ {\frac{1}{A}\left[ {\left( {\sum\limits_j {{{|{{H_{\mu ,j}}} |}^2} + {\gamma_\mu }} } \right) \cdot \left( {\sum\limits_j {H_{\varphi ,j}^\ast{\cdot} {{\hat{I}}_{B,j}}} } \right) - \left( {\sum\limits_j {H_{\varphi ,j}^\ast{\cdot} {H_{\mu ,j}}} } \right) \cdot \left( {\sum\limits_j {H_{\mu ,j}^\ast{\cdot} {{\hat{I}}_{B,j}}} } \right)} \right]} \right\} \end{array}, $$
where ${\gamma _\mu }$ and ${\gamma _\varphi }$ are the regularization parameters for amplitude and phase, and A is a normalization term $\left( {\sum\limits_j {{{|{{H_{\mu ,j}}} |}^2} + {\gamma_\mu }} } \right) \cdot \left( {\sum\limits_j {{{|{{H_{\varphi ,j}}} |}^2} + {\gamma_\varphi }} } \right) - \left( {\sum\limits_j {{H_{\mu ,j}} \cdot H_{\varphi ,j}^\ast } } \right) \cdot \left( {\sum\limits_j {H_{\mu ,j}^\ast{\cdot} {H_{\varphi ,j}}} } \right)$, and ${\hat{I}_{B,j}}$ denotes the Fourier spectrum of the background-removed intensity image.

It is noted that the linearization assumption only holds when the sample meets the requirement of weak object approximation. If the absorption and phase are too large, qDPC will suffer from nonlinear errors, which is an intrinsic limit of qDPC imaging. However, in SIFPM, the qDPC-based solutions are only used as the initial guesses, which will be refined further by the following FPM-based algorithm, permitting accurate imaging for strong objects.
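The closed-form solutions of Eq. (7) translate directly into a short deconvolution routine. The sketch below is ours and follows Eq. (7) term by term (function and variable names are illustrative):

```python
import numpy as np

def qdpc_tikhonov(I_hats, H_mus, H_phis, gamma_mu=1e-2, gamma_phi=1e-2):
    """Tikhonov-regularized qDPC deconvolution, Eq. (7).

    I_hats         : list of Fourier spectra of background-removed images
    H_mus, H_phis  : matching lists of ATFs and PTFs (Eq. 6)
    Returns the recovered absorption mu(x, y) and phase phi(x, y).
    """
    sHm2 = sum(np.abs(H) ** 2 for H in H_mus) + gamma_mu     # sum|H_mu|^2 + gamma
    sHp2 = sum(np.abs(H) ** 2 for H in H_phis) + gamma_phi   # sum|H_phi|^2 + gamma
    sHmI = sum(np.conj(Hm) * Ih for Hm, Ih in zip(H_mus, I_hats))
    sHpI = sum(np.conj(Hp) * Ih for Hp, Ih in zip(H_phis, I_hats))
    sHpHm = sum(Hp * np.conj(Hm) for Hp, Hm in zip(H_phis, H_mus))
    # normalization term A of Eq. (7)
    A = sHm2 * sHp2 - sHpHm * np.conj(sHpHm)
    mu = np.fft.ifft2(np.fft.ifftshift((sHp2 * sHmI - sHpHm * sHpI) / A)).real
    phi = np.fft.ifft2(np.fft.ifftshift((sHm2 * sHpI - np.conj(sHpHm) * sHmI) / A)).real
    return mu, phi
```

The regularization parameters suppress division by near-zero values of A where the transfer functions vanish; with well-conditioned transfer functions and zero regularization the inversion is exact.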

2.3 Process of SIFPM

SIFPM can recover both the amplitude and phase of a sample, with a lateral resolution twice the coherent diffraction limit, from one intensity measurement. Figure 2 displays the general process of the SIFPM algorithm, which is composed of the following three steps:


Fig. 2. Flow chart of SIFPM. (a) Polarization image acquired with the polarization camera. (b) Three intensity images of 0°/45°/135° polarization channels extracted by under-sampling. (c) Intensity images illuminated by three different LEDs are separated by polarization decoupling. (d) Intensity spectra of the three intensity images. (e1) and (e2) Weak phase transfer functions and weak amplitude transfer functions. (f) Initial guesses obtained by Tikhonov deconvolution. (g1) and (g2) Refined amplitude and phase images with FPM algorithm.


First, separate the three intensity images illuminated by the three pre-designed LEDs from one intensity measurement. As detailed in Section 2.1, three LEDs covered with 0°/45°/135° polarization films provide matched illumination for the sample simultaneously, and one intensity image is captured by a polarization CMOS camera, as shown in Fig. 2(a). The intensity image of each polarization channel is obtained by under-sampling with a step size of two pixels. Here, I0°, I45°, and I135° shown in Fig. 2(b) are extracted from the raw polarization image. Subsequently, the intensity images under the illumination of different LEDs, Ij (j = 1, 2, 3), are decoupled according to Eq. (3), as shown in Fig. 2(c). It is noted that the intensity transform matrix, T, is generated by the idealized Malus's law. However, the extinction ratios of the polarizers can vary with illumination angle and wavelength, and are also affected by the orientation of the polarizers covering the LED array. Therefore, T needs to be pre-calibrated for each objective and illumination wavelength. The calibration method is akin to that used to compensate for color leakage [35] (details are given in Appendix A).
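This first step can be sketched as follows. The 2×2 polarizer layout of the sensor super-pixel is device-specific, so the arrangement assumed below is illustrative, and in practice the idealized matrix would be replaced by the pre-calibrated T:

```python
import numpy as np

def extract_channels(raw, layout=((90, 45), (135, 0))):
    """Split a polarization-mosaic frame into per-angle images by two-pixel
    under-sampling, then decouple the three per-LED images with Eq. (3).

    raw    : (2M, 2N) mosaic frame from the polarization sensor
    layout : polarizer angles within one 2x2 super-pixel (an assumption here;
             check the sensor datasheet for the actual arrangement)
    """
    # one image per polarizer angle, sampled every second pixel
    chans = {layout[r][c]: raw[r::2, c::2] for r in range(2) for c in range(2)}
    I = np.stack([chans[0], chans[45], chans[135]])   # keep 0/45/135 only
    T3 = np.array([[1.0, 0.5, 0.5],                   # idealized Malus's-law
                   [0.5, 1.0, 0.0],                   # coupling matrix, Eq. (3);
                   [0.5, 0.0, 1.0]])                  # replace by calibrated T
    S = np.tensordot(np.linalg.inv(T3), I, axes=1)    # per-LED images I_1..I_3
    return S
```

Each returned image has half the pixel count of the raw frame per axis, which is the resolution cost of the mosaic sampling.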

Second, initialize the amplitude and phase guesses with the qDPC method. The separated images in Step 1 correspond to three different sub-spectra in the Fourier domain, as shown in Fig. 2(d). Under the weak object approximation, the intensity spectrum is described by Eq. (5). Therefore, the intensity spectra can be deconvolved with the PTFs and ATFs depicted in Figs. 2(e1) and 2(e2) to form linear amplitude and phase estimates. However, for strongly absorbing and large-phase samples, the amplitude and phase will be estimated incorrectly. For example, Fig. 2(f) shows the recovered amplitude of a stained sample, where the absorption is recovered with obvious errors.

Finally, refine the initial guesses with the FPM method. In FPM (see details in Appendix B), the absorption and phase are recovered in a nonlinear iterative process, where the magnitudes of the absorption and phase matter little, enabling imaging of strongly absorbing and large-phase samples. By switching between the spatial and Fourier domains and imposing constraints, the linear initial guesses are optimized to reach decent solutions, as shown in Fig. 2(g). It is noted that although conventional FPM allows aberration compensation by recovering the sample and pupil functions simultaneously, this feature cannot be realized here due to the limited number of raw images.

In this process, the advantages of qDPC and FPM are combined to ensure high-quality reconstructions. qDPC has a low requirement for data redundancy but is unsuitable for strong objects. In FPM theory, the sample is only required to be thin, not weak, so FPM can recover the complex amplitude of most biological samples; however, the FPM iteration demands high data redundancy. In SIFPM, qDPC and FPM no longer work alone but complement each other to enable single-shot quantitative imaging of general samples.

3. Simulations

To verify the imaging performance of SIFPM for various samples, we numerically simulated two strong samples, representative of most samples in biomedical applications, to quantitatively compare the accuracy of qDPC and SIFPM.

The simulation parameters were chosen according to the actual system used in the experiments. Three LED elements (25 mm away from the optical axis, center wavelength of 523 nm, bandwidth ∼20 nm) were turned on to provide illumination, and a 10X/0.3 NA objective lens was used to collect the light passing through the sample. A polarization camera with a pixel size of 3.45 µm was used to capture images. The distance from the LED array to the sample was set to 79.5 mm to match the NA of the objective lens. To meet the weak object approximation, the phase is required to be smaller than 0.5 rad [33], which is too restrictive for most biological samples. In particular, when absorption is coupled with phase, the requirement becomes even harder to satisfy.
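The quoted 79.5 mm distance indeed yields a matched illumination NA; a quick check of the geometry:

```python
import math

# An LED offset r from the optical axis, at distance d from the sample,
# illuminates at angle atan(r / d); the illumination NA is the sine of it.
r = 25.0    # mm, lateral LED offset given in the text
d = 79.5    # mm, LED-array-to-sample distance
na_illu = math.sin(math.atan(r / d))
print(round(na_illu, 3))  # 0.3, matching the 0.3-NA objective
```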

We first simulated the imaging of a Siemens star target. Considering the generality of SIFPM, we set the phase of the sample to 1 rad and introduced an additional absorption parameter to characterize a sample with absorptivity up to 63%. The strong sample is shown in Fig. 3(a), where the amplitude of the spokes is 0.37 (µ = -1) and the phase is 1 rad (φ = 1 rad). For such a sample, the high-order terms ignored by qDPC contribute significantly to the intensity variation in the captured images. The weak object approximation is no longer valid in this case, and thus the solved maps cannot match the ground truth. As shown in Fig. 3(b), although the appearances of both the amplitude and phase images are similar to the ground truth, the values are inconsistent with it. SIFPM effectively addresses the problem by nonlinear iterative optimization. By imposing constraints iteratively instead of one-step deconvolution, SIFPM drives the qDPC initial guesses toward the ground truth. Finally, accurate solutions of both amplitude and phase are obtained, as shown in Fig. 3(c).


Fig. 3. Comparison of imaging performance between qDPC and SIFPM for strong samples. (a) Ground truth of the amplitude and phase of a Siemens star target. (b) Recovered amplitude and phase images of the Siemens star target with qDPC. (c) Recovered amplitude and phase images of the Siemens star target with SIFPM. (d) and (e) Profile curves of amplitude and phase of the Siemens star target. (f) Ground truth of the amplitude and phase of a strong biological sample. (g) Recovered amplitude and phase images of the biological sample with qDPC. (h) Recovered amplitude and phase images of the biological sample with SIFPM.


To demonstrate the accuracy and isotropy of SIFPM, the amplitude and phase values on six circles with the same radius are extracted to plot quantitative curves. As shown in Fig. 3(d), the amplitude values of qDPC (cyan line) are overestimated compared with the ground truth (black line), while the values of SIFPM (magenta line) are consistent with the truth in all directions. The phase values of qDPC (∼0.35 rad), indicated by the cyan line in Fig. 3(e), are all underestimated, but the values of SIFPM (magenta line) reach 1 rad in all directions, agreeing well with the truth. Interestingly, the recovered phase values of qDPC (underestimated) show the opposite behavior to the amplitude values (overestimated). We conjecture that this is because the signs of the amplitude (negative) and phase (positive) terms in the transfer functions are opposite.

In addition, we also simulated the imaging of a strong biological sample. The amplitude and phase are shown in Fig. 3(f), where the amplitude varies from 0 to 1 and the phase varies from 0 to 1 rad. Similarly, the recovered amplitude of qDPC is overestimated, while the recovered phase is underestimated, as shown in Fig. 3(g). Moreover, the phase information indicated by the magenta arrows is submerged in the background because the phase value is too large. When recovered with SIFPM, however, both amplitude and phase are corrected, and the submerged phase details are retrieved, as shown in Fig. 3(h). Quantitatively, the RMSEs of the amplitude and phase of qDPC are 0.42 and 0.36 rad, while those of SIFPM are reduced to 0.02 and 0.01 rad, respectively. The comparison indicates that SIFPM effectively overcomes the weak object approximation and realizes high-accuracy reconstructions.

4. Experiments

4.1 Phase resolution target

To verify the effectiveness of SIFPM experimentally, we first measured a pure-phase resolution target (Benchmark Quantitative Phase Target, QPT). The setup was modified from a commercial inverted microscope (IX73, Olympus), where the original illumination unit was replaced with a custom-built 22${\times} $22 surface-mounted LED array (5 mm spacing). In our experiments, the green channels (λ = 523 nm, 20 nm bandwidth) of the LEDs located at (5, 0), (-3, 4), and (-3, -4), where the first number is the row index and the second the column index, were turned on to provide illumination. The distance between the LED array and the sample was adjusted so that each illumination angle matches the NA of the objective lens (40X, 0.6 NA, Olympus LUCPLFLN), and the images were captured by a polarization camera with a pixel size of 3.45 µm (BFS-U3-51S5P-C).

Figure 4(a1) shows a bright-field intensity image with normally incident plane-wave illumination. To better illustrate the imaging resolution, a small region near the center of the image (black-boxed area) is magnified in Fig. 4(a2). Here, the bright-field microscope under normal incidence has a diffraction-limited resolution of 0.87 µm, so the smallest patterns of the phase target (Group 10, Element 6), with 0.55 µm spacing, are not resolved. Figure 4(b) shows the intensity image of the same region under the polarization-multiplexed, matched illumination scheme. The resolution is improved and more details are distinguishable. After the SIFPM recovery, the smallest patterns are visible at the doubled resolution (0.44 µm), as illustrated in Fig. 4(c). To evaluate the phase recovery accuracy quantitatively, the phase values along the cyan line are extracted and converted to physical height (the refractive index of the QPT glass is ∼1.52). Compared with the nominal height (100 nm), the recovered result is consistent with the truth. These results establish SIFPM as an effective method for quantitative phase imaging (QPI) at the resolution of the incoherent diffraction limit.
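The quoted resolution figures follow from the coherent and incoherent diffraction limits at this wavelength and NA:

```python
# Resolution limits for the 40X/0.6 NA objective at 523 nm illumination
wavelength = 0.523   # µm
na_obj = 0.6
coherent = wavelength / na_obj            # bright-field, normal incidence
incoherent = wavelength / (2 * na_obj)    # limit reached by SIFPM
print(round(coherent, 2), round(incoherent, 2))  # 0.87 0.44
```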


Fig. 4. Phase recovery results of a quantitative phase target. (a1) A bright-field intensity image with normally incident plane-wave illumination. (a2) Zoom-in image of the black-boxed area in the bright-field image. (b) Intensity image under the illumination scheme of SIFPM. (c) Phase recovery result of SIFPM. (d) Phase values along the cyan line across the recovered phase result.


4.2 Unstained biological slice

The high-resolution QPI capability of SIFPM makes it possible to visualize label-free samples. In Fig. 5, we show the quantitative phase recovery results of an unstained mouse thymus slice. The experimental setup followed the parameters above, except that a 10X/0.3 NA objective lens was used to obtain a larger FOV.


Fig. 5. Phase recovery results of an unstained mouse thymus slice. (a) Intensity image under the illumination scheme of SIFPM. (b1) and (c1) Two enlarged areas of interest in the intensity image. (b2) and (c2) Recovered quantitative phase images with SIFPM. (b3) and (c3) simulated Zernike phase contrast images using the recovered quantitative phase information.


Because the slice was embedded in a paraffin section with a matched refractive index, it is optically transparent and cannot be observed with bright-field microscopy. However, with the SIFPM illumination scheme, the phase information is visible in the captured intensity image, as shown in Fig. 5(a). In addition, the proposed method can characterize the phase quantitatively. We chose the two white-boxed areas of Fig. 5(a) for QPI. The recovered phase images [Figs. 5(b2) and (c2)] show a clearer contrast between the tissue and the background and preserve fine structures when compared with the raw images [Figs. 5(b1) and (c1)].

Using the quantitative phase images recovered with SIFPM, we can simulate other high-contrast imaging techniques for pure-phase samples, such as Zernike phase contrast microscopy [36]. Figures 5(b3) and (c3) show the simulated phase contrast images of the two areas, which are generated by adding a phase shift of π/2 to the low-frequency components of the sample.

4.3 Stained biological slice

Digital pathology requires high-resolution and wide-field digital slides to identify the cause of diseases. The appearance differences between diseased and normal cells or tissues under specific staining methods (e.g., hematoxylin and eosin, Masson) are the cornerstone of diagnosis [37]. For single-shot methods based on color multiplexing, quantitative imaging of stained samples is essentially infeasible because decoupling the intensity variations caused by wavelength-dependent absorption from those caused by phase is remarkably difficult. Nevertheless, SIFPM can quantitatively recover both the amplitude and phase at a single wavelength by using polarization-multiplexed LED illumination, and thus can image stained samples with multiple measurements at different illumination wavelengths. Note that although conventional bright-field microscopy can also provide color images of stained samples, it cannot obtain the phase information that quantifies cell morphology features and cell mass.

We implemented experiments on a stained dog esophagus slice under the illumination of R (623 nm), G (523 nm), and B (467 nm) LEDs, respectively. Figure 6 lists the imaging results at different wavelengths. From the captured images displayed in Figs. 6(a1)-(a3), we find that the absorptivity of red light is weaker than that of green or blue light, which is consistent with the red appearance of the stained slice. After recovery with SIFPM, we obtain both amplitude and phase images, as shown in Figs. 6(b1)-(b3) and Figs. 6(c1)-(c3), where the contrast between the cell structures and the background is clearer. The recovered amplitude images can be merged to generate a quasi-color intensity image, as shown in Fig. 6(d). When compared with the bright-field intensity image [Fig. 6(e)] captured with a color camera, we found some color artifacts disturbing the observation of sub-cellular structures in the merged image. The artifacts may be caused by imperfect polarization decoupling, which can be reduced by calibrating the intensity transform matrix, T, for every four adjacent pixels of the camera.


Fig. 6. Imaging results of a stained dog esophagus slice at different illumination wavelengths. (a1-a3) Captured intensity images under the illumination scheme of SIFPM using R/G/B LEDs, respectively. (b1-b3) Recovered amplitude images of SIFPM using R/G/B LEDs, respectively. (c1-c3) Recovered phase images of SIFPM using R/G/B LEDs, respectively. (d) Merged color intensity image using the three recovered amplitude images. (e) Captured bright-field intensity image with a color camera.


5. Conclusion and discussion

In this paper, we have developed a polarization-multiplexed LED array and modified the original FPM algorithm to achieve single-shot FPM with isotropic lateral resolution (SIFPM) for unstained and stained samples. By introducing polarizers into the illumination module, we provided the LED array with polarization-multiplexing capability. We analyzed the intensity transform matrix mathematically and determined that the maximum accessible number of LEDs for polarization multiplexing is three. Furthermore, a modified FPM algorithm based on qDPC initialization was also proposed to reduce the requirement for data redundancy, achieving quantitative complex amplitude imaging for general samples within one intensity measurement. We demonstrated the accuracy of SIFPM on a quantitative phase target, where the recovered phase was consistent with the nominal value. The experimental results of an unstained mouse thymus slice indicated that SIFPM could provide phase maps with a clearer contrast between the tissue and the background while preserving fine structures. Compared with color-multiplexed methods, SIFPM performs well for imaging stained samples. The recovered amplitude and phase images of a stained dog esophagus slice indicate that SIFPM has the potential for quantitative characterization of pathologic slides. The merged color image and retrieved phase image provide additional morphological features that are useful for pathologic diagnosis.

In this work, we multiplexed the polarization features of light to enable SIFPM. It should be noted that some samples may alter the polarization states of the illuminating light as it passes through them. In that case, the measured intensity can no longer be treated as the incoherent sum of the intensities of the three polarized illuminations. This limitation could be addressed by pre-calibrating the polarization transfer properties of the sample (e.g., its index ellipsoid or Jones matrix) and incorporating them into the intensity transform matrix. The polarization-multiplexing capability of the developed LED array is not specific to SIFPM. It is expected to synergize with other LED-array-based computational imaging methods, such as intensity diffraction tomography, to facilitate high-speed and high-throughput imaging.

Appendix A: Calibration of intensity transform matrix

In Eq. (3), the intensity transform matrix, T, is generated by the idealized Malus Law. However, the extinction ratios of polarizers can vary at different illumination angles and wavelengths, and can be affected by the direction of the polarization films covered on the LED array as well. Therefore, T is a function of illumination angle (θ) and wavelength (λ), which needs to be pre-calibrated for better polarization decoupling.

To obtain the response of each polarization channel for a specific combination of objective (i.e., specific θ) and wavelength, we turn on the three pre-designed LEDs sequentially and capture the corresponding three intensity images as the input of the calibration method. After under-sampling the three intensity images, we extract nine intensity images that are used to calculate the elements of T as

$$T_{{\theta _l}}^{{\theta _p}}(\theta ,\lambda ) = \frac{{\sum\limits_{m,n = 1}^{M,N} {I_{{\theta _l}}^{{\theta _p}}(m,n)} }}{{MN}},\quad ({\theta _l},{\theta _p} = 0^\circ ,45^\circ ,135^\circ ),$$
where ${\theta _l}$ and ${\theta _p}$ represent the polarization angles of the films covering the LED array and of the camera pixels, respectively. M and N are the numbers of pixels of $I_{{\theta _l}}^{{\theta _p}}$ along the x and y directions, and (m, n) is the corresponding position index. In fact, $T_{{\theta _l}}^{{\theta _p}}$ provides a linear weighting of the contribution of each LED to each polarization measurement. Once every $T_{{\theta _l}}^{{\theta _p}}$ has been measured, they can be used to form T as
$$\mathbf{T}(\theta ,\lambda ) = \left[ {\begin{array}{ccc} {T_{0^\circ }^{0^\circ }}&{T_{45^\circ }^{0^\circ }}&{T_{135^\circ }^{0^\circ }}\\ {T_{0^\circ }^{45^\circ }}&{T_{45^\circ }^{45^\circ }}&{T_{135^\circ }^{45^\circ }}\\ {T_{0^\circ }^{135^\circ }}&{T_{45^\circ }^{135^\circ }}&{T_{135^\circ }^{135^\circ }} \end{array}} \right].$$

Note that each element of T is a function of θ and λ, which are omitted in the matrix for notational simplicity. If one changes the objective or the illumination wavelength, T needs to be calibrated again. This step is important for reducing artifacts in the reconstruction results.
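The calibration above can be sketched in Python. The 2×2 super-pixel layout encoded in `CHANNEL_OFFSETS` and the per-column normalization are assumptions for illustration; the actual pixel layout depends on the polarization camera used.

```python
import numpy as np

# Hypothetical super-pixel layout [[90, 45], [135, 0]] degrees; the real
# layout depends on the specific polarization camera (an assumption here).
CHANNEL_OFFSETS = {0: (1, 1), 45: (0, 1), 135: (1, 0)}  # (row, col) offsets

def calibrate_T(seq_images):
    """Build the 3x3 intensity transform matrix T from three raw mosaic
    images captured while turning on the 0/45/135-degree LEDs one at a time.

    seq_images: dict mapping LED polarizer angle -> raw mosaic image.
    T[p, l] is the mean response of pixel channel theta_p to LED channel
    theta_l; each column is normalized by its diagonal element (an
    illustrative choice so the ideal T has a unit diagonal).
    """
    angles = (0, 45, 135)
    T = np.zeros((3, 3))
    for l, theta_l in enumerate(angles):
        raw = seq_images[theta_l]
        for p, theta_p in enumerate(angles):
            r0, c0 = CHANNEL_OFFSETS[theta_p]
            T[p, l] = raw[r0::2, c0::2].mean()   # under-sample, then average
        T[:, l] /= T[l, l]                        # normalize per LED channel
    return T
```

For ideal polarizers obeying Malus's law, this procedure reproduces the idealized matrix of Eq. (3); with real films it captures the angle- and wavelength-dependent extinction ratios.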

To evaluate the decoupling effectiveness clearly, we compared the decoupled images under polarization-multiplexed illumination with the images under sequential illumination. Figure 7(a1) shows the intensity image under multiplexed illumination, where the exposure time is controlled to avoid overexposure. For more accurate decoupling, the black-boxed central region shown in Fig. 7(a2) is chosen to calibrate T. After processing according to Eq. (3), the three decoupled images shown in Fig. 7(b) are extracted. Under the same collection conditions, we turned on the three LEDs sequentially and captured intensity images as references, as shown in Fig. 7(c). The decoupled images are consistent with the references, although some background fluctuations are unavoidable. To compare quantitatively, we also calculated the relative RMSE of each map, which remains around 7%. Although the RMSE can be reduced by calibrating T for every four adjacent pixels of the camera, the computation time for thousands of T matrices increases dramatically compared with that of a single T. Realizing better decoupling effectiveness efficiently will be part of our future work.
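The decoupling of Eq. (3) amounts to a per-pixel linear solve with the calibrated T. A minimal sketch follows; the non-negativity clipping and the exact `relative_rmse` definition are our assumptions for illustration, not specified in the paper.

```python
import numpy as np

def decouple(channels, T):
    """Recover the three single-LED intensity images from the three
    polarization-channel images by inverting the intensity transform
    matrix T; `channels` is a (3, H, W) array ordered 0/45/135 degrees."""
    flat = channels.reshape(3, -1)
    sources = np.linalg.solve(T, flat)            # per-pixel linear solve
    # Clip small negative values caused by noise (illustrative choice).
    return np.clip(sources, 0, None).reshape(channels.shape)

def relative_rmse(estimate, reference):
    """Relative RMSE used to quantify the decoupling error."""
    return np.sqrt(np.mean((estimate - reference) ** 2)) / reference.mean()
```

Since the 3×3 matrix in Eq. (3) is well conditioned (determinant 1/2 in the ideal case), a direct solve is stable; the residual background fluctuations then show up directly in the relative RMSE.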


Fig. 7. Comparison between the decoupled images under polarization-multiplexed illumination and the images captured under sequential illumination. (a1) Captured intensity image under polarization-multiplexed illumination. (a2) Intensity image used to calibrate intensity transform matrix, T. (b) Decoupled intensity images from (a2). (c) Captured intensity images under sequential illumination.


Appendix B: FPM theory

Considering a thin sample with a complex transmittance function $o(x,y) = {e^{\mu (x,y) + i\phi (x,y)}}$ (x and y denote the spatial coordinates, µ(x, y) is the absorption distribution, and ϕ(x, y) is the phase distribution) illuminated by a single LED, the intensity image at the sensor plane can be modeled as the squared magnitude of the sample spectrum filtered by the pupil aperture

$${I_j}(x,y) = {|{{{\cal F}^{ - 1}}\{ P(u,v)O(u - {u_j},v - {v_j})\} } |^2},$$
where ${{\cal F}^{ - 1}}\{{} \}$ is the inverse Fourier transform operator, $P(u,v)$ represents the pupil function serving as a low-pass filter (support domain constraint), and (u, v) denotes the spatial frequency coordinates in the Fourier domain. $O(u,v) = {\cal F}\{{o(x,y)} \}$ is the Fourier transform of o(x, y), and $({u_j},{v_j})$ is the illumination spatial frequency of the jth LED element. By sequentially turning on LEDs with different $({u_j},{v_j})$, the filtering center of $O(u,v)$ shifts from the origin to $({u_j},{v_j})$ correspondingly. Therefore, spatial frequency information beyond the incoherent diffraction limit ($2N{A_{obj}}/\lambda $, where $N{A_{obj}}$ is the NA of the objective lens and λ is the illumination wavelength) can be acquired if the illumination spatial frequency $({u_j},{v_j})$ exceeds $N{A_{obj}}/\lambda $.
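This forward model can be sketched numerically. Here the spectrum shift by $({u_j},{v_j})$ is approximated with an integer-pixel `np.roll`, an assumption made purely for illustration (a real simulation would map illumination NA to sub-pixel frequency shifts).

```python
import numpy as np

def led_image(obj, pupil, shift):
    """Simulate the low-resolution intensity image for one LED:
    shift the object spectrum by the LED's illumination frequency,
    low-pass with the pupil, and take the squared magnitude.
    `shift` is the (row, col) spectrum shift in pixels."""
    O = np.fft.fftshift(np.fft.fft2(obj))
    O_shift = np.roll(O, shift, axis=(0, 1))      # spectrum centered at (uj, vj)
    field = np.fft.ifft2(np.fft.ifftshift(pupil * O_shift))
    return np.abs(field) ** 2

# Toy example: a weak-phase object and a circular pupil (illustrative values).
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (np.sqrt(xx ** 2 + yy ** 2) < 12).astype(float)
obj = np.exp(1j * 0.3 * np.exp(-(xx ** 2 + yy ** 2) / 200.0))
img = led_image(obj, pupil, (5, -3))
```

An oblique LED (nonzero `shift`) moves high-frequency content of the object into the pupil passband, which is exactly the mechanism FPM exploits to surpass the coherent resolution limit.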

All acquired LR images are then synthesized in the Fourier domain to reach an HR solution. The process is composed of five steps. First, initialize the guesses of the HR spectrum $O(u,v)$ and pupil function $P(u,v)$ to start the iteration. Second, use the current solution of $O(u,v)$ to form an LR image estimation as

$$o_j^e(x,y) = {{\cal F}^{ - 1}}\{ O(u - {u_j},v - {v_j})P(u,v)\} .$$

Third, replace the estimated amplitude with the square root of the acquired image, and keep the phase unchanged to form an updated LR image as

$$o_{_j}^u(x,y) = \sqrt {{I_j}(x,y)} \frac{{o_j^e(x,y)}}{{|o_j^e(x,y)|}}, $$
where ${I_j}(x,y)$ functions as the amplitude constraint in the spatial domain. Subsequently, the updated image is transformed to the Fourier domain for corresponding sub-spectrum updating, which is given by:
$$\begin{aligned} O(u - {u_j},v - {v_j}) &= O(u - {u_j},v - {v_j}) + \alpha \frac{{{P^\ast }(u,v)}}{{|{P}(u,v)|_{\max }^2}}\Delta {O_j},\\ P(u,v) &= P(u,v) + \beta \frac{{{O^\ast }(u - {u_j},v - {v_j})}}{{|O(u - {u_j},v - {v_j})|_{\max }^2}}\Delta {O_j}, \end{aligned}$$
where α and β are the iterative step sizes, and $\Delta {O_j} = {\cal F}\{ o_{_j}^u(x,y)\} - \mathrm{{\cal F}}\{ o_{_j}^e(x,y)\}$ is an auxiliary function.

In the fourth step, the remaining LR images are used to successively constrain and update the other spectrum regions. Finally, the whole iteration process is repeated several times until a stable solution is reached, which is then transformed into the spatial domain to obtain HR amplitude and phase images. Cell morphology and cell mass [38] information contained in these images can be used for subsequent quantitative studies [39].
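The five steps can be sketched as follows. This is a simplified single-grid sketch following the update equations above: real FPM synthesizes onto an upsampled HR grid, the integer-pixel `np.roll` shifts and the `1e-12` stabilizer are our assumptions, and the step sizes α, β are illustrative.

```python
import numpy as np

def fpm_reconstruct(images, shifts, pupil, n_iter=20, alpha=1.0, beta=1.0):
    """Sketch of the five-step FPM synthesis described above.

    images: list of low-resolution intensity images (same size as the HR
    grid here, for simplicity).
    shifts: list of (du, dv) pixel shifts for each LED's spatial frequency.
    """
    # Step 1: initialize the HR spectrum from one image and the pupil.
    O = np.fft.fftshift(np.fft.fft2(np.sqrt(images[0]).astype(complex)))
    P = pupil.astype(complex)
    for _ in range(n_iter):
        for I_j, (du, dv) in zip(images, shifts):
            O_j = np.roll(O, (du, dv), axis=(0, 1))
            # Step 2: form the LR estimate for this LED.
            o_e = np.fft.ifft2(np.fft.ifftshift(O_j * P))
            # Step 3: replace the amplitude with the measurement, keep the phase.
            o_u = np.sqrt(I_j) * np.exp(1j * np.angle(o_e))
            dO = np.fft.fftshift(np.fft.fft2(o_u) - np.fft.fft2(o_e))
            # Step 4: update the shifted sub-spectrum and the pupil.
            O_j = O_j + alpha * np.conj(P) / (np.abs(P).max() ** 2 + 1e-12) * dO
            P = P + beta * np.conj(O_j) / (np.abs(O_j).max() ** 2 + 1e-12) * dO
            O = np.roll(O_j, (-du, -dv), axis=(0, 1))
    # Step 5: back to the spatial domain for HR amplitude and phase.
    o = np.fft.ifft2(np.fft.ifftshift(O))
    return np.abs(o), np.angle(o)
```

Because the correction `dO` is weighted by the conjugate pupil, only the pupil-supported sub-region of the spectrum is effectively updated at each LED position, which is what stitches the overlapping sub-apertures together.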

Funding

National Natural Science Foundation of China (62275020).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. X. Ou, R. Horstmeyer, C. Yang, et al., “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]  

3. G. Zheng, C. Shen, S. Jiang, et al., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

4. S. Jiang, P. Song, T. Wang, et al., “Spatial- and Fourier-domain ptychography for high-throughput bio-imaging,” Nat. Protoc. 18(7), 2051–2083 (2023). [CrossRef]  

5. R. Horstmeyer, X. Ou, G. Zheng, et al., “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015). [CrossRef]  

6. Y. Fan, J. Sun, Y. Shu, et al., “Efficient Synthetic Aperture for Phaseless Fourier Ptychographic Microscopy with Hybrid Coherent and Incoherent Illumination,” Laser Photonics Rev. 17(3), 2200201 (2023). [CrossRef]  

7. J. Chung, X. Ou, R. P. Kulkarni, et al., “Counting white blood cells from a blood smear using Fourier ptychographic microscopy,” PLoS One 10(7), e0133489 (2015). [CrossRef]  

8. R. Horstmeyer, J. Chung, X. Ou, et al., “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

9. C. Zuo, J. Sun, J. Li, et al., “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Laser. Eng. 128, 106003 (2020). [CrossRef]  

10. S. Zhou, J. Li, J. Sun, et al., “Transport-of-intensity Fourier ptychographic diffraction tomography: defying the matched illumination condition,” Optica 9(12), 1362–1373 (2022). [CrossRef]  

11. P. Thibault, M. Dierolf, O. Bunk, et al., “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]  

12. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

13. L. Bian, J. Suo, G. Situ, et al., “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014). [CrossRef]  

14. S. Dong, R. Shiradkar, P. Nanda, et al., “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]  

15. L. Tian, X. Li, K. Ramchandran, et al., “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

16. L. Tian, Z. Liu, L. Yeh, et al., “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

17. S. Dong, Z. Bian, R. Shiradkar, et al., “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]  

18. K. Guo, S. Dong, P. Nanda, et al., “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23(5), 6171–6180 (2015). [CrossRef]  

19. J. Sun, Q. Chen, Y. Zhang, et al., “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016). [CrossRef]  

20. X. He, C. Liu, and J. Zhu, “Single-shot Fourier ptychography based on diffractive beam splitting,” Opt. Lett. 43(2), 214–217 (2018). [CrossRef]  

21. B. Lee, J. Hong, D. Yoo, et al., “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5(8), 976–983 (2018). [CrossRef]  

22. J. Sun, Q. Chen, J. Zhang, et al., “Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography,” Opt. Lett. 43(14), 3365–3368 (2018). [CrossRef]  

23. N. Zhou, J. Li, J. Sun, et al., “Single-exposure 3D label-free microscopy based on color-multiplexed intensity diffraction tomography,” Opt. Lett. 47(4), 969–972 (2022). [CrossRef]  

24. J. Li, N. Zhou, J. Sun, et al., “Transport of intensity diffraction tomography with non-interferometric synthetic aperture for three-dimensional label-free microscopy,” Light: Sci. Appl. 11(1), 154 (2022). [CrossRef]  

25. J. Li, A. Matlock, Y. Li, et al., “High-speed in vitro intensity diffraction tomography,” Adv. Photonics 1(6), 066004 (2019). [CrossRef]  

26. Y. Shu, J. Sun, J. Lyu, et al., “Adaptive optical quantitative phase imaging based on annular illumination Fourier ptychographic microscopy,” PhotoniX 3, 24 (2022). [CrossRef]  

27. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

28. J. R. Fienup and C. C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3(11), 1897–1907 (1986). [CrossRef]  

29. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23(9), 11394–11403 (2015). [CrossRef]  

30. Y. Fan, J. Sun, Q. Chen, et al., “Optimal illumination scheme for isotropic quantitative differential phase contrast microscopy,” Photonics Res. 7(8), 890–904 (2019). [CrossRef]  

31. S. Liu, C. Zheng, Q. Hao, et al., “Single-shot quantitative differential phase contrast imaging combined with programmable polarization multiplexing illumination,” Opt. Lett. 48(13), 3559–3562 (2023). [CrossRef]  

32. H. Hopkins, “On the diffraction theory of optical images,” Proc. R. Soc. London, Ser. A 217(1130), 408–432 (1953). [CrossRef]  

33. Y. Fan, J. Sun, Y. Shu, et al., “Accurate quantitative phase imaging by differential phase contrast with partially coherent illumination: beyond weak object approximation,” Photonics Res. 11(3), 442–455 (2023). [CrossRef]  

34. J. Li, A. Matlock, Y. Li, et al., “Resolution-enhanced intensity diffraction tomography in high numerical aperture label-free microscopy,” Photonics Res. 8(12), 1818–1826 (2020). [CrossRef]  

35. P. L. P. Dillon, D. M. Lewis, and F. G. Kaspar, “Color Imaging System Using a Single CCD Area Array,” IEEE J. Solid-State Circuits 13(1), 28–33 (1978). [CrossRef]  

36. F. Zernike, “How I discovered phase contrast,” Science 121(3141), 345–349 (1955). [CrossRef]  

37. F. Ghaznavi, A. Evans, A. Madabhushi, et al., “Digital imaging in pathology: whole-slide imaging and beyond,” Annu. Rev. Pathol.: Mech. Dis. 8(1), 331–359 (2013). [CrossRef]  

38. G. Popescu, Y. Park, N. Lue, et al., “Optical imaging of cell mass and growth dynamics,” Am. J. Physiol. 295(2), C538–C544 (2008). [CrossRef]  

39. A. R. Cohen, F. L. Gomes, B. Roysam, et al., “Computational prediction of neural progenitor cell fates,” Nat. Methods 7(3), 213–218 (2010). [CrossRef]  


Figures (7)

Fig. 1.
Fig. 1. System diagram of polarization-multiplexed LED illumination. (a) Hardware modifications for implementing SIFPM. Three LEDs covered with 0°/45°/135 polarization films are turned on to illuminate the sample simultaneously. The distance between the sample and LED array is tuned so that the illumination NA matches the objective NA. (b)-(d) Imaging results of an unlabeled pure-phase sample under different illumination NAs.
Fig. 2.
Fig. 2. Flow chart of SIFPM. (a) Polarization image acquired with the polarization camera. (b) Three intensity images of 0°/45°/135° polarization channels extracted by under-sampling. (c) Intensity images illuminated by three different LEDs are separated by polarization decoupling. (d) Intensity spectra of the three intensity images. (e1) and (e2) Weak phase transfer functions and weak amplitude transfer functions. (f) Initial guesses obtained by Tikhonov deconvolution. (g1) and (g2) Refined amplitude and phase images with FPM algorithm.
Fig. 3.
Fig. 3. Comparison of imaging performance between qDPC and SIFPM for strong samples. (a) Ground truth of the amplitude and phase of a Siemens star target. (b) Recovered amplitude and phase images of the Siemens star target with qDPC. (c) Recovered amplitude and phase images of the Siemens star target with SIFPM. (d) and (e) Profile curves of amplitude and phase of the Siemens star target. (f) Ground truth of the amplitude and phase of a strong biological sample. (g) Recovered amplitude and phase images of the biological sample with qDPC. (h) Recovered amplitude and phase images of the biological sample with SIFPM.

Equations (13)


$$\min_{O(u,v)} \sum\limits_j \sum\limits_{x,y} {\left| {\sqrt {{I_j}(x,y)} - |{{\cal F}^{ - 1}}\{ O(u - {u_j},v - {v_j})P(u,v)\} |} \right|^2},$$
$$\left[ {\begin{array}{c} {{I_{0^\circ }}}\\ {{I_{45^\circ }}}\\ {{I_{90^\circ }}}\\ {{I_{135^\circ }}} \end{array}} \right] = \mathbf{T}\left[ {\begin{array}{c} {{S_{0^\circ }}}\\ {{S_{45^\circ }}}\\ {{S_{90^\circ }}}\\ {{S_{135^\circ }}} \end{array}} \right] = \left[ {\begin{array}{cccc} 1&{1/2}&0&{1/2}\\ {1/2}&1&{1/2}&0\\ 0&{1/2}&1&{1/2}\\ {1/2}&0&{1/2}&1 \end{array}} \right]\left[ {\begin{array}{c} {{S_{0^\circ }}}\\ {{S_{45^\circ }}}\\ {{S_{90^\circ }}}\\ {{S_{135^\circ }}} \end{array}} \right],$$
$$\left[ {\begin{array}{c} {{I_{0^\circ }}}\\ {{I_{45^\circ }}}\\ {{I_{135^\circ }}} \end{array}} \right] = \mathbf{T}\left[ {\begin{array}{c} {{S_{0^\circ }}}\\ {{S_{45^\circ }}}\\ {{S_{135^\circ }}} \end{array}} \right] = \left[ {\begin{array}{ccc} 1&{1/2}&{1/2}\\ {1/2}&1&0\\ {1/2}&0&1 \end{array}} \right]\left[ {\begin{array}{c} {{I_1}}\\ {{I_2}}\\ {{I_3}} \end{array}} \right],$$
$$o(x,y) = {e^{\mu (x,y) + i\varphi (x,y)}} \approx 1 + \mu (x,y) + i\varphi (x,y).$$
$$\hat{I}_j(u,v) = {B_j}\delta (u,v) + {H_{\mu ,j}}(u,v)\hat{\mu}(u,v) + {H_{\varphi ,j}}(u,v)\hat{\varphi}(u,v),$$
$$\begin{aligned} {H_{\mu ,j}}(u,v) &= P(u + {u_j},v + {v_j}) + P(u - {u_j},v - {v_j}),\\ {H_{\varphi ,j}}(u,v) &= i[P(u + {u_j},v + {v_j}) - P(u - {u_j},v - {v_j})]. \end{aligned}$$
$$\begin{aligned} \mu &= {{\cal F}^{ - 1}}\left\{ \frac{1}{A}\left[ \left( \sum\limits_j |{H_{\varphi ,j}}{|^2} + {\gamma _\varphi } \right)\left( \sum\limits_j H_{\mu ,j}^\ast \hat{I}_{B,j} \right) - \left( \sum\limits_j H_{\mu ,j}^\ast {H_{\varphi ,j}} \right)\left( \sum\limits_j H_{\varphi ,j}^\ast \hat{I}_{B,j} \right) \right] \right\},\\ \varphi &= {{\cal F}^{ - 1}}\left\{ \frac{1}{A}\left[ \left( \sum\limits_j |{H_{\mu ,j}}{|^2} + {\gamma _\mu } \right)\left( \sum\limits_j H_{\varphi ,j}^\ast \hat{I}_{B,j} \right) - \left( \sum\limits_j H_{\varphi ,j}^\ast {H_{\mu ,j}} \right)\left( \sum\limits_j H_{\mu ,j}^\ast \hat{I}_{B,j} \right) \right] \right\}, \end{aligned}$$