## Abstract

Differential Phase Contrast (DPC) microscopy is a practical method for recovering quantitative phase from intensity images captured with different source patterns in an LED array microscope. Being a partially coherent imaging method, DPC does not suffer from speckle artifacts and achieves 2× better resolution than coherent methods. Like all imaging systems, however, DPC is susceptible to aberrations. Here, we propose a method of algorithmic self-calibration for DPC where we simultaneously recover the complex-field of the sample *and* the spatially-variant aberrations of the system, using 4 images with different illumination source patterns. The resulting phase reconstructions are digitally aberration-corrected.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Quantitative Phase Imaging (QPI) techniques enable non-invasive analysis of *in vitro* biological samples. In contrast to *qualitative* phase imaging (*e*.*g*. Zernike Phase Contrast [1] and Differential Interference Contrast [2]), QPI methods recover the phase delay caused by the sample, decoupled from absorption effects. Having full complex-field information makes it possible to digitally remove known aberrations [3] or to digitally refocus in post-processing [4]. System aberrations are typically measured once as a pre-calibration step, by conducting QPI with a guide star or ground-truth object. Then the aberrations can be computationally removed from subsequent measurements by a deconvolution process [3]. This strategy, however, requires precisely known calibration objects and will be sensitive to calibration drift and sample-induced aberrations.

Here, we propose a method of *algorithmic self-calibration* for Differential Phase Contrast (DPC) microscopy [5–10], where we avoid the need for pre-calibration by jointly recovering both the sample’s complex-field *and* the spatially-varying aberrations of the system, directly from raw images. The method is an extension of illumination-based DPC microscopy [6, 7], where images are captured with different source patterns, then a reconstruction algorithm recovers the complex-field. Experiments are implemented in a commercial brightfield microscope with a low-cost programmable LED array light source, such that patterns can be switched quickly with no moving parts [7, 11–13]. Among the wide variety of QPI methods, those which use partially coherent illumination, like DPC, are advantageous since they provide 2× better resolution, more light throughput and reduced speckle [7, 14–16], as compared to coherent methods.

The concept of *algorithmic self-calibration* – solving jointly for the reconstructed image and the calibration parameters – has proven particularly useful in coherent computational imaging. Examples include probe retrieval in ptychography [17–20], source recovery for through-focus phase imaging [16, 21], pupil and source recovery for Fourier Ptychography Microscopy (FPM) [22–24], and calibration-invariant inverse scattering models [25]. The standard approach to self-calibration uses alternating projections (AP), which optimizes multiple variables serially, keeping the other parameters fixed during each sub-iteration. Because the problem is non-convex, AP carries no guarantee of global convergence, but in practice it works well given sufficiently diverse data. Self-calibration of aberrations has been demonstrated previously in an LED array microscope for the cases of through-focus phase [26] and FPM [13, 22, 27–29], but required a large number of images to be captured. For example, a typical FPM setup [30] uses approximately 5× as many images as our system to achieve equivalent resolution.

Here we propose an AP framework that uses 4 captured images for simultaneous phase retrieval and digital correction of spatially-varying aberrations (Fig. 1). Three of the measurements are partially-coherent conventional DPC images (with rotated half-circle sources); these provide good phase contrast, but poor aberration contrast. The fourth image uses single-LED (spatially coherent) illumination; this provides aberration contrast, but alone cannot resolve the ambiguity between complex-field and pupil aberration [31]. Using both partially-coherent and coherent images together improves sensitivity to aberrations without sacrificing the benefits of partial coherence. We model the aberrations parametrically (with a Zernike basis [26, 32]) to dramatically reduce the number of unknowns. In addition, by segmenting the field-of-view (FoV), we are able to recover and digitally correct for spatially-varying aberrations across the FoV.

## 2. Joint estimation of complex-field and aberrations

Our LED array microscope and data capture scheme are shown in Fig. 1. Quantitative DPC (without aberration correction) requires a minimum of 3 intensity measurements to reconstruct phase [10], though only two quantities (amplitude and phase) are reconstructed at each pixel. Hence, there is significant redundancy in the data which may, in principle, be used to solve for aberrations. Unfortunately, intensity images formed by partially-coherent illumination do not exhibit significant aberration contrast. We therefore modify the capture scheme to add one additional measurement with spatially-coherent illumination (a single on-axis LED). The example in Fig. 1 has a different aberration in each quadrant of the FoV, and only the single-LED image displays visible differences in contrast for each region. An off-axis LED would also provide the necessary coherent contrast, but the on-axis LED provides higher intensity and better signal-to-noise ratio (SNR). Having achieved both phase and aberration contrast with 1 on-axis and 3 half-circle sources, we use this dataset to jointly recover both spatially-varying aberrations and the complex-field of the sample (with resolution set by the incoherent diffraction limit).
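For concreteness, the four illumination patterns can be expressed as boolean masks over the LED grid; a minimal numpy sketch (the 15×15 grid and 0.05 NA spacing are illustrative stand-ins, not the experimental layout):

```python
import numpy as np

def source_patterns(na_x, na_y, na_illum=0.4):
    """Return the 4 illumination patterns used for joint estimation:
    three rotated half-circle sources plus one on-axis single LED.

    na_x, na_y: 2D arrays of each LED's illumination NA coordinates.
    """
    inside = na_x**2 + na_y**2 <= na_illum**2        # brightfield LEDs only
    top = inside & (na_y > 0)                        # half-circle sources
    bottom = inside & (na_y < 0)
    right = inside & (na_x > 0)
    on_axis = (na_x == 0) & (na_y == 0)              # single coherent LED
    return [top, bottom, right, on_axis]

# example: a 15x15 LED grid with 0.05 NA spacing between LEDs
coords = (np.arange(15) - 7) * 0.05
NAX, NAY = np.meshgrid(coords, coords)
patterns = source_patterns(NAX, NAY)
```

Summing the intensity images of the LEDs inside each half-circle mask yields the three DPC measurements, while the on-axis mask selects the single coherent measurement.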

The process of estimating the sample’s complex-field and the system’s aberrations simultaneously is a large-scale, nonlinear, non-convex joint estimation problem. To simplify, DPC typically makes a weak scattering approximation which linearizes the forward model. This approximation is generally valid for optically thin or index-matched samples, such as biological cells. Intensity measurements can then be related to the absorption (*µ*) and phase (*ϕ*) of the sample [7, 10] by simple convolutions. As derived in Appendix A, the DPC forward model in matrix form is:

$$\mathbf{F} I_{\mathrm{n}} = \mathrm{diag}(H_{\mu})\,\mathbf{F}\mu + \mathrm{i}\cdot\mathrm{diag}(H_{\phi})\,\mathbf{F}\phi. \qquad (1)$$

Here, diag(*x*) denotes a diagonal matrix with diagonal values *x*, **F** and **F**^{−1} represent the DFT and inverse DFT matrices, *I*_{n} is the vectorized normalized measured intensity (with the DC term subtracted), and *H*_{µ} and *H*_{ϕ} are vectorized transfer functions for absorption and phase. If the source satisfies the Köhler illumination configuration, the transfer functions can be numerically evaluated based on Eq. (13) and the cross-correlation property of Fourier transforms:

$$H_{\mu}(\mathbf{u}) = \frac{1}{I_{\mathrm{o}}}\int S(\mathbf{u}^{\prime})\left[P^{*}(\mathbf{u}^{\prime})\,P(\mathbf{u}+\mathbf{u}^{\prime}) + P(\mathbf{u}^{\prime})\,P^{*}(\mathbf{u}^{\prime}-\mathbf{u})\right]\mathrm{d}\mathbf{u}^{\prime}, \qquad (2)$$

$$H_{\phi}(\mathbf{u}) = \frac{1}{I_{\mathrm{o}}}\int S(\mathbf{u}^{\prime})\left[P^{*}(\mathbf{u}^{\prime})\,P(\mathbf{u}+\mathbf{u}^{\prime}) - P(\mathbf{u}^{\prime})\,P^{*}(\mathbf{u}^{\prime}-\mathbf{u})\right]\mathrm{d}\mathbf{u}^{\prime}, \qquad (3)$$

where *I*_{o} is the total intensity of the source passing through the system, *S* and *P* are the vectorized source and pupil, and *O* = **F** diag(*S*) *P* is the intermediate product used to evaluate these cross-correlations with FFTs. Typically, the space-invariant exit pupil *P* is a circular function with its radius determined by numerical aperture (NA) and wavelength, *λ*. The phase of *P* is the pupil aberration we wish to recover, modeled as a weighted sum of Zernike modes on spatial frequency coordinate (**u**) [32]:

$$\angle P(\mathbf{u}) = \sum_{m=0}^{M} c_{m} Z_{m}(\mathbf{u}), \qquad (4)$$

where *M* is the total number of Zernike modes and **c** contains the coefficients, *c*_{m}, of each orthogonal mode *Z*_{m}. To recover spatially-varying aberrations, we solve for individual pupil aberrations at different spatial regions across the FoV and assume the aberrations are locally space-invariant within each region. Given Eqs. (1)–(4), an objective function for the joint optimization of absorption, phase and pupil aberrations can be formulated as:

$$\min_{\mu,\,\phi,\,\mathbf{c}}\; \sum_{s=1}^{N_{s}} \left\Vert \mathbf{F} I_{\mathrm{n},s} - \mathrm{diag}\!\left(H_{\mu,s}(\mathbf{c})\right)\mathbf{F}\mu - \mathrm{i}\cdot\mathrm{diag}\!\left(H_{\phi,s}(\mathbf{c})\right)\mathbf{F}\phi \right\Vert_{2}^{2} + \tau R(\mu,\phi), \qquad (5)$$

where *s* is the measurement index of each corresponding source pattern, *N*_{s} is the total number of measurements, ║·║_{2} represents an *ℓ*_{2} norm, *τ* is a regularization parameter and *R* is a regularization term that helps mitigate noise artifacts. It is inferred from Eq. (5) that the aberration coefficients **c** are coupled with both *µ* and *ϕ*, so simultaneously optimizing all variables does not guarantee convergence. An alternating projections update strategy instead provides a non-divergence guarantee, as was previously used for phase-from-focus joint source recovery [21]. Similarly, we iteratively solve for both the complex object and system aberrations as two sub-problems.
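Given Eqs. (1)–(4), the pupil and transfer functions can be evaluated numerically. Below is a minimal numpy sketch (the grid, sampling, and the two Zernike-style modes are illustrative assumptions, not the paper's full orthonormal basis) that builds an aberrated pupil and evaluates the absorption/phase transfer functions by FFT-based cross-correlation:

```python
import numpy as np

def aberrated_pupil(n, dx, na, wavelength, c_defocus, c_coma):
    """Circular pupil with a Zernike-style phase; only two illustrative
    modes (defocus, x-coma) are included here."""
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    cutoff = na / wavelength
    ux, uy = FX / cutoff, FY / cutoff            # normalized pupil coordinates
    rho2 = ux**2 + uy**2
    circ = rho2 <= 1.0
    Z_defocus = np.sqrt(3) * (2 * rho2 - 1)      # Zernike defocus
    Z_coma = np.sqrt(8) * (3 * rho2 - 2) * ux    # Zernike x-coma
    return circ * np.exp(1j * (c_defocus * Z_defocus + c_coma * Z_coma))

def wotf(source, pupil):
    """Absorption/phase weak-object transfer functions, evaluated with the
    Fourier cross-correlation theorem: c(u) = sum_u' S(u') P*(u') P(u+u')."""
    f = source * pupil                           # S P, so conj(.) gives S P*
    c = np.fft.fft2(np.conj(np.fft.ifft2(f)) * np.fft.ifft2(pupil)) * pupil.size
    c_mirror = np.conj(np.roll(c[::-1, ::-1], (1, 1), axis=(0, 1)))  # c*(-u)
    I_o = (source * np.abs(pupil) ** 2).sum()    # DC (total) intensity
    return (c + c_mirror) / I_o, (c - c_mirror) / I_o
```

With a symmetric brightfield source and an unaberrated pupil, the phase transfer function vanishes (no phase contrast), while a half-circle source makes it nonzero; this asymmetry is exactly what DPC exploits.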

Our alternating projections algorithm initializes **c** to zero. At the start of each iteration, the Zernike coefficients **c** are held fixed and a DPC deconvolution sub-procedure (Section 2.1) is performed to update the estimates of amplitude, exp(*µ*), and phase, *ϕ*. This new complex-field estimate is then held fixed while an aberration estimation procedure (Section 2.2) is performed. After updating the aberration estimate, a new iteration begins. Eventually, the objective function converges to a stationary point, giving the final estimates of amplitude, phase, and aberration coefficients. In general, this optimization strategy works as long as there exists enough diversity and redundancy in the measurements.
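To make the alternating structure concrete, here is a toy self-calibration loop (blind per-element gain calibration, a stand-in for the object/aberration sub-problems; all names, sizes, and the seed are illustrative). Each sub-step is an exact least-squares solve with the other variable held fixed, so the residual is monotonically non-increasing:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, n_meas = 40, 8, 4
A = [rng.standard_normal((m, n)) for _ in range(n_meas)]  # known "forward models"
x_true = rng.standard_normal(n)                           # unknown "object"
g_true = 1 + 0.1 * rng.standard_normal(m)                 # unknown "calibration"
y = [g_true * (A_s @ x_true) for A_s in A]                # measurements

# alternating projections: fix g, solve for x; then fix x, solve for g
g = np.ones(m)
for _ in range(100):
    # x-update: stacked linear least squares with calibration fixed
    G = np.vstack([g[:, None] * A_s for A_s in A])
    x = np.linalg.lstsq(G, np.concatenate(y), rcond=None)[0]
    # g-update: per-element least squares with the object fixed
    num = sum(y_s * (A_s @ x) for y_s, A_s in zip(y, A))
    den = sum((A_s @ x) ** 2 for A_s in A)
    g = num / den

# remove the inherent scale ambiguity (g, x) -> (g/a, a*x) before comparing
scale = g_true[0] / g[0]
```

As in the full problem, success relies on the diversity of the measurements: a single mask `A` would leave the bilinear ambiguity unresolved.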

#### 2.1. DPC phase retrieval sub-procedure

The phase retrieval sub-procedure amounts to solving the conventional DPC inverse problem [7, 10], except that it incorporates the current estimate of the Zernike coefficients **c**_{k} at iteration *k*:

$$\mu_{k+1},\ \phi_{k+1} = \arg\min_{\mu,\,\phi}\; \sum_{s=1}^{N_{s}} \left\Vert \mathbf{F} I_{\mathrm{n},s} - \mathrm{diag}\!\left(H_{\mu,s}(\mathbf{c}_{k})\right)\mathbf{F}\mu - \mathrm{i}\cdot\mathrm{diag}\!\left(H_{\phi,s}(\mathbf{c}_{k})\right)\mathbf{F}\phi\right\Vert_{2}^{2} + \tau R(\mu,\phi). \qquad (6)$$

The regularization term, *R*, should be chosen based on *a priori* information about the sample. For instance, Tikhonov regularization can mitigate noise, and the solution of Eq. (6) can then be found using a non-iterative deconvolution [10]. If the gradients of the object are relatively sparse, the Total Variation (TV) regularizer can be used to reduce noise without degrading edges. Since the regularization term for TV is not differentiable, iterative algorithms for solving Eq. (6) are needed. In this paper, we use the Alternating Direction Method of Multipliers (ADMM) to implement TV regularization [33], typically requiring about 20 iterations.
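As a sketch of the Tikhonov branch, the least-squares problem decouples per spatial frequency into a 2×2 normal-equations solve; the code below is a minimal numpy implementation (the array shapes and the regularization strength `tau` are illustrative assumptions):

```python
import numpy as np

def dpc_tikhonov(I_tilde, H_mu, H_phi, tau=1e-3):
    """One-step Tikhonov DPC deconvolution. Solves, per spatial frequency,
        min_{mu,phi} sum_s |I_tilde_s - H_mu_s*F(mu) - i*H_phi_s*F(phi)|^2
                     + tau*(|F(mu)|^2 + |F(phi)|^2)
    I_tilde: (S, n, n) measured intensity spectra; H_mu, H_phi: (S, n, n).
    """
    iHp = 1j * H_phi
    # entries of the 2x2 Hermitian normal matrix [[a, b], [b*, d]]
    a = (np.abs(H_mu) ** 2).sum(0) + tau
    b = (np.conj(H_mu) * iHp).sum(0)
    d = (np.abs(H_phi) ** 2).sum(0) + tau
    # right-hand side [r1, r2]
    r1 = (np.conj(H_mu) * I_tilde).sum(0)
    r2 = (np.conj(iHp) * I_tilde).sum(0)
    # closed-form 2x2 inverse at every frequency
    det = a * d - np.abs(b) ** 2
    M = (d * r1 - b * r2) / det
    Phi = (a * r2 - np.conj(b) * r1) / det
    return np.fft.ifft2(M).real, np.fft.ifft2(Phi).real
```

Because the solve is closed-form and vectorized, this branch is non-iterative; the TV/ADMM branch replaces it with ∼20 proximal iterations.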

#### 2.2. Aberration recovery sub-procedure

The aberration recovery sub-procedure uses a nonlinear optimization algorithm to update the aberration estimate based on the newly updated complex-field, which is held fixed. The sub-procedure is initialized with the Zernike coefficients estimate from the previous iteration, **c**_{k}. Mathematically, the sub-procedure problem is written as:

$$\mathbf{c}_{k+1} = \arg\min_{\mathbf{c}}\; \sum_{s=1}^{N_{s}} \left\Vert \mathbf{F} I_{\mathrm{n},s} - \mathrm{diag}\!\left(H_{\mu,s}(\mathbf{c})\right)\mathbf{F}\mu_{k+1} - \mathrm{i}\cdot\mathrm{diag}\!\left(H_{\phi,s}(\mathbf{c})\right)\mathbf{F}\phi_{k+1}\right\Vert_{2}^{2}. \qquad (7)$$

Equation (7) may be solved by a gradient descent (first-order optimization) approach, or more sophisticated second-order optimization routines (*e*.*g*. Newton’s method [21]). All of these require computation of the gradient of the objective function with respect to **c**. If we define the cost function as $f={\sum}_{s=1}^{{N}_{s}}{\Vert {\epsilon}_{s}\Vert}_{2}^{2}$, in which ${\epsilon}_{s}=\mathbf{F}{I}_{\mathrm{n},s}-\mathrm{diag}({H}_{\mu,s})\,\mathbf{F}{\mu}_{k+1}-\mathrm{i}\cdot \mathrm{diag}({H}_{\phi,s})\,\mathbf{F}{\phi}_{k+1}$ is the residual vector, the gradient becomes ${\nabla}_{\mathbf{c}}f={\sum}_{s=1}^{{N}_{s}}{\left[\partial {\epsilon}_{s}/\partial \mathbf{c}\right]}^{\mathrm{H}}{\epsilon}_{s}$, where H denotes Hermitian transpose. Using Eqs. (2)–(4), the gradient can be calculated analytically as Eq. (8).

In this gradient, T denotes transpose and the Zernike basis, **Z**^{T} = [*Z*_{0}, *Z*_{1}, …, *Z*_{M}]^{T}, contains a finite number of modes, where *Z*_{0} − *Z*_{M} are the vectorized Zernike modes. For efficient computation, we adopt the L-BFGS algorithm [34] and use the gradient in Eq. (8) to solve this nonlinear optimization problem, which generally takes ∼10 iterations to converge.

## 3. Simulation results

To verify the performance of our joint estimation framework, we show simulation results in Fig. 2. The system parameters were chosen to match our experimental setup (0.4NA, wavelength 514*nm*, 177 source LEDs), with the LED array placed sufficiently far away from the sample such that the illumination from each LED is effectively spatially coherent (plane wave) [11, 13, 28].

We compare our results with joint phase and aberration recovery FPM in Fig. 2. FPM captures a separate image for each of the 177 LEDs, whereas DPC requires only 4 images to reconstruct the same quantities. FPM intensity images are simulated using the tilted plane wave illumination corresponding to each single LED. All the intensity images contain the same pupil aberration, which is a weighted sum of the first 21 Zernike modes. Our simulated DPC measurements are the sum of the intensity images from half-circle blocks of LEDs on the top, bottom, or right regions of the LED array [7]. For all measurements, we added synthesized noise using a Poisson distribution with a mean of ∼3000 photons per pixel. Equation (5) was then solved with *ℓ*_{2} regularization using the 4 DPC images, while we used the algorithm of [35] to recover complex-field and aberrations from the full 177-image FPM dataset. In this setup, FPM could use as few as 32 images (illuminations from the outer-most annular LEDs only) to achieve the same spatial frequency coverage as our DPC method. Therefore, we also include the results of FPM with 32 measurements for comparison.
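The shot-noise model used in the simulations (∼3000 photons per pixel, from the text) can be reproduced in a few lines; the image values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
photons = 3000                                 # mean photon count per pixel
I_clean = 0.9 + 0.1 * rng.random((64, 64))     # illustrative normalized image
scale = photons / I_clean.mean()               # map intensities to photon counts
I_noisy = rng.poisson(I_clean * scale) / scale # Poisson-corrupted image
```

At this photon level the per-pixel noise standard deviation is roughly $1/\sqrt{3000}\approx 1.8\%$ of the signal, which sets the effective SNR of the simulated measurements.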

Reconstructions from both FPM and our DPC algorithm match the ground truth (Fig. 2(b)); however, our method only requires 4 measurements, reducing acquisition time and memory requirements. Figure 2(c) plots the normalized root-mean-square error (║*x* − *x*_{true} ║_{2}/║*x*_{true}║_{2}) at each iteration for both complex-field and pupil aberrations. FPM incurs lower complex-field error than DPC, likely due to both the weak scattering approximation and the larger dataset. As for the pupil aberration error, FPM performance varies significantly with dataset size. While FPM with the full dataset outperforms the proposed DPC framework, our method provides a better result than FPM with 32 images. This is because FPM with fewer measurements has lower effective SNR, adding noise to the recovered pupil aberration (inset of Fig. 2(b)). Our method requires significant computation since it solves two optimization problems at each iteration, but the computation time is comparable to a sequential FPM reconstruction. Both methods were implemented in MATLAB on a desktop computer (Intel Core i7 CPU, Nvidia Tesla C2075 GPU). With a 650×584 pixel object, each iteration took 2.2s for FPM and 2.5s for our DPC algorithm.

## 4. Experimental results

Experimentally, we use an LED array microscope with the illumination module replaced with a custom-built LED array (*λ* = 0.514*µm*) [7, 13]. A phase target (Benchmark Technologies), which contains periodic patterns of continuous spatial frequencies, is imaged by a 20× 0.4NA objective lens (Nikon, CFI Plan Achro) in a Nikon TE300 microscope and images are recorded by a PCO.edge 5.5 sCMOS camera on the front port of the microscope (which adds 2× magnification). To test the weak phase gradient assumption, phase images of 6 resolution targets of different heights are recovered using DPC. After validating the reconstructed phase values against theoretical ones, we find that DPC provides accurate results when the phase of the sample is below 0.64 radians, then underestimates the phase values due to breakdown of the approximation (see Visualization 1). We collect 177 measurements by scanning individual LEDs within a maximum 0.4NA illumination angle. To provide a fair comparison, we use the same measurements to synthesize the 4 images for our method. As shown in Fig. 3(a), DPC measurements with half-circle source patterns have high resolution and qualitatively reveal the phase gradients of the sample, while measurements with single-LED illumination have lower resolution. Two single-LED image zoom-ins that contain the same structure at different orientations are shown in Fig. 3(a). One has high contrast, while the other does not, because of directional aberrations. By processing the images using the FPM algorithm and the proposed method with TV regularization, we recover the phase of the sample as shown in Fig. 3(b). The FPM and DPC reconstructions are similar, and can resolve features with period as small as *λ*/(2 × *NA*) = 0.643*µm*. In addition, both results provide reliable quantitative phase of the sample. The refractive index of the binary phase target is 1.52, with height of 100*nm*, resulting in ∼0.64 radians peak-to-valley. 
Looking at the 1D cut-lines (taken along dashed lines) in Fig. 3(b), after subtracting the mean of each, the reconstructions show good agreement with the ideal height.

Pupil aberrations recovered by both FPM and DPC algorithms are in Fig. 3(c). While we have no ground truth, aberrations estimated from FPM and DPC match well within the 4^{th} radial degree of Zernike modes. The dominant aberration in the objective lens is the 8^{th} Zernike mode (horizontal coma); this agrees with the evidence of directional aberration mentioned above. One reason for the difference between the reconstructed pupils is the high-frequency fluctuation in the pupil aberration from FPM, which does not exist in the low-order Zernike modes. Although high-order aberrations might be estimated with more measurements to avoid overfitting, correcting the low-order aberrations is usually enough to improve image quality.

To quantify the performance of our method, we introduced known defocus aberrations using an axial motion stage (Thorlabs, MZS500-E). We translated the phase test target across a range of known axial steps over a total range of 40*µm*. At each 2*µm* step, the 4 images for our method were acquired using a 10× 0.25NA objective lens (Nikon, CFI Plan Achro) at an acquisition rate of 20Hz. Quantitative phase reconstructions assuming zero aberration, aberration-corrected phase images and the recovered aberrations are shown in Fig. 4(a). With increased defocus, the uncorrected phase results degrade. Though our method also suffers from resolution degradation at large defocus distances (±20*µm*), it provides better resolution than the uncorrected phase retrieval. For example, in Fig. 4(b) the numbers in group 9 are resolved with pupil correction, while they are not readable in the uncorrected result.

We can also recover the focus distances from the defocus term of the Zernike modes and compare the estimated defocus values to known values (Fig. 4(c)). Since the Zernike basis is normalized in a range from −1 to 1, the defocus value, *d*, at each time point, *t*, can be evaluated as *d*(*t*) = *c*_{4}(*t*)×*λ*/*π*/(1−(1−NA^{2})^{0.5}). As expected, the predicted pupil aberrations have quadratic form, which indicates that defocus dominates. Experimentally, our method overestimates defocus values by a factor of 1.16. This discrepancy might originate from mis-calibration of the experiment or parameters used in computation (*e*.*g*. wavelength, NA of the objective lens, precision of the motion stage). This linear relationship between the predicted positions and the true focus holds when the magnitude of defocus is less than 16 *µm*, ∼2× Depth of Field (DoF). However, the accuracy of aberration correction drops as the defocus value exceeds 2× DoF, when the maximum phase difference in the pupil becomes larger than 2*π* and the algorithm becomes more likely to converge to a local minimum. From these experiments, we conclude that our method accurately estimates time-varying aberrations within a total range of 4× DoF.
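The conversion from the recovered defocus coefficient to a physical distance can be written directly; the default values below use the 0.25NA lens and 0.514 µm wavelength from the text:

```python
import math

def defocus_um(c4, wavelength_um=0.514, na=0.25):
    """Map the normalized defocus Zernike coefficient c4 to an axial
    distance: d = c4 * lambda / pi / (1 - sqrt(1 - NA^2))."""
    return c4 * wavelength_um / math.pi / (1 - math.sqrt(1 - na**2))
```

For these parameters, a unit defocus coefficient corresponds to roughly 5.2 *µm* of axial displacement, so the ±20 *µm* scan range maps to coefficients of order ±4.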

Our method is easily extended to account for spatially-varying aberrations, simply by solving the joint estimation problem separately over different patches of the FoV. In order to visualize spatially-varying aberrations, we prepared oil (Cargille, n_{D} = 1.58) immersed 10*µm* polystyrene beads (Sigma-Aldrich) as our sample, which is assumed to be nearly uniform across the FoV. The sample was imaged by a 4× 0.2NA objective lens (Nikon, CFI Plan Apo Lambda) that has a FoV of 1.7 × 2.1mm. Four images were captured at 12.5Hz using the same source patterns as in Fig. 3(a). Field-dependent aberrations are primarily caused by two types of system imperfections. First, the incident angle of individual LEDs changes from one FoV region to another, since the relative position between the LEDs and the sample varies across the FoV. Second, there exist spatially-varying aberrations native to the optical system. To accommodate this spatial variance, we break the full FoV into small patches and apply the joint estimation algorithm on each patch independently. In this case, the incident angle of individual LEDs and the aberrations are assumed invariant within each patch, which is usually valid when the patch is much smaller than the total FoV. In Fig. 5(a), the full FoV is divided into 100 patches. Within each patch, the absorption, phase and pupil aberration were recovered independently before being stitched together to form the full-FoV images. In practice, optimization for each patch converges in 20 – 30 iterations.
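The patch-wise processing can be organized with a simple split/stitch helper; a minimal sketch, where the patch grid size and the per-patch `process` callback are illustrative:

```python
import numpy as np

def process_by_patches(image, n_patches, process):
    """Split `image` into an n_patches x n_patches grid, apply `process`
    to each patch independently (e.g. joint estimation with a locally
    space-invariant pupil), and stitch the results back together."""
    out_rows = []
    for row in np.array_split(image, n_patches, axis=0):
        patches = np.array_split(row, n_patches, axis=1)
        out_rows.append(np.hstack([process(p) for p in patches]))
    return np.vstack(out_rows)
```

`np.array_split` tolerates image sizes that are not multiples of the grid. In practice one might also overlap adjacent patches and blend the seams, since the locally space-invariant pupil is only an approximation at patch borders.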

Looking at the recovered aberrations, the center of the FoV is essentially aberration-free, which is consistent with the fact that optical systems are usually optimized there. Therefore, the absorption and phase images with pupil recovery are similar to those without aberration correction. Aberrations are much stronger along the edges of the FoV, where field curvature blurs the image. Consequently, obvious differences appear between the normal DPC and aberration-corrected DPC reconstructions at those patches. The recovered amplitude (absorption) at the bottom right corner changes dramatically after applying aberration correction. Qualitatively, the uncorrected absorption at the edge of the field appears to contain some phase information, leading to a shadow-like appearance; these artifacts are removed with pupil recovery. In addition, while the aberrated phase evaluated with Tikhonov deconvolution suffers from high-frequency noise, the phase with pupil correction shows reduced noise due to the use of TV regularization.

## 5. Conclusion

In computational imaging systems, aberrations can significantly degrade complex-field reconstructions if not properly compensated for. In order to correct aberrations without pre-calibration, we propose joint estimation of the sample and the pupil aberration. Our DPC-based phase retrieval technique simultaneously recovers the system pupil aberration and quantitative phase and absorption of the sample from only 4 intensity images. By combining 3 half-circle source pattern measurements and 1 made under spatially coherent (single-LED) illumination, diffraction-limited complex-field and aberration Zernike coefficients were recovered through an alternating projections non-convex optimization. This method not only reduces the data acquisition time, but also increases the SNR due to higher light throughput compared to single-LED acquisition, which should aid in imaging dynamic biological samples. The recovered system aberrations are general to the system, and may be used for image correction in other subsequent imaging modalities on the same microscope, such as deconvolution of fluorescence images [29].

## Appendix A: Derivation of Eqs. (1)–(3)

Any 2D optical field at the sample plane can be written as exp(Φ). The output field passed through the imaging system is therefore a convolution between exp(Φ) and *h*, where *h* is the point spread function, *i*.*e*. the inverse Fourier transform of the shifted pupil *P*(**u** + **u′**), with illumination angle **u′** expressed in the 2D spatial frequency coordinate (**u**). Hence, the intensity *I* at the image plane can be expressed as:

Here, **r** indicates 2D spatial coordinates, ⊗ denotes the convolution operation, and *S* is the intensity of the incident plane wave. Eq. (9) can also be represented in an integral form:

When the absorption and phase gradient are small enough, the exponential term in Eq. (10) is approximately the same as its first order Taylor series expansion, *i*.*e*. exp(*x*)≈ 1 + *x*. As a result, Eq. (10) becomes a combination of convolutions and multiplications:

Assuming Köhler illumination, the output intensity under partially coherent illumination is an incoherent sum of coherent intensities, *i*.*e*. an integration of Eq. (11) over **u′**. By taking the Fourier transform $(\tilde{\cdot})$ of the final intensity, we get the system response in the spatial frequency domain:

Since Eq. (12) satisfies the slowly varying phase condition, $\tilde{\Phi}(\mathbf{u},{\mathbf{u}}^{\prime})$ of 2D samples can be expressed as $-\mathrm{i}\tilde{V}(\mathbf{u})/4\pi {({\lambda}^{-2}-|\mathbf{u}+{\mathbf{u}}^{\prime}{|}^{2})}^{0.5}$ based on the Rytov approximation [36], where *V* is the scattering potential and *λ* is the wavelength of light. Let $\tilde{\Phi}(\mathbf{u},0)=\tilde{\mu}(\mathbf{u})+\mathrm{i}\tilde{\varphi}(\mathbf{u})$, and express the spatial frequency of the complex phase under different illumination as $\tilde{\Phi}(\mathbf{u},{\mathbf{u}}^{\prime})=\tilde{\Phi}(\mathbf{u},0){(({\lambda}^{-2}-|\mathbf{u}{|}^{2})/({\lambda}^{-2}-|\mathbf{u}+{\mathbf{u}}^{\prime}{|}^{2}))}^{0.5}$. By substituting into Eq. (12), then subtracting and dividing by the DC term *I*_{o} (first term in Eq. (12)), the spectrum of the normalized intensity ${\tilde{I}}_{\mathrm{n}}$ is related to $\tilde{\mu}$ and $\tilde{\varphi}$ by:

If the NA of the objective lens is small, the oblique factor (*λ*^{−2} −|**u**|^{2})^{0.5} in Eq. (13) becomes 1/*λ*, and the forward model using partially coherent illumination can be written as Eqs. (1)–(3).

## Funding

STROBE: A National Science Foundation Science & Technology Center (DMR 1548924); Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative (GBMF4562); Office of Naval Research (ONR) (N00014-17-1-2401).

## References

**1. **F. Zernike, “How I discovered phase contrast,” Science **121**, 345 (1955). [CrossRef] [PubMed]

**2. **W. Lang, *Nomarski Differential Interference-Contrast Microscopy* (Carl Zeiss, 1982).

**3. **C. Liu, X. Yu, and M.K. Kim, “Fourier transform digital holographic adaptive optics imaging system,” Appl. Opt. **51**, 8449–8454 (2012). [CrossRef] [PubMed]

**4. **J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. **45**, 836–850 (2006). [CrossRef] [PubMed]

**5. **B. Kachar, “Asymmetric illumination contrast: a method of image formation for video light microscopy,” Science **227**, 766–768 (1985). [CrossRef] [PubMed]

**6. **S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. **34**, 1924–1926 (2009). [CrossRef] [PubMed]

**7. **L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express **23**, 11394–11403 (2015). [CrossRef] [PubMed]

**8. **R. A. Claus, P. P. Naulleau, A. R. Neureuther, and L. Waller, “Quantitative phase retrieval with arbitrary pupil and illumination,” Opt. Express **23**, 26672–26682 (2015). [CrossRef] [PubMed]

**9. **M. Chen, L. Tian, and L. Waller, “3D differential phase contrast microscopy,” Biomed. Opt. Express **7**, 3940–3950 (2016). [CrossRef] [PubMed]

**10. **Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLOS ONE **12**, e0171228 (2017). [CrossRef] [PubMed]

**11. **G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photon. **7**, 739–745 (2013). [CrossRef]

**12. **Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” J. Biomed. Opt. **19**, 106002 (2014). [CrossRef] [PubMed]

**13. **L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica **2**, 904–911 (2015). [CrossRef]

**14. **Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express **19**, 1016–1026 (2011). [CrossRef] [PubMed]

**15. **J. A. Rodrigo and T. Alieva, “Rapid quantitative phase imaging for partially coherent light microscopy,” Opt. Express **22**, 13472–13483 (2014). [CrossRef] [PubMed]

**16. **J. Zhong, L. Tian, J. Dauwels, and L. Waller, “Partially coherent phase imaging with simultaneous source recovery,” Biomed. Opt. Express **6**, 257–265 (2015). [CrossRef]

**17. **M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express **16**, 7264–7278 (2008). [CrossRef] [PubMed]

**18. **A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy **109**, 1256–1262 (2009). [CrossRef] [PubMed]

**19. **A. Tripathi, I. McNulty, and O. G. Shpyrko, “Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods,” Opt. Express **22**, 1452–1466 (2014). [CrossRef] [PubMed]

**20. **A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A **29**, 1606–1614 (2012). [CrossRef]

**21. **J. Zhong, L. Tian, P. Varma, and L. Waller, “Nonlinear Optimization Algorithm for Partially Coherent Phase Retrieval and Source Recovery,” IEEE Trans. Comput. Imag. **2**, 310–322 (2016). [CrossRef]

**22. **Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express **21**, 32400–32410 (2013). [CrossRef]

**23. **L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express **23**, 33214–33240 (2015). [CrossRef]

**24. **L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected Fourier ptychography,” Biomed. Opt. Express **7**, 4543–4553 (2016). [CrossRef] [PubMed]

**25. **G. Satat, M. Tancik, O. Gupta, B. Heshmat, and R. Raskar, “Object classification through scattering media with deep learning on time resolved measurement,” Opt. Express **25**, 17466–17479 (2017). [CrossRef] [PubMed]

**26. **G. Zheng, X. Ou, R. Horstmeyer, and C. Yang, “Characterization of spatially varying aberrations for wide field-of-view microscopy,” Opt. Express **21**, 15131–15143 (2013). [CrossRef] [PubMed]

**27. **R. Horstmeyer, X. Ou, J. Chung, G. Zheng, and C. Yang, “Overlapped Fourier coding for optical aberration removal,” Opt. Express **22**, 24062–24080 (2014). [CrossRef] [PubMed]

**28. **X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express **22**, 4960–4972 (2014). [CrossRef] [PubMed]

**29. **J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography,” Biomed. Opt. Express **7**, 352–368 (2016). [CrossRef] [PubMed]

**30. **X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express **23**, 3472–3491 (2015). [CrossRef] [PubMed]

**31. **H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express **24**, 25345–25361 (2016). [CrossRef] [PubMed]

**32. **V. N. Mahajan, “Zernike Circle Polynomials and Optical Aberrations of Systems with Circular Pupils,” Appl. Opt. **33**, 8121–8124 (1994). [CrossRef] [PubMed]

**33. **S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” Found. Trends Mach. Learn. **3**, 1–122 (2011). [CrossRef]

**34. **D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Math. Program. **45**, 503–528 (1989). [CrossRef]

**35. **L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express **5**, 2376–2389 (2014). [CrossRef] [PubMed]

**36. **M. Born and E. Wolf, *Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light* (Cambridge University Press, 1999), 7th ed. [CrossRef]