Optica Publishing Group

Accelerated and high-quality Fourier ptychographic method using a double truncated Wirtinger criteria

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a recently developed computational microscopy technique for wide-field-of-view and super-resolution complex imaging. Wirtinger-flow-based methods can effectively suppress noise and reduce data acquisition time, but they are time-consuming during the phase reconstruction. In this paper, we present a Wirtinger-flow-based reconstruction method for FPM, which combines the Poisson maximum likelihood objective function, improved truncated Wirtinger criteria, and improved adaptive momentum method. Both the simulation and experimental results demonstrate that the proposed method runs faster and the reconstruction quality is similar to or better than other state-of-the-art Wirtinger-flow-based methods.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fourier ptychographic microscopy (FPM) [1–6] is a recently developed computational imaging method that can significantly increase the space-bandwidth product (SBP) [5] of optical microscopy. It consists of a modified microscope and a special reconstruction algorithm called Fourier ptychography (FP). The FP algorithm integrates two classical techniques: aperture synthesis [7–10] and phase retrieval [11–13]. In fact, FP shares its roots with the ptychographical iterative engine [14–16].

The advantages of FPM should make it widely applicable, but many factors limit its use, such as efficiency, noise, LED positional misalignment, and defocusing. Several improved methods have been proposed to address these issues. For example, Sun [17], Yeh [18], and Liu [19] proposed three different methods to correct LED positional misalignment, and many researchers [20–23] have proposed methods to correct system aberrations. However, the most important factors limiting the application of FPM are noise and efficiency. The efficiency of FPM comprises the efficiency of data acquisition and the efficiency of the FP reconstruction: the original FPM takes tens of minutes to collect the low-resolution (LR) images and almost as long to run the FP algorithm under the alternating projection (AP) framework. Several methods have been proposed to address noise and efficiency. Bian et al. [24] proposed the non-convex Wirtinger flow algorithm for FP (WFP), based on Wirtinger derivatives and a gradient descent scheme, which improves the efficiency of data acquisition and addresses Gaussian noise; however, its runtime is considerable. Bian et al. [25] and Zhang et al. [26] further reduced the effect of noise but required even more time to run the algorithm (refer to [26]). Bostan et al. [27] proposed an accelerated method for multiplexed WFP using the fast iterative shrinkage-thresholding algorithm; however, its recovery result is not better than that of WFP.

In this paper, we propose a new improved WFP method with the purpose of drastically reducing the runtime of the algorithm. The proposed method, called DTWFP (double truncated Poisson Wirtinger Fourier ptychographic), incorporates an improved truncated Wirtinger flow, adaptive momentum, and the Poisson maximum likelihood objective function. TPWFP (truncated Poisson Wirtinger Fourier ptychographic) and GATFP (generalized Anscombe transform approximation Fourier ptychographic) use the same truncation criterion, defined in [25,26]. DTWFP incorporates this criterion together with a new, improved truncation criterion (inspired by [28,29]). Because of this additional criterion, the improved truncation rule restricts updates more strongly than the rule used in TPWFP and GATFP, giving DTWFP a higher probability of rejecting 'bad' search directions. This implies that DTWFP will possess a better convergence speed and recovery result. To further accelerate the convergence rate, we borrow the idea of momentum from the deep learning community [30]. Both the simulations and the experiment demonstrate that the DTWFP method possesses the following two main advantages:

  • 1) The convergence speed of DTWFP is several to dozens of times higher than that of other WF-based algorithms.
  • 2) DTWFP suppresses Poisson noise and high-level noise better than the state-of-the-art algorithms, while its suppression of low- and medium-level noise is similar to that of GATFP.

2. Principle

2.1 Forward problem for FPM

Before introducing the proposed method in detail, we first review the image formation of FP. Both the AP method and the WF method use the same optical setup: a classical optical microscope, with the small difference that an LED array replaces the traditional light source (Köhler illumination). In a typical FPM, each LED element, treated as a point source, emits a spherical wave that illuminates the thin sample s(r), where r represents the 2D spatial coordinates in the sample plane. Because of the large distance between the LED and the sample, the spherical wave illuminating the sample can be treated as a plane wave. The field resulting from the interaction between the plane wave and the thin sample propagates through the optical system to the CCD and forms a limited-resolution image:

$$I_j=\left|\mathcal{F}^{-1}\{P(k)\,\mathcal{F}(s(r)e^{i2\pi k_j\cdot r})\}\right|^2=\left|\mathcal{F}^{-1}\{P(k)S(k-k_j)\}\right|^2,\quad j=1\ldots N,\tag{1}$$
where $I_j$ is the captured ideal LR image corresponding to the $j$th LED element, $\mathcal{F}^{-1}$ is the 2D inverse Fourier transform, $P(k)$ is the coherent transfer function of the optical system, $S(k)$ is the Fourier spectrum of the HR sample, $N$ is the number of captured LR images, $k$ is the spatial frequency coordinate in the Fourier plane, and $k_j$ is the spectral shift corresponding to the $j$th LED element. This propagation from the HR spectrum of the sample to the LR image can be divided into two linear operations: 1) down-sampling by the objective lens, $P(k)S(k-k_j)$; and 2) inverse Fourier transformation to the imaging plane, $\mathcal{F}^{-1}(\cdot)$. Note that the purpose of FPM is to recover the HR complex amplitude of the sample, a multivariate problem that is convenient to reformulate using linear algebra. We use a matrix $A_j$ to represent these two operations and rewrite the image formation model (1) in vector form:
$$b_j=\left|\mathcal{F}^{-1}\{P(k)S(k-k_j)\}\right|^2=|A_jS|^2,\tag{2}$$
where $b_j$ is the ideal captured image corresponding to the $j$th LED.
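For concreteness, the two-step forward model of Eqs. (1)–(2), cropping the shifted sub-spectrum with the pupil and then inverse-Fourier-transforming to the image plane, can be sketched as follows. The function name, array conventions, and the use of top-left crop offsets to stand in for $k_j$ are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fp_forward(S, pupil, shifts):
    """Sketch of the FPM forward model, Eqs. (1)-(2).

    S      : 2D high-resolution Fourier spectrum of the sample (complex)
    pupil  : binary coherent transfer function P(k), sized for one LR patch
    shifts : list of (row, col) crop offsets standing in for the shifts k_j
    Returns the stack of ideal low-resolution intensity images b_j.
    """
    m = pupil.shape[0]
    imgs = []
    for (r, c) in shifts:
        # P(k) S(k - k_j): crop the shifted sub-spectrum and low-pass it
        patch = S[r:r + m, c:c + m] * pupil
        # F^{-1}: propagate back to the image plane
        lr = np.fft.ifft2(np.fft.ifftshift(patch))
        # intensity-only measurement |.|^2
        imgs.append(np.abs(lr) ** 2)
    return np.array(imgs)
```

Each LED element thus contributes one intensity-only measurement of a different pupil-limited region of the HR spectrum.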

2.2 Likelihood objective function and improved truncated Wirtinger gradient

Both WF-based and AP-based FP algorithms aim to recover the HR complex amplitude from intensity-only observations, a well-known nonconvex quadratic programming problem. Solving it starts from constructing a likelihood objective function. Commonly used likelihood functions include the intensity/amplitude-based objective function, which assumes no noise model; the Poisson maximum likelihood objective function; and the mixed Poisson–Gaussian likelihood function (PGL). For WF-based FP algorithms, the latter two are the most used because they yield better recovery results. In this study, we primarily base our analysis on the Poisson maximum likelihood objective function. In practice, the primary noise in bright-field images, which receive abundant light, is photon noise, which follows the Poisson distribution. The dark-field images captured in FPM are also well described by the Poisson distribution [25], unless the read noise, which follows a Gaussian distribution, cannot be ignored. In fact, the proposed method can be extended to the PGL.

As above, we assume that the intensity at each pixel of the measured images follows the Poisson distribution independently across pixels; that is, the measured signal can be represented as

$$[\tilde{I}_n]_i\sim\mathrm{Poisson}([b_n]_i),\tag{3}$$
where the subscripts $i$ and $n$ denote the $i$th pixel of the $n$th image, and $[b_n]_i$ is the ideal signal described in Eq. (2). The probability mass function [25] giving the probability of the measured value $[\tilde{I}_n]_i$ for the Poisson parameter $[b_n]_i$ is
$$p([\tilde{I}_n]_i\,|\,[b_n]_i)=\frac{[b_n]_i^{[\tilde{I}_n]_i}}{[\tilde{I}_n]_i!}\,e^{-[b_n]_i},\tag{4}$$
where $e$ is Euler's number and '!' denotes the factorial operation.

According to maximum likelihood estimation theory, the goal is to maximize the probability mass function over all the measurements. However, it is difficult to differentiate this likelihood function directly. Thus, we turn it into a negative log-likelihood function and ignore the constant term:

$$\min f(S)=\sum_{n=1}^{N}\sum_{i=1}^{m}|[A_nS]_i|^2-\sum_{n=1}^{N}\sum_{i=1}^{m}[\tilde{I}_n]_i\log\left(|[A_nS]_i|^2\right),\tag{5}$$
where $N$ is the number of measurements and $m$ is the size of each measurement.

Clearly, we obtain the reconstructed signal $S$ by minimizing Eq. (5); the method we use is gradient descent with the Wirtinger gradient of $f(S)$ with respect to $S^*$ [24,25]:

$$\begin{aligned}\nabla f(S)&=\frac{df(S)}{dS^*}=\frac{d\left\{\sum_{n=1}^{N}\sum_{i=1}^{m}|[A_nS]_i|^2-\sum_{n=1}^{N}\sum_{i=1}^{m}[\tilde{I}_n]_i\log(|[A_nS]_i|^2)\right\}}{dS^*}\\&=\sum_{n=1}^{N}\sum_{i=1}^{m}\frac{d\left[([A_nS]_i)^*([A_nS]_i)-[\tilde{I}_n]_i\log\left(([A_nS]_i)^*([A_nS]_i)\right)\right]}{dS^*}\\&=\sum_{n=1}^{N}\sum_{i=1}^{m}\left([A_n]_i^H[A_nS]_i-\frac{[\tilde{I}_n]_i}{|[A_nS]_i|^2}[A_n]_i^H[A_nS]_i\right),\end{aligned}\tag{6}$$
where $H$ denotes the conjugate transpose. However, because of noise or model mismatch [25], the gradient direction does not necessarily point toward the true value. We therefore replace the Wirtinger gradient with a truncated Wirtinger gradient by introducing truncation criteria. Inspired by [28,29], the improved truncation criteria used here can be specified as
$$\zeta=\left\{1\le i\le m\cdot N\;\middle|\;\left|[\tilde{I}_n]_i-|[A_nS]_i|^2\right|\le\frac{a_h\left\|\tilde{I}-|AS|^2\right\|_1}{m\cdot N}\cdot\frac{|[A_nS]_i|^2}{\|S\|},\;\frac{|[A_nS]_i|^2}{[\tilde{I}_n]_i}\ge\frac{1}{(1+\gamma)^2}\right\},\tag{7}$$
where $\zeta$ is the constraint set of ones and zeros: zero indicates that the corresponding signal is not updated in the following iteration, and one indicates the contrary. $a_h$ is a predetermined parameter specified by the user, and $\gamma$ is a selected threshold in the range from 1 to 1.14. The truncated Wirtinger gradient can then be described as

$$\nabla_\zeta f(S)=\sum_{n=1}^{N}\sum_{i=1}^{m}\left\{\left([A_n]_i^H[A_nS]_i-\frac{[\tilde{I}_n]_i}{|[A_nS]_i|^2}[A_n]_i^H[A_nS]_i\right)\right\}_\zeta,\tag{8}$$

As expressed in Eq. (8), only the gradients of the signals satisfying Eq. (7) remain; the gradients of the other signals are set to zero.
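As a sketch, Eqs. (6)–(8) can be computed with an explicit measurement matrix. Stacking all rows $[A_n]_i$ into one matrix `A` is an illustrative simplification (in practice $A$ is applied implicitly via FFTs, as in Eq. (9)), and the function and variable names are assumptions.

```python
import numpy as np

def truncated_gradient(A, S, I, a_h=25.0, gamma=1.1):
    """Truncated Poisson Wirtinger gradient, Eqs. (6)-(8), for a stacked
    measurement matrix A (M x p, complex) and measured intensities I (M,)."""
    AS = A @ S                      # [A_n S]_i for every pixel i
    b = np.abs(AS) ** 2             # model intensities |[A_n S]_i|^2
    M = A.shape[0]                  # m * N
    # Eq. (7): keep a pixel only if both truncation criteria hold
    thresh = a_h * np.sum(np.abs(I - b)) / M * b / np.linalg.norm(S)
    mask = (np.abs(I - b) <= thresh) & (b / I >= 1.0 / (1.0 + gamma) ** 2)
    # Eq. (8): Wirtinger gradient restricted to the constraint set zeta
    residual = AS - (I / b) * AS
    return A.conj().T @ (residual * mask)
```

With noise-free measurements the residual vanishes and the gradient is zero, as expected from Eq. (6).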

In the simulations, the pupil function is known precisely. In a real experiment, however, the pupil function used in Eq. (1) is not exact but a prediction. If the optical system is of high quality, the error introduced by the pupil function is very small, and the predicted ideal pupil function can be used without updating it. Otherwise, the pupil function needs to be updated as the iterations progress. The gradients can be described as

$$\begin{aligned}\nabla_\zeta f(S)&=\sum_{n=1}^{N}\left\{P^*(k+k_n)\,\mathcal{F}\left\{\mathcal{F}^{-1}\{P(k+k_n)S(k)\}-\frac{\tilde{I}_n}{\left|\mathcal{F}^{-1}\{P(k+k_n)S(k)\}\right|^2}\mathcal{F}^{-1}\{P(k+k_n)S(k)\}\right\}\right\}_\zeta\\\nabla_\zeta f(P)&=\sum_{n=1}^{N}\left\{S^*(k-k_n)\,\mathcal{F}\left\{\mathcal{F}^{-1}\{P(k)S(k-k_n)\}-\frac{\tilde{I}_n}{\left|\mathcal{F}^{-1}\{P(k)S(k-k_n)\}\right|^2}\mathcal{F}^{-1}\{P(k)S(k-k_n)\}\right\}\right\}_\zeta,\end{aligned}\tag{9}$$

2.3 Iterative update rule

Instead of updating the spectrum of the sample directly by using the gradient, we update the velocity map, vt, inspired by the Polyak heavy ball method [30], which is based on the current gradient and previous gradients:

$$v_t=\beta_t v_{t-1}+\mu_t\nabla_\zeta f(S),\tag{10}$$
where $\beta_t$ is the momentum parameter, $\mu_t$ is the learning parameter, and $v_{-1}=0$. These two parameters control the influence of the previous gradients and the current gradient, respectively. We set $\mu_t$ adaptively, similar to [25]:
$$\mu_t=\frac{\min\left(1-e^{-t/t_0},\,\mu_{\max}\right)}{m\cdot N},\tag{11}$$
where $t_0=50$ and $\mu_{\max}=0.2$ in this paper. This type of learning rate is widely used in gradient-based methods to express the influence of the current gradient. The momentum parameter $\beta_t$, which controls the influence of the previous velocity map, is
$$\beta_t=0.9-0.9\big/\left(1+\left(3t/(2T)\right)^8\right),\tag{12}$$
where $T$ is the total number of iterations. In the traditional momentum method, the momentum usually remains large and does not decay as the iterate approaches the solution, which causes oscillation around the goal. We therefore use this schedule to ensure convergence accuracy and avoid late-stage oscillation without sacrificing much convergence speed.

Then, we use the current velocity map to update the spectrum of the sample with the improved adaptive momentum method instead of Polyak's method:

$$S_{t+1}=S_t+0.9\,v_t,\tag{13}$$
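A minimal sketch of the update rule of Eqs. (10)–(13), with the schedules of Eqs. (11)–(12). The normalization constant `mN` (standing for $m\cdot N$) and the function names are assumptions.

```python
import numpy as np

def schedules(t, T, t0=50, mu_max=0.2, mN=1.0):
    """Adaptive learning rate (Eq. (11)) and momentum parameter (Eq. (12))."""
    mu = min(1.0 - np.exp(-t / t0), mu_max) / mN
    beta = 0.9 - 0.9 / (1.0 + (3.0 * t / (2.0 * T)) ** 8)
    return mu, beta

def momentum_step(S, v_prev, grad, t, T, mN=1.0):
    """One iteration: velocity update (Eq. (10)), spectrum update (Eq. (13))."""
    mu, beta = schedules(t, T, mN=mN)
    v = beta * v_prev + mu * grad
    return S + 0.9 * v, v
```

Note that the learning rate saturates at $\mu_{\max}$ once $t$ exceeds a few multiples of $t_0$.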

Actually, the learning parameter defined by Eq. (11) is an empirical choice, which means it may not be optimal. Similar to [31], the optimal learning parameter can be obtained by solving the line search

$$\mu_t=\arg\min_{\mu} f\left(S_t-\mu\nabla f(S_t)\right),\tag{14}$$
which is, however, time-consuming. Substituting $S_t-\mu\nabla f(S_t)$ into Eq. (5) yields a 1D convex real-valued optimization problem with respect to $\mu$; taking the derivative with respect to $\mu$ gives
$$\begin{aligned}\frac{df\left(S_t-\mu\nabla f(S_t)\right)}{d\mu}=\sum_{n}\sum_{i}\Big\{&\mu^3|h_t|^2|h_t|^2-3\mu^2\operatorname{Re}(y_t^*h_t)|h_t|^2\\&+\mu\left[|y_t|^2|h_t|^2+2\operatorname{Re}(y_t^*h_t)\operatorname{Re}(y_t^*h_t)-[\tilde{I}_n]_i|h_t|^2\right]\\&+[\tilde{I}_n]_i\operatorname{Re}(y_t^*h_t)-|y_t|^2\operatorname{Re}(y_t^*h_t)\Big\},\end{aligned}\tag{15}$$
with $y_t=[A_nS_t]_i$ and $h_t=[A_n\nabla f(S_t)]_i$. Setting this expression to zero gives a univariate cubic equation in $\mu$, which is easy to solve.
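Collecting the powers of $\mu$ in Eq. (15) gives cubic coefficients that can be solved directly. This sketch uses flattened per-pixel vectors for $y_t$, $h_t$, and $[\tilde{I}_n]_i$; the root-selection rule (smallest positive real root) is an assumption for illustration.

```python
import numpy as np

def optimal_stepsize(y, h, I):
    """Solve the univariate cubic from Eq. (15) for the line-search stepsize.

    y : flattened [A_n S_t]_i, h : flattened [A_n grad f(S_t)]_i,
    I : flattened measured intensities [I~_n]_i.
    """
    re = np.real(np.conj(y) * h)
    # cubic coefficients, summed over all pixels (powers mu^3 .. mu^0)
    c3 = np.sum(np.abs(h) ** 4)
    c2 = -3.0 * np.sum(re * np.abs(h) ** 2)
    c1 = np.sum(np.abs(y) ** 2 * np.abs(h) ** 2 + 2.0 * re ** 2 - I * np.abs(h) ** 2)
    c0 = np.sum(I * re - np.abs(y) ** 2 * re)
    roots = np.roots([c3, c2, c1, c0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    pos = real[real > 0]
    return pos.min() if pos.size else 0.0
```

For a single pixel with $y=1$, $h=1$, $\tilde{I}=0.25$, the objective $(1-\mu)^2-0.25\log(1-\mu)^2$ is minimized at $\mu=0.5$, which the cubic recovers.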

3. Simulations

3.1 Simulation setting

In the simulation, we assumed the parameters of the virtualized FPM setup as follows: the numerical aperture (NA) of the objective lens is 0.08, the distance between the LED plane and sample plane is 84.8 mm, the distance between adjacent LEDs is 4 mm, the pixel size is 0.2 μm, and a programmable 15 × 15 LED matrix with a wavelength of 630 nm is used as the light source. In addition, we assume that the pupil function is an ideal binary circular function, bounded by a circle with the NA, with all ones inside and all zeros outside. The 'Lena' and 'Aerial' maps (512 × 512 pixels) from the USC-SIPI database are used as the original HR amplitude and phase maps, respectively. In the simulation, we set a_h to 25, as recommended in [25,26], and γ to 1.1.
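Under the stated simulation parameters (NA = 0.08, λ = 630 nm, 0.2 μm pixels), the ideal binary circular pupil might be generated as follows; the frequency-grid convention is an assumption.

```python
import numpy as np

def binary_pupil(m, pixel_size=0.2e-6, wavelength=630e-9, na=0.08):
    """Ideal binary circular pupil: ones inside the NA cutoff, zeros outside."""
    fx = np.fft.fftshift(np.fft.fftfreq(m, d=pixel_size))  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    cutoff = na / wavelength                               # coherent cutoff NA/lambda
    return (FX ** 2 + FY ** 2 <= cutoff ** 2).astype(float)
```

The coherent cutoff frequency NA/λ ≈ 1.27 × 10^5 m^-1 here, so the pupil passes only a small central disk of the spectrum, which is exactly what each LR image samples.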

In fact, it is not meaningful to simulate with ideal LR images generated by the FP imaging formation alone. Therefore, to mimic reality, the ideal LR images are corrupted with noise, i.e., Gaussian noise, Poisson noise, and mixed Poisson–Gaussian noise, the three most common noise types [26]. Note that, for the Gaussian component, the standard deviation (std) is defined as the ratio between the actual std and the maximum of the ideal LR images. To demonstrate the performance and advantages of the proposed DTWFP algorithm, three state-of-the-art algorithms, i.e., the AP, TPWFP, and GATFP algorithms, are used for comparison. The code of TPWFP was obtained from http://www.sites.google.com/site/lihengbian.

The iteration number is an important parameter in all iterative algorithms. For AP, we set the iteration number to 100 to ensure its convergence. For TPWFP and GATFP, the iteration number was set to 300 and 1000, respectively, to achieve the same purpose, and for the proposed DTWFP algorithm, the iteration number was set to 55, which is sufficient to ensure convergence.

To quantitatively evaluate the quality of the recovery results of different algorithms, we introduce the relative error (RE) [25], defined as

$$RE=\min_{\varphi\in[0,2\pi)}\frac{\left\|e^{j\varphi}S-S_r\right\|^2}{\left\|S_r\right\|^2},\tag{16}$$
where $S$ denotes the true spectrum of the sample and $S_r$ the reconstructed spectrum.
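The minimization over the global phase in Eq. (16) has a closed form, $\varphi=-\arg\langle S_r,S\rangle$, so no search over $\varphi$ is needed; the following sketch relies on that identity (the function name is an assumption).

```python
import numpy as np

def relative_error(S, S_r):
    """Relative error, Eq. (16): squared norm of the phase-aligned difference
    between the true spectrum S and the reconstruction S_r."""
    # optimal global phase maximizes Re(e^{j phi} <S_r, S>)
    phi = -np.angle(np.vdot(S_r, S))   # np.vdot conjugates its first argument
    diff = np.exp(1j * phi) * S - S_r
    return np.linalg.norm(diff) ** 2 / np.linalg.norm(S_r) ** 2
```

A reconstruction that differs from the truth only by a global phase therefore yields RE ≈ 0, reflecting the inherent phase ambiguity of intensity-only measurements.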

3.2 Simulation result

In this section, we test the proposed algorithm on simulated data corrupted with Poisson noise, Gaussian noise, and Poisson–Gaussian noise, and compare it with the three abovementioned state-of-the-art algorithms on the same noisy data. The reconstruction results of the four algorithms under the three types of noise are presented in Fig. 1. Note that the std for the Poisson–Gaussian noise denotes the noise level of the Gaussian component. For illustration, the reconstructed amplitude and phase images under Gaussian and Poisson–Gaussian noise shown in Fig. 1 correspond to a Gaussian-component std of 0.002.


Fig. 1 Comparison of the reconstruction results by the three state-of-the-arts and the proposed DTWFP. The upper left corner of each amplitude map is the RE at the end of 50 iterations.


From Fig. 1, we can see that TPWFP and DTWFP perform better than AP and GATFP under Poisson noise. As described in [26], TPWFP owes this to its accurate Poisson signal model; DTWFP performs better still, which we attribute to the improved truncation criteria and the improved adaptive momentum method. For Gaussian noise, GATFP performs better overall than AP and TPWFP because Gaussian noise is part of the GATFP model. The AP algorithm performs poorly regardless of the noise type: it neither assumes a noise model nor takes any measure to suppress noise beyond subtracting the background, so it cannot accurately extract the useful signal from noisy data. When the Gaussian noise is small (std = 0.001), the reconstruction of TPWFP is close to that of GATFP, because its threshold constraint is able to remove most of the Gaussian noise; as the Gaussian noise increases, however, GATFP obtains better results. DTWFP also outperforms the AP algorithm and TPWFP. For large or small Gaussian noise (std = 0.001 or std ≥ 0.008), DTWFP even outperforms GATFP, which again benefits from the improved truncation criteria and the improved adaptive momentum method. When the Gaussian noise is large, the threshold constraints of both TPWFP and GATFP can barely distinguish the noise from the latent signal and discard too many useful measurements, which leads to poor reconstructions; the additional threshold in the improved truncation criteria reduces the number of discarded measurements and, together with the improved adaptive momentum method, yields a better reconstruction. The same reasoning explains why DTWFP performs better than GATFP under small Gaussian noise. For std values between these two regimes, GATFP obtains the best result because of its noise model.
For Poisson–Gaussian noise, the results are similar to those for Gaussian noise, for the same reasons.

The running time is also an important index for an algorithm, so we summarize the running times of the four algorithms in Table 1. All algorithms were run in MATLAB R2016a on a computer with a 3.00 GHz Intel(R) Pentium(R) G3220 processor, 6 GB of RAM, and 64-bit Windows 7. From the table, AP runs the fastest and GATFP the slowest. The running time of DTWFP is nearly one-tenth that of TPWFP and twice that of AP. DTWFP converges rapidly because the momentum method accelerates convergence while the improved truncation criteria perform well with almost no additional computation.


Table 1. Comparison of running time between the three state-of-the-arts and the proposed DTWFP.

4. Experiment

To further validate the performance of DTWFP, we test the above four algorithms on real experimental data. The real FPM setup is built as follows: an Olympus objective lens (magnification 4×, NA = 0.1) and a microscopic module form the light-microscope imaging system. A programmable 15 × 15 LED matrix with incident wavelength λ = 532 nm serves as the angle-varied illumination source. The distance between adjacent LEDs is 4 mm, and the distance from the LED matrix to the sample is about 90 mm. A CCD camera (JENOPTIK ProgRes CFScan) with a pixel size of 6.45 μm is used as the image recording device, and a tube lens (magnification 2×) is placed between the microscopic module and the CCD camera. 225 low-resolution intensity images, corresponding to the 15 × 15 LED elements, are captured as the measured images.

The recovery results and running times are displayed in Fig. 2. For GATFP, the initial std is obtained with the median-of-absolute-deviation technique; however, as shown in [26], this std is not accurate, so we adjust it and show the best recovery result in Fig. 2. From Fig. 2, compared to the recovery results of AP and TPWFP, those of GATFP and DTWFP resolve more details (see group 9, element 1). The reconstruction quality of DTWFP is very similar to or even better than that of GATFP (see the enlarged group 9, element 1), while the running time of DTWFP is much shorter than that of GATFP and TPWFP. This is consistent with the simulation results.


Fig. 2 Comparison of the reconstruction results by the three state-of-the-arts and the proposed DTWFP under real data.


5. Conclusion

In this paper, we propose and test a novel, fast, and high-quality reconstruction method for FPM, called DTWFP, which combines the Poisson maximum likelihood objective function, improved truncated Wirtinger criteria, and an improved adaptive momentum method. We compare the reconstruction results and running times of the proposed method with three state-of-the-art methods on simulated data with different types of noise and on real data. Both the simulation and experimental results demonstrate that our method runs about six times faster than TPWFP, with recovery quality similar to or better than that of GATFP. The proposed DTWFP can therefore greatly extend the application of WF-based FP algorithms.

The limitations of our method are mainly twofold. First, the model is non-convex, so the method might converge to a local minimum; second, although much faster than TPWFP and GATFP, the proposed method is still time-consuming compared to AP. Since Nesterov's method [32] generally converges faster than the Polyak heavy ball method, we will consider a Nesterov-type acceleration strategy to further speed up convergence in future work.

Funding

National Natural Science Foundation of China (51775148, 51275121); Science Foundation of Heilongjiang Province (QC2018079); China Postdoctoral Science Foundation (2017T100235).

Acknowledgments

Thanks to Prof. Dai (Tsinghua University) for the experimental procedure of GATFP.

References and links

1. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]   [PubMed]  

2. G. Zheng, X. Ou, R. Horstmeyer, J. Chung, and C. Yang, “Fourier ptychographic microscopy: a gigapixel superscope for biomedicine,” Opt. Photonics News 25(4), 26–33 (2014). [CrossRef]  

3. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]   [PubMed]  

4. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17(10), 7873–7892 (2009). [CrossRef]   [PubMed]  

5. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]   [PubMed]  

6. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

7. T. M. Turpin, L. H. Gesell, J. Lapides, and C. H. Price, “Theory of the synthetic aperture microscope,” Proc. SPIE 2566, 230–240 (1995). [CrossRef]  

8. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49(5), 845–857 (2010). [CrossRef]   [PubMed]  

9. T. S. Ralston, D. L. Marks, P. Scott Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]   [PubMed]  

10. Y. Choi, M. Kim, C. Yoon, T. D. Yang, K. J. Lee, and W. Choi, “Synthetic aperture microscopy for high resolution imaging through a turbid medium,” Opt. Lett. 36(21), 4263–4265 (2011). [CrossRef]   [PubMed]  

11. C. Guo, Y. Zhao, J. Tan, S. Liu, and Z. Liu, “Adaptive lens-free computational coherent imaging using autofocusing quantification with speckle illumination,” Opt. Express 26(11), 14407–14420 (2018). [CrossRef]   [PubMed]  

12. J. Rosenblatt, “Phase retrieval,” Commun. Math. Phys. 95(3), 317–343 (1984). [CrossRef]  

13. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]   [PubMed]  

14. X. Pan, C. Liu, and J. Zhu, “Single shot ptychographical iterative engine based on multi-beam illumination,” Appl. Phys. Lett. 103(17), 171105 (2013). [CrossRef]  

15. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

16. J. Fu, P. Li, R. Tan, and L. Chen, “Performance evaluation of ptychographical iterative engine algorithm for coherent diffractive imaging,” in 2012 7th International Conference on System of Systems Engineering (IEEE, 2012), pp. 111–114.

17. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]   [PubMed]  

18. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]   [PubMed]  

19. J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25(23), 28053–28067 (2017). [CrossRef]  

20. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014). [CrossRef]   [PubMed]  

21. P. C. Konda, J. M. Taylor, and A. R. Harvey, “High-resolution microscopy with low-resolution objectives: correcting phase aberrations in Fourier ptychography,” Proc. SPIE 9630, 96300X (2015). [CrossRef]  

22. P. C. Konda, J. M. Taylor, and A. R. Harvey, “Calibration and aberration correction in multi-aperture Fourier ptychography,” in Imaging and Applied Optics 2016, OSA Technical Digest (online) (Optical Society of America, 2016), paper CT2D.2.

23. L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected Fourier ptychography,” Biomed. Opt. Express 7(11), 4543–4553 (2016). [CrossRef]   [PubMed]  

24. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]   [PubMed]  

25. L. Bian, J. Suo, J. Chung, X. Ou, C. Yang, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient,” Sci. Rep. 6(1), 27384 (2016). [CrossRef]   [PubMed]  

26. Y. Zhang, P. Song, and Q. Dai, “Fourier ptychographic microscopy using a generalized Anscombe transform approximation of the mixed Poisson-Gaussian likelihood,” Opt. Express 25(1), 168–179 (2017). [CrossRef]   [PubMed]  

27. E. Bostan, M. Soltanolkotabi, M. D. Ren, and L. Waller, “Accelerated Wirtinger flow for multiplexed Fourier ptychographic microscopy,” arXiv:1803.03714 (2018).

28. G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving Systems of Random Quadratic Equations via Truncated Amplitude Flow,” IEEE Trans. Inf. Theory 64(2), 773–794 (2018). [CrossRef]  

29. G. Wang, L. Zhang, G. B. Giannakis, M. Akcakaya, and J. Chen, “Sparse phase retrieval via truncated amplitude flow,” IEEE Trans. Signal Process. 66(2), 479–491 (2018). [CrossRef]  

30. B. T. Polyak, “Some methods of speeding up the convergence of iteration methods,” USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964). [CrossRef]  

31. X. Jiang, S. Rajan, and X. Liu, “Wirtinger flow method with optimal stepsize for phase retrieval,” IEEE Signal Process. Lett. 23(11), 1627–1631 (2016). [CrossRef]

32. Y. Nesterov, “A method of solving a convex programming problem with convergence rate O(1/k2),” Soviet Mathematics Doklady 2(27), 372–376 (1983).
