
Noise-robust coded-illumination imaging with low computational complexity

Open Access

Abstract

Coded-illumination (CI) imaging is a feasible technique enabling resolution enhancement and high-dimensional information extraction in optical systems. It incorporates optical encoding and computational reconstruction to help overcome physical limitations. Existing CI reconstruction methods suffer from a trade-off between noise robustness and low computational complexity, both of which are required for practical applications. In this paper, we propose a novel noise-robust and low-complexity reconstruction scheme for CI imaging. The scheme runs iteratively, and each iteration consists of two phases. First, the measurements are input into a novel non-uniform and adaptive weighted solver, whose weights are updated in each iteration. This enables effective identification and attenuation of various kinds of measurement noise from coarse to fine. Second, the preserved latent information enters an alternating projection optimization procedure, which reconstructs the target image by imposing support constraints without matrix lifting. We have successfully applied the scheme to structured illumination imaging and Fourier ptychography. Both simulations and experiments demonstrate that the method achieves strong robustness, low computational complexity, and fast convergence. The scheme can be adopted for various incoherent and coherent CI imaging modalities and can be widely extended.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Coded-illumination (CI) imaging is a feasible technique in optical systems to enhance imaging resolution [1, 2] and extract high-dimensional information, such as quantitative phase imaging [3], multimodal imaging [4] and three-dimensional imaging [5]. The imaging scheme incorporates optical encoding of the illumination and computational reconstruction of the target image to help overcome physical limitations such as light diffraction and intensity-only sensing [1, 2, 6, 7]. By varying the encoding patterns of the illumination, multiple measurements are usually captured to extract more information about the target scene. CI imaging modalities can be categorized as coherent or incoherent according to the coherence state of the light. Coherent modalities such as Fourier ptychography (FP) [1] and coherent diffractive imaging (CDI) [6] use coherent light as illumination and modulate its wavefront. Systems such as structured illumination (SI) imaging [2] and single-pixel imaging (SPI) [7] belong to incoherent imaging, because they use incoherent light and encode its intensity.

In incoherent CI imaging, the forward image formation is mathematically described in the real domain, and least-squares (LS) based [8] and compressive-sensing (CS) based [9, 10] modeling methods are commonly applied to reconstruct the target image from the captured data. The LS method reconstructs the underlying signal by minimizing the difference between the measurements and the corresponding simulated estimates. The CS method reconstructs the target image by solving an optimization problem with the prior that natural images can be sparsely represented using over-complete bases. In comparison, the CS method is more robust to measurement noise than the LS method, but it has higher computational complexity.
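For concreteness, the two reconstruction models can be summarized in a generic form; the measurement operator $A_m$, sparsifying transform $D$ and noise bound $\epsilon$ below are notation introduced here for illustration only and are not symbols used later in this paper.

```latex
% Generic incoherent CI reconstruction objectives (illustrative notation):
% A_m : linear measurement operator of the m-th illumination pattern
% I_m : m-th captured image,  D : sparsifying transform,  epsilon : noise bound
\begin{align}
  \hat{O}_{\mathrm{LS}} &= \arg\min_{O} \sum_{m} \bigl\| I_m - A_m O \bigr\|_2^2 , \\
  \hat{O}_{\mathrm{CS}} &= \arg\min_{O} \bigl\| D O \bigr\|_1
      \quad \text{s.t.} \quad \sum_{m} \bigl\| I_m - A_m O \bigr\|_2^2 \le \epsilon .
\end{align}
```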

In coherent CI imaging, the reconstruction process can be mathematically described as a phase-retrieval optimization [11], which requires retrieving a complex function given the captured intensity of its linear transform. The state-of-the-art phase-retrieval algorithms can be classified into three categories: the alternating projection (AP) [12–17], the Wirtinger flow (WF) [18, 19] and the semidefinite programming (SDP) [20, 21] methods. The AP method implements reconstruction by iteratively imposing support constraints on the to-be-retrieved signal at the object plane (such as a nonnegative support) and the detector plane (consistency with measurements) [22]. In terms of the objective function, the AP method belongs to the amplitude-based LS optimization [23], and it is applicable to incoherent imaging as well [2, 24, 25]. The reconstruction is computationally efficient, but it is sensitive to measurement noise [11], because the measurements are directly imposed as support constraints, which introduces reconstruction errors. The WF method is intensity based and follows a gradient-descent framework using the Wirtinger derivatives [18]; it can reach the global optimum with a careful initialization but is more time-consuming than the AP algorithm [23]. The SDP algorithm is also intensity based, but it lifts the matrix and transforms the conventional quadratic phase-retrieval model into a linear one in a higher dimension [26], and utilizes semidefinite relaxations to find the global optimum with a high signal-to-noise ratio (SNR). Nevertheless, the matrix lifting makes the SDP method more computationally demanding than the other two methods [11, 23]. To conclude, there exists a trade-off between noise robustness and low computational complexity among the above methods, which, however, are both required for practical applications.

In this paper, we propose a novel CI reconstruction scheme that provides both noise robustness and low computational complexity, and is applicable to both incoherent and coherent imaging modalities. The scheme shares its root with the AP method, which iteratively updates variables by imposing support constraints alternately in different domains, but differs in that the measurements are not directly imposed as the consistency constraint. Specifically, each iteration of the proposed scheme consists of two phases. First, the measurements are input into a novel non-uniform and adaptive weighted solver for noise identification and latent signal detection. The solver's weights are updated in each iteration to approach the ground truth and improve reconstruction quality. Second, the preserved latent information enters a backward reconstruction process to update the target image. The proposed method is tested on incoherent structured illumination imaging and coherent Fourier ptychography. Both simulations and experiments demonstrate that the advantages of the proposed scheme lie in the following three aspects.

  • Strong robustness: the proposed non-uniform and adaptive weighted solver enables effective identification and attenuation of various kinds of measurement noise from coarse to fine, which effectively extracts latent information of target details.
  • Low computational complexity: the alternating projection based optimization updates variables by imposing support constraints without matrix lifting, which largely decreases computation cost compared to the conventional compressive sensing based and semidefinite programming based methods.
  • Fast convergence: the joint iterative framework incorporates noise attenuation and alternating projection reconstruction together, which prevents errors from last iteration flowing into subsequent optimization, and thus ensures fast objective decreasing and convergence.

The remainder of this paper is organized as follows: modeling and derivation of the proposed method are explained in Sec. 2. Then, simulation and experiment results are respectively presented in Sec. 3 and Sec. 4 to demonstrate the performance of the proposed method. Finally, we conclude this paper with discussions in Sec. 5.

2. Method

2.1. Image formation

Before introducing the proposed method in detail, we first review the image formation of coded-illumination imaging. In general, the formation model can be described as

$$I = \left|T(P \odot O)\right|^2, \tag{1}$$

where $I$ denotes the captured image, $P$ stands for the optical encoding, $O$ represents the ground-truth information of the target, $\odot$ denotes entry-wise multiplication, and $T$ is the linear transform between the signals at the object plane and the detector plane.

Fig. 1 Two exemplar image formation models of coded-illumination imaging. (a) Incoherent structured illumination (SI) imaging. (b) Coherent Fourier ptychographic (FP) imaging.

Incoherent imaging systems use incoherent light to illuminate the target scene, and in their formation model all the variables of Eq. (1) are in real space. The structured illumination (SI) technique [2] is a typical incoherent CI imaging method, which modulates the incoherent light's intensity as a sinusoidal pattern in the spatial domain. In this way, the high-frequency information that is conventionally filtered out due to light diffraction is moved into the low-frequency region, and can be captured and recovered by algorithmic decorrelation for resolution enhancement. As shown in the SI formation schematic in Fig. 1(a), $O \in \mathbb{R}^{n \times n}$ represents the ground-truth target image, $P \in \mathbb{R}^{n \times n}$ denotes the sinusoidal illumination pattern, $I \in \mathbb{R}^{n \times n}$ is the captured image, $T$ stands for low-pass filtering, and $T^{-1}$ becomes deconvolution in the SI imaging case.
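As a minimal numerical sketch of this incoherent forward model, the snippet below simulates one SI measurement. The parameter values, the function name, and the simple circular Fourier-space cutoff standing in for the low-pass transfer function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lowpass(img, cutoff=0.2):
    """Crude low-pass filter T: keep a circular region of the spectrum.
    cutoff is an assumed radius in cycles/pixel (Nyquist is 0.5)."""
    n = img.shape[0]
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    mask = np.sqrt(FX**2 + FY**2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# Ground-truth object O and one sinusoidal illumination pattern P (phase 0)
n = 256
O = np.random.rand(n, n)                     # stand-in for the target image
x = np.arange(n)
X, Y = np.meshgrid(x, x)
P = 0.5 * (1.0 + np.cos(2 * np.pi * 0.1 * X))

# One captured image, following Eq. (1): I = |T(P ⊙ O)|^2
I = np.abs(lowpass(P * O))**2
```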

Coherent imaging systems illuminate the target with coherent light. In this case, both the optical encoding function and the object function belong to the complex field. Taking Fourier ptychographic (FP) imaging [1] as an example, whose image formation is shown in Fig. 1(b), the light's incident angle is modulated, and the tilted illumination results in a shifted spatial spectrum of the target, denoted as $O \in \mathbb{C}^{n \times n}$. After being filtered by the pupil function $P \in \mathbb{C}^{n \times n}$ and inverse Fourier transformed, the wavefront propagates to the detector, which captures an intensity image $I \in \mathbb{R}^{n \times n}$. Therefore, $T = \mathcal{F}^{-1}$ represents the inverse Fourier transform in the FP case, and $T^{-1}$ stands for the Fourier transform. By stitching the images captured under different incident angles together in Fourier space using phase-retrieval reconstruction, both the amplitude and phase of the target can be recovered with high resolution.
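A corresponding sketch of one coherent FP measurement under Eq. (1) is given below. The sub-spectrum cropping convention, array sizes, and function name are illustrative assumptions rather than the exact implementation used in the paper.

```python
import numpy as np

def fp_measurement(obj_spectrum, pupil, shift):
    """One FP intensity image for a given illumination angle.
    obj_spectrum : centered high-resolution spectrum of the target (complex)
    pupil        : low-NA pupil function P (complex, m x m)
    shift        : (row, col) offset of the sub-spectrum selected by the
                   tilted illumination (a simple cropping model)."""
    m = pupil.shape[0]
    r0, c0 = shift
    psi = obj_spectrum[r0:r0 + m, c0:c0 + m] * pupil   # Psi = P ⊙ (shifted O)
    phi = np.fft.ifft2(np.fft.ifftshift(psi))          # Phi = T(Psi), T = inverse Fourier transform
    return np.abs(phi)**2                              # I = |Phi|^2
```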

2.2. Reconstruction scheme

The goal of CI reconstruction is to recover the target information $O$ from the measurement $I$ and the optical encoding function $P$. If $O$ is not in the spatial domain (as in the FP case), a domain transformation is needed to produce the target image. To simplify the algorithm derivation, we set $\Psi = P \odot O$ (the illumination encoding at the object plane) and $\Phi = T(\Psi)$ (the optical transform through the capture system), and rewrite the formation model as

$$I = |\Phi|^2, \quad \Phi = T(\Psi), \quad \Psi = P \odot O. \tag{2}$$

The recovery process begins with an initial guess of the target information, $O_0$, and then enters an iterative optimization process. The flowchart of the $n$th iteration is shown in Fig. 2; it consists of two phases: extracting the latent amplitude $|\Phi|$ from the measurement, and imposing constraints for target updating.

Fig. 2 Flowchart of the proposed reconstruction scheme for coded-illumination imaging.

2.2.1. Extracting the latent amplitude |Φ| from measurement

With the updated $O_n$ from the last iteration and the optical encoding function $P$, we run the formation process forward, $\Psi_n = P \odot O_n$, and the wavefront at the detector plane is $\Phi_n = T(\Psi_n)$. In the ideal case free from measurement noise, $I = |\Phi|^2$. As a result, the consistency constraint at the detector plane can be implemented by replacing the amplitude of $\Phi_n$ with $\sqrt{I}$, namely $\Phi_{n+1} = \sqrt{I}\,\frac{\Phi_n}{|\Phi_n|}$. This is what the conventional AP method assumes and follows. In the real case, however, there exists measurement noise due to limited exposure and system errors, which introduces aberrations into subsequent iterations and degrades the final reconstruction.

To eliminate the negative influence of measurement noise, we use the Poisson distribution [27] to describe the nature of photon arrivals at the detector as

$$I \sim \mathrm{Poisson}\left(|\Phi|^2\right). \tag{3}$$

The signal's probability mass function is $\frac{e^{-|\Phi|^2}\left(|\Phi|^2\right)^{I}}{I!}$, where $e$ is Euler's number and $I!$ is the factorial of $I$. Note that $I$, $|\Phi|^2$ and the corresponding calculations are all entry-wise. Based on the maximum-likelihood estimation theory [28], the latent amplitude $|\Phi|$ can be extracted by maximizing the probability as

$$\arg\max_{|\Phi|} \log\frac{e^{-|\Phi|^2}\left(|\Phi|^2\right)^{I}}{I!} \;\Leftrightarrow\; \arg\max_{|\Phi|} -|\Phi|^2 + I\log|\Phi|^2. \tag{4}$$

Recalling that the amplitude should be consistent with its forward reconstruction, a quadratic regularization on the amplitude difference between $\Phi$ and its forward estimate $T(\Psi_n) = T(P \odot O_n)$ is introduced, and we obtain the optimization objective function for retrieving the latent amplitude $|\Phi|$ as

$$\min\; L(|\Phi|) = |\Phi|^2 - I\log|\Phi|^2 + \frac{1}{\gamma}\left(|\Phi| - \left|T(P \odot O_n)\right|\right)^2, \tag{5}$$

where $\gamma$ is a weight parameter to balance the Poisson and residual optimization terms. It is worth noting that the joint objective function does not assume any noise model, and therefore it is applicable for the attenuation of various kinds of measurement noise [29]. This is validated by the simulations in Sec. 3.

The derivative of $L(|\Phi|)$ in Eq. (5) with respect to $|\Phi|$ is

$$\frac{\partial L}{\partial |\Phi|} = 2|\Phi| - \frac{2I}{|\Phi|} + \frac{2}{\gamma}\left(|\Phi| - \left|T(P \odot O_n)\right|\right). \tag{6}$$

Setting the derivative to 0, we obtain

$$\left(1 + \frac{1}{\gamma}\right)|\Phi|^2 - \frac{1}{\gamma}\left|T(P \odot O_n)\right|\,|\Phi| - I = 0, \tag{7}$$

from which the closed-form solution of $|\Phi|$ is derived as

$$|\Phi_{n+1}| = \frac{\frac{1}{\gamma}\left|T(P \odot O_n)\right| + \sqrt{\left(\frac{1}{\gamma}\left|T(P \odot O_n)\right|\right)^2 + 4\left(1 + \frac{1}{\gamma}\right)I}}{2\left(1 + \frac{1}{\gamma}\right)}. \tag{8}$$
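A minimal numerical sketch of this per-entry closed-form update is given below (Python/NumPy); the function name and argument layout are our own illustration, not code from the paper.

```python
import numpy as np

def latent_amplitude(I, phi_forward_abs, gamma):
    """Closed-form solution of Eq. (8), evaluated entry-wise.
    I               : measured intensity (nonnegative array)
    phi_forward_abs : |T(P O_n)|, amplitude of the forward estimate
    gamma           : weight of Eq. (9), scalar or per-pixel array"""
    a = phi_forward_abs / gamma          # (1/gamma) |T(P O_n)|
    b = 1.0 + 1.0 / gamma                # (1 + 1/gamma)
    return (a + np.sqrt(a**2 + 4.0 * b * I)) / (2.0 * b)
```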

Fig. 3 Different settings of k lead to different weight functions and different reconstruction quality.

As stated before, $\gamma$ is a weight parameter balancing the Poisson and residual optimization terms. For the entries corrupted with heavy noise, the value of $\gamma$ should be small to decrease the weight of the Poisson term, which helps prevent the noise of inaccurate measurements from flowing into subsequent iterations. If the measurement is accurate and free from noise, the value of $\gamma$ should be large to ensure that the latent information in $I$ successfully flows into the subsequent iteration. Based on the above analysis, we propose an adaptive and non-uniform weight function of $\gamma$ as

$$\gamma = \frac{1}{\log_k\left(1 + \left|I - \left|T(P \odot O_n)\right|^2\right|\right)}. \tag{9}$$

Specifically, the difference between the measurement $I$ and its forward estimate $|T(P \odot O_n)|^2$ is utilized to estimate the noise level at each entry. A large difference corresponds to a high noise level, and $\gamma$ becomes small, which is consistent with the above analysis. When there is no noise, $\gamma \to +\infty$ and $|\Phi_{n+1}|$ approaches $\sqrt{I}$ in Eq. (8), meaning that the measurement plays the dominant role in updating $|\Phi_n|$ in this case. When there is heavy noise, $\gamma \to 0$ and $|\Phi_{n+1}| \to |T(P \odot O_n)|$, meaning that the amplitude keeps its forward estimate to avoid noise degradation.

The parameter $k$ is pre-determined by the user and tunes the change rate of $\gamma$. The weight function at different $k$ settings and the corresponding reconstructed images at different noise levels are shown in Fig. 3. As shown, the weight curve is steep when $k$ is small (for example, $k = 1.01$). In this case, measurement noise is effectively attenuated; however, the latent target details are attenuated at the same time, which results in low resolution. When $k$ is large (for example, $k = 1000$), the curve changes smoothly, and more information from $I$ is preserved and flows into the reconstruction, which results in high resolution at low noise levels but low SNR at high noise levels. In practice, $k$ needs to be adjusted according to the noise level of each system to achieve satisfactory reconstruction performance.
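The adaptive weight of Eq. (9) can be sketched as follows; the function name and the small constant eps (added only to avoid division by zero where the residual vanishes) are our own illustrative choices, and k must be larger than 1.

```python
import numpy as np

def adaptive_gamma(I, phi_forward_abs, k=10.0, eps=1e-12):
    """Non-uniform adaptive weight of Eq. (9).
    The residual |I - |T(P O_n)|^2| estimates the per-pixel noise level:
    a large residual gives a small gamma (trust the forward estimate),
    a small residual gives a large gamma (trust the measurement).
    k (> 1) tunes the change rate of the weight curve."""
    residual = np.abs(I - phi_forward_abs**2)
    return 1.0 / (np.log(1.0 + residual) / np.log(k) + eps)
```

Feeding this per-pixel $\gamma$ into the closed-form update above reproduces the limiting behaviors discussed in this subsection: a vanishing residual drives $\gamma$ toward $+\infty$ and the update toward $\sqrt{I}$, while a large residual drives $\gamma$ toward $0$ and the update toward $|T(P \odot O_n)|$.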

2.2.2. Imposing constraint for target updating

With the extracted amplitude $|\Phi_{n+1}|$, imposing the amplitude constraint on the wavefront at the detector plane produces

$$\Phi_{n+1} = |\Phi_{n+1}|\,\frac{\Phi_n}{|\Phi_n|}. \tag{10}$$

The updated wavefront is then utilized to invert the formation process and update the target image. The backward updating rule follows the ptychographic iterative engine [30]. Specifically, the wavefront at the object plane $\Psi$ is updated as

$$\Psi_{n+1} = \Psi_n + T^{-1}\left(\Phi_{n+1} - T(\Psi_n)\right), \tag{11}$$

and the updating rule of the target image is given by

$$O_{n+1} = O_n + \alpha\,\frac{P^{*}}{|P|^2_{\max}}\left(\Psi_{n+1} - \Psi_n\right), \tag{12}$$

where $\alpha$ denotes the learning rate, $P^{*}$ is the complex conjugate of $P$, and $|P|^2_{\max}$ is the maximum squared magnitude of $P$.

So far, the $n$th iteration is completed, and the produced $O_{n+1}$ is set as the input of the next iteration. The entire iterative process is repeated sequentially over all the measurements (images captured under different illumination patterns in SI imaging and under different illumination angles in FP imaging) until convergence, which is determined by the criterion that the difference between $O_{n+1}$ and $O_n$ is smaller than a threshold.
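The two-phase update of Eqs. (8)–(12) for a single measurement can be condensed into the following schematic sketch. The function name, the eps stabilizer, and the default parameter values are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ci_iteration(O_n, P, I, T, T_inv, alpha=1.0, k=10.0, eps=1e-12):
    """One iteration of the proposed scheme for a single measurement I.
    T / T_inv are the forward / inverse transforms of Eq. (2)
    (e.g. np.fft.ifft2 / np.fft.fft2 in the FP case)."""
    # Forward pass: Psi_n = P ⊙ O_n, Phi_n = T(Psi_n)
    Psi_n = P * O_n
    Phi_n = T(Psi_n)
    phi_abs = np.abs(Phi_n)

    # Phase 1: adaptive weight (Eq. (9)) and latent amplitude (Eq. (8))
    residual = np.abs(I - phi_abs**2)
    gamma = 1.0 / (np.log(1.0 + residual) / np.log(k) + eps)   # requires k > 1
    a = phi_abs / gamma
    b = 1.0 + 1.0 / gamma
    amp = (a + np.sqrt(a**2 + 4.0 * b * I)) / (2.0 * b)

    # Phase 2: amplitude constraint (Eq. (10)) and backward update (Eqs. (11)-(12))
    Phi_next = amp * Phi_n / (phi_abs + eps)
    Psi_next = Psi_n + T_inv(Phi_next - T(Psi_n))
    O_next = O_n + alpha * np.conj(P) / (np.abs(P).max()**2 + eps) * (Psi_next - Psi_n)
    return O_next
```

Sweeping this update sequentially over all measurements (each with its own encoding P and captured image I) and repeating until the change in O falls below the convergence threshold reproduces the overall iterative loop described above.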

3. Simulation results

In this section, simulations are implemented to validate the noise robustness and low computational complexity of the proposed method for CI imaging. Both the conventional AP method and the reported scheme are applied to the reconstruction of incoherent structured illumination imaging and coherent Fourier ptychographic imaging. To demonstrate that the proposed method is able to tackle various kinds of measurement noise, we use two common kinds, Gaussian and speckle noise, for testing.

To quantitatively evaluate reconstruction quality, we use the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index [31] as metrics. The former calculates the energy ratio between latent signal and reconstruction error, which measures the overall reconstruction precision. The latter is an index ranging from 0 to 1, and measures local structure similarity between the reconstructed image and the ground-truth image.
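A minimal sketch of how these two metrics can be computed is given below, assuming scikit-image is available; the helper name is illustrative and is not the evaluation code used in the paper.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(recon, ground_truth):
    """PSNR and SSIM between a reconstruction and the ground truth,
    both assumed to be real-valued arrays normalized to [0, 1]."""
    psnr = peak_signal_noise_ratio(ground_truth, recon, data_range=1.0)
    ssim = structural_similarity(ground_truth, recon, data_range=1.0)
    return psnr, ssim
```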

Fig. 4 Simulation results using the conventional AP method and the reported scheme for incoherent structured illumination imaging. (a) Examples of raw images. (b) Reconstruction comparison under different levels of Gaussian noise. (c) Reconstruction comparison under different levels of speckle noise.

3.1. Simulation results of structured illumination imaging

In the simulation of structured illumination imaging, we use the 'cameraman' image (256×256 pixels) as the ground-truth target. The pixel values are normalized to [0, 1]. The capture system has a numerical aperture (NA) of 0.1, which is the same as the illumination NA. The illumination strategy is the same as in ref. [32]. Specifically, 4 sinusoidal illumination patterns are used in total: two complementary patterns (with sinusoidal phases 0 and π) for one sinusoidal orientation, and two patterns at the other two sinusoidal orientations (with sinusoidal phase 0). Correspondingly, 4 simulated raw images are acquired. Different levels of Gaussian and speckle noise are respectively added to the simulated captured images; the noise level is represented by its standard deviation. In reconstruction, the iteration stops when the difference of the reconstructed target image between two successive iterations is smaller than $10^{-8}$ at each entry.

Fig. 5 Simulation results for coherent Fourier ptychographic imaging. (a) The ground-truth amplitude and phase images. (b) Raw images corrupted by Gaussian and speckle noise. (c) Reconstruction comparison under different levels of Gaussian noise. (d) Reconstruction comparison under different levels of speckle noise.

Both qualitative and quantitative reconstruction results are presented in Fig. 4. For comparison, Fig. 4(a) shows exemplar raw images captured under uniform and structured illumination, and Figs. 4(b) and 4(c) show the reconstructed images of the proposed method and the conventional AP method under Gaussian and speckle noise. The recovered images validate that the reported scheme produces less noise than the conventional AP method. From the quantitative results, we can see that as the noise level increases, the PSNR of the conventional AP method decreases much faster than that of the proposed scheme. For example, the PSNR of the reported scheme is nearly 6 dB higher than that of the conventional AP when the Gaussian noise level is 0.06; moreover, when the speckle noise level is 0.5, the PSNR of the proposed method is nearly 9 dB higher. The comparison of the SSIM metric shows similar results, which validates that the proposed method is robust to both Gaussian and speckle measurement noise.

The convergence comparison is also shown in the curve graphs in Fig. 4. From the results we can see that as the noise level increases, fewer iterations are needed with the proposed scheme compared to the conventional AP reconstruction. The advantage originates from the joint optimization framework that incorporates adaptive noise attenuation and alternating projection optimization. In the conventional AP reconstruction, measurement noise flows into the variable updating in each iteration. As a result, the conventional AP requires a large number of iterations to find a balanced solution. In contrast, the proposed method adaptively identifies errors and prevents them from entering the reconstruction, which ensures informative updating and a decreasing objective at each iteration. This leads to fewer iterations for final convergence. To conclude, the reported scheme has strong robustness to both kinds of measurement noise and fast convergence.

3.2. Simulation results of Fourier ptychographic imaging

In this simulation, we test the proposed method for coherent Fourier ptychographic imaging. As shown in Fig. 5(a), we use the 'cameraman' image and the 'westconcordorthophoto' image (512 × 512 pixels) as the target's amplitude and phase, respectively. The detection system has a numerical aperture of 0.08, and 15 × 15 LEDs are used for tilted illumination. The illumination NA is 0.5, which is determined by the maximum illumination angle of the LED source. The illumination configuration is the same as in ref. [1]. In total, 225 low-resolution images are captured at different incident angles. The captured images are corrupted with different levels of Gaussian and speckle noise; examples of the corrupted raw images are shown in Fig. 5(b) for comparison. Both the conventional AP method and the reported scheme are used for FP reconstruction. The convergence criterion is the same as in the above SI simulation: the difference of the reconstructed target image between two successive iterations is smaller than $10^{-8}$ at each entry.

Figures 5(c) and 5(d) show the reconstruction results of the two methods under Gaussian noise and speckle noise, respectively. Both PSNR and SSIM are calculated on the amplitude images. From the results we can see that the images recovered by the conventional AP method become increasingly blurred and distorted as the noise level increases, while the proposed scheme produces less noise and more details. Specifically, when the noise level is high (0.06 and 0.08 for Gaussian noise, 1.8 and 2.7 for speckle noise), AP fails to recover the quantitative phase, while the reported scheme still works well. The convergence comparison is also shown in Fig. 5. From the results, we can see that as the noise level increases, fewer iterations are needed with the proposed scheme. For example, when the Gaussian noise level is 0.08, the proposed scheme requires only ∼68% of the iterations of the conventional AP; when the speckle noise level is 2.7, the ratio becomes ∼56%.

4. Experiment results

To further validate the robustness of the proposed scheme, we implemented the algorithms on both SI and FP experimental data.

Fig. 6 SI experiment results. (a) Raw image captured under uniform illumination. (b) Reconstruction results at different exposure times.

Fig. 7 FP experiment results. (a) Image captured at normal incidence. (b) Reconstruction results at different exposure times.

In the SI experiment, we used fluorescent spheres (20 nm) as the target. The detection system has a numerical aperture of 1.49, and the illumination wavelength is 620 nm. The illumination strategy is the same as in the simulation. Different exposure times were applied to capture images under different noise levels; shorter exposure time leads to heavier measurement noise. Figure 6(a) shows the raw image captured under uniform illumination. The high-resolution images recovered by the two methods are shown in Fig. 6(b). When the exposure time is long (20 ms), both algorithms produce high reconstruction quality. As the exposure time decreases, lower SNR and more aberrations emerge in the reconstruction of the conventional AP method, and the image contrast decreases heavily when the exposure time is less than 1 ms. In contrast, the proposed method obtains less noise, a smoother background, and higher image contrast even in the heavy-noise case.

In the FP experiment, we used a blood smear sample as the target and captured a sequence of 225 low-resolution images under the illumination of 15 × 15 LEDs. Similar to ref. [27], the numerical aperture of the detection system is 0.1, and the central wavelength of the incident light is 632 nm. The reconstruction results of both methods are shown in Fig. 7; the image captured at normal incidence is shown in Fig. 7(a), and Fig. 7(b) shows the recovered amplitude and phase images. From the results we can see that as the exposure time decreases, the reconstruction of the conventional AP degrades severely, while the proposed method produces more target details, a smoother background, and higher image contrast.

5. Conclusions and discussions

In summary, we proposed a noise-robust and low-complexity reconstruction scheme applicable to both incoherent and coherent coded-illumination imaging. The noise robustness originates from the proposed non-uniform and adaptive weighted solver, which enables effective noise attenuation and latent signal extraction from coarse to fine. The low computational complexity benefits from the alternating projection optimization, which does not require matrix lifting. Experiments demonstrate that the proposed method requires fewer iterations to converge than the conventional alternating projection method, which profits from the joint iterative optimization framework incorporating noise attenuation and alternating projection reconstruction. To conclude, the proposed scheme achieves strong noise robustness, low computational complexity, and fast convergence, which help save both exposure and reconstruction time.

The reported scheme can be widely extended. First, it can be applied to various imaging modalities, by adapting the transform operation T in Eq. (1) to corresponding optical modulation. Second, the momentum optimization strategy [33] can be incorporated into the proposed scheme to further increase convergence speed. Third, the sparse representation prior of natural images from the compressive sensing theory [9, 10] can be incorporated into the proposed solver to extract more target details and further improve reconstruction quality.

Funding

Beijing Institute of Technology Research Fund Program for Young Scholars; Fundamental Research Funds for the Central Universities (3052019024); National Natural Science Foundation of China (61827901).

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]  

2. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008). [CrossRef]   [PubMed]  

3. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19, 1016–1026 (2011). [CrossRef]   [PubMed]  

4. Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” J. Biomed. Opt. 19, 106002 (2014). [CrossRef]  

5. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]  

6. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400, 342–344 (1999). [CrossRef]  

7. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008). [CrossRef]  

8. C. L. Lawson and R. J. Hanson, Solving Least Squares Problems (Prentice-Hall, 1974). Prentice-Hall Series in Automatic Computation.

9. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theor. 52, 1289–1306 (2006). [CrossRef]  

10. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008). [CrossRef]  

11. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging,” IEEE Signal Process. Mag. 32, 87–109 (2015). [CrossRef]  

12. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

13. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3, 27–29 (1978). [CrossRef]   [PubMed]  

14. C. C. Chen, J. Miao, C. W. Wang, and T. K. Lee, “Application of optimization technique to noncrystalline X-ray diffraction microscopy: Guided hybrid input-output method,” Phys. Rev. B 76, 3009–3014 (2007). [CrossRef]  

15. V. Elser, “Solution of the crystallographic phase problem by iterated projections,” Acta Crystallogr. Sect. A 59, 201–209 (2003). [CrossRef]  

16. D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Probl. 21, 37–50 (2004). [CrossRef]  

17. J. A. Rodriguez, R. Xu, C. C. Chen, Y. Zou, and J. Miao, “Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities,” J. Appl. Crystallogr. 46, 312–318 (2013). [CrossRef]   [PubMed]  

18. E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via wirtinger flow: Theory and algorithms,” IEEE Trans. Inf. Theory 61, 1985–2007 (2015). [CrossRef]  

19. Y. Chen and E. J. Candès, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” in International Conference on Neural Information Processing Systems, (2015), pp. 739–747.

20. E. J. Candès, T. Strohmer, and V. Voroninski, “Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math. 66, 1241–1274 (2013). [CrossRef]  

21. I. Waldspurger, A. D’aspremont, and S. Mallat, “Phase recovery, maxcut and complex semidefinite programming,” Math. Program. 149, 47–81 (2015). [CrossRef]  

22. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]   [PubMed]  

23. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015). [CrossRef]  

24. S. Dong, K. Guo, S. Jiang, and G. Zheng, “Recovering higher dimensional image data using multiplexed structured illumination,” Opt. Express 23, 30393–30398 (2015). [CrossRef]   [PubMed]  

25. L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A 35, 78–87 (2018). [CrossRef]  

26. E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” SIAM J. Imag. Sci 6, 199–225 (2013). [CrossRef]  

27. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using wirtinger flow optimization,” Opt. Express 23, 4856–4866 (2015). [CrossRef]   [PubMed]  

28. J. Rice, Mathematical Statistics and Data Analysis (Cengage Learning, 1988).

29. L. Bian, J. Suo, J. Chung, X. Ou, C. Yang, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient,” Sci. Rep. 6, 27384 (2016). [CrossRef]  

30. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85, 4795–4797 (2004). [CrossRef]  

31. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004). [CrossRef]  

32. S. Dong, J. Liao, K. Guo, L. Bian, J. Suo, and G. Zheng, “Resolution doubling with a reduced number of image acquisitions,” Biomed. Opt. Express 6, 2946–2952 (2015). [CrossRef]   [PubMed]  

33. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4, 736–745 (2017). [CrossRef]  
