## Abstract

The high-frequency vibration of the imaging system degrades the quality of the reconstruction of ptychography by acting as a low-pass filter on ideal diffraction patterns. In this study, we demonstrate that by subtracting the deliberately blurred diffraction patterns from the recorded patterns and adding the properly amplified subtraction to the original data, the high-frequency components lost by the vibration of the setup can be recovered, and thus the image quality can be distinctly improved. Because no prior knowledge regarding the vibrating properties of the imaging system is needed, the proposed method is general and simple and has applications in several research fields.

© 2017 Optical Society of America

## 1. Introduction

Coherent diffraction imaging (CDI) is a promising technology for obtaining the complex transmission function of a specimen from the recorded diffraction intensity. As a lens-free technique, CDI can bypass the resolution limits imposed by the poor focusing optics available at short wavelengths [1,2] and can theoretically reach the diffraction-limited resolution. With X-ray and high-energy electrons, a resolution of nanometers or angstroms can be achieved; thus, CDI is becoming an important tool in material and biological sciences [3–7]. Because the performance of traditional CDI algorithms is not very satisfying with regard to convergence, accuracy, and reliability, several improved CDI methods have been proposed [8–11]. The ptychographic iterative engine (PIE) [12] is a scanning version of the CDI technique where the specimen is scanned through a localized illumination beam to a grid of positions and the resulting diffraction patterns are recorded. Using an iterative scheme with a proper overlapping ratio between two adjacent scanning positions, the modulus and phase of the transmission functions of the specimen and the illumination beam can be reconstructed accurately and rapidly [13,14]. In theory, PIE can easily generate images with a resolution only limited by the numerical aperture of the detector; however, in practice, the resolution is affected by the flaws of the imaging system, especially for imaging with X-ray and electron beams. The coherence of synchrotron radiation sources is not as good as that of a laser beam and has been proven to be the largest barrier for PIE to reach the theoretical diffraction-limited resolution [15,16]. Several algorithms have been investigated to improve the image quality of PIE with partially coherent illumination [17–22]. 
Although these methods require complete or partial prior knowledge of the properties of the illumination or time-consuming computation and may slightly compromise the spatial resolution, the quality of the reconstruction can be distinctly improved in many cases. Furthermore, the recently developed technique of Fourier ptychography has the potential to avoid the influence of incoherence and achieve high-quality reconstruction [23,24]. Another factor that limits the practical resolution of PIE is the instability of the imaging system, including the vibration of the mechanical scanning system, tiny pointing-direction changes, and transverse shifting of the radiation beam [25,26]. Because the wavelengths of X-ray and electron beams can be much smaller than 1 nm, a tiny departure of the illumination beam from the correct position and direction can generate obvious errors in the final reconstruction. Numerous methods have been proposed to correct low-frequency vibration with a period far longer than the exposure time of the detector [27–30]. The high-frequency vibration of the experimental system makes the recorded diffraction intensity a summation of many diffraction intensities formed by the changing illumination during the exposure of the detector [31]. Although the influence of the high-frequency vibration can be treated as incoherence of the illumination, the methods [17–19,22] mentioned above require complete or partial prior knowledge of the properties of the vibration, including the frequencies and amplitudes, or massive data processing. However, in practice, the parameters of the vibration of the imaging system are difficult to measure in real time. The dynamic vibration of the sample, or equivalently the probe, can be solved by the mixed state method [20,21] with multiple probe modes [31] or identified by the “low-rank ptychography” method with a tunable solution rank number [32]. 
In these two methods, the illumination mode number or the solution rank number is related to the vibration properties and greatly increases when the characteristics of the vibration are complex. To effectively deal with the image-quality degradation induced by the setup instability, we need to examine the principle of its influence on the recorded data and the final reconstruction and then develop a simple method to circumvent this problem.

In this study, we provide an easily understood mathematical explanation of how the recorded diffraction patterns and the final reconstruction are blurred by the instability of the imaging system and then propose a simple numerical method for enhancing the lost high-frequency components. The proposed method requires no prior knowledge regarding the characteristics of the high-frequency vibration of the imaging system, and it can be extended to other CDI methods using X-ray and electron beams to solve problems related to imaging-system instability.

## 2. Principle of the method

In the PIE method, the specimen *O*(*r*) is fixed on a two-dimensional (2D) translation stage and illuminated by a localized probe *P*(*r*). Assuming that the specimen is sufficiently thin, the exiting wave from the specimen is *φ*(*r*, *R*) = *O*(*r* − *R*)*P*(*r*), and the recorded diffraction intensity is *I*(*k*, *R*) ∝ |ℑ[*φ*(*r*, *R*)]|^{2} in most experiments with short-wavelength sources, where *k* is the reciprocal coordinate with respect to the real space coordinate *r* in the specimen plane, and *R* denotes the position of the specimen during the raster scanning. With the recorded diffraction patterns, the complex amplitudes of the specimen and the probe can be reconstructed. The flowchart of the reconstruction process of PIE is shown in Fig. 1, and a detailed description is found in the literature [12,13].
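For readers who want a concrete picture of the update loop summarized in Fig. 1, the core of one PIE/ePIE-style iteration [13] can be sketched as follows. This is a minimal single-position NumPy sketch; the array shapes, the step size `alpha`, and the function name are illustrative, not the authors' code.

```python
import numpy as np

def pie_update(obj, probe, I_meas, alpha=1.0):
    """One object update at a single scan position, in the spirit of the
    ePIE scheme [13] (schematic sketch, not the authors' code).
    obj, probe : complex 2-D arrays of equal shape
    I_meas     : recorded diffraction intensity for this position
    """
    psi = obj * probe                      # exit wave O(r)P(r)
    Psi = np.fft.fft2(psi)                 # propagate to the detector
    # Replace the modulus with the measurement, keep the phase:
    Psi_c = np.sqrt(I_meas) * np.exp(1j * np.angle(Psi))
    psi_c = np.fft.ifft2(Psi_c)            # back to the specimen plane
    # Conjugate-probe object update (the probe update is analogous):
    return obj + alpha * np.conj(probe) / (np.abs(probe) ** 2).max() * (psi_c - psi)
```

In a full reconstruction this step is repeated over all overlapping scan positions; if the measured intensity already matches the current estimate, the update leaves the object unchanged.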

#### 2.1. Influence of the setup instability

In the PIE algorithm, the illumination probe is assumed to be absolutely static during the data acquisition; however, its direction and position change continually within a small range with respect to the specimen and detector owing to the instability of the imaging system. In most cases, the direction and position of the illumination beam change simultaneously during the exposure of the detector, but for simplicity, we analyze them separately to determine how the recorded data are influenced mathematically and then consider their combined effects.

According to the principles of Fourier optics, any illumination beam can be decomposed into a series of spherical waves of different strengths; thus, we can assume a spherical illuminating probe without loss of generality. The spherical illumination is expressed in Fourier form as *P̃*(*k*) = *W*(*k*)*exp*(−*jπλzk*^{2}), where *λ* is the wavelength of the probe, *z* is the distance between the focal spot of the probe and the specimen, and *W*(*k*) is determined by the numerical aperture of the illuminating optics. For a short wavelength or in the far field, the diffraction distribution in the detector plane is the Fourier transform of the wave exiting the specimen. The complex amplitude of the diffraction pattern in the detector plane can be expressed as

$$U(k,R)=\Im \left[O(r-R)P(r)\right]=\left[\tilde{O}(k)\mathit{exp}\left(-j2\pi kR\right)\right]\otimes \tilde{P}(k),$$

where *Õ*(*k*) is the Fourier transform of the transmission function of the specimen. The recorded intensity with static illumination is

$$I(k)={I}_{0}(k)+{\sum}_{m\ne n}{A}_{m}(k){A}_{n}(k)\mathit{cos}\left[\theta (k)\right],$$

where *I*_{0}(*k*) is the intensity summation of all diffracted beams, and the second term ${\sum}_{m\ne n}{A}_{m}(k){A}_{n}(k)\mathit{cos}\left[\theta (k)\right]$ represents the interference between different spatial-frequency components.

Here, *k*_{m} − *k*_{n} is the frequency of the interference fringe formed by the *m*^{th} and *n*^{th} diffraction orders, and *ϕ*_{m,n} is the additional phase introduced by the specimen. Considering the coordinate transformation *r*_{c} = *λLk* on the CCD plane, where *L* is the distance between the specimen and the CCD, the diffraction-pattern intensity can be described in the frequency domain by its Fourier transform *Ĩ*_{0}(*u*), where *u* is the reciprocal coordinate with respect to the real-space coordinate *r*_{c} in the CCD plane.

When considering the pointing instability of the imaging system, the illumination beam tilted by an angle *α* can be expressed as

*P̃′*(*k*) = *P̃*(*k* − Δ*k*), where Δ*k* = *sinα*/*λ*. The recorded intensity with tilted illumination becomes *I*(*k* − Δ*k*), i.e., a shifted copy of the static pattern. Assuming that the tilt angles follow a normal distribution *H*_{1}(Δ*k*) = *exp*(−Δ*k*^{2}/*K*^{2}), where *K* is a constant related to the vibrating properties of the imaging system, the recorded intensity with high-frequency vibration in the pointing direction of the illuminating beam can be interpreted as a summation of the diffraction patterns of all possible illuminations with different pointing directions. Thus, it can be expressed as

$${I}_{v1}(k)={\sum}_{\Delta k}{H}_{1}(\Delta k)I(k-\Delta k).$$

Considering the coordinate transformation *r*_{c} = *λLk* in the detector plane, this convolution becomes a product, and the recorded intensity can be expressed in the frequency domain as

$${\tilde{I}}_{v1}(u)={\tilde{I}}_{0}(u){\tilde{H}}_{1}(u),$$

where *H̃*_{1}(*u*), the Fourier transform of the Gaussian *H*_{1}, is itself a Gaussian low-pass window.

On the other hand, when the illumination probe suffers from transverse positioning vibration, the recorded diffraction pattern can be expressed as a summation of the diffraction intensities formed by all possible illumination beams with diverse transverse shifts. The probe with a transverse shift of *δ* is expressed as *P″*(*r*) = *P*(*r* + *δ*), and its Fourier transform is *P̃″*(*k*) = *P̃*(*k*)*exp*(*j*2*πkδ*). For the spherical probe, this corresponds to a shift Δ = *δ*/*λz* of the diffraction pattern, so the corresponding diffraction-pattern intensity is *I*(*k* − Δ). Assuming that the shifts follow a normal distribution *H*_{2}(Δ) = *exp*(−Δ^{2}/*D*^{2}), where *D* is determined by the standard deviation of Δ = *δ*/*λz*, the recorded intensity with high-frequency positioning vibration is a summation of different intensities for different illumination shifts:

$${I}_{v2}(k)={\sum}_{\Delta}{H}_{2}(\Delta)I(k-\Delta).$$

In practice, the pointing and transverse vibrations of the illumination beam occur simultaneously; thus, the Fourier transform of the recorded diffraction intensity is

$${\tilde{I}}_{v}(u)={\tilde{I}}_{0}(u){\tilde{H}}_{1}(u){\tilde{H}}_{2}(u). \qquad (12)$$

Since both *H̃*_{1}(*u*) and *H̃*_{2}(*u*) are Gaussian and *Ĩ*_{0}(*u*) has a narrow frequency band for spherical illumination, Eq. (12) can be approximated as

$${\tilde{I}}_{v}(u)={\tilde{I}}_{0}(u)\mathit{exp}\left(-{C}^{2}{u}^{2}\right), \qquad (13)$$

where *C* is a constant determined jointly by the pointing and positioning vibrations; that is, the instability of the imaging system acts as a Gaussian low-pass filter on the ideal diffraction pattern.
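This low-pass action is easy to verify numerically: averaging randomly shifted copies of a pattern is a convolution with the shift distribution, so the recorded spectrum is the ideal spectrum multiplied by a Gaussian-like window. A minimal 1-D NumPy sketch (the shift range and distribution width are illustrative assumptions, not experimental values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal 1-D "diffraction pattern" with broad frequency content.
I0 = rng.random(256)

# High-frequency jitter modeled as integer pixel shifts of the pattern
# on the detector, weighted by a normal distribution (illustrative values).
shifts = np.arange(-8, 9)
H = np.exp(-shifts.astype(float) ** 2 / 3.0 ** 2)
H /= H.sum()

# Recorded pattern: time average over all jittered positions, i.e. a
# circular convolution of I0 with the shift distribution H.
Iv = sum(w * np.roll(I0, s) for w, s in zip(H, shifts))

# In the frequency domain the jitter is a multiplicative window:
# F{Iv} = F{I0} * F{H}, so high frequencies are attenuated.
ratio = np.abs(np.fft.fft(Iv)) / np.abs(np.fft.fft(I0))
print(ratio[0], ratio[16], ratio[128])  # DC kept, damping grows with frequency
```

The ratio of the two spectra is exactly the transform of the shift distribution: unity at DC and strongly damped toward the Nyquist frequency, as Eq. (13) describes.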

#### 2.2. Compensation method

At first glance, it appears that the Fourier components of the ideal diffraction pattern could be obtained by dividing *Ĩ*_{v}(*u*) by *exp*(−*C*^{2}*u*^{2}) for each possible *C* until a satisfactory reconstruction is achieved. However, because this division seriously amplifies errors and noise, and the expected resolution improvement can be submerged by them, it cannot be adopted in practice.

In the proposed method, the recorded intensities *I*_{v}(*k*) are modified before the conventional PIE process, as shown in Fig. 1. The recorded diffraction intensities are deliberately blurred via convolution with the Gaussian function $\mathit{exp}\left(-{k}^{2}/{K}_{v}^{2}\right)$, where *K*_{v} is a constant chosen to slightly blur the recorded diffraction pattern. The deliberately blurred intensity pattern is expressed in the frequency domain as

$${\tilde{{I}^{\prime}}}_{v}(u)={\tilde{I}}_{v}(u)\mathit{exp}\left(-{C}_{v}^{2}{u}^{2}\right),$$

where *C*_{v} = *πλLK*_{v}. We subtract the blurred pattern *I′*_{v}(*k*) from the original recorded diffraction *I*_{v}(*k*) and then add the subtracted pattern to *I*_{v}(*k*) after multiplying it by a constant *β*:

$${I}^{c}(k)={I}_{v}(k)+\beta \left[{I}_{v}(k)-{I^{\prime}}_{v}(k)\right].$$

The Fourier transform of the modified intensity pattern is

$${\tilde{I}}^{c}(u)=\left[1+\beta -\beta \mathit{exp}\left(-{C}_{v}^{2}{u}^{2}\right)\right]\mathit{exp}\left(-{C}^{2}{u}^{2}\right){\tilde{I}}_{0}(u). \qquad (16)$$

When the factor $\left[1+\beta -\beta \mathit{exp}\left(-{C}_{v}^{2}{u}^{2}\right)\right]$ approximately compensates the low-pass filter *exp*(−*C*^{2}*u*^{2}), *Ĩ*^{c}(*u*) is close to the Fourier transform of the vibration-free diffraction pattern, and the influence of the imaging-system instability on the reconstructed image can be remarkably suppressed.
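The correction above amounts to a high-boost (unsharp-masking-like) filter applied to each recorded pattern. A minimal NumPy sketch, with the deliberate blur applied directly in the frequency domain; the constant `Cv` here is expressed in units of the FFT frequency grid, an illustrative choice rather than the paper's calibration:

```python
import numpy as np

def compensate(Iv, Cv, beta):
    """Correct one recorded diffraction pattern for high-frequency
    vibration: deliberately blur it, subtract the blurred copy,
    amplify the difference by beta, and add it back.
    Cv plays the role of C_v = pi*lambda*L*K_v, here given directly
    in units of the FFT frequency grid (cycles per pixel)."""
    Iv = np.asarray(Iv, dtype=float)
    # Frequency grid u for the pattern.
    uy = np.fft.fftfreq(Iv.shape[0])[:, None]
    ux = np.fft.fftfreq(Iv.shape[1])[None, :]
    lowpass = np.exp(-Cv**2 * (ux**2 + uy**2))       # exp(-Cv^2 u^2)
    blurred = np.fft.ifft2(np.fft.fft2(Iv) * lowpass).real
    Ic = Iv + beta * (Iv - blurred)                  # high-boost step
    return np.clip(Ic, 0, None)   # intensities must stay non-negative
```

The clipping keeps the corrected intensities physical before they enter the PIE modulus constraint; `Cv` and `beta` correspond to the constants *K*_{v} and *β* discussed in the text.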

## 3. Results

#### 3.1. Simulation result

A numerical simulation is performed using the proposed algorithm to check its feasibility. A divergent spherical wave with a wavelength of 632.8 nm is used for the illumination. Two pictures are used as the modulus and phase, respectively, of the specimen. The diameter of the illuminated area on the specimen is 0.74 mm, and 10 × 10 diffraction intensities are recorded with a step size of 0.185 mm. The charge-coupled device (CCD) camera has a resolution of 256 × 256 pixels and a pixel size of 7.4 *μ*m.

For comparison, the diffraction intensities with stable illumination are calculated, one of which is shown in Fig. 2(a). When the unstable imaging system results in diffraction-pattern vibration with a variance of 46 *μ*m in the detector plane, the intensity distributions are calculated by adding the diffraction intensities under many slightly different illuminations with varying incident angles and transverse shifts. The diffraction pattern corresponding to the same illuminating position as Fig. 2(a) is shown in Fig. 2(b), where the intensity is obviously blurred; this coincides with Eq. (13). The reconstructed modulus and phase of the specimen with stable illumination are very clear, as shown in Figs. 3(a) and 3(e), respectively. Figs. 3(b) and 3(f) show the reconstructed complex transmission of the specimen with the unstable illumination, where the resolution is apparently decreased compared with Figs. 3(a) and 3(e). Corrected diffraction patterns are obtained from the raw recorded data using the proposed method: a Gaussian function with *K*_{v} = 1.24 mm^{−1} is used to slightly blur the recorded intensities, the blurred diffraction patterns are subtracted from the raw recorded diffraction patterns, and the modified intensity patterns are obtained by adding the amplified subtractions to the raw data with *β* = 2.5. As shown in Fig. 2(c), the contrast of the corrected diffraction pattern is obviously improved compared with the raw data, and the distribution is similar to that of Fig. 2(a). The reconstructed modulus and phase of the specimen using the corrected intensities are shown in Figs. 3(c) and 3(g), respectively, where the resolution is remarkably improved compared with Figs. 3(b) and 3(f). The insets in Figs. 3(a)–(c) and Figs. 3(e)–(g) show the reconstructed modulus and phase, respectively, of the illumination field, indicating that the quality of the retrieved illumination is also improved using the proposed method. In addition, we also use the mixed state method [31] to reconstruct the image from the vibrating data, and the reconstructed modulus and phase are shown in Figs. 3(d) and 3(h), respectively; the fine structures that cannot be resolved in Figs. 3(b) and 3(f) are clearly distinguished.

To quantify the performance of the proposed method, the normalized root-mean-square error metric [13] is calculated, as shown in Fig. 4. The accuracy of the reconstruction is obviously improved when the proposed method is used to correct the recorded diffraction patterns before the iterative computation. The quality of the image obtained by the mixed state method is obviously improved after 500 iterations with eight illumination modes. This demonstrates that the proposed compensation method and the mixed state method are comparable in dealing with the high-frequency vibration for simulated data.

#### 3.2. Experimental result

The proof-of-principle experiment is conducted with visible light, as shown in Fig. 5, where the divergent laser beam from a He-Ne laser is slightly diffused by a rotating plastic diffuser before irradiating the sample to simulate the instability of the imaging system. A cross section of a monocotyledon placed on a 2D translation stage is used as the sample. The illuminated area on the specimen is ∼2.5 mm, and 10 × 10 diffraction patterns are recorded by the CCD camera while the sample is scanned relative to the unstable beam with a step size of 0.37 mm.

Recorded intensities obtained with stable and unstable laser beams are shown in Figs. 6(a) and 6(b), respectively. It is clear that the recorded intensities obtained with the unstable imaging system are totally blurred. To overcome this limitation, the newly proposed method is applied to modify the recorded patterns. A Gaussian function with *K*_{v} = 0.9355 mm^{−1} is adopted to slightly blur the recorded diffraction intensities, and the corrected diffraction patterns are then obtained with *β* = 5. As shown in Fig. 6(c), the contrast of the corrected diffraction pattern is remarkably improved and similar to that shown in Fig. 6(a). Figs. 7(a) and 7(e) show the reconstructed modulus and phase, respectively, of the sample with stable illumination, where the fine structures of individual cells are clear. Figs. 7(b) and 7(f) show the reconstructed image of the specimen with the unstable imaging system, where the individual cells are hardly resolved. With the corrected diffraction patterns, the reconstructed images of the specimen shown in Figs. 7(c) and 7(g) are generated. Here, the quality of the reconstructed images is distinctly improved, and the images obtained are roughly identical to those for stable illumination. We also used the mixed state method to deal with the unstable data, and the sample was reconstructed using twelve illumination modes. As shown in Figs. 7(d) and 7(h), the cellular walls that cannot be resolved in Figs. 7(b) and 7(f) can now be observed, but the reconstructed image quality is not as good as that obtained with the method proposed in this paper. It seems that the vibration generated by the high-speed rotating diffuser is too complex to be modeled by periodic vibrations as in Ref. [31], so numerous illumination modes are needed in the mixed state method in this situation. Consequently, it is difficult to obtain satisfying reconstruction quality using only a limited number of illumination modes.

A slice of pumpkin stem was also measured using the experimental setup shown in Fig. 5. First, the conventional PIE method was used to retrieve the modulus and phase distributions of the sample under the unstable condition. As shown in Figs. 8(a) and 8(d), only the vascular tissues, which have relatively large structures, can be observed, and the fine structures are completely blurred. The mixed state method was used to deal with the unstable data, and the modulus and phase of the sample reconstructed using twelve illumination modes are shown in Figs. 8(b) and 8(e), respectively. The cellular walls that are unresolvable in Figs. 8(a) and 8(d) can be clearly distinguished using the mixed state method. We then used the proposed compensation method to reconstruct the sample, with the values of the parameters *K*_{v} and *β* the same as those used in the experiment mentioned above. The reconstructed modulus and phase are shown in Figs. 8(c) and 8(f): the tissues with fine structure can be distinctly resolved, and the image quality is obviously improved compared with those in Figs. 8(a) and 8(d). Compared with the results obtained by the mixed state method, the background is flatter and cleaner, and there are no artifacts at the edge of the image.

To quantify the resolving power of the proposed method, a USAF1951 target is measured. As shown in Fig. 9, group 5, which is resolvable under stable illumination, is obviously blurred for the unstable system. After the proposed method is applied, the elements in group 5 become distinguishable, as shown in Figs. 9(c) and 9(f).

## 4. Discussion

#### 4.1. The influence of *β*

To show the dependency of the resolution-improvement effect on the value of *β*, a numerical simulation is performed with the same parameters that were used for the simulation in Fig. 2. Fig. 10 shows the effect of the proposed method with various curves, where the blue dashed line indicates the low-pass filter related to the vibration of the imaging system, and the other curves show the increased width of the low-pass filter with varying *β*. The width of the low-pass filter increases remarkably with increasing *β*, and for *β* = 2.5, the width of
$\left[1+\beta -\beta \mathit{exp}\left(-{C}_{v}^{2}{u}^{2}\right)\right]\mathit{exp}(-{C}^{2}{u}^{2})$ is about twice that of *exp*(−*C*^{2}*u*^{2}); accordingly, the resolution of the final reconstruction with these generated diffraction patterns can be remarkably improved. However, when *β* = 5, the low-pass filter has the shape of the black dashed line in Fig. 10; that is, the strength of the high-frequency components is over-amplified. In fact, according to Eq. (16), the parameters *K*_{v} and *β* jointly influence the final reconstruction quality. The parameter *K*_{v} is chosen only to slightly blur the intensity patterns further and does not decide the quality of the final reconstruction directly; usually, a *K*_{v} corresponding to a few CCD pixels can be used. The parameter *β* decides the strength of the recovery of the attenuated high-frequency components, so it decides the reconstruction quality directly. Thus, the constant *β* should be carefully selected to ensure a satisfying final reconstruction.
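This trade-off can be reproduced in a few lines: sweep *β* and measure the half-width of the effective filter $\left[1+\beta -\beta \mathit{exp}\left(-{C}_{v}^{2}{u}^{2}\right)\right]\mathit{exp}(-{C}^{2}{u}^{2})$. The values of `C` and `Cv` below are illustrative, not the paper's experimental constants:

```python
import numpy as np

# Effective transfer function of the corrected data (Eq. (16) factor):
# [1 + beta - beta*exp(-Cv^2 u^2)] * exp(-C^2 u^2).
# C models the (unknown) vibration low-pass; Cv the deliberate blur.
u = np.linspace(0, 3, 1000)
C, Cv = 1.0, 1.5

def half_width(beta):
    """Frequency where the effective filter falls to half its DC value."""
    t = (1 + beta - beta * np.exp(-Cv**2 * u**2)) * np.exp(-C**2 * u**2)
    t /= t[0]                       # normalize to the DC value
    return u[np.argmin(np.abs(t - 0.5))]

for beta in (0, 1, 2.5, 5):
    print(beta, half_width(beta))
```

The half-width grows monotonically with *β*; plotting `t` for large *β* also shows the mid-frequency overshoot above unity that the black dashed line in Fig. 10 illustrates.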

Fig. 11 shows the reconstruction results with different *β* values according to the aforementioned analysis, where Figs. 11(a) and 11(b) are the reconstructed modulus and phase images, respectively, obtained using the raw data acquired with the unstable system. Figs. 11(c)–(h) show the reconstructed modulus and phase images with *β* = 2.5, 1, and 5. These results indicate that the modulus and phase reconstructed using the proposed method have better quality than those obtained directly from the unstable data. On the other hand, the effect of the proposed method depends on a properly chosen value of *β*, which is consistent with the curves shown in Fig. 10.

#### 4.2. Robustness of the proposed method

In the aforementioned theoretical analysis, the vibration of the illumination is assumed to have a normal distribution, and another Gaussian function is used to convolve with the recorded intensities to slightly blur the patterns. However, the characteristics of a real imaging system are unknown and may be more complex, so it appears that the proposed method is difficult to implement in real experiments. The fundamental principle of the proposed method is to recover the high-frequency components lost because of the instability of the radiation source, and other functions that realize this purpose can replace the Gaussian function in the analysis. For example, circular functions, triangular functions, and parabolic functions can be adopted to slightly blur the diffraction patterns to improve the resolution without considering the exact properties of the vibration of the imaging system. This feature makes the proposed method more applicable to real experiments and is demonstrated by the following simulations and experiments.
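This kernel independence can be checked directly: for any normalized blur kernel *H*, the boost factor applied to the spectrum is 1 + *β* − *β*·F{*H*}, which approaches 1 + *β* at high frequencies regardless of the kernel shape. A small 1-D NumPy sketch with three illustrative kernels of comparable width (widths chosen arbitrarily for the demo):

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2

def boost_response(kernel, beta):
    """Transfer function 1 + beta - beta*F{kernel} of the high-boost
    step for an arbitrary (normalized) blur kernel."""
    k = np.fft.ifftshift(kernel / kernel.sum())
    return 1 + beta - beta * np.fft.fft(k).real

# Three interchangeable blur kernels of comparable width (illustrative):
gauss = np.exp(-x**2 / 4.0**2)
circ = (np.abs(x) <= 5).astype(float)          # "circular"/top-hat window
tri = np.clip(1 - np.abs(x) / 6.0, 0, None)    # triangular window

for name, k in [("gauss", gauss), ("circ", circ), ("tri", tri)]:
    r = boost_response(k, beta=2.5)
    print(name, r[0], r[N // 4])
```

All three responses equal 1 at DC and rise toward 1 + *β* at high frequencies, which is why the exact kernel shape matters little in practice.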

When the vibration of the imaging system follows a normal distribution, the parameters are the same as those used for the simulations in Fig. 2. As previously discussed, when the diffraction patterns are deliberately blurred via convolution with another Gaussian function $\mathit{exp}\left(-{k}^{2}/{K}_{v}^{2}\right)$, the width of the low-pass filter shown by the red line in Fig. 12(a) becomes much wider than that of the dashed line related to the instability. The same results can be obtained by using the circular function *circ*(*k*/*K*_{v}) (Fig. 12(b)) and the triangular function Λ(*k*/*K*_{v}) (Fig. 12(c)). For the case where the vibration of the imaging system follows a random distribution with a maximum extent of 52.5 *μ*m in the detector plane, the low-pass filter window of the recorded intensity pattern is shown by the black dashed line in Fig. 12(d). The modified results obtained by deliberately blurring the diffraction patterns via convolution with a Gaussian function, circular function, and triangular function are shown as the red lines in Figs. 12(d)–(f), respectively. When the vibration of the imaging system has a triangular distribution and results in an extent of 36.2 *μ*m in the detector plane, the low-pass filter window of the recorded intensity pattern has the profile of the black dashed line shown in Fig. 12(g). The modified results obtained by deliberately blurring the diffraction patterns via convolution with a Gaussian function, circular function, and triangular function are shown as the red lines in Figs. 12(g)–(i), respectively.

It is worth noting that the parameters *K*_{v} and *β* have different values in the simulations and experiments mentioned above, because the characteristics of the vibration differed among these examples. This indicates that the values of *K*_{v} and *β* are related to the vibration properties. For a practical experimental system, the characteristics of the vibration cannot be known exactly in advance; thus, at first glance, the values of *K*_{v} and *β* cannot be decided properly, and it seems difficult to apply the proposed method to real experiments. However, since the characteristics of the high-frequency vibration of a practical imaging system do not change remarkably with time, the values of *K*_{v} and *β* can be predetermined by imaging a sample with a known structure (known phase and amplitude) via several trial reconstructions; in all subsequent experiments, the predetermined *K*_{v} and *β* can be treated as known parameters.
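This trial calibration can be sketched in miniature: blur a known reference pattern with a vibration-like low-pass, then scan *β* and keep the value that best restores the reference. All constants below are illustrative, and a real calibration would compare full PIE reconstructions rather than single 1-D patterns:

```python
import numpy as np

# Predetermining beta on a known sample: blur a reference pattern with a
# vibration-like low-pass, then pick the beta whose compensation best
# restores the reference. All values illustrative.
rng = np.random.default_rng(1)
ref = rng.random(128)
u = np.fft.fftfreq(128)

def lowpass(I, C):
    return np.fft.ifft(np.fft.fft(I) * np.exp(-C**2 * u**2)).real

measured = lowpass(ref, 8.0)          # vibration acts as exp(-C^2 u^2)

def compensated_error(beta, Cv=10.0):
    blurred = lowpass(measured, Cv)   # deliberate extra blur
    Ic = measured + beta * (measured - blurred)
    return np.sqrt(np.mean((Ic - ref) ** 2))

betas = np.arange(0, 10.5, 0.5)
best = betas[np.argmin([compensated_error(b) for b in betas])]
print(best)
```

Because the error is quadratic in *β* for a fixed blur, a coarse grid is enough to locate a usable value, which can then be reused for subsequent measurements.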

An experiment is carried out to verify the practicality of the proposed method, with the same parameters as in Fig. 5. The reconstructed images shown in Figs. 13(a) and 13(b) are obtained using the raw data recorded with unstable illumination and are seriously blurred. The images shown in Figs. 13(c) and 13(d) are obtained by applying the proposed method with the Gaussian function with *K*_{v} = 0.9355 mm^{−1} and *β* = 5. Figs. 13(e) and 13(f) are obtained using intensities modified with the circular function with *K*_{v} = 1.4033 mm^{−1} and *β* = 8, and Figs. 13(g) and 13(h) are obtained using intensities modified with the triangular function with *K*_{v} = 1.1694 mm^{−1} and *β* = 6. Their effects on the resolution improvement are almost the same. These results show that the effect of the proposed method is independent of the vibration model of the imaging system; thus, the method does not require prior knowledge of the vibrating properties.

## 5. Conclusion

In conclusion, a simple method is proposed to relax the requirement of ptychography for the stability of the imaging system. The influence of the instability of the imaging system can be described as a low-pass filter acting on the ideal diffraction intensities; the high-frequency components are lost in the recorded data, generating a blurred image in the final reconstruction. Using the proposed method, the recorded intensity patterns are corrected by subtracting the deliberately blurred diffraction patterns from the original ones and then adding the amplified subtracted patterns to the recorded data. The resolution of the final reconstruction can be significantly improved by using the corrected diffraction patterns. The feasibility of the proposed method is demonstrated via both numerical simulations and experiments on an optical bench. Because the proposed method does not require any prior knowledge of the characteristics of the illumination beam or massive calculation during the iterative process, it is an easy approach for acquiring a high-quality reconstruction with an unstable imaging system. The method can be extended to other CDI techniques for imaging with short-wavelength irradiation, such as free electrons or soft X-ray lasers.

## Funding

National Natural Science Foundation of China (No. 61675215).

## References and links

**1. **R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik **35**, 237 (1972).

**2. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef] [PubMed]

**3. **J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature **400**, 342–344 (1999). [CrossRef]

**4. **I. K. Robinson, I. A. Vartanyants, G. Williams, M. Pfeifer, and J. Pitney, “Reconstruction of the shapes of gold nanocrystals using coherent X-ray diffraction,” Phys. Rev. Lett. **87**, 195505 (2001). [CrossRef] [PubMed]

**5. **D. Shapiro, P. Thibault, T. Beetz, V. Elser, M. Howells, C. Jacobsen, J. Kirz, E. Lima, H. Miao, and A. M. Neiman, “Biological imaging by soft X-ray diffraction microscopy,” Proc. Natl. Acad. Sci. USA **102**, 15343–15346 (2005). [CrossRef]

**6. **J. Zuo, I. Vartanyants, M. Gao, R. Zhang, and L. Nagahara, “Atomic resolution imaging of a carbon nanotube from diffraction intensities,” Science **300**, 1419–1421 (2003). [CrossRef] [PubMed]

**7. **J. Miao, T. Ishikawa, I. K. Robinson, and M. M. Murnane, “Beyond crystallography: Diffractive imaging using coherent X-ray light sources,” Science **348**, 530–535 (2015). [CrossRef] [PubMed]

**8. **P. Bao, F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval using multiple illumination wavelengths,” Opt. Lett. **33**, 309–311(2008). [CrossRef] [PubMed]

**9. **V. Y. Ivanov, M. Vorontsov, and V. Sivokon, “Phase retrieval from a set of intensity measurements: theory and experiment,” J. Opt. Soc. Am. A **9**, 1515–1524 (1992). [CrossRef]

**10. **F. Zhang and J. Rodenburg, “Phase retrieval based on wave-front relay and modulation,” Phys. Rev. B **82**, 121104 (2010). [CrossRef]

**11. **H. Tao, S. P. Veetil, X. Pan, C. Liu, and J. Zhu, “Lens-free coherent modulation imaging with collimated illumination,” Chinese Optics Letters **14**, 071203 (2016).

**12. **H. Faulkner and J. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. **93**, 023903 (2004). [CrossRef] [PubMed]

**13. **A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy **109**, 1256–1262 (2009). [CrossRef] [PubMed]

**14. **H. Faulkner and J. Rodenburg, “Error tolerance of an iterative phase retrieval algorithm for moveable illumination microscopy,” Ultramicroscopy **103**, 153–164 (2005). [CrossRef] [PubMed]

**15. **K. Stachnik, I. Mohacsi, I. Vartiainen, N. Stuebe, J. Meyer, M. Warmer, C. David, and A. Meents, “Influence of finite spatial coherence on ptychographic reconstruction,” Appl. Phys. Lett. **107**, 011105 (2015). [CrossRef]

**16. **N. Burdet, X. Shi, D. Parks, J. N. Clark, X. Huang, S. D. Kevan, and I. K. Robinson, “Evaluation of partial coherence correction in X-ray ptychography,” Opt. Express **23**, 5452–5467 (2015).

**17. **L. Whitehead, G. Williams, H. Quiney, D. Vine, R. Dilanian, S. Flewett, K. Nugent, A. G. Peele, E. Balaur, and I. McNulty, “Diffractive imaging using partially coherent x rays,” Phys. Rev. Lett. **103**, 243902 (2009). [CrossRef]

**18. **J. N. Clark and A. G. Peele, “Simultaneous sample and spatial coherence characterisation using diffractive imaging,” Appl. Phys. Lett. **99**, 154103 (2011). [CrossRef]

**19. **B. Chen, R. A. Dilanian, S. Teichmann, B. Abbey, A. G. Peele, G. J. Williams, P. Hannaford, L. Van Dao, H. M. Quiney, and K. A. Nugent, “Multiple wavelength diffractive imaging,” Phys. Rev. A **79**, 023809 (2009). [CrossRef]

**20. **P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature **494**, 68–71 (2013). [CrossRef] [PubMed]

**21. **D. J. Batey, D. Claus, and J. M. Rodenburg, “Information multiplexing in ptychography,” Ultramicroscopy **138**, 13–21 (2014). [CrossRef] [PubMed]

**22. **W. Yu, S. Wang, S. Veetil, S. Gao, C. Liu, and J. Zhu, “High-quality image reconstruction method for ptychography with partially coherent illumination,” Phys. Rev. B **93**, 241105 (2016). [CrossRef]

**23. **S. Dong, P. Nanda, K. Guo, J. Liao, and G. Zheng, “Incoherent Fourier ptychographic photography using structured light,” Photon. Res. **3**, 19–23 (2015). [CrossRef]

**24. **K. Guo, S. Dong, and G. Zheng, “Fourier ptychography for brightfield, phase, darkfield, reflective, multi-slice, and fluorescence imaging,” IEEE J. Sel. Top. Quantum Electron. **22**, 77–88 (2016). [CrossRef]

**25. **F. Wei, J.-Y. Choi, and S. Rah, “Experiences of the long term stability at SLS,” AIP Conf. Proc. **879**, 38–41 (2007). [CrossRef]

**26. **V. Schlott, M. Boge, B. Keil, P. Pollet, and T. Schilcher, “Fast orbit feedback and beam stability at the Swiss Light Source,” AIP Conf. Proc. **732**, 174–181 (2004). [CrossRef]

**27. **F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express **21**, 13592–13606 (2013). [CrossRef] [PubMed]

**28. **A. Maiden, M. Humphry, M. Sarahan, B. Kraus, and J. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy **120**, 64–72 (2012). [CrossRef] [PubMed]

**29. **M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express **16**, 7264–7278 (2008). [CrossRef] [PubMed]

**30. **M. Odstrcil, P. Baksh, S. Boden, R. Card, J. Chad, J. Frey, and W. Brocklesby, “Ptychographic coherent diffractive imaging with orthogonal probe relaxation,” Opt. Express **24**, 8360–8369 (2016). [CrossRef] [PubMed]

**31. **J. N. Clark, X. Huang, R. J. Harder, and I. K. Robinson, “Dynamic imaging using ptychography,” Phys. Rev. Lett. **112**, 113901 (2014). [CrossRef] [PubMed]

**32. **R. Horstmeyer, R. Chen, X. Ou, B. Ames, J. Tropp, and C. Yang, “Solving ptychography with a convex relaxation,” New J. Phys. **17**, 053044 (2015). [CrossRef] [PubMed]