## Abstract

Coherent diffractive imaging of objects is made considerably more practicable by using ptychography, where a set of diffraction patterns replaces a single measurement and introduces a high degree of redundancy into the recorded data. Here we demonstrate that this redundancy allows diffraction patterns to be extrapolated beyond the aperture of the recording device, leading to superresolved images, improving the limit on the finest feature separation by more than a factor of 3.

© 2011 Optical Society of America

## 1. INTRODUCTION

Ptychography is a form of coherent diffractive imaging (CDI, the process of recovering an image of a specimen from diffraction data) in which a specimen is stepped through a localized coherent “probe” wavefront, generating a series of diffraction patterns at the plane of a detector [1]. By stepping the specimen such that the illuminated area at each position overlaps with its neighbors, redundancy is introduced into ptychographical data that can be exploited during the reconstruction of an image. Although the principle of ptychography was first discussed in the late 1960s, indicating that this type of data could provide a solution to the phase-retrieval problem, it was many years before a computational inversion process (in this case, a form of deconvolution) was suggested and implemented for light and x rays (for reviews of this early work see [1, 2]). A variation of ptychography was first demonstrated using high-energy electrons at subnanometer resolution for the simplified case of a crystalline object [3]. Since then, much faster and more robust iterative methods for solving this type of phase problem have been developed [4, 5, 6, 7, 8]. Iterative phase-retrieval ptychography was first demonstrated experimentally with visible light [9], then with hard x rays [10], and, recently, with electrons [11]. It has been implemented extensively at third-generation x-ray sources, where it has become a valuable research tool (see, for example, [12, 13]). However, in ptychographical experiments, where the illuminated areas of the specimen often overlap by around 70%, redundancy in the recorded data is under-utilized by existing reconstruction algorithms. We make better use of it here by extrapolating each diffraction pattern beyond the aperture of the detector to provide greatly improved image resolution.
One use of our method at visible light wavelengths is to provide long working distances while retaining a high numerical aperture (NA); in our experiments we have achieved a resolution of 406 line pairs per millimeter ($\mathrm{lp}/\mathrm{mm}$) at a working distance of $95\text{\hspace{0.17em}}\mathrm{mm}$, exceeding the NA of the detector by more than a factor of 3.

Three mechanisms, applicable to ptychography and already used to enhance resolution in other imaging modalities, suggest that ptychographic superresolution may meet with a degree of success:

- The “synthetic aperture”: in digital Fourier holography, a synthetic aperture having an effective spatial cutoff frequency higher than the cutoff of the optical system can be realized by recording a series of Fourier holograms, each corresponding to illumination of the specimen by a plane wave incident at a different angle [14]. Each illumination condition allows a different range of scattering angles to pass through the optical system, and can be considered to translate different areas of a much larger “synthetic” hologram onto the area of the detector. The recorded data can be combined to recreate this larger hologram, whose Fourier transform produces a superresolved image of the specimen. Similar ideas are used for tilt series imaging in electron microscopy [15] and to obtain superresolution using sinusoidally structured illumination in conventional white-light microscopy [16]. A roughly analogous relationship exists in ptychography, where the probe in a ptychographic experiment can be considered an amalgamation of localized phase gradients, each approximating a plane wave incident at a different angle. A lateral translation causes a given region of the specimen to be illuminated by a different phase gradient, resulting in a different part of its scattering cross section being directed onto the detector and contributing to the recorded data. This synthetic aperture effect is the primary contributor to the success of our experiments and we exploit it to the full by introducing a diffuser into our experimental setup to broaden the spatial frequency content of the illuminating probe.
- Analytic continuation: it has long been known that, in theory at least, measurement of a finite object’s spatial frequency spectrum over a given area can be extrapolated beyond this range by analytic continuation [17]. According to this theory, a measured diffraction pattern can be considered as the complete spatial frequency spectrum of the specimen, convolved with the Fourier transform of the optical system’s exit pupil and multiplied by the aperture of the detector. Since the exit pupil is of finite extent, its Fourier transform is not band limited; the convolution operation then ensures that data in the recorded region of the diffraction pattern contains information from the entire spectrum of the specimen. Extrapolation of this sort is known to be inherently ill conditioned, susceptible to failure given only minute levels of noise or distortion [18]; nevertheless, Gerchberg proposed a method of exploiting this property in CDI to iteratively retrieve unmeasured higher spatial frequencies [19]. In practice, Gerchberg’s method cannot extrapolate more than a couple of pixels beyond the recorded part of the spectrum. We greatly improve upon this limit here by employing ptychography to extrapolate diffraction patterns out to as much as 4 times the extent of the measured data and by using the diffuser to strengthen the influence on the recorded part of the unrecorded region of the diffraction pattern.
- Subpixel shifting: in conventional imaging, improved resolution can be achieved using a series of images of a static specimen that are laterally offset by a noninteger number of pixels [20] (a technique that might better be described as de-aliasing rather than superresolution). Although not strictly analogous, the specimen in a ptychographic experiment is translated in just such a manner, and so it is reasonable to expect correct (and fractional) encoding of the specimen or probe movements in the reconstruction algorithm to also enhance resolution. Below, we will see that convergence of our superresolution algorithm depends on the use of these subpixel shifts.

This paper explores the possibilities of ptychographic superresolution at optical wavelengths using an experimental geometry inspired by the list above. Section 2 explains the reconstruction process and the modifications we have made to an existing ptychographical algorithm to incorporate superresolution. Section 3 details the experimental setup and the data collection process, the results from which are presented in Section 4. In Section 5, a preliminary assessment of the limits of the superresolution method is presented before concluding.

## 2. RECONSTRUCTION PROCESS

Our superresolution algorithm is a modification of the extended ptychographical iterative engine (ePIE) [8], a reconstruction algorithm able to recover from a set of ptychographical data both the complex-valued transmission function of the specimen and the complex-valued illuminating probe wavefront. The superresolution ptychographical iterative engine (SR-PIE) also produces specimen and probe reconstructions and uses identical data, but, in addition, attempts to significantly improve their resolution by extrapolating the recorded diffraction patterns beyond the aperture of the detector. The required inputs to the SR-PIE are:

- a set of *J* diffracted intensities, ${I}_{j}(\mathbf{u})$, recorded by a detector of $M\times N$ pixels on a $\mathrm{\Delta}p$ pitch. Here $\mathbf{u}=[u,v]$ is a coordinate vector addressing the pixels of each recording and $j=1\dots J$. Our goal will be to recover the data that would have been captured were we to have a detector *c* times larger, spanning $cM\times cN$ pixels on the same $\mathrm{\Delta}p$ pitch;
- rough initial guesses of the probe and specimen, ${P}_{0}(\mathbf{r})$ and ${O}_{0}(\mathbf{r})$, where $\mathbf{r}=[x,y]$ is a coordinate vector addressing the pixels of the reconstruction, which are set on a pitch of $$[\mathrm{\Delta}x,\mathrm{\Delta}y]=\frac{\lambda z}{\mathrm{\Delta}p}[\frac{1}{cM},\frac{1}{cN}].$$ Here *λ* is the illumination wavelength and *z* is the distance between the specimen and the detector (see Fig. 2). ${P}_{0}(\mathbf{r})$ will span $cM\times cN$ pixels but ${O}_{0}(\mathbf{r})$ will be somewhat larger to allow for the specimen translations. For example, the experimental results in Figs. 8, 10 used $c=4$, giving a $512\times 512$ pixel “virtual” detector extrapolated from the $128\times 128$ pixel measured data, with the resulting images spanning $1088\times 1088$ pixels; and
- the *J* measured positions of the specimen, ${\mathbf{R}}_{j}=[{R}_{(x,j)},{R}_{(y,j)}]$. Since it is unlikely that these positions will fall exactly at integer values of the pixel pitch in the reconstruction, when converted into this form they will consist of an integer pixel shift, ${\mathbf{p}}_{j}=[{p}_{(x,j)},{p}_{(y,j)}]$, plus a fractional pixel shift, ${\mathbf{q}}_{j}=[{q}_{(x,j)},{q}_{(y,j)}]$, so that $${\mathbf{R}}_{j}=[\mathrm{\Delta}x({p}_{(x,j)}+{q}_{(x,j)}),\mathrm{\Delta}y({p}_{(y,j)}+{q}_{(y,j)})].$$
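The conversion of measured stage positions into integer and fractional pixel shifts can be sketched as follows. This is a minimal illustration under our own conventions: the helper name `split_positions` is hypothetical, positions are taken in meters, and the fractional part is defined relative to the nearest integer pixel.

```python
import numpy as np

def split_positions(R, dx, dy):
    """Convert stage positions R (shape J-by-2, in metres) into
    integer-pixel shifts p and fractional-pixel shifts q on the
    reconstruction grid, so that R = (p + q) * [dx, dy] rowwise."""
    pix = R / np.array([dx, dy])    # positions in reconstruction-pixel units
    p = np.round(pix).astype(int)   # nearest-integer pixel shift
    q = pix - p                     # fractional remainder, in [-0.5, 0.5]
    return p, q

# Example: one scan position of (30.0, -12.5) um on a 1.23 um pitch
p, q = split_positions(np.array([[30.0e-6, -12.5e-6]]), 1.23e-6, 1.23e-6)
```

Defining `q` relative to the *nearest* integer (rather than by flooring) keeps the fractional shift small in magnitude, which is convenient for the Fourier-domain shift described below.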

In a single iteration of the SR-PIE, these inputs are used to update images of the probe and specimen a number of times equal to the number of recorded diffraction patterns. We will follow the progress of one of these updates, forming the *j*th estimates from the $(j-1)$th, as illustrated in Fig. 1. The diffraction patterns are addressed in a random sequence, $s(j)$: the first update step uses diffraction pattern $s(1)$, the next $s(2)$, and so on. To carry out the update, a $cM\times cN$ pixel region, denoted ${o}_{j-1}(\mathbf{r})$, whose central pixel is at $[{p}_{(x,s(j))},{p}_{(y,s(j))}]$, is extracted from ${O}_{j-1}(\mathbf{r})$. The probe estimate is subpixel shifted by $-{\mathbf{q}}_{s(j)}$ and multiplied by ${o}_{j-1}(\mathbf{r})$, to form an exit wave, ${\psi}_{j}(\mathbf{r})$. To implement the fractional shift, ${P}_{j-1}(\mathbf{r})$ is Fourier transformed and the result multiplied by a linear phase ramp whose phase is calculated, following the Fourier shift theorem, according to $$\varphi(\mathbf{u})=2\pi[\frac{u{q}_{(x,s(j))}}{cM}+\frac{v{q}_{(y,s(j))}}{cN}],$$ after which an inverse Fourier transform yields the shifted probe.
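This Fourier-domain subpixel shift can be illustrated with a short sketch (our own minimal implementation, not the paper's code; the function name is hypothetical and the sign convention follows NumPy's forward FFT):

```python
import numpy as np

def subpixel_shift(field, shift):
    """Shift a 2D complex field by a fractional number of pixels
    (rows, cols) by applying a linear phase ramp in the Fourier
    domain (the Fourier shift theorem)."""
    ky = np.fft.fftfreq(field.shape[0])   # cycles per pixel, rows
    kx = np.fft.fftfreq(field.shape[1])   # cycles per pixel, cols
    ramp = np.exp(-2j * np.pi * (ky[:, None] * shift[0] + kx[None, :] * shift[1]))
    return np.fft.ifft2(np.fft.fft2(field) * ramp)
```

For an integer shift this reproduces an exact circular translation; for a fractional shift it performs the band-limited interpolation that the SR-PIE relies on, with no loss of energy in the field.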

Next, ${\psi}_{j}(\mathbf{r})$ is Fourier transformed to give ${\mathrm{\Psi}}_{j}(\mathbf{u})$, an estimate of the wavefront at the plane of the detector that resulted in the recorded intensity ${I}_{s(j)}(\mathbf{u})$. ${\mathrm{\Psi}}_{j}(\mathbf{u})$ extends over $cM\times cN$ pixels, with the central $M\times N$ pixels corresponding to the area of the detector. The moduli of the pixels in this region are replaced by the square root of the recorded data, $\sqrt{{I}_{s(j)}(\mathbf{u})}$, while their phases are retained. The modulus and phase of the remaining pixels are left unchanged, as described by Gerchberg. However, in ptychography at least, without an additional constraint, the intensity at the edges of the extrapolated diffraction patterns tends to build up as the reconstruction progresses, causing an encroachment of Fourier repeats and introducing noise into the reconstructed images (an effect illustrated in Section 4). To counter this, the additional step of forcing the border of ${\mathrm{\Psi}}_{j}(\mathbf{u})$ to zero ensures that each Fourier repeat also falls to zero at its extremities. The width of this border is nominally a single pixel, but can be increased to reduce high frequency noise in the reconstruction at the expense of resolution—the border can also be tapered in a fashion similar to that reported by Guizar-Sicairos and Fienup in a slightly different context [18].

The revised version of ${\mathrm{\Psi}}_{j}(\mathbf{u})$ is inverse Fourier transformed to produce an updated exit wave, ${\psi}_{j}^{\prime}(\mathbf{r})$. New specimen and probe estimates are then calculated according to the two update functions of the ePIE [8]: $${o}_{j}(\mathbf{r})={o}_{j-1}(\mathbf{r})+\alpha\frac{{P}_{j-1}^{\ast}(\mathbf{r})}{|{P}_{j-1}(\mathbf{r}){|}_{\mathrm{max}}^{2}}[{\psi}_{j}^{\prime}(\mathbf{r})-{\psi}_{j}(\mathbf{r})],$$ $${P}_{j}(\mathbf{r})={P}_{j-1}(\mathbf{r})+\beta\frac{{o}_{j-1}^{\ast}(\mathbf{r})}{|{o}_{j-1}(\mathbf{r}){|}_{\mathrm{max}}^{2}}[{\psi}_{j}^{\prime}(\mathbf{r})-{\psi}_{j}(\mathbf{r})],$$ where $\alpha$ and $\beta$ are feedback constants and the updated region ${o}_{j}(\mathbf{r})$ is reinserted into ${O}_{j-1}(\mathbf{r})$ to form ${O}_{j}(\mathbf{r})$.

The process described in Fig. 1 is repeated until ${O}_{J}(\mathbf{r})$ and ${P}_{J}(\mathbf{r})$ have been calculated, completing a single iteration of the SR-PIE. The next iteration can then begin using ${O}_{J}(\mathbf{r})$ and ${P}_{J}(\mathbf{r})$ as the initial specimen and probe estimates and a fresh random sequence to address the diffraction patterns.
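The two ePIE update functions of [8] can be sketched in a few lines. This is an illustration only, written with feedback constants of unity (a common choice) and our own variable names; it omits the subpixel shifting and window reinsertion described above.

```python
import numpy as np

def epie_update(obj_patch, probe, psi, psi_new, alpha=1.0, beta=1.0):
    """Standard ePIE updates [8]: revise the extracted specimen patch
    and the probe from the change in the exit wave, psi_new - psi."""
    diff = psi_new - psi
    obj_out = obj_patch + alpha * np.conj(probe) * diff / np.max(np.abs(probe)) ** 2
    probe_out = probe + beta * np.conj(obj_patch) * diff / np.max(np.abs(obj_patch)) ** 2
    return obj_out, probe_out
```

When the Fourier constraint leaves the exit wave unchanged (`psi_new == psi`), both estimates are fixed points of the update, which is what makes the constrained diffraction data the sole driver of the reconstruction.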

## 3. EXPERIMENT

Figure 2 shows the experimental setup used to collect ptychographical data. The expanded and collimated beam from a fiber-coupled $675\text{\hspace{0.17em}}\mathrm{nm}$ diode laser was used as a source of illumination. The probe was formed using two doublet lenses of $3\text{\hspace{0.17em}}\mathrm{cm}$ focal length in a $4f$ configuration to image a $100\mathrm{}\text{\hspace{0.17em}}\mathrm{\mu m}$ pinhole covered by a diffuser onto the specimen. Layers of a thin plastic film were used as a diffuser. The strength of the diffuser could be increased by increasing the number of layers of film—a single layer produced a moderate effect, such that the intensity of each diffraction pattern fell to a low value within the area of the detector [Fig. 3a], while adding a second layer produced highly diffuse diffraction patterns whose speckles remained almost uniform in intensity across the detector area [Fig. 3b]. Specimens were mounted on an $x/y$ stage with a specified practical resolution of $0.1\text{\hspace{0.17em}}\mathrm{\mu m}$ and bidirectional repeatability of $0.3\text{\hspace{0.17em}}\mathrm{\mu m}$. Each ptychographical scan consisted of 400 diffraction patterns collected from a grid of $20\times 20$ specimen positions on a nominal pitch of $30\text{\hspace{0.17em}}\mathrm{\mu m}$, with the addition of a $\pm 5\text{\hspace{0.17em}}\mathrm{\mu m}$ random $x/y$ offset to avoid the so-called “raster grid pathology” [22]. The detector was an AVT Pike F421B $16\text{\hspace{0.17em}}\mathrm{bit}$ CCD with $2048\times 2048$ pixels on a $7.4\text{\hspace{0.17em}}\mathrm{\mu m}$ pitch, the output of which was down-sampled after collection to $128\times 128$ pixels. The use of a diffuser conferred the appreciable advantage of being able to capture diffraction patterns in a single CCD exposure [23]. 
The patterns nevertheless contained significant readout noise, artifacts generated from dust on the sensor and probe-forming optics, and reflections primarily from the chrome surface of the resolution target used as a specimen. The robustness of the SR-PIE to this noise is remarkable given the sensitivity of Gerchberg’s original method.

We have obtained optimal results from the SR-PIE when the recorded diffraction patterns consist of uniform speckle that decays to zero within the larger area of the virtual detector. This ensures consistency of the real diffracted intensity with the high spatial frequency suppression carried out during the algorithm’s Fourier update step. For this reason, two ptychographical scans were carried out using a positive chrome-on-glass resolution target as a test specimen, one for evaluation purposes and a second to demonstrate an impressive NA at a large working distance. In the evaluation scan (referred to as dataset 1), the weak diffuser was used and *z* was set at $86\text{\hspace{0.17em}}\mathrm{mm}$, producing diffraction patterns such as the example shown in Fig. 3a. The central $32\times 32$ pixels of these diffraction patterns (with approximately uniform speckle intensity) were inputted to the SR-PIE, which then attempted to recover the remaining portion of the recorded data (which fell to a low intensity at the edge of the detector). The extrapolated and recorded diffraction patterns could then be compared to assess the performance of the algorithm. In the high NA scan (dataset 2), the system was set up using the strong diffuser such that the resulting diffraction patterns exhibited approximately uniform speckle intensity across the entire area of the detector [Fig. 3b]. A value of $z=94.4\text{\hspace{0.17em}}\mathrm{mm}$ then ensured that this intensity dropped to a low level at the perimeter of the extrapolation area, since the NA of the probe-forming optics fell between the edges of the true and virtual detectors. 
Further scans were subsequently carried out using this setup: first, to demonstrate our method for less strongly diffracting specimens by replacing the resolution target with a prepared microscope slide containing lily pollen grains, and second, to verify the predictions made by a preliminary theory concerning the fundamental limits of the technique (presented in Section 5).

## 4. RESULTS

To provide a reference from which to assess the reconstructions produced by the SR-PIE, 100 iterations of the conventional ePIE were carried out using the full $128\times 128$ pixel extent of the diffraction patterns comprising dataset 1. Free space was used as the initial estimate of the specimen and an aperture of $100\mathrm{}\text{\hspace{0.17em}}\mathrm{\mu m}$ diameter as the initial estimate of the probe. Figure 4a shows the modulus of the resulting reconstruction; this would be the result were the SR-PIE to achieve a perfect extrapolation of the central $32\times 32$ pixels of the diffraction patterns. In this figure and in the reconstructions appearing in subsequent figures, a $350\text{\hspace{0.17em}}{\mathrm{\mu m}}^{2}$ crop from the center of the full $1\text{\hspace{0.17em}}{\mathrm{mm}}^{2}$ image is shown. To provide initial inputs to the SR-PIE, a second ePIE reconstruction was carried out, this time using only the central $32\times 32$ pixels of the recorded data. Figure 4b shows a crop from the modulus of this reconstruction; this would be the approximate result were no accurate extrapolation realized by the SR-PIE. The image appears noisy because the intensity remains high at the edges of the diffraction patterns, resulting in aliasing problems [18]; however, the finest resolved features here are in group 5, element 2 ($36\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$), which agrees with the spatial frequency at the edge of the $32\times 32$ pixel diffraction patterns ($33\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$) given the offset to the centers of the bar features that generally afflicts coherent imaging [24].
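The quoted figure of $33\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$ can be checked from the geometry given in Section 3. This is a back-of-envelope sketch under the assumption that down-sampling from $2048$ to $128$ pixels simply bins the $7.4\text{\hspace{0.17em}}\mathrm{\mu m}$ pixels by a factor of 16:

```python
# Spatial frequency at the edge of the central 32x32 pixel region of the
# down-sampled diffraction patterns in dataset 1.
wavelength = 675e-9                 # illumination wavelength, m
z = 86e-3                           # specimen-to-detector distance, m
pitch = 7.4e-6 * (2048 / 128)       # effective pixel pitch after down-sampling, m
half_extent = 16 * pitch            # half-width of the 32x32 pixel region, m
u_edge = half_extent / (wavelength * z)   # spatial frequency, cycles per metre
print(round(u_edge / 1000, 1), 'lp/mm')   # prints 32.6, i.e. ~33 lp/mm
```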

The low-resolution specimen estimate of Fig. 4b together with the low-resolution probe reconstruction also generated by the ePIE were up-sampled by 4 times using a bicubic interpolator and provided as the ${O}_{0}(\mathbf{r})$ and ${P}_{0}(\mathbf{r})$ inputs to the SR-PIE. Four versions of the algorithm were tested over 1000 iterations. In the first, neither subpixel shifting of the probe nor suppression of high spatial frequencies in the Fourier update step was implemented, resulting in the reconstructed modulus shown in Fig. 4c. Although the resolution here is clearly much improved over Fig. 4b, the image is degraded by a high level of background noise due to the detrimental influence of Fourier repeats. In the second version of the SR-PIE, subpixel shifting was introduced, producing the modulus shown in Fig. 4d, where a small increase in resolution is evident but a high level of random noise has been retained. In the third version, the subpixel shifting was deactivated, but the high spatial frequency suppression was included, with a single-pixel border of the extrapolated diffraction patterns clamped at zero. This produced Fig. 4e, where resolution has been reduced slightly from Fig. 4d, but background noise has also been considerably reduced. In the final implementation, both subpixel shifts and high spatial frequency suppression were included, leading to the low background noise and high resolution shown in Fig. 4f. In each of Figs. 4c, 4d, 4e, 4f, the resolution has been increased by a considerable margin—at least 2.24 times from 36 to $80.6\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$ (group 6, element 3), or $\approx 0.65\times $ the resolution achieved in Fig. 4a.

The performance of each version of the algorithm was quantified using the error metric $$E=\frac{{\sum}_{j}{\sum}_{\mathbf{u}}{[\sqrt{{I}_{j}(\mathbf{u})}-|{\mathrm{\Psi}}_{j}(\mathbf{u})|]}^{2}}{{\sum}_{j}{\sum}_{\mathbf{u}}{I}_{j}(\mathbf{u})},$$ where the sums over $\mathbf{u}$ run over the central $M\times N$ measured pixels of each diffraction pattern. Figure 5 plots *E* over the 1000 iterations of each algorithm implementation. Only with the inclusion of both the high spatial frequency constraint and subpixel shifting of the probe does *E* converge. In each of the alternative cases, the error begins to diverge after around 100 iterations. Note, however, that the smallest error is achieved when the high spatial frequency suppression is omitted from the algorithm, although the error rapidly increases subsequent to this minimum. We have observed that, over several thousand iterations, the fully implemented SR-PIE converges for every dataset we have collected, but a mathematical proof of its behavior is still required to confirm our experimental findings.

The additional features introduced into the ePIE have, then, resulted in an apparently stable and robust algorithm that considerably improves image resolution. Concentrating on this fully implemented algorithm, Fig. 6 provides further detail of its performance. Figures 6a, 6b give a visual comparison of a randomly chosen recorded diffraction pattern (its square root is shown) and the modulus of the corresponding pattern extrapolated by the SR-PIE. The white square demarcates the $32\times 32$ pixel region of the measured data used in the reconstruction. There is a clear correlation between the speckle structure extrapolated by the SR-PIE and the actual diffracted intensity, but the intensity of the recovered diffraction pattern decays more rapidly with radius than the measured data. This is attributable to the outer-border suppression enacted by the algorithm, combined with the requirement for smoothness in the Fourier transform of a band-limited function.

Further detail of the SR-PIE’s performance was gained using the following error metric, which compares the recorded and extrapolated diffraction patterns at each pixel location: $$E(\mathbf{u})=\frac{{\sum}_{j}{[\sqrt{{I}_{j}(\mathbf{u})}-|{\mathrm{\Psi}}_{j}(\mathbf{u})|]}^{2}}{{\sum}_{j}{I}_{j}(\mathbf{u})}.$$

Having investigated the performance of the SR-PIE using dataset 1, dataset 2 was used to attempt extrapolation beyond the extent of the detector and realize a high NA at a $95\text{\hspace{0.17em}}\mathrm{mm}$ working distance. A conventional ePIE reconstruction using the full extent of the diffraction patterns was up-sampled and used as the seed input to the SR-PIE. Figures 7a, 7b show the modulus and phase of the seed probe estimate, and Fig. 8a shows the modulus of the seed specimen estimate. The specimen reconstruction is again noisy because the diffraction patterns have significant intensity at their edges, but the finest resolved features in Fig. 8a belong to group 7, element 1 ($128\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$), in line with the highest spatial frequency captured by the detector. This figure can be compared to the resolution obtained in Fig. 8b, showing the modulus of the image recovered after 150 iterations of the SR-PIE. A single-pixel border of the extrapolated patterns was clamped at zero for this reconstruction, which used $c=4$ and, so, extrapolated the diffraction patterns from $128\times 128$ to $512\times 512$ pixels. In physical terms, this equates to a $60.6\text{\hspace{0.17em}}{\mathrm{mm}}^{2}$ virtual detector. The inset of Fig. 8b shows that element 5 of group 8 of the target, whose features are at $406\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$, is resolved, giving a resolution gain of $3.17\times $; it should be noted, however, that the resolution can be increased further by reducing the illuminating wavelength. The superresolved reconstruction of the probe shown in Figs. 7c, 7d is consistent with a slightly defocused and aberrated image of the pinhole and diffuser. In fact, the probe reconstruction can be backpropagated and a spherical aberration term removed to show a reasonably sharp-edged pinhole.
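The virtual detector extent and the reconstruction pixel pitch quoted above follow from the expressions in Section 2. Again a sketch, under the same assumption of a 16-times binned detector pitch:

```python
# Geometry of dataset 2 with a 4x extrapolation.
wavelength = 675e-9                # illumination wavelength, m
z = 94.4e-3                        # specimen-to-detector distance, m
pitch = 7.4e-6 * (2048 / 128)      # effective detector pitch, m
c, M = 4, 128                      # extrapolation factor, measured pixels

virtual_width = c * M * pitch                # side of the 512x512 virtual detector
dx = wavelength * z / (c * M * pitch)        # reconstruction pixel pitch
print(round(virtual_width * 1e3, 1), 'mm')   # prints 60.6 mm
print(round(dx * 1e6, 2), 'um')              # prints 1.05 um
```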

Figure 9 provides further details of the superresolved image. Figure 9a is an example of the extrapolated diffraction patterns the SR-PIE produces (the modulus is shown here), where the square indicates the extent of the detector. An interesting possibility is that the data shown here are not those that would have been recorded by a flat detector equal in size to the extrapolated diffraction patterns. This could be the case if the real detector falls in the Fresnel zone of diffraction, where wavefront curvature is accurately approximated by a parabolic phase, but the virtual detector extends beyond this region, where a Fourier transform relationship exists between the specimen plane and a spherical shell of radius *z* [25]. This may mean that the SR-PIE solves for the intensity that would have been recorded on a curved detector array, although we have not investigated this idea fully. Figure 9b plots on a log scale the power spectrum of the recovered image, where the circle represents the spatial frequency of group 8, element 5 of the resolution target. The power in the higher diffraction orders here is minute: a diffraction peak near the plotted circle is approximately ${10}^{8}$ times less intense than the zeroth order. This lends credence to our assertion that accurate extrapolation of the diffraction patterns is made possible by the synthetic aperture effect discussed in the introduction, and not by the convolution argument on which Gerchberg’s method relies.

While it is a good way to quantify the various aspects of our method, using a resolution target as the sole specimen in these experiments could be somewhat misleading since it diffracts strongly and into distinct orders. A further experiment was therefore undertaken using an identical configuration to that of dataset 2 and a more representative specimen, a sample of lily pollen mounted on a microscope slide. Figures 10a, 10b show a crop from the modulus and phase, respectively, of a conventional ePIE reconstruction carried out on the pollen data. These initial reconstructions were up-sampled and used as inputs to the SR-PIE, again using a value $c=4$. A first reconstruction contained high-frequency noise, especially evident in the featureless regions of the specimen and attributable to the very weak scatter to higher diffraction angles of this sample. To counteract this, the constrained region of each extrapolated diffraction pattern was extended from a single-pixel border to one of 16 pixels, which resulted in the images in Figs. 10c, 10d. The structure of the exine layer, the tough outer shell that protects the lily pollen as it passes through the anther, has been revealed in both the modulus and phase of the superresolved reconstruction, neither of which has suffered from an appreciable increase in background noise. It is difficult to estimate the resolution of these images accurately, but the spars visible in the exine layer of the pollen grains are roughly $5\text{\hspace{0.17em}}\mathrm{\mu m}$ apart.

## 5. DISCUSSION AND CONCLUSIONS

It is clear from the results presented above that ptychographic data encodes a great deal of untapped information—but how much? We offer here a preliminary commentary on the degree of this redundancy.

In conventional CDI, where a single diffraction pattern is recorded and the additional constraint of a known specimen support conditions the phase-retrieval process, a minimum degree of redundancy is ensured provided the over-sampling ratio $$\sigma=\frac{\text{number of recorded intensity values}}{\text{number of unknown pixel values within the support}}$$ exceeds 2.

Arguments along similar lines can be used to give a rough estimate of the redundancy in a ptychographical dataset. The most straightforward measure, similar to the over-sampling ratio, is $${\sigma}_{\text{pty}}=\frac{JMN}{2({N}_{O}+{N}_{P})},$$ where ${N}_{O}$ and ${N}_{P}$ are the numbers of pixels in the specimen and probe reconstructions (and *J* is the total number of diffraction patterns recorded). The requirement for an accurate reconstruction is ${\sigma}_{\text{pty}}>1$. Taking dataset 2 as an example, the full reconstruction from which the crop shown in Fig. 8b was extracted consisted of $1088\times 1088$ pixels. Each diffraction pattern used to carry out this reconstruction consisted of $512\times 512$ pixels, of which only the central $128\times 128$ pixels corresponded to measured data. The number of measured data points was therefore $400\times {128}^{2}$ (400 recorded diffraction patterns each of $128\times 128$ pixels), while the number of unknown variables solved for by the SR-PIE was $2({1088}^{2}+{512}^{2})$: the number of unknown pixels in the specimen reconstruction, plus the unknown pixels in the probe reconstruction, multiplied by two to account for the fact that both probe and specimen are complex valued and, thus, each has a real and an imaginary part. For dataset 2, this gives a redundancy value of 2.3, implying the phase-retrieval problem is well conditioned, and perhaps that, given a higher NA in the illumination optics, a larger degree of superresolution could be achieved.
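The counting argument above amounts to the following arithmetic:

```python
# Redundancy estimate for dataset 2, following the counts given in the text.
J = 400              # number of recorded diffraction patterns
M = N = 128          # measured pixels per pattern
n_obj = 1088 ** 2    # pixels in the specimen reconstruction
n_probe = 512 ** 2   # pixels in the probe reconstruction

measured = J * M * N              # known intensity values
unknowns = 2 * (n_obj + n_probe)  # real and imaginary parts of O and P
sigma_pty = measured / unknowns
print(round(sigma_pty, 1))        # prints 2.3
```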

${\sigma}_{\text{pty}}$ takes no account of the independence of the measurements within each diffraction pattern, nor does it consider the fact that the areas of the probe and specimen reconstructions accurately recovered by the SR-PIE do not span every pixel. A further subtlety not addressed by the metric is the independence of the measurements in several patterns recorded from neighboring specimen positions, as opposed to those within a single pattern. As such, Eq. (9) is intended only to give a useful indication of the degree of redundancy in ptychographical data, and further research is needed to expand upon this initial discussion. Nevertheless, Eq. (9) has some interesting implications. For example, decreasing the specimen translation step size should increase the obtainable degree of superresolution, since this will give a smaller number of pixels in the specimen reconstruction for the same number of diffraction patterns. To test this theory, scans were carried out using the strong-diffuser experimental setup (realignment of the system led to a slightly smaller value of $z=80.1\text{\hspace{0.17em}}\mathrm{mm}$) and the resolution target as a specimen, with average step sizes of 30, 20, and $10\text{\hspace{0.17em}}\mathrm{\mu m}$. Table 1 summarizes the parameters for these datasets. Central squares of decreasing size were taken from the recorded data and input to the SR-PIE, each time extrapolating the diffraction patterns to $512\times 512$ pixels. Figure 11 plots the resolution observed in the reconstructions as the extent of the data used by the algorithm was reduced. Clearly, the maximum resolution (governed by the NA of the illumination optics) can be realized from significantly less data when the step size is reduced, as suggested by Eq. (9)—the point at which ${\sigma}_{\text{pty}}=1$ for each set of data is indicated by the dashed lines.
The redundancy metric underestimates the point at which the clarity of the reconstructions begins to degrade, as should be expected given diffraction pattern noise and other experimental inaccuracies. It is interesting that, although noise increases substantially and resolution reduces, the reconstruction does not fail completely when ${\sigma}_{\text{pty}}\ll 1$.

To conclude, we have shown in this paper that superresolved imaging using ptychographic data is not only possible, but is also practical, and can be carried out robustly. Two modifications to a conventional ptychographic algorithm have been described that control the convergence of its superresolution extension: subpixel translations and a modified Fourier modulus update step. Our surprising findings are that, by using this algorithm, large increases in resolution, over 3 times, can be achieved without the introduction of substantial noise and that diffraction orders containing very little power can be accurately recovered. An initial study of the limits on the process has been presented that suggests extrapolation by larger factors should be possible. In fact, very recent work has demonstrated resolution improvements of $>5\times $, realizing a resolution of $367\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$ at a $191\text{\hspace{0.17em}}\mathrm{mm}$ working distance and a resolution of $>645\text{\hspace{0.17em}}\mathrm{lp}/\mathrm{mm}$ at a $71\text{\hspace{0.17em}}\mathrm{mm}$ working distance. The methods presented here have applications beyond long working distance optical microscopy, including, for example, solving for dark field data in electron microscopy, replacing missing data due to sectioned detectors or beam stops in x-ray microscopy, or broader applications in imaging [29].

## ACKNOWLEDGMENTS

The authors thank Phase Focus Ltd for the use of their equipment and for technical assistance and gratefully acknowledge the support of the Engineering and Physical Sciences Research Council (EPSRC) for funding this work, which was part of the Basic Technology (EP/E034055/1)–Ultimate Microscopy Grant.

## REFERENCES

**1. **J. M. Rodenburg, “Ptychography and related diffractive imaging methods,” in *Advances in Imaging and Electron Physics*, P. W. Hawkes, ed. (Elsevier, 2008), Vol. 150, pp. 87–184. [CrossRef]

**2. **W. Hoppe, “Trace structure analysis, ptychography, phase tomography,” Ultramicroscopy **10**, 187–198 (1982). [CrossRef]

**3. **P. D. Nellist, B. C. McCallum, and J. M. Rodenburg, “Resolution beyond the ‘information limit’ in transmission electron microscopy,” Nature **374**, 630–632 (1995). [CrossRef]

**4. **H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. **93**, 023903 (2004). [CrossRef] [PubMed]

**5. **J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. **85**, 4795–4797 (2004). [CrossRef]

**6. **M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express **16**, 7264–7278 (2008). [CrossRef] [PubMed]

**7. **P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning x-ray diffraction microscopy,” Science **321**, 379–382 (2008). [CrossRef] [PubMed]

**8. **A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy **109**, 1256–1262 (2009). [CrossRef] [PubMed]

**9. **J. M. Rodenburg, A. C. Hurst, and A. G. Cullis, “Transmission microscopy without lenses for objects of unlimited size,” Ultramicroscopy **107**, 227–231 (2007). [CrossRef]

**10. **J. M. Rodenburg, A. C. Hurst, A. G. Cullis, B. R. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. **98**, 034801 (2007). [CrossRef] [PubMed]

**11. **F. Hüe, J. M. Rodenburg, A. M. Maiden, F. Sweeney, and P. A. Midgley, “Wave-front phase retrieval in transmission electron microscopy via ptychography,” Phys. Rev. B **82**, 121415 (2010). [CrossRef]

**12. **M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic x-ray computed tomography at the nanoscale,” Nature **467**, 436–439 (2010). [CrossRef] [PubMed]

**13. **A. Schropp, P. Boye, A. Goldschmidt, S. Hönig, R. Hoppe, J. Patommel, C. Rakete, D. Samberg, S. Stephan, S. Schöder, M. Burghammer, and C. G. Schroer, “Non-destructive and quantitative imaging of a nano-structured microchip by ptychographic hard x-ray scanning microscopy,” J. Microsc. **241**, 9–12 (2011). [CrossRef]

**14. **V. Mico, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic aperture superresolution with multiple off-axis holograms,” J. Opt. Soc. Am. A **23**, 3162–3170 (2006). [CrossRef]

**15. **A. Kirkland, W. Saxton, K. L. Chau, K. Tsuno, and M. Kawasaki, “Super-resolution by aperture synthesis: tilt series reconstruction in CTEM,” Ultramicroscopy **57**, 355–374 (1995). [CrossRef]

**16. **M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. **198**, 82–87 (2000). [CrossRef] [PubMed]

**17. **J. W. Goodman, *Introduction to Fourier Optics*, 3rd ed. (Roberts, 2005), Chap. 6, pp. 162–167.

**18. **M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with Fourier-weighted projections,” J. Opt. Soc. Am. A **25**, 701–709 (2008). [CrossRef]

**19. **R. W. Gerchberg, “Super-resolution through error energy reduction,” Opt. Acta **21**, 709–720 (1974). [CrossRef]

**20. **H. Ur and D. Gross, “Improved resolution from subpixel shifted pictures,” CVGIP Graph. Models Image Process. **54**, 181–186 (1992). [CrossRef]

**21. **G. R. Brady, M. Guizar-Sicairos, and J. R. Fienup, “Optical wavefront measurement using phase retrieval with transverse translation diversity,” Opt. Express **17**, 624–639 (2009). [CrossRef] [PubMed]

**22. **M. Dierolf, P. Thibault, A. Menzel, C. M. Kewish, K. Jefimovs, I. Schlichting, K. von König, O. Bunk, and F. Pfeiffer, “Ptychographic coherent diffractive imaging of weakly scattering specimens,” New J. Phys. **12**, 035017 (2010). [CrossRef]

**23. **A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “Optical ptychography: a practical implementation with useful resolution,” Opt. Lett. **35**, 2585–2587 (2010). [CrossRef] [PubMed]

**24. **G. O. Reynolds, J. B. D. Velis, G. B. Parrent, and B. J. Thompson, *The New Physical Optics Notebook: Tutorials in Fourier Optics* (American Institute of Physics, 1998), Chap. 13, p. 107.

**25. **Y. Takaki and H. Ohzu, “Fast numerical reconstruction technique for high-resolution hybrid holographic microscopy,” Appl. Opt. **38**, 2204–2211 (1999). [CrossRef]

**26. **J. Miao, D. Sayre, and H. Chapman, “Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects,” J. Opt. Soc. Am. A **15**, 1662–1669 (1998). [CrossRef]

**27. **V. Elser and R. P. Millane, “Reconstruction of an object from its symmetry-averaged diffraction pattern,” Acta Crystallogr. A **64**, 273–279 (2008). [CrossRef] [PubMed]

**28. **J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A **4**, 118–123 (1987). [CrossRef]

**29. **J. R. Fienup, “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,” Opt. Express **14**, 498–508 (2006). [CrossRef] [PubMed]