
Parallel data acquisition and reconstruction method of near-field ptychography for large samples


Abstract

Near-field ptychography is an attractive modality of coherent diffraction imaging that can provide quantitative phase images of samples at sub-pixel resolution and places low demands on beam coherence and detector dynamic range. When studying extensive samples, a large dataset must be recorded, resulting in a long acquisition time and high requirements for computer memory and computing power. Here, we propose a simple experimental arrangement for parallel data acquisition and a corresponding image reconstruction algorithm. The scheme dramatically increases the overall imaging speed, and the algorithm can be efficiently implemented on graphics processing units (GPUs). The feasibility and effectiveness of the method have been validated with numerical simulations and optical experiments. The proposed approach should be helpful for imaging with large-array cameras.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ptychography is a scanning coherent diffractive imaging (CDI) technique that can provide high-resolution quantitative phase images over a large field of view (FOV) [1–8]. In ptychography, a probe scans across the sample with a step size smaller than the probe extent, and a diffraction pattern is recorded at each position. The high degree of data redundancy introduced by the overlap of illuminated areas leads to a fast convergence rate of the image reconstruction [1]. Compared with far-field ptychography [2], near-field ptychography has a mild requirement on the dynamic range of the detector [9–11] and can image a larger sample area with fewer scan positions [9,12]. The spatial resolution of near-field ptychography is usually determined by the detector pixel size [9], typically several micrometers for visible light [12]. Methods that improve the resolution below the pixel size have also been proposed [13].

While near-field ptychography can provide sub-micrometer resolution and a large FOV, the whole data acquisition and reconstruction process is relatively slow for large samples, limiting its broader use in many fields. For large samples, the scanning nature of data acquisition requires a great deal of time to record an entire dataset, while the image reconstruction step involves enormous amounts of data and high computational complexity. For far-field ptychography, a parallel reconstruction algorithm has been demonstrated [14]. The method requires data-partition and image-stitching steps and uses the DIY parallel programming library to accelerate its calculation; the final stitched image can have artifacts in the overlapping regions [15]. As discussed in Ref. [14], the asynchronous implementation requires less running time than the synchronous one. In recent years, several parallel image acquisition methods [16–21] have been developed to avoid the time lost in sample scanning. It has been demonstrated in Ref. [17] that a pinhole array can be used to form tens of partially overlapping illumination beams simultaneously, which allows the retrieval of complex objects with diffraction-limited resolution and a large FOV. A pinhole array has also been used to form a large structured illumination that saves scan time and prevents the slow convergence associated with a single large aperture [21].

This work proposes a parallel data acquisition scheme and a corresponding reconstruction method for near-field ptychography. We merge the advantages of divisional sample illumination and an asynchronous parallel reconstruction algorithm for a fast implementation of near-field ptychography. In the proposed method, the diffraction patterns produced by the isolated illumination spots of a pinhole array are divided into several sub-region datasets, and all the sub-datasets are processed on a GPU simultaneously. The reconstruction quality is equivalent to that of traditional ptychography, while the computing speed is significantly increased. Notably, the processing time increases only mildly with the FOV or the size of the sample.

2. Materials and method

2.1 Samples

To compare the resolution of the reconstructions from different algorithms, a standard 2″×2″ positive 1951 USAF resolution target (Edmund) was used in the optical experiments. The USAF target was also used to verify large-FOV imaging. In addition, a microscope slide of a lily bud was used to test the phase imaging capability of the proposed method. Here we assume the samples are two-dimensional, optically thin, and weakly scattering.

2.2 Parallel data acquisition method

Our parallel data acquisition method is based on the fact that the diffraction intensity distribution in the near field expands only mildly in size. For a segmented illumination, the light diffracted from one segment does not overlap with neighboring regions in the near field, as shown in Fig. 1(a), in sharp contrast to the far-field case. As a result, one can segment the near-field diffraction patterns into smaller regions and process them separately and in parallel. The schematic of our near-field ptychography setup is shown in Fig. 1(b). A pinhole array segments a coherent wave into several sub-beams. The spacing between pinholes should be kept as small as possible, provided that their diffraction patterns do not overlap strongly with each other.


Fig. 1. (a) Illustration of beam expansion in the near-field and far-field diffraction of a pinhole array. Unlike in the far field, the diffraction spots in the near field do not overlap. (b) Schematic of the setup for parallel data recording in near-field ptychography.


The arrangement of the sample scan is shown in Fig. 2(a), where different pinholes are indicated by colors. With the movement of the step motor, each pinhole covers a small scanning area of the sample. The areas scanned by adjacent pinholes need to have some degree of overlap. Once the scanning is completed, a set of diffraction patterns Ij (j = 1, 2, ···, J, where j indexes the scan positions) is recorded. The diffraction patterns can be divided into K groups, where K is the number of pinholes in the array and each group corresponds to one pinhole. Therefore, the diffraction data of a sample with a large FOV can be obtained using only a few scan points, which dramatically reduces the acquisition time.
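The grouping of frames into per-pinhole sub-datasets can be sketched in NumPy as follows. This is an illustrative implementation only; the helper name `segment_patterns` and the array layout are our own assumptions, under the assumption that the K pinholes form a square grid and each recorded frame splits into equal-sized tiles:

```python
import numpy as np

def segment_patterns(frames, k_side):
    """Split J recorded frames (J, N, N) into K = k_side**2 per-pinhole
    sub-datasets, returned as an array of shape (K, J, tile, tile)."""
    j, n, _ = frames.shape
    t = n // k_side                                    # tile size per pinhole
    tiles = frames.reshape(j, k_side, t, k_side, t)    # (J, row-block, y, col-block, x)
    tiles = tiles.swapaxes(2, 3)                       # (J, row-block, col-block, y, x)
    return tiles.reshape(j, k_side * k_side, t, t).swapaxes(0, 1)
```

Each of the K slices along the first axis is then an independent near-field ptychography dataset that can be reconstructed in parallel.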


Fig. 2. Flowchart of the parallel reconstruction method. For the case of a 2×2 pinhole array, the diffraction patterns are evenly divided into four sub-datasets, indicated by four colors. Each sub-dataset is then processed by the extended ptychographical iterative engine (ePIE) [22] to provide an image. Stitching the images of all sub-regions together yields the final, enlarged image.


2.3 Parallel reconstruction method

A parallel reconstruction algorithm is designed to reduce the computational load and reconstruction time. The main idea is that the sub-datasets can be processed quickly and simultaneously. Figure 2(b) shows the flowchart of the parallel reconstruction algorithm, which comprises diffraction pattern segmentation, parallel-ePIE reconstruction, and sub-image stitching.

Figure 3 gives the detailed steps of the parallel algorithm. In near-field ptychography, the measured diffraction patterns are given by

$${I_j} = {|{AS({{\psi_j}({\boldsymbol r} )} )} |^2},$$
where r = (x, y) is the 2D spatial coordinate in the object plane, AS denotes the angular spectrum propagation operator from the object plane to the detector plane, and ψj (r) is the sample exit wave. Ij contains the diffraction patterns of the whole pinhole array, so we segment it into a series of sub-pattern datasets Ik,j (k = 1, 2, ···, K), one for each pinhole. For example, as shown in Fig. 2(b), the diffraction patterns are separated into four groups with J patterns each.
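The angular spectrum operator AS can be implemented with two FFTs. Below is a minimal NumPy sketch; the function name and the handling of evanescent components are our own choices, not taken from the paper's MATLAB code:

```python
import numpy as np

def angular_spectrum(field, wavelength, pixel_size, distance):
    """Propagate a complex field over `distance` with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)   # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(
        arg > 0,
        np.exp(1j * 2 * np.pi * distance / wavelength * np.sqrt(np.maximum(arg, 0.0))),
        0.0,
    )
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Backpropagation (AS⁻¹) is the same call with a negated distance, since the transfer function has unit modulus for propagating components.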


Fig. 3. The procedure of the parallel-ePIE algorithm.


In Fig. 3, Ok (r) and Pk (r) are the initial estimates of the sub-object and sub-illumination-probe functions, respectively. In our experiment, we moved the object, so the sub-object function at the jth position is written as Ok,j (r,sj). Multiplying the sub-object function by the sub-probe function gives the exit wave of the sample:

$${\psi _{k,j}}({\boldsymbol r}) = {O_{k,j}}({\boldsymbol {r,s}}_j^i) \cdot {P_{k,j}}({\boldsymbol r}).$$
The exit wave ψk,j (r) of the sample is then propagated to the detector plane:
$${\varPsi _{k,j}}({\boldsymbol u}) = AS[{{\psi_{k,j}}({\boldsymbol r})} ].$$
The amplitude of Ψk,j (u) is replaced with the square root of the measured sub-pattern intensity $\sqrt {{I_{k,j}}}$ at the detector plane, and the result is backpropagated to the object plane, yielding the updated wave field ψ′k,j (r):
$${\psi ^{\prime}_{k,j}}({\boldsymbol r}) = A{S^{ - 1}}\left[ {\frac{{{\Psi_{k,j}}({\boldsymbol u})}}{{|{{\Psi_{k,j}}({\boldsymbol u})} |}}\sqrt {{I_{k,j}}} } \right].$$
The overlap constraint is then applied to update the sub-object and sub-probe functions as follows [22]:
$${O^{\prime}_{k,j}}({\boldsymbol {r,s}}_j^i) = {O_{k,j}}({\boldsymbol {r,s}}_j^i) + \alpha \frac{{conj({{P_{k,j}}({\boldsymbol r})} )[{{{\psi^{\prime}}_{k,j}}({\boldsymbol r}) - {\psi_{k,j}}({\boldsymbol r})} ]}}{{\max ({{{|{{P_{k,j}}({\boldsymbol r})} |}^2}} )}},$$
$${P^{\prime}_{k,j}}({\boldsymbol r}) = {P_{k,j}}({\boldsymbol r}) + \alpha \frac{{conj({{O_{k,j}}({\boldsymbol {r,s}}_j^i)} )[{{{\psi^{\prime}}_{k,j}}({\boldsymbol r}) - {\psi_{k,j}}({\boldsymbol r})} ]}}{{\max ({{{|{{O_{k,j}}({\boldsymbol {r,s}}_j^i)} |}^2}} )}},$$
where ‘conj’ denotes the complex conjugate. The updated sub-object function is normalized by the maximum squared amplitude of the sub-probe function, while the updated sub-probe function is normalized by the maximum squared amplitude of the sub-object function; this keeps the values of the two functions within a proportional range. The coefficient $\alpha $, a constant set to 1 throughout our simulations and experiments, affects the convergence speed and robustness of the phase retrieval algorithm. The above steps are repeated until all scan positions have been visited, completing one full iteration. The reconstruction terminates after a predefined number of iterations.
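A single-position update of the sub-object and sub-probe can be sketched as below. This is an illustrative NumPy version of the update steps above; the helper arguments `prop`/`iprop` are hypothetical placeholders standing for the AS and AS⁻¹ operators:

```python
import numpy as np

def epie_update(obj, probe, meas_amp, prop, iprop, alpha=1.0):
    """One ePIE update at a single scan position of one sub-dataset."""
    psi = obj * probe                               # exit wave
    Psi = prop(psi)                                 # propagate to detector plane
    Psi_c = meas_amp * Psi / (np.abs(Psi) + 1e-12)  # replace amplitude, keep phase
    psi_c = iprop(Psi_c)                            # backpropagate to object plane
    diff = psi_c - psi
    # Overlap-constraint updates, each normalized by the other function's
    # maximum squared amplitude.
    obj_new = obj + alpha * np.conj(probe) * diff / np.max(np.abs(probe) ** 2)
    probe_new = probe + alpha * np.conj(obj) * diff / np.max(np.abs(obj) ** 2)
    return obj_new, probe_new
```

Looping this update over all J positions of a sub-dataset gives one parallel-ePIE iteration; the K sub-dataset loops are independent and can run concurrently.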

An entire reconstruction consists of the reconstruction processes of the K sub-datasets, which are parallel and independent. Finally, as shown in Fig. 2, we retrieve K sub-images covering different sub-FOVs and form a complete image by stitching the sub-images with the simple method of Ref. [14]. However, the resulting image had uneven brightness and artifacts at the boundaries between sub-regions, caused by inconsistencies among the different sub-region reconstructions. To eliminate these problems, we ran the reconstruction for a few additional iterations on the full diffraction dataset.

3. Results

Our parallel acquisition and reconstruction method was verified and evaluated with simulations and optical experiments. The reconstruction code was written in MATLAB and was run on a GPU (NVIDIA Quadro M2000, 4 GB RAM) via the Parallel Computing Toolbox.

3.1 Numerical simulation

To validate the feasibility of our proposed approach, we implemented a visible-light simulation with a 3×3 pinhole array. We modeled a 405 nm laser and a detector with 2.4-µm pixels to match the actual setup. The cameraman and lifting-body images included in MATLAB were used as amplitude and phase, respectively. The distance between the object and the CCD is 3 mm. The pinholes are circular with a diameter of 0.6 mm, and the center-to-center distance between adjacent pinholes is 0.9 mm. We used a 7×7 scan grid and an 85% overlap ratio [23] to generate diffraction data (1125×1125 pixels). As shown in Fig. 4(a), 49 simulated diffraction patterns were generated in total. Nine complex sub-images were obtained after 30 iterations of the parallel-ePIE algorithm, shown in Fig. 4(b), and each of them was of high quality. Multiple sub-images cover the same area of the sample in the horizontal and vertical directions, which is necessary for image stitching. Here, we used the phase cross-correlation method to align the overlapping regions of the sub-images and adjust their brightness. However, as shown in Fig. 4(c), the stitched image still shows uneven brightness and noticeable artifacts around the overlapping areas. An additional iteration of ePIE was run on the whole dataset to eliminate those artifacts, providing the final high-quality image shown in Fig. 4(d). A better stitching method is currently under investigation.
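The alignment step can be illustrated with a basic integer-pixel phase cross-correlation. This is a minimal sketch, not the exact routine used in our code; sub-pixel refinement and the brightness adjustment are omitted:

```python
import numpy as np

def phase_corr_shift(ref, moving):
    """Return the integer shift s such that np.roll(ref, s, axis=(0, 1))
    best matches `moving`, estimated from the normalized cross-power spectrum."""
    F = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12))     # sharp peak at the shift
    peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape))
    size = np.array(corr.shape)
    peak[peak > size // 2] -= size[peak > size // 2]  # wrap to signed shifts
    return tuple(int(p) for p in peak)
```

In practice the routine would be applied to the overlapping strips of adjacent sub-images before blending them into the mosaic.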


Fig. 4. The results of simulations. Nine sub-images (b) were reconstructed by parallel-ePIE from 49 diffraction patterns (a) which were divided into 9 groups. After stitching up the images (c), one additional ePIE reconstruction (d) was run on the entire dataset to eliminate the uneven brightness around the boundary between sub-images.


As an alternative to combining parallel data acquisition and reconstruction, a single large illumination probe combined with the parallel-ePIE algorithm proposed in Ref. [14] could also reduce the imaging time. However, a larger illumination probe may need more iterations to converge than a small one. We simulated the effect of pinhole size on the reconstruction while keeping the pinhole occupancy ratio, the scan grid (7×7), and the overlap ratio (85%) the same. The probe size is 256×256 pixels when the pinhole size is 400 µm.

In the simulations, we compared the reconstructed object Orec (m,n) with the original object function Oref (m,n) and used the error metric RMSE [24], defined below, to quantify the reconstruction quality:

$$a = \sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {FFT\{{{O_{rec}}(m,n)} \}\cdot conj({FFT\{{{O_{ref}}(m,n)} \}} )} },$$
$${e_1} = \sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {FFT\{{{O_{rec}}(m,n)} \}\cdot conj({FFT\{{{O_{rec}}(m,n)} \}} )} },$$
$${e_2} = \sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {FFT\{{{O_{ref}}(m,n)} \}\cdot conj({FFT\{{{O_{ref}}(m,n)} \}} )} },$$
$$RMSE = \sqrt {\left|{1 - \frac{{a\cdot conj(a)}}{{{e_1}\cdot{e_2}}}} \right|},$$
where ‘FFT’ refers to the fast Fourier transform.
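The metric is straightforward to compute; a NumPy sketch (the function name is our own):

```python
import numpy as np

def rmse_metric(o_rec, o_ref):
    """Error metric of the equations above; 0 means the two objects are
    identical up to a global phase factor."""
    A = np.fft.fft2(o_rec)
    B = np.fft.fft2(o_ref)
    a = np.sum(A * np.conj(B))          # cross term
    e1 = np.sum(np.abs(A) ** 2)         # energy of the reconstruction
    e2 = np.sum(np.abs(B) ** 2)         # energy of the reference
    return float(np.sqrt(np.abs(1.0 - np.abs(a) ** 2 / (e1 * e2))))
```

By the Cauchy-Schwarz inequality the ratio |a|²/(e1·e2) is at most 1, so the metric lies in [0, 1], with smaller values indicating a better reconstruction.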

As shown in Fig. 5(a), the number of iterations required for convergence increases rapidly with pinhole size when the same reconstruction quality is demanded (one reconstruction using a 1200-µm pinhole is shown in Figs. 5(c) and 5(d)). Since planar illumination was used here, the data diversity in the ptychographic recording comes mainly from the aperture boundaries: the larger the aperture, the smaller the fraction of boundary area, and hence the more iterations needed for good image quality. Moreover, a large pinhole needs more computing time per iteration, as shown in Fig. 5(b). Although a large pinhole can expand the imaging FOV with the same number of scans, the reconstruction time increases sharply, and a very large pinhole may make the diffraction data too large to handle. Hence, our method has significant advantages for imaging large samples.


Fig. 5. Dependence of (a) the number of iterations and (b) the running time per iteration on the pinhole size. One typical reconstructed (c) amplitude and (d) phase using a 1200-µm pinhole. The number of iterations required and the computing time per iteration increase rapidly as the pinhole size increases.


3.2 Optical experiment

We conducted several visible-light experiments to validate the performance of the proposed method. A 5×5 pinhole array (a photo of the device is shown in Fig. 6(a)) was used to partition the illumination beam. Light from a 405 nm laser was collimated by an achromatic lens and then illuminated the pinhole array. Mounted on an XYZ stepper-motor assembly, the object was placed 3 mm away from the detector. An industrial camera (uEye LE, IDS) with 3088×2076 pixels, each 2.4 µm wide, and a 12-bit dynamic range was used to record the diffraction patterns. The sample was scanned over a 7×7 grid of positions, to which a random offset was added to prevent the raster-scan pathology artifact [25], with an overlap ratio of about 85%.
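Such a randomized scan grid can be generated, for instance, as follows. This is a sketch with assumed names and an assumed jitter amplitude; the experiment only specifies that the offsets were random:

```python
import numpy as np

def jittered_scan_grid(n_side, step, jitter_frac=0.1, seed=0):
    """Raster positions (n_side x n_side) with a random offset at each point
    to break the regularity of the grid."""
    rng = np.random.default_rng(seed)
    ii, jj = np.meshgrid(np.arange(n_side), np.arange(n_side), indexing="ij")
    base = np.stack([ii, jj], axis=-1).reshape(-1, 2) * float(step)
    return base + rng.uniform(-jitter_frac * step, jitter_frac * step, base.shape)
```

The jitter amplitude must stay small relative to the step so that the designed overlap ratio between neighboring positions is preserved.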


Fig. 6. (a) A custom-made 5×5 pinhole array with a hole size of 0.6 mm; (b) One typical diffraction pattern recorded by the detector; (c) The reconstructed amplitude of the pinhole array.


Figure 6(b) shows one of the measured diffraction patterns, with a size of 2000×2000 pixels; its central area (1875×1875 pixels) was used in the reconstruction. The propagated exit wave of the pinhole array formed the initial guess of the illumination probe. After retrieval of the probe function, the pinhole-array exit wave can be recovered by backpropagating the probe to the aperture plane via the angular spectrum method or Fresnel propagation, as shown in Fig. 6(c). One can see that the individual pinholes are not identical.

Figure 7 shows the reconstructed amplitudes of the USAF target. The parallel reconstruction method needs only 15 iterations of parallel-ePIE plus 3 iterations of ePIE to reach the same reconstruction quality and resolution (2.46 µm), taking about 120 seconds (averaged over three trials). In comparison, the traditional ePIE algorithm needs 32 iterations, taking about 510 seconds. Our method thus achieves a 4.3-fold speed-up.


Fig. 7. The amplitude of the resolution target reconstructed by (a) the traditional algorithm and (b) the proposed parallel algorithm. The parallel algorithm is about 4.3 times faster while providing results of similar quality.


To show the advantage of the parallel method for large-FOV imaging, we also conducted experiments using a single pinhole instead of a pinhole array to generate the illumination probe; the other parameters were the same as in the previous experiment. Figure 8 shows the amplitudes of the resolution target reconstructed by the parallel ptychographic system and by the traditional method. Compared with the result using a single 600-µm pinhole, the FOV using the 5×5 pinhole array is expanded by a factor of 16, while the reconstruction time increases only 3-fold and the overall imaging time only 1.5-fold (see Table 1). Moreover, we used a 3000-µm pinhole, whose effective illumination area matches that of the pinhole array, to test the large-FOV imaging speed of the traditional method. The whole imaging time of the conventional method is 25 times that of the parallel approach (see Table 1); even with the parallel reconstruction method of Ref. [14], it still costs 6.7 times the imaging time. The imaging time and computational load of the parallel method grow more slowly than the FOV expands, making the approach well suited to fast imaging of large samples.


Fig. 8. The amplitude of the resolution target reconstructed by (a) the parallel near-field ptychographic system using a 5×5 pinhole array and by the traditional near-field ptychographic system using (c) a single 600-µm pinhole and (d) a single 3000-µm pinhole. (b) Zoom-in of the central square area in (a). The FOV of the parallel system is 16 times larger than that of the traditional system using a 600-µm pinhole when the other acquisition parameters are kept the same. The imaging speed of the parallel method is 25 times faster than that of the traditional method with a 3000-µm pinhole, whose effective illumination area matches that of the pinhole array.



Table 1. Results of parallel and traditional near-field ptychography.

The resolution target can be regarded as a pure amplitude sample, so we also used a lily bud sample to validate the phase imaging capability of the proposed method. Figure 9 shows the reconstructed amplitude and phase of the lily bud sample; one can easily distinguish individual cells and cell walls. The whole data acquisition and reconstruction process took 3.6 minutes (50% of the time was spent on data acquisition). Note that for this sample a smaller FOV would have been sufficient; for a given sample, the size and spacing of the pinholes, as well as the array size, can be chosen to minimize the overall imaging time.


Fig. 9. The amplitude (a) and phase (c) of a lily bud sample reconstructed by the parallel near-field system. (b) The zoom-in of the red square area in (a). (d-f) The zoom-in of the red, blue, and green square areas in (c).


4. Discussion

Limited by the available experimental hardware, we conducted simulations for a variety of pinhole arrays to determine the influence of pinhole size and array size. Our extensive simulations show that good results are obtained when the pitch-to-diameter ratio is 1.5 and the overlap ratio is around 85%. These values are a good compromise between the illumination overlap required for sub-image reconstruction and the overlap area required for stitching adjacent sub-images. The other simulation parameters are the same as in Sec. 3.1, with the array size and pinhole size used as variables.

The FOV can be easily enlarged with the parallel method by increasing the number of pinholes in the array. As shown in Fig. 10(a), as the FOV expands, the image quality decreases only slowly, and the RMSE value stabilizes once the array size exceeds 5×5. Compared with the large-pinhole method of Sec. 3.1, our approach has little impact on image quality. Note that when the pinhole array reaches a size of 10×10, the data volume exceeds the memory of the GPU device we used.


Fig. 10. (a) Effect of pinhole array size. (b) Reconstruction time and image quality, measured by RMSE, versus pinhole size. For our proposed method, expanding the FOV by increasing the number of pinholes has no major impact on image quality. Pinhole sizes from 200 µm to 600 µm give better image quality.


The size of a single pinhole is also a critical parameter of the pinhole array. As shown in Fig. 10(b), pinhole sizes from 120 µm to 1000 µm were tested using a 3×3 pinhole array. The reconstruction time increases rapidly with pinhole size, consistent with our conclusion in Sec. 3.1. However, the concave RMSE curve indicates that good image quality is obtained when the pinhole size is between 200 µm and 600 µm. Therefore, both the FOV and the image quality need to be considered in determining the optimal pinhole size.

5. Conclusion

In conclusion, we have proposed a parallel data acquisition arrangement and a corresponding parallel reconstruction algorithm to speed up the whole imaging process of near-field ptychography. Numerical simulations and optical experiments demonstrate the feasibility and effectiveness of the proposed method. Our method increases the calculation speed by a factor of four compared with traditional ptychography, and the FOV is 16 times larger than that of the traditional method using a 600-µm pinhole under the same scan parameters. Moreover, our method shortens the imaging time by a factor of 25 compared with the traditional method using a 3000-µm pinhole with the same illumination area. We found that pinhole sizes from 200 µm to 600 µm and a pitch-to-diameter ratio of 1.5 are optimal parameters for the pinhole array. The proposed method may suffer reduced resolution in the overlapped regions if the stitching cannot be performed with high precision; other stitching methods are under investigation. Ultra-large-array cameras are now becoming available, and our work provides a suitable method to make full use of their pixels.

Funding

National Natural Science Foundation of China (11775105, 12074167); Science and Technology Planning Project of Shenzhen Municipality (KQTD20170810110313773); Centers for Mechanical Engineering Research and Education at MIT and SUSTech (6941806).

Disclosures

The authors declare no conflicts of interest. This work is original and has not been published elsewhere.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. M. Rodenburg, “Ptychography and Related Diffractive Imaging Methods,” Adv. Imaging Electron Phys. 150, 87–184 (2008). [CrossRef]  

2. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning X-ray diffraction microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]  

3. F. Zhang, I. Peterson, J. Vila-comamala, A. Diaz, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef]  

4. M. Yusuf, F. Zhang, B. Chen, A. Bhartiya, K. Cunnea, U. Wagner, F. Cacho-Nerin, J. Schwenkea, and I. K. Robinsona, “Procedures for cryogenic X-ray ptychographic imaging of biological samples,” IUCrJ 4(2), 147–151 (2017). [CrossRef]  

5. F. Pfeiffer, “X-ray ptychography,” Nat. Photonics 12(1), 9–17 (2018). [CrossRef]  

6. A. R. Lupini, M. P. Oxley, and S. V. Kalinin, “Pushing the limits of electron ptychography,” Science 362(6413), 399–400 (2018). [CrossRef]  

7. D. Claus, D. J. Robinson, D. G. Chetwynd, Y. Shuo, W. T. Pike, J. J. De, J. T. Garcia, and J. M. Rodenburg, “Dual wavelength optical metrology using ptychography,” J. Opt. 15(3), 035702 (2013). [CrossRef]  

8. Y. Jiang, Z. Chen, Y. Han, P. Deb, H. Gao, S. Xie, P. Purohit, M. W. Tate, J. Park, S. M. Gruner, V. Elser, and D. A. Muller, “Electron ptychography of 2D materials to deep sub-ångström resolution,” Nature 559(7714), 343–349 (2018). [CrossRef]  

9. M. Stockmar, P. Cloetens, I. Zanette, B. Enders, M. Dierolf, F. Pfeiffer, and P. Thibault, “Near-field ptychography: phase retrieval for inline holography using a structured illumination,” Sci. Rep. 3(1), 1927 (2013). [CrossRef]  

10. M. Stockmar, I. Zanette, M. Dierolf, B. Enders, R. Clare, F. Pfeiffer, P. Cloetens, A. Bonnin, and P. Thibault, “X-Ray near-field ptychography for optically thick specimens,” Phys. Rev. Appl. 3(1), 014005 (2015). [CrossRef]  

11. R. M. Clare, M. Stockmar, M. Dierolf, I. Zanette, and F. Pfeiffer, “Characterization of near-field ptychography,” Opt. Express 23(15), 19728–19742 (2015). [CrossRef]  

12. S. McDermott and A. Maiden, “Near-field ptychographic microscope for quantitative phase imaging,” Opt. Express 26(19), 25471–25480 (2018). [CrossRef]  

13. W. Xu, H. Lin, H. Wang, and F. Zhang, “Super-resolution near-field ptychography,” Opt. Express 28(4), 5164–5178 (2020). [CrossRef]  

14. Y. S. Nashed, D. J. Vine, T. Peterka, J. Deng, R. Ross, and C. Jacobsen, “Parallel ptychographic reconstruction,” Opt. Express 22(26), 32082–32097 (2014). [CrossRef]  

15. X. Wen, Y. Geng, C. Guo, X. Zhou, J. Tan, S. Liu, C. Tan, and Z. Liu, “A parallel ptychographic iterative engine with a co-start region,” J. Opt. 22(7), 075701 (2020). [CrossRef]  

16. X. He, X. Pan, C. Liu, and J. Zhu, “Single-shot phase retrieval based on beam splitting,” Appl. Opt. 57(17), 4832–4838 (2018). [CrossRef]  

17. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3(1), 9–14 (2016). [CrossRef]  

18. W. Xu, H. Xu, Y. Luo, T. Li, and Y. Shi, “Optical watermarking based on single-shot-ptychography encoding,” Opt. Express 24(24), 27922–27936 (2016). [CrossRef]  

19. J. Park, D. J. Brady, G. Zheng, L. Tian, and L. Gao, “Review of bio-optical imaging systems with a high space-bandwidth product,” Adv. Photonics 3(04), 044001 (2021). [CrossRef]  

20. R. S. Weinstein, M. R. Descour, C. Liang, G. Barker, K. M. Scott, L. Richter, E. A. Krupinski, A. K. Bhattacharyya, J. R. Davis, A. R. Graham, M. Rennels, W. C. Russum, J. F. Goodall, P. Zhou, A. G. Olszak, B. H. Williams, J. C. Wyant, and P. H. Bartels, “An array microscope for ultrarapid virtual slide processing and telepathology. Design, fabrication, and validation study,” Hum. Pathol. 35(11), 1303–1314 (2004). [CrossRef]  

21. X. He, S. P. Veetil, X. Pan, A. Sun, C. Liu, and J. Zhu, “High-speed ptychographic imaging based on multiple-beam illumination,” Opt. Express 26(20), 25869 (2018). [CrossRef]  

22. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

23. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, “Influence of the overlap parameter on the convergence of the ptychographical iterative engine,” Ultramicroscopy 108(5), 481–487 (2008). [CrossRef]  

24. J. R. Fienup, “Invariant error metrics for image reconstruction,” Appl. Opt. 36(32), 8352–8357 (1997). [CrossRef]  

25. M. Dierolf, P. Thibault, A. Menzel, C. M. Kewish, and F. Pfeiffer, “Ptychographic coherent diffractive imaging of weakly scattering specimens,” New J. Phys. 12(3), 035017 (2010). [CrossRef]  
