
Sensor fusion in ptychography

Open Access

Abstract

Ptychography is a lensless, computational imaging method that utilises diffraction patterns to determine the amplitude and phase of an object. In transmission ptychography, the diffraction patterns are recorded by a detector positioned along the optical axis downstream of the object. The light scattered at the highest diffraction angle carries information about the finest structures of the object. We present a setup to simultaneously capture a signal near the optical axis and a signal scattered at high diffraction angles. Moreover, we present an algorithm based on a shifted angular spectrum method and automatic differentiation that utilises this recorded signal. By jointly reconstructing the object from the resulting low and high diffraction angle images, the resolution of the reconstructed image is improved remarkably. The effective numerical aperture of the compound sensor is determined by the maximum diffraction angle captured by the off-axis sensor.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ptychography is a computational imaging method that relies on recording multiple diffraction patterns of an object while varying the illumination condition, often named the probe beam. These diffraction patterns are then used to create a reconstructed image of the object. A thin object or a 3D object [1,2] can be reconstructed, and the reconstruction consists of the amplitude and phase of the object [3]. The probe beam can be recovered simultaneously with the object [4]. The data acquisition process introduces redundant information into the data, which ptychographic algorithms can use to compensate for experimental errors. This includes compensation for lateral scanning errors [5-7] and axial positioning errors [8-10], as well as the ability to work with partially incoherent light sources [11].

Many algorithms exist to reconstruct ptychographic data, such as the ptychographical iteration engine (PIE) [12] family with ePIE [4], mPIE [13] and zPIE [10]. Other notable algorithms are the difference map approach [14,15] and the maximum likelihood (ML) approach [16]. Multiple different approaches can be combined and compared in the software framework PtyPy [17].

Recently, gradient-descent-based optimisation algorithms using automatic differentiation (AD) have been shown to be effective at solving the inverse problem in ptychography [18,19]. There is an essential distinction between conventional algorithms, such as the PIE family, and algorithms using AD. The PIE algorithm iteratively applies an update function, and this update function needs to be derived in closed form for every experimental modification. For AD-based algorithms, there is no need to derive such an update function manually. Instead, AD and an optimizer are employed to minimise a loss function that is calculated from the experimental data and the diffraction patterns predicted by the forward model of the system [19]. As a result, automatic differentiation ptychography (ADP) is flexible to adaptations of the imaging system [18,20] and can be used to work with complicated or difficult-to-invert systems.

The smallest resolvable object period, resolution $R$, of a ptychographic system is expressed as [21],

$$R = \lambda/(\textrm{NA}_{\textrm{illum}} + \textrm{NA}_{\textrm{det}}),$$
where $\lambda$ is the illumination light’s wavelength, $\textrm{NA}_{\textrm{illum}}$ the numerical aperture of the illumination probe and $\textrm{NA}_{\textrm{det}}$ the numerical aperture of the detector. Enlarging the numerical aperture of the measured diffraction patterns offers a clear path to improved reconstruction resolution. Similarly, the use of structured illumination, such as speckle illumination, increases the NA of the incident light and enables higher-resolution reconstructions [22-24]. Additionally, if the object of study can be characterised using a given set of parameters, we can optimise the illumination strategy using Fisher information [25].

In this paper, we introduce a ptychography system that includes a signal detected at high diffraction angles, hereafter coined the off-axis signal. This signal carries high spatial frequencies diffracted from the object, which we use to achieve an improved reconstruction resolution. We describe our method of capturing this signal and our reconstruction approach.

2. Problem formulation

The additional signal is included using sensor fusion, by which we mean that data from different sensors, physical or virtual, are combined. In this work, our physical detector is split into multiple sensor areas. The setup is shown in Fig. 1(a). In the detector plane in Fig. 1(a), two areas are distinguished, representing the two sensors. One sensor measures the diffracted intensity close to the optical axis and the second one covers a disjoint area at higher diffraction angles.


Fig. 1. (a) A diagram of the experimental setup. An aperture with a diameter of 500 µm is illuminated with 561 nm laser light and imaged to the object plane to form the probe field. The imaging optics and aperture are not shown in this figure. The probe field has a diameter of 1.2 mm at the object. Together with the object it forms the exit field $\psi _{exit}(r,R_i)$ on the object plane. The object is laterally moved to scan positions $R_i$. At the detection plane, $\Psi _{det}(r',R_i)$, the orange region $a$ and teal region $b$ are recorded. (b) and (c) show experimental diffraction patterns of the regions of interest respectively. The colour bars represent the intensity.


We use the framework presented by Seifert et al. [20] to implement an algorithm that combines the on-axis and off-axis data. The algorithm consists of custom-built layers that together constitute the forward physics model of the experimental setup. The physics model is described as follows. At position vector $r$, the exit-field $\psi_\mathrm{exit}(r,R_i)$ in the object plane is formed by illuminating a complex-valued object $O(r)$ with a probe beam $P(r)$ at different scan positions $R_i$:

$$\psi_\mathrm{exit}(r,R_i)=O(r-R_i)P(r) \ .$$
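As a minimal illustration of Eq. (2), the exit field can be formed by shifting the object to the current scan position and multiplying it with the probe. The NumPy sketch below assumes the scan position is expressed in whole pixels and uses a periodic shift; it is an illustrative sketch, not the authors' exact implementation (see Code 1 [27]).

```python
import numpy as np

def exit_field(obj, probe, shift_px):
    """Eq. (2): psi_exit(r, R_i) = O(r - R_i) * P(r).

    obj, probe : complex 2D arrays sampled on the same grid
    shift_px   : (row, col) scan position R_i in whole pixels (assumption)
    """
    shifted_obj = np.roll(obj, shift=shift_px, axis=(0, 1))  # translate object by R_i
    return shifted_obj * probe
```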

The exit-field is then propagated along the optical axis to the detector plane to form $\Psi_\mathrm{det}(r',R_i)$. The wave field propagation to the detector plane is calculated using the angular spectrum method; see Section 1 of the supplemental document and the accompanying code [26] or Code 1 [27] for the exact implementation. The shifted angular spectrum method [28] is implemented to propagate the wave field to an area of interest with a lateral offset from the optical axis. The intensity distribution $I_\mathrm{det}$ at the detector plane is given by

$$I_\mathrm{det}(r',R_i)=|\Psi_\mathrm{det}(r',R_i)|^{2} \ .$$
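To give an impression of the propagation step, the sketch below implements a plain shifted angular spectrum propagator in NumPy: with $x_0 = y_0 = 0$ it reduces to the ordinary angular spectrum method used for sensor $a$, and a nonzero lateral offset centres the computed window on the off-axis sensor $b$. The band-limiting step described by Matsushima [28] is omitted for brevity, and the example pixel pitch in the usage comment is an assumed value; the exact implementation is available in Code 1 [27].

```python
import numpy as np

def shifted_angular_spectrum(field, wavelength, dx, z, x0=0.0, y0=0.0):
    """Propagate `field` over a distance z; the output window is laterally
    shifted by (x0, y0) relative to the optical axis (all lengths in metres).
    Minimal sketch; the band-limiting of Ref. [28] is omitted."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    fz_sq = 1.0 / wavelength**2 - FX**2 - FY**2      # longitudinal frequency squared
    prop = fz_sq > 0                                  # keep propagating waves only
    fz = np.sqrt(np.where(prop, fz_sq, 0.0))
    # transfer function: free-space propagation plus a linear phase that
    # shifts the observation window to (x0, y0)
    H = np.exp(2j * np.pi * (z * fz + x0 * FX + y0 * FY)) * prop
    return np.fft.ifft2(np.fft.fft2(field) * H)

# example call using the geometry of Section 3 (the pixel pitch dx is an assumed value):
# psi_b = shifted_angular_spectrum(psi_exit, 561e-9, dx=3.45e-6, z=61e-3, x0=2649e-6)
# I_b = np.abs(psi_b) ** 2   # Eq. (3)
```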

A flowchart of the physics model and reconstruction routine is shown in Fig. 2. The object $O(r)$ and probe field $P(r)$ described in Eq. (2) compose the first layer. After the first layer the flowchart splits and the data flows diverge. The second layer is where the wave field in the object plane is diffracted to form the wave field in the detector plane by using the angular spectrum method. Here, the data is split into a path along the optical axis and a path away from the optical axis. Apart from selecting the correct diffraction model, this layer remains constant during operation. The last layer of the physics model converts the wave field at the detector plane to its intensity distribution using Eq. (3). At this point, there are two intensity distributions. One representing sensor $a$ and one representing sensor $b$. This constitutes the model prediction of the diffraction pattern $I_\mathrm {pred}(r,R_i)$ at a specific scanning position $R_i$. The cost function to be minimized is the mean squared error of the model prediction,

$$\mathrm{MSE}=\frac{1}{N}\displaystyle\sum_{i=1}^{N}(I_\mathrm{meas}(r,R_i)-I_\mathrm{pred}(r,R_i))^{2} \ ,$$
with $N$ being the total number of points of the scanning pattern and $I_\mathrm {meas}(r,R_i)$ the experimentally measured diffraction pattern at position $R_i$. The MSE is computed for both the signal of sensor $a$ and the signal of sensor $b$. The relative weight of the two different MSE values is controlled by a mixing factor $\gamma$, which results in the final loss function $L$ given by
$$L =(1-\gamma)\cdot \mathrm{MSE}_\mathrm{on-axis}+\gamma \cdot \mathrm{MSE}_\mathrm{off-axis}\ .$$
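A minimal TensorFlow sketch of the combined loss of Eqs. (4) and (5) is given below; the function and variable names are ours, not taken from the published code.

```python
import tensorflow as tf

def fused_loss(I_meas_a, I_pred_a, I_meas_b, I_pred_b, gamma):
    """Eq. (5): weighted sum of the on-axis (a) and off-axis (b) MSEs.

    gamma = 0 reproduces a conventional on-axis-only reconstruction;
    gamma = 0.5 weighs both sensors equally.
    """
    mse_on = tf.reduce_mean(tf.square(I_meas_a - I_pred_a))   # Eq. (4), sensor a
    mse_off = tf.reduce_mean(tf.square(I_meas_b - I_pred_b))  # Eq. (4), sensor b
    return (1.0 - gamma) * mse_on + gamma * mse_off
```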

There is an essential distinction between ptychography reconstruction based on conventional algorithms, such as PIE, and algorithms using AD: PIE uses a closed-form update function, whereas in ADP the object retrieval results from minimising a loss function using the reverse mode of AD. In this model, $L$ is minimised over multiple iterations of the Adam optimiser [29], with gradients obtained through automatic differentiation. Consequently, this algorithm leads to the reconstruction of the object transmission function stored in the first layer of the model.
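The reconstruction loop can thus be written as a standard gradient-descent loop: the reverse mode of AD provides the gradients of $L$ with respect to the object estimate, and Adam applies the update. The sketch below uses the `fused_loss` function sketched above together with a placeholder `forward_model` standing in for the two-branch physics model of Fig. 2; it illustrates the structure of ADP under these assumptions, not the authors' exact code.

```python
import tensorflow as tf

# Trainable object estimate, parameterised by real amplitude and phase maps so
# that the optimiser handles real-valued variables only (an assumption; other
# parameterisations are possible).
amplitude = tf.Variable(tf.ones((512, 512)))
phase = tf.Variable(tf.zeros((512, 512)))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

# Placeholders for the measured diffraction patterns of sensors a and b.
I_meas_a = tf.zeros((512, 512))
I_meas_b = tf.zeros((512, 512))

def forward_model(obj):
    """Hypothetical stand-in for the two-branch physics model of Fig. 2."""
    field = tf.signal.fft2d(obj)        # placeholder propagation step
    intensity = tf.abs(field) ** 2
    return intensity, intensity         # predicted on-axis and off-axis intensities

for epoch in range(40):
    gamma = 0.5  # mixing factor of Eq. (5); the schedule actually used is described in Section 4
    with tf.GradientTape() as tape:
        obj = tf.complex(amplitude * tf.cos(phase), amplitude * tf.sin(phase))
        I_pred_a, I_pred_b = forward_model(obj)
        loss = fused_loss(I_meas_a, I_pred_a, I_meas_b, I_pred_b, gamma)  # Eq. (5)
    grads = tape.gradient(loss, [amplitude, phase])
    optimizer.apply_gradients(zip(grads, [amplitude, phase]))
```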


Fig. 2. A flowchart of the ADP algorithm presented in this work adapted from the flowchart shown by Seifert et al. [20], expanded to include the off-axis signal. The path of the data from left to right is the forward physics model. The on-axis data path is shown in orange, and the additional data path of the off-axis signal is shown in teal. The object layer starts with an initial guess, which is adapted by the automatic differentiation optimiser to minimise the MSE between the predicted and experimental diffraction patterns.


ADP uses efficient optimisers developed by the machine learning community. The algorithm is implemented using the TensorFlow [30] and Keras [31] libraries. The Keras library allows us to map the reconstruction task in ptychography onto an architecture similar to deep or multi-layered neural networks. The physics-based layer-by-layer approach makes the algorithm highly modular, as layers can be modified or new layers added to extend the physics model. Additionally, the ability to use alternative loss functions, such as one incorporating two data flows as in Eq. (5), is paramount for controlling the optimisation process.
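As an illustration of this layer-by-layer structure, a propagation step can be wrapped in a fixed, non-trainable Keras layer, with one instance per sensor path. The class below is a hypothetical sketch of this modularity, not the layer definition from Code 1 [27].

```python
import tensorflow as tf

class PropagationLayer(tf.keras.layers.Layer):
    """Fixed (non-trainable) layer applying an angular-spectrum transfer
    function H to a complex wave field; one instance per sensor path."""

    def __init__(self, transfer_function, **kwargs):
        super().__init__(**kwargs)
        # precomputed transfer function, e.g. a (shifted) angular spectrum kernel
        self.H = tf.constant(transfer_function, dtype=tf.complex64)

    def call(self, exit_field):
        detector_field = tf.signal.ifft2d(tf.signal.fft2d(exit_field) * self.H)
        return tf.abs(detector_field) ** 2   # Eq. (3): predicted intensity
```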

3. Experimental setup

The probe field $P(r)$ is formed by shining a laser of wavelength 561 nm (Cobolt Jive 100) at an aperture with a diameter of 500 µm. The aperture is imaged to the object plane to form the probe field, which has a diameter of 1.2 mm at the object plane. The object is mounted on a stage (stepper motor Thorlabs ZFS25B, controller Thorlabs KST101) that is laterally moved to a total of 52 positions $R_i$ with an overlap of $85\%$, above the overlap requirement shown by Bunk et al. [32]. Section 2 of the supplemental material describes the scanning pattern. Our test object, a Thorlabs NBS 1952 resolution target, consists of sets of three horizontal and vertical lines; numbers indicate the lines per mm of each set. The light diffracted by the object is lenslessly captured on a camera chip (Basler ace acA2440-35um) placed 61 mm behind the object. We define two virtual sensors on the camera chip as shown in Fig. 1(a). Sensor $a$ captures the diffraction pattern propagating close to the optical axis and sensor $b$ captures a part of the diffraction pattern shifted 2649 µm away from the optical axis. The effect of the shift distance and sensor size on reconstruction quality is discussed in Section 3 of the supplemental material.

In Fig. 1(b) and Fig. 1(c) we show examples of diffraction patterns captured by sensors $a$ and $b$, respectively. Each virtual sensor region contains $512\times 512$ pixels. Sensor $b$ only captures light that is diffracted at large horizontal angles, so the $\textrm{NA}$ of the system is not changed isotropically: while the $\textrm{NA}$ is increased in the horizontal direction, it remains the same in the vertical direction. As Eq. (1) shows, the resolution of the system depends on its $\textrm{NA}$; we therefore expect the extra signal captured by sensor $b$ to contribute only to a higher horizontal resolution, not to a higher vertical resolution. For each of the 52 scanning positions, two images were recorded by the detector. To properly capture the weak off-axis signal, the exposure time for sensor $b$ was increased to 10 times that of sensor $a$: one image was taken using an exposure time of 50 ms for sensor $a$ and one using an exposure time of 500 ms for sensor $b$.

4. Results

In Fig. 3, we show a composite image of two individual reconstructions. In the upper half, the object reconstructed only from diffraction patterns of sensor $a$ is shown. In the lower half, both sensors are utilised to reconstruct the object. Both images are retrieved on an Nvidia RTX 2070 GPU by running our reconstruction algorithm for 40 epochs, taking approximately 120 seconds. The mixing factor $\gamma$ in Eq. (5) is set to $0$ for the reconstruction in the upper half of Fig. 3. For the reconstruction in the lower half, $\gamma$ is switched from $0$ to $0.5$ after 20 epochs. A mixing factor of $0.5$ means that from that point onward, the MSE of the on-axis signal and that of the off-axis signal are taken into account with equal weight. The measured diffraction patterns are normalised to the exposure time.
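The two details mentioned above, the exposure-time normalisation and the mixing-factor schedule, can be written as two small helper functions; this is an illustrative sketch with names of our choosing, not code from Ref. [27].

```python
def normalise_to_exposure(raw_image, exposure_s):
    """Scale a recorded diffraction pattern to counts per second so that the
    50 ms (sensor a) and 500 ms (sensor b) recordings are directly comparable."""
    return raw_image / exposure_s

def gamma_schedule(epoch, switch_epoch=20):
    """Mixing factor of Eq. (5): 0 (on-axis only) for the first 20 epochs, then 0.5."""
    return 0.0 if epoch < switch_epoch else 0.5
```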


Fig. 3. A composite image of two reconstructions. The upper half of the image represents the transmittance of an object reconstruction from experimental data using only sensor $a$ along the optical axis. The lower half of the image represents the transmittance reconstructed using sensor $b$ in addition to sensor $a$. The active sensor areas are indicated in the inset images in the bottom left and top right corners. The colour bars represent the transmittance of the reconstructed object. The reconstructed transmittance was normalised to 1 for an area on the target known to be clear glass.


We observe for the sets of vertical lines that the highest density of lines that is resolved in the upper half of the image is 48 lines/mm, i.e. a spacing of 20.8 µm. Any denser set of lines remains unresolved for the on-axis reconstruction. In the reconstruction that incorporates the off-axis signal, 80 lines/mm are resolved, equivalent to 12.5 µm. The 80 lines/mm set is the finest set of lines present in our target. Therefore, in the particular geometry of our experimental setup, we gain an improvement in resolution of at least a factor 1.6. Note that, as expected, we do not observe an improved vertical resolution.

In Fig. 4, a cropped view of two sets of vertically oriented lines is shown: the 48 lines/mm and the 80 lines/mm sets. The upper and lower halves are the same reconstruction images as shown in Fig. 3. Alongside the reconstructed image, curves show the intensity values along the horizontal axis, averaged over the full length of the lines. In this image, low intensity represents a line where the light is blocked, and high intensity the surrounding area where the light passes. In Fig. 4 at 48 lines/mm, we see a pattern of three $I_\mathrm{min}$ values and two $I_\mathrm{max}$ values, corresponding to the three-lined pattern of each line set. The pattern is visible for both reconstructions at 48 lines per mm. However, for 80 lines per mm, this pattern is only visible for the combined on-axis and off-axis reconstruction; the line pattern is not resolved in the on-axis-only reconstruction.


Fig. 4. A cropped view of the vertical sets of 48 and 80 lines per mm from Fig. 3. The upper half of each image shows the on-axis reconstruction, and the lower half the mixed on-axis and off-axis reconstruction. The red intensity curves depict the values averaged along the full vertical direction in their respective image halves.


The fringe visibility ($V$) [33] is a measure to quantify the resolution of these line patterns, and is given by

$$V = \frac{I_\mathrm{max}-I_\mathrm{min}}{I_\mathrm{max}+I_\mathrm{min}} \ .$$

Here, $I_\mathrm{max}$ and $I_\mathrm{min}$ denote the maximum and minimum intensities of the signal, respectively. We calculate $V$ using the mean of the two bright $I_\mathrm{max}$ values between the three dark lines and the mean of the three $I_\mathrm{min}$ values. All mean values are calculated along the full lengths of the lines.
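A small sketch of this visibility computation is given below, assuming the averaged line profile and the pixel indices of the three dark lines and two bright gaps are already known; the index arguments are placeholders.

```python
import numpy as np

def fringe_visibility(profile, min_idx, max_idx):
    """Eq. (6) applied to a line profile that has been averaged along the
    length of the lines; min_idx / max_idx locate the three dark lines and
    the two bright gaps between them."""
    i_min = np.mean(profile[min_idx])
    i_max = np.mean(profile[max_idx])
    return (i_max - i_min) / (i_max + i_min)
```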

Table 1 shows several fringe visibility values computed for the transmittance reconstructions shown in Fig. 3 and Fig. 4. Empty values indicate that the line set is not clearly resolved, which occurs when $I_\mathrm{max}$ and $I_\mathrm{min}$ are not distinguishable. For the lines with horizontal orientation, we observe no significant difference in resolution between the on-axis-only reconstruction and the reconstruction including the off-axis signal. Both reconstructions have a decent fringe visibility for the set of 48 lines/mm, and a poor fringe visibility for the set of 56 lines/mm.


Table 1. Fringe visibility of the line sets within the transmittance reconstructions. Empty values indicate that the line set is not clearly resolved, which occurs when $I_\mathrm{min}$ and $I_\mathrm{max}$ are not distinguishable.

Analysing the fringe visibility for the line sets with vertical orientation, we observe a significant improvement in fringe visibility for the object reconstruction that incorporates the off-axis signal. The fringe visibility of this reconstruction at 80 lines/mm is similar to the fringe visibility of the on-axis signal reconstruction at 48 lines/mm.

5. Discussion

The reconstruction using only on-axis data, with $\textrm{NA}_{\textrm{illum}} = 0.01\pm 0.003$ and $\textrm{NA}_{\textrm{det}} = 0.015$, has a theoretical resolution between 20.0 µm and 25.5 µm according to Eq. (1). Using the off-axis data as well increases the detection NA to $\textrm{NA}_{\textrm{det}} = 0.036$, leading to a resolution between 11.4 µm and 13.0 µm in the horizontal direction. The off-axis data does not change the vertical NA, so no effect on the vertical resolution is expected.
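A quick numerical check of Eq. (1) with the numerical apertures quoted above reproduces these resolution ranges.

```python
wavelength_um = 0.561                    # illumination wavelength in micrometres
na_illum = (0.01 - 0.003, 0.01 + 0.003)  # NA_illum with its stated uncertainty

for na_det in (0.015, 0.036):            # on-axis only vs. fused on-/off-axis
    r_best = wavelength_um / (na_illum[1] + na_det)   # largest total NA
    r_worst = wavelength_um / (na_illum[0] + na_det)  # smallest total NA
    print(f"NA_det = {na_det}: R between {r_best:.1f} and {r_worst:.1f} micrometres")
```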

Both our reconstruction based on solely on-axis data and our reconstruction based on the fused on-axis and off-axis data show a resolution that corresponds well to the initially estimated resolution. For the on-axis case the best measured resolution is 20.8 µm and for the fused data we find a resolution of 12.5 µm. This is within the margin of error with which we estimate the illumination numerical aperture.

Remarkably, the gain in resolution does not require a sensor that spans the entire solid angle. In fact, the off-axis sensor samples only light diffracted to the right of the optical axis. For samples with a pronounced 3D structure, for example a blazed transmission grating, this may lead to incorrect reconstructions, and a symmetric sensor configuration may be preferable. However, if the object is known to be two-dimensional, it is sufficient to capture only angles on one side, comparable to the use of single-sideband methods in signal processing [34].

The presented results demonstrate the strength of optimization-based approaches to ptychography, allowing it to be extended to incorporate sensors at different positions and orientations, and in principle even of entirely different types. If the relative position of different sensors is unknown, an intriguing possibility is to use the inherent redundancy in ptychographic data to correct such detectors’ positions, similar to the axial and lateral corrections previously shown.

6. Conclusion

To conclude, we show that the fusion of two laterally displaced sensors in the detector plane improves the resolution in ptychographic reconstructions. We achieve this improvement by extending an optimization-based ptychography framework with two separate data branches to calculate the diffraction intensity distributions in two spatially separated sensors in the detection plane, one of which is placed off the optical axis. By tuning the weight of the individual sensors in the optimization cost function, we can then choose the relative contribution of each sensor to the reconstruction process. Experimentally, sensor fusion results in a significant improvement in the resolution of a reconstructed target object, which agrees with the resolution estimated from the maximum diffraction angle captured by the off-axis sensor.

By fusing multiple sensors, our work enables new applications of ptychography in which several small sensors, possibly with tuned sensitivity and gain, cover a wide numerical aperture in a way that is more flexible than can be achieved with a single sensor. For instance, a robust low-sensitivity sensor could be used to detect the high on-axis intensities while sensitive high-gain sensors are used for the weak diffraction patterns at large angles.

Funding

Nederlandse Organisatie voor Wetenschappelijk Onderzoek (Perspective P16-08, Vici 68047618).

Disclosures

The authors declare no conflict of interest.

Data Availability

Data and analysis methods supporting this work are available in Ref. [26]. Analysis methods are also available as Code 1 (Ref. [27]).

Supplemental document

See Supplement 1 for supporting content.

References

1. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29(8), 1606–1614 (2012). [CrossRef]  

2. T. M. Godden, R. Suman, M. J. Humphry, J. M. Rodenburg, and A. M. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22(10), 12513–12523 (2014). [CrossRef]  

3. J. M. Rodenburg and A. M. Maiden, Ptychography (Springer International Publishing, Cham, 2019), pp. 819–904.

4. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

5. A. C. Hurst, T. B. Edo, T. Walther, F. Sweeney, and J. M. Rodenburg, “Probe position recovery for ptychographical imaging,” J. Phys.: Conf. Ser. 241, 012004 (2010). [CrossRef]  

6. A. M. Maiden, M. J. Humphry, M. C. Sarahan, B. Kraus, and J. M. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]  

7. P. Dwivedi, A. P. Konijnenberg, S. F. Pereira, and H. P. Urbach, “Lateral position correction in ptychography using the gradient of intensity patterns,” Ultramicroscopy 192, 29–36 (2018). [CrossRef]  

8. J. Dou, Z. Gao, J. Ma, C. Yuan, Z. Yang, and L. Wang, “Iterative autofocusing strategy for axial distance error correction in ptychography,” Opt. Lasers Eng. 98, 56–61 (2017). [CrossRef]  

9. L. Lötgering, M. Rose, K. Keskinbora, M. Baluktsian, G. Dogan, U. Sanli, I. Bykova, M. Weigand, G. Schütz, and T. Wilhein, “Correction of axial position uncertainty and systematic detector errors in ptychographic diffraction imaging,” Opt. Eng. 57(08), 1–7 (2018). [CrossRef]  

10. L. Lötgering, M. Du, K. S. E. Eikema, and S. Witte, “zPIE: an autofocusing algorithm for ptychography,” Opt. Lett. 45(7), 2030–2033 (2020). [CrossRef]  

11. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494(7435), 68–71 (2013). [CrossRef]  

12. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

13. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

14. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-Resolution Scanning X-ray Diffraction Microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]  

15. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]  

16. P. Thibault and M. Guizar-Sicairos, “Maximum-likelihood refinement for coherent diffractive imaging,” New J. Phys. 14(6), 063004 (2012). [CrossRef]  

17. B. Enders and P. Thibault, “A computational framework for ptychographic reconstructions,” Proc. R. Soc. A 472(2196), 20160640 (2016). [CrossRef]  

18. S. Kandel, S. Maddali, M. Allain, S. O. Hruszkewycz, C. Jacobsen, and Y. S. G. Nashed, “Using automatic differentiation as a general framework for ptychographic reconstruction,” Opt. Express 27(13), 18653 (2019). [CrossRef]  

19. S. Ghosh, Y. S. G. Nashed, O. Cossairt, and A. Katsaggelos, “ADP: Automatic differentiation ptychography,” 2018 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2018), pp. 1–10.

20. J. Seifert, D. Bouchet, L. Lötgering, and A. P. Mosk, “Efficient and flexible approach to ptychography using an optimization framework based on automatic differentiation,” OSA Continuum 4(1), 121 (2021). [CrossRef]  

21. D. Claus and J. M. Rodenburg, “Diffraction-limited superresolution ptychography in the Rayleigh–Sommerfeld regime,” J. Opt. Soc. Am. A 36(2), A12–A19 (2019). [CrossRef]  

22. H. Zhang, S. Jiang, J. Liao, J. Deng, J. Liu, Y. Zhang, and G. Zheng, “Near-field Fourier ptychography: super-resolution phase retrieval via speckle illumination,” Opt. Express 27(5), 7498–7512 (2019). [CrossRef]  

23. M. Odstrčil, M. Lebugle, M. Guizar-Sicairos, C. David, and M. Holler, “Towards optimized illumination for high-resolution ptychography,” Opt. Express 27(10), 14981–14997 (2019). [CrossRef]  

24. M. Guizar-Sicairos, M. Holler, A. Diaz, J. Vila-Comamala, O. Bunk, and A. Menzel, “Role of the illumination spatial-frequency spectrum for ptychography,” Phys. Rev. B 86(10), 100103 (2012). [CrossRef]  

25. D. Bouchet, J. Seifert, and A. P. Mosk, “Optimizing illumination for precise multi-parameter estimations in coherent diffractive imaging,” Opt. Lett. 46(2), 254–257 (2021). [CrossRef]  

26. K. Maathuis, J. Seifert, and A. P. Mosk, “Sensor Fusion In Ptychography - Data Publication platform of Utrecht University,” Utrecht University (2022), https://doi.org/10.24416/UU01-9363HC.

27. K. Maathuis, J. Seifert, and A. P. Mosk, “Sensor Fusion In Ptychography Supplementary Code,” figshare (2022), https://doi.org/10.6084/m9.figshare.19829194.

28. K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express 18(17), 18453–18463 (2010). [CrossRef]  

29. D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, eds. (2015).

30. M. Abadi, A. Agarwal, P. Barham, et al., “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems,” (2016).

31. F. Chollet, “Keras,” https://keras.io (2015).

32. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, “Influence of the overlap parameter on the convergence of the ptychographical iterative engine,” Ultramicroscopy 108(5), 481–487 (2008). [CrossRef]  

33. J. D. Ellis, Field Guide to Displacement Measuring Interferometry (SPIE, Bellingham, WA, 2014).

34. S. S. Haykin, Communication systems (Wiley, New York, 1978).

Supplementary Material (2)

Code 1: Source code for an automatic differentiation ptychography implementation using multiple sensors fused in reconstruction.
Supplement 1: Supplemental document.
