
Accurate method for correcting the translation position error of ptychography based on quantum particle swarm optimization


Abstract

In ptychography, translation position error causes periodic grid deviation and greatly degrades the reconstruction quality. It is therefore crucial to obtain the precise translation position of the probe with respect to the object. Existing correction methods may fall into a local optimum and miss better solutions. We propose an accurate method based on quantum particle swarm optimization that corrects the translation position error globally and adds randomness to avoid trapping in local optima. In the proposed method, particles in a quantum bound state can appear at any point in the solution space with a certain probability density. In other words, the corrected translation position can be spread over the whole searching space, which gives the search a chance of escaping local optima. Experiments verify that the proposed method enhances the correction accuracy of the translation position error while avoiding local optima.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ptychography is an imaging technique derived from coherent diffractive imaging (CDI) [1,2]. It is regarded as a powerful high-resolution imaging tool and has been widely applied with visible light [3,4], X-rays [5,6], electrons [7–9], extreme ultraviolet radiation [10,11], and terahertz waves [12,13]. In a conventional optical ptychographic system, the probe or the object is mounted on a 2-D translation stage that can be shifted laterally, which expands the field of view. Multiple diffraction patterns covering overlapping regions of the object are collected by a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor and are used to reconstruct the image with a phase recovery algorithm [14–16].

The reconstruction accuracy of ptychography is limited by position errors, including the axial distance error and the translation position error [17]. Because of the protective glass on the CCD, it is difficult to measure the distance from the object to the recording plane, which generates the axial distance error. This error scales the reconstruction pixel size and introduces spatially dependent artifacts [18]. Several algorithms [18–20] have been reported to accurately correct the axial distance. The translation position error is caused by random vibration and idling of the displacement platform; it leads to periodic grid deviation in the iterative computation and degrades the reconstruction quality. Several methods have been proposed to correct the translation position error in ptychography, including the annealing algorithm (pcPIE) [21], the cross-correlation function [22], and the conjugate gradient method [23]. In pcPIE, several random offset values are searched near the optimal position of the previous correction, and the optimal position is then updated. The searching range is narrowed continuously with iterations until the termination condition is reached, but the method easily falls into a local optimum in subsequent corrections. Different from random searches such as pcPIE, a simple and effective method based on the cross-correlation function [22] looks for a subpixel offset at each correction, and a reasonable refinement rate determines the success of correcting the position errors; increasing the refinement rate, however, costs a great deal of time. Chen et al. introduced a 2-D particle swarm optimization (PSO) algorithm to correct the translation position error and demonstrated its advantages in correction accuracy, convergence speed and robustness [24]. In the PSO algorithm, each particle constantly updates its position according to its velocity. Owing to the velocity constraint, the search for a feasible solution can hardly cover the entire searching space [25]. When multiple local optima exist in the searching space, it is difficult to find the best correction position, which decreases the reconstruction accuracy. Although PSO offers the possibility of a global search, it cannot guarantee a better global optimal solution.

To enhance the correction accuracy of the translation position error and avoid local optima, we elaborate a method based on quantum particle swarm optimization (QPSO) [26]. Compared with PSO, QPSO improves the cooperative ability of all the particles, and the particles in a quantum system have richer diversity, which increases the global search performance. In our method, the searching range and the particle number are first set for each translation position. Then, an initial translation position is randomly generated for each particle. By comparing, through the correlation coefficient function, the diffraction intensity calculated for each particle with the real intensity collected by the CCD, the position corresponding to the maximum value of each particle is denoted as its personal best position. Finally, the translation position of each particle is randomly updated by QPSO. In this method, a particle can appear anywhere in the searching range, which adds randomness to the searching path and avoids falling into a local optimal solution.

This paper is organized as follows. Section 2 summarizes the theory of our method. Section 3 demonstrates the implementation of the proposed method on a conventional ptychography system and shows that this method can be used to accurately correct the translation position error. The performance of our method under Gaussian noise at different dose levels is discussed in Section 4. Finally, the conclusion is summarized in Section 5.

2. Correction algorithm

The experimental apparatus of ptychography used for our method is shown in Fig. 1. The sample is fixed on a 2-D translation platform. The probe formed by the beam-expanding system is localized within a finite area of the sample surface by an aperture. The 2-D translation platform is programmed to shift the sample through a grid of overlapping scanning positions, and the diffraction patterns from the sample are collected by the detector. In this paper, the regularized ptychographic iterative engine (rPIE) [16] is chosen as the phase recovery algorithm.

Fig. 1. The experimental system of ptychography.

The flow chart of the correction algorithm is summarized in Algorithm 1. In ptychography, the diffraction intensities $I^{\prime}({{{\bf u}^j}} )$ for different object positions ${\bf r}_0^j = ({x_0^j\textrm{, }y_0^j} )$ with respect to the probe are collected by the CCD. Here, u is the recording plane coordinate, j = 1, 2,…, J, and J denotes the number of diffraction patterns. The initial guesses of the object function and the probe function are set as ${O_0}({{\bf r}_0^j} )$ and ${P_0}({{\bf r}_0^j} )$. In order to find the optimal position in the subsequent correction process, ${O_0}({{\bf r}_0^j} )$ and ${P_0}({{\bf r}_0^j} )$ are fed into the rPIE algorithm for S iterations to perform the initial reconstruction, and $O({{\bf r}_0^j} )$ and $P({{\bf r}_0^j} )$ are attained.

In the correction stage, according to the initial coordinate ${\bf r}_0^j$ on the object plane, the searching range $[{{\bf r}_{\textrm{min}}^j\textrm{, }{\bf r}_{\textrm{max}}^j} ]$ is determined, which is represented as

$$\left\{ \begin{array}{l} {\bf r}_{\textrm{min}}^j = {\bf r}_0^j - {\bf \delta }\\ {\bf r}_{\textrm{max}}^j = {\bf r}_0^j + {\bf \delta } \end{array} \right. \leftarrow \left\{ \begin{array}{l} x_{\textrm{min}}^j = x_0^j - {\delta_x}\\ x_{\textrm{max}}^j = x_0^j + {\delta_x}\\ y_{\textrm{min}}^j = y_0^j - {\delta_y}\\ y_{\textrm{max}}^j = y_0^j + {\delta_y} \end{array} \right.$$
where the correction deviation δ=(δx, δy) is a 2-D vector, and it must ensure that the actual translation position is within the searching range. In this paper, the value of δx is equal to δy, and the searching range is a square area centered on ${\bf r}_0^j$.
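As a minimal illustration (not taken from the authors' code), the search window in Eq. (1) can be built as follows; the names `search_range`, `r0` and `delta` are placeholders of our own.

```python
import numpy as np

def search_range(r0, delta):
    """Return (r_min, r_max) of the square searching window centered on the
    nominal scan position r0 = (x0, y0), following Eq. (1)."""
    r0 = np.asarray(r0, dtype=float)
    delta = np.asarray(delta, dtype=float)    # delta = (delta_x, delta_y)
    return r0 - delta, r0 + delta

# Example: a window of +/- 10 pixels around the nominal position (120, 85)
r_min, r_max = search_range((120.0, 85.0), (10.0, 10.0))
```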

In the searching range, the total number of particles is set as M, and the coordinate of the mth particle is randomly generated as ${\bf r}_m^j = ({x_m^j\textrm{, }y_m^j} )$ by Eq. (2), where m = 1, 2,…, M.

$${\bf r}_m^j = {\bf r}_{\textrm{min}}^j + \alpha ({{\bf r}_{\textrm{max}}^j - {\bf r}_{\textrm{min}}^j} )\leftarrow \left\{ \begin{array}{l} x_m^j = x_{\textrm{min}}^j + \alpha ({x_{\textrm{max}}^j - x_{\textrm{min}}^j} )\\ y_m^j = y_{\textrm{min}}^j + \alpha ({y_{\textrm{max}}^j - y_{\textrm{min}}^j} )\end{array} \right.$$
where α is uniformly distributed in [0,1]. The coordinates of $O({{\bf r}_0^j} )$ and the probe function $P({{\bf r}_0^j} )$ are replaced by the coordinate of each particle and denoted as $O({{\bf r}_m^j} )$ and $P({{\bf r}_m^j} )$, respectively. Then, the calculated diffraction intensity can be expressed as
$$I({{\bf u}_m^j} )= {|{F\{{O({{\bf r}_m^j} )P({{\bf r}_m^j} )} \}} |^2}$$
where F denotes the diffraction propagation calculation.
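A sketch of these two steps is given below, assuming a far-field (Fourier) propagator for F and a hypothetical helper `crop_at` that extracts the probe-sized region of the object at a candidate position; neither is specified in the paper.

```python
import numpy as np

rng = np.random.default_rng()

def init_particles(r_min, r_max, M):
    """Draw M candidate positions uniformly inside the searching window, Eq. (2)."""
    alpha = rng.random((M, 2))               # alpha ~ U[0, 1] for each coordinate
    return r_min + alpha * (r_max - r_min)   # shape (M, 2)

def calc_intensity(obj, probe, r, crop_at):
    """Model the diffraction intensity for a candidate position r, Eq. (3).
    `crop_at(obj, r, shape)` is a placeholder that shifts/crops the object;
    the propagator F is approximated here by a centered 2-D FFT (assumption)."""
    exit_wave = crop_at(obj, r, probe.shape) * probe
    far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(exit_wave)))
    return np.abs(far_field) ** 2
```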

As an evaluation index, the cross-correlation function is used to evaluate the similarity between the collected diffraction intensity $I^{\prime}({{{\bf u}^j}} )$ and $I({{\bf u}_m^j} )$, which can be written as

$$C_m^j = \frac{{\sum\nolimits_{\bf u} {\{{I^{\prime}({{{\bf u}^j}} )- \bar{I^{\prime}}({{{\bf u}^j}} )} \}\{{I({{\bf u}_m^j} )- \bar{I}({{\bf u}_m^j} )} \}} }}{{\sqrt {\sum\nolimits_{\bf u} {{{|{I^{\prime}({{{\bf u}^j}} )- \bar{I^{\prime}}({{{\bf u}^j}} )} |}^2}} } \sqrt {\sum\nolimits_{\bf u} {{{|{I({{\bf u}_m^j} )- \bar{I}({{\bf u}_m^j} )} |}^2}} } }}$$
where $\bar{I^{\prime}}({{{\bf u}^j}} )$ and $\bar{I}({{\bf u}_m^j} )$ are the mean values of the intensities. The larger the value of $C_m^j$, the more accurate its corresponding position ${\bf r}_m^j$. In the tth iteration, the position ${\bf r}_m^j$ that gives the maximum correlation coefficient of the mth particle is denoted as the personal best position ${\bf P}_m^j$. Meanwhile, the position related to the maximum correlation coefficient over all particles is denoted as the global best position Gj. When all M particles are taken into account, there are M personal best positions and one global best position. In the (t + 1)th iteration, if a larger cross-correlation value occurs, the personal best position ${\bf P}_m^j$ and the global best position Gj are replaced by the corresponding coordinate ${\bf r}_m^j$ of the (t + 1)th iteration.
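Eq. (4) is the standard normalized cross-correlation between the measured and modelled patterns; a NumPy sketch (function name ours) is:

```python
import numpy as np

def correlation_coefficient(I_meas, I_calc):
    """Normalized cross-correlation of Eq. (4) between the collected
    diffraction intensity and the intensity modelled for one particle."""
    a = I_meas - I_meas.mean()
    b = I_calc - I_calc.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```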

In QPSO, a particle swarm system is assumed to be a quantum system in which each particle will converge to a local attraction position ${\bf A}_m^j$ defined by Eq. (5), and the local attraction position ${\bf A}_m^j$ combines the advantages of the personal best position ${\bf P}_m^j$ and the global best position Gj.

$${\bf A}_m^j = \beta {\bf P}_m^j + ({1 - \beta } ){{\bf G}^j}$$
where β is uniformly distributed in [0,1].

Different from PSO, the particle aggregation of QPSO does not depend on a velocity constraint but on the constraint of an attractive potential field established near the optimal solution. A particle in a quantum bound state can appear at any point in the solution space with a certain probability density, which enables each particle to move in a larger space. Therefore, compared with PSO, the global search capability of QPSO is stronger, and the corrected position is more accurate. The renewal formula of the mth particle is expressed as follows:

$$\left\{ \begin{array}{l} {\bf r}_m^j = {\bf A}_m^j + a|{{{\bf k}^j} - {\bf r}_m^j} |{\textrm{ln}} \frac{1}{u}\textrm{, }u > 0.5\\ {\bf r}_m^j = {\bf A}_m^j - a|{{{\bf k}^j} - {\bf r}_m^j} |{\textrm{ln}} \frac{1}{u}\textrm{, }u \le 0.5 \end{array} \right.$$
where u is uniformly distributed in [0,1], and a is the contraction-expansion coefficient, which determines the convergence speed of the algorithm. A smaller value of a results in a smaller conditional expected value of $|{{{\bf k}^j} - {\bf r}_m^j} |\ln \frac{1}{u}$ and thus a narrower oscillation range of ${\bf r}_m^j$ [27]. In this case, the local search ability of the particle is stronger and the correction process converges faster. On the other hand, a larger a increases the global search performance. However, excessive global search may lead to slow convergence, while excessive local search may cause premature convergence, both of which affect the correction accuracy. It is therefore important to choose an appropriate value of a to balance global and local search. In this paper, a decreases linearly with the iteration number as given by Eq. (7).
$$a = 1 - 0.5\frac{t}{T}$$
where t is the correction iteration number, and T is the total correction number.
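A compact sketch of the update rule in Eqs. (5)–(7) for a single particle might read as follows (2-D positions as NumPy arrays; the mean best position k_mean is the quantity defined by Eq. (8) below):

```python
import numpy as np

rng = np.random.default_rng()

def qpso_update(r, p_best, g_best, k_mean, t, T):
    """One QPSO position update for a single particle, Eqs. (5)-(7)."""
    beta = rng.random()
    A = beta * p_best + (1.0 - beta) * g_best        # local attractor, Eq. (5)
    a = 1.0 - 0.5 * t / T                            # contraction-expansion coefficient, Eq. (7)
    u = rng.random()
    step = a * np.abs(k_mean - r) * np.log(1.0 / u)  # characteristic step of Eq. (6)
    return A + step if u > 0.5 else A - step
```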

In the late correction period, the population diversity deteriorates rapidly and individual particles easily fall into local optima. Compared with PSO, which corrects the translation position error only through individual experience, QPSO introduces the average personal best position of all particles, kj, to improve the cooperative ability of the swarm. Therefore, the global search performance of QPSO is much better. kj is defined in Eq. (8) as

$${{\bf k}^j} = \frac{1}{M}\sum\limits_{m = 1}^M {{\bf P}_m^j} $$

After each correction, Gj is fed into the rPIE algorithm for one iteration, and an updated object function $O({{{\bf G}^j}} )$ and an updated probe function $P({{{\bf G}^j}} )$ are attained. Finally, through the above steps, the translation position is accurately corrected and a high-resolution image can be attained.

Algorithm 1. Pseudocode of the proposed QPSO-based translation position correction method.
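For illustration only, the correction stage of Algorithm 1 for one scan position could be organized as sketched below. It reuses the `correlation_coefficient` and `qpso_update` helpers above; `model_intensity` and `rpie_update` are placeholder callables standing in for Eq. (3) and the rPIE iteration, `r0` and `delta` are 2-element arrays, and clipping the particles to the searching window is our assumption rather than a step stated in the paper.

```python
import numpy as np

def correct_position(I_meas, r0, delta, M, T, model_intensity, rpie_update):
    """QPSO correction of one scan position (correction stage of Algorithm 1)."""
    rng = np.random.default_rng()
    r_min, r_max = r0 - delta, r0 + delta                     # Eq. (1)
    r = r_min + rng.random((M, 2)) * (r_max - r_min)          # Eq. (2)
    c_best = np.full(M, -np.inf)                              # best correlation per particle
    p_best = r.copy()                                         # personal best positions
    g_best, c_glob = r0.copy(), -np.inf                       # global best position

    for t in range(1, T + 1):
        for m in range(M):
            c = correlation_coefficient(I_meas, model_intensity(r[m]))  # Eq. (4)
            if c > c_best[m]:
                c_best[m], p_best[m] = c, r[m].copy()
            if c > c_glob:
                c_glob, g_best = c, r[m].copy()
        k_mean = p_best.mean(axis=0)                          # Eq. (8)
        for m in range(M):
            r[m] = qpso_update(r[m], p_best[m], g_best, k_mean, t, T)   # Eqs. (5)-(7)
            r[m] = np.clip(r[m], r_min, r_max)                # assumption: stay in the window
        rpie_update(g_best)                                   # one rPIE iteration at the best position
    return g_best
```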

3. Experiment

To show the performance of our method, experiments are conducted on a ptychography system. A helium–neon laser beam (λ = 632.8 nm) passes through a beam-expanding collimation system, and the illumination is confined to a region with a radius of about 2 mm on the object. The object is laterally displaced by a 2-D translation stage. 7 × 7 diffraction patterns are collected by an 8-bit CCD camera with 3672 × 5496 pixels and a pixel size of 2.4 μm × 2.4 μm. To verify the correction precision, pcPIE [21] and the correction method based on the cross-correlation function [22], which we name ccPIE, are used as comparison methods. We set the parameters δx = δy = 10 pixels, S = 30, and T = 100. In our experiments, a USAF 1951 resolution chart, a fern stem and a rat tail are chosen as the tested samples. All the computations are performed on the same computer with the configuration CPU: Intel Core i9-10980XE 3.00 GHz, RAM: 256 GB, and GPU: RTX 6000 24 GB. The algorithm of our method consists of three parts, namely the data preprocessing, the correction process and the reconstruction process, corresponding to lines 1–5, lines 6–29 and line 30 of Algorithm 1, respectively. In pcPIE, ccPIE and our method, the data preprocessing codes are the same, so the same $O({{\bf r}_0^j} )$ and $P({{\bf r}_0^j} )$ are attained. Then, the three correction methods are carried out to correct the translation position errors, and the same reconstruction process is performed after each correction.

3.1 Retrieve the amplitude of USAF resolution chart

Figures 2(a)–2(h) display the amplitude of the USAF resolution chart retrieved by pcPIE and by our method with 5, 10, 15 and 30 random searches per correction. It can be clearly seen that when the number of random searches is 5 or 10, the lines of group 4 show serious defects in pcPIE. In particular, within the blue box, the lines of group 4/element 5 and group 4/element 6 are difficult to distinguish. Similarly, the lines of group 4 in the blue box are poorly recovered by our method with 5 random searches. As the number of random searches increases, these lines become cleaner. Meanwhile, for the same number of random searches, the reconstruction quality of our method is higher than that of pcPIE. Different from pcPIE and our proposed method, ccPIE looks for the subpixel offset of the reconstruction to correct the translation position, and the magnitude of the shift error signal is typically on the order of 0.01 pixels or less [22]. Therefore, we split a pixel evenly into 100, 1000 and 10000 parts to look for the subpixel offset, and the number of split parts is defined as the refinement rate. The retrieved amplitudes are shown in Figs. 2(i)–2(k). The lines of group 4 in the blue box reconstructed by ccPIE with a position refinement rate of 100 cannot be distinguished until the refinement rate reaches 1000, where a refinement rate of 100 means that the image is registered to within 1/100 of a pixel, so the minimum resolvable subpixel offset is 0.01 pixels. Figure 2(l) is the image collected by an optical microscope. Because the amplitude values of the microscopic image are much larger than those of the reconstruction results, the microscopic image is normalized to facilitate comparison. Next, we compare the normalized intensity distributions of the reconstruction results in the marked area (yellow line); the results are displayed in Fig. 3(a). Figure 3(a) shows that the line width of group 4/element 5 in the microscopic image is about 8.2 pixels, equal to 19.68 μm, which is almost consistent with the line width in the standard USAF resolution chart. Compared with pcPIE and ccPIE, the lines retrieved by our method are closest to the lines in Fig. 2(l). Meanwhile, our method yields a higher contrast ratio than the other methods.

Fig. 2. Experimental results of USAF. (a)-(d) amplitudes retrieved by pcPIE with 5, 10, 15, 30 random searches; (e)-(h) amplitudes retrieved by our method with 5, 10, 15, 30 random searches; (i)-(k) amplitudes retrieved by ccPIE with the position refinement rates of 100, 1000 and 10000; (l) the image collected by an optical microscope; the yellow line is the marked area for the intensity comparison.

Fig. 3. Results analysis of USAF resolution chart. (a) Normalized intensity distribution of the marked area; (b) the MSE changing curves of the three methods; (c) comparisons of the computation time when the three methods reach the threshold MSE < 0.02.

To further compare the reconstruction results, the mean square error (MSE) in Eq. (9) is utilized as an evaluation index for convergence and reconstruction quality, as shown in Fig. 3(b). The number in parentheses behind pcPIE and our method represents the number of random searches. As can be seen in Fig. 3(b), the values of MSE in our method rise suddenly after 30 iterations. This arises from the fact that the global best position Gj is less accurate than the initial coordinate ${\bf r}_0^j$ during the first correction. The error of our method then drops rapidly within several corrections. High correction accuracy is not attained by our method with 5 random searches, but when the number of random searches reaches 10, the reconstruction quality of our method is greatly improved. For pcPIE, when the number of random searches reaches 10, the correction accuracy is still very poor. Comparing the MSE of ccPIE with different refinement rates, we find that with a refinement rate of 100, ccPIE has little effect on correcting the translation position errors, which indicates that the subpixel offset signal is typically smaller than 0.01 pixels. When the refinement rate reaches 1000, the reconstruction quality is improved. In order to compare the convergence speed and the computation time, MSE < 0.02 is selected as the threshold condition, and comparisons of the computation time of the three methods are displayed in Fig. 3(c). After 55 corrections and 29 corrections, respectively, pcPIE and our method with 15 random searches reach the threshold condition MSE < 0.02, with computation times of 15739.9 s and 8243.1 s; the computation time of pcPIE with 15 random searches is thus 1.91 times that of our method with 15 random searches. The final correction accuracy of our method with 15 random searches is improved by 6.79% (|MSE of pcPIE − MSE of our method| / MSE of pcPIE) over that of pcPIE with 15 random searches. When the number of random searches reaches 30, the computation time of pcPIE and our method increases, but the reconstruction quality is improved. pcPIE and our method with 30 random searches cost 17698.5 s and 14215.0 s to reach the threshold condition MSE < 0.02, and the final correction accuracy of pcPIE with 30 random searches is almost the same as that of our method. For ccPIE, after the position refinement rate is increased from 1000 to 10000, the reconstruction quality improves very little, but the computation time of ccPIE with a refinement rate of 10000 is 3.16 times that of ccPIE with a refinement rate of 1000. Comparing our method with 15 and 30 random searches against ccPIE with a refinement rate of 1000, we find that the computation time of ccPIE is 1.97 times and 1.15 times that of our method with 15 and 30 random searches, respectively. The final reconstruction quality of our method with 15 and 30 random searches is improved by 24.88% and 28.86% compared with ccPIE with a refinement rate of 1000. These experimental results illustrate that when the number of random searches is smaller than 15, the reconstruction quality of pcPIE is much lower than that of our method. When the number of random searches is 15 or more, although the accuracy of our method is not greatly improved compared with pcPIE, the calculation time is much shorter. Compared with ccPIE, our method can still achieve higher accuracy in a shorter time.

$$MSE = \frac{{\sum\nolimits_{{{\bf u}^j}} {{{|{I^{\prime}({{{\bf u}^j}} )- I({{\bf u}_{\bf G}^j} )} |}^2}} }}{{\sum\nolimits_{{{\bf u}^j}} {{{|{I^{\prime}({{{\bf u}^j}} )} |}^2}} }}$$
where ${\bf u}_{\bf G}^j$ is the recording plane coordinates corresponding to Gj.
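For reference, Eq. (9) evaluated over all J patterns can be written as follows (function name ours):

```python
import numpy as np

def mse(I_meas_list, I_calc_list):
    """Eq. (9): normalized squared intensity error summed over all J patterns.
    I_meas_list and I_calc_list are sequences of 2-D arrays of equal shape."""
    num = sum(np.sum(np.abs(Im - Ic) ** 2) for Im, Ic in zip(I_meas_list, I_calc_list))
    den = sum(np.sum(np.abs(Im) ** 2) for Im in I_meas_list)
    return num / den
```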

3.2 Retrieve the amplitude of the fern stem

The fern stem is chosen as a new sample to repeat the contrast experiment. In order to clearly demonstrate the recovery quality of the fern stem, the area with more cells is selected for presentation, and the reconstruction results of the three methods are shown in Figs. 4(a)–4(f). The image collected by the optical microscope, shown in Fig. 4(g), is chosen as the reference. The number at the bottom right corner represents the root mean squared error (RMSE) between the reconstruction result and the microscopic image, calculated by Eq. (10). For the same number of random searches, the RMSE of our method is smaller than that of pcPIE, which means the reconstruction quality of our method is better. Compared with ccPIE, although the reconstruction quality of our method with 15 random searches is relatively poor, it improves as the number of random searches increases. When the number of random searches is 30, the reconstruction quality of our method is almost the same as that of ccPIE. The MSE of the three methods is presented in Fig. 4(h). Compared with our method with 15 random searches, the final correction accuracy of our method with 30 random searches is improved by 71.77%. For the same number of random searches, the final correction accuracy of our method is almost the same as that of pcPIE, but our method converges faster. When the correction number reaches 15, the MSE of our method with 15 random searches begins to level off at a value of 0.0747, whereas pcPIE with 15 random searches needs 94 corrections to reach MSE < 0.0747; the computation time of pcPIE with 15 random searches is thus 6.83 times that of our method with 15 random searches. MSE < 0.03 is selected as the threshold to calculate the computation time of pcPIE with 30 random searches, our method with 30 random searches, and ccPIE with refinement rates of 1000 and 10000; their corresponding computation times are 29565.9 s, 19392.6 s, 11736.4 s and 38255.2 s, respectively. The experimental results show that, compared with pcPIE, our method improves the correction speed, and compared with ccPIE, the corrected result of our method is more stable and accurate.

$$RMSE = \frac{{\sqrt {\sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {{{|{{O_{\textrm{mic}}}({m,n} )- {O_{\textrm{rec}}}({m,n} )} |}^2}} } } }}{{M \times N}}$$
where (m, n) is the pixel coordinate of the object, and M and N are the length and width of the object, respectively. Omic denotes the microscopic image, and Orec is the reconstruction result.
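A direct transcription of Eq. (10) as written (square root of the summed squared difference, divided by the pixel count) might look like:

```python
import numpy as np

def rmse(O_mic, O_rec):
    """Eq. (10), as printed: sqrt of the summed squared difference between
    the microscopic image and the reconstruction, divided by M x N pixels."""
    M, N = O_mic.shape
    return np.sqrt(np.sum(np.abs(O_mic - O_rec) ** 2)) / (M * N)
```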

Fig. 4. Reconstruction results of the fern stem. (a)-(d) amplitudes retrieved by pcPIE and our method with 15 and 30 random searches; (e) and (f) amplitudes retrieved by ccPIE with the position refinement rates of 1000 and 10000; the number at the bottom right corner represents RMSE between the reconstruction result and the microscopic image; (g) the image collected by an optical microscope; (h) comparisons of MSE of three methods.

3.3 Retrieve the phase of the rat tail

In order to further verify the effectiveness of our method, a rat tail with rich phase information is chosen as a new sample to repeat the contrast experiment, and the results are displayed in Figs. 5(a)–5(f). Similar to the results for the fern stem, the reconstruction quality of both pcPIE and our method becomes higher as the number of random searches increases. Compared with pcPIE, the MSE of our method converges faster. In this experiment, MSE < 0.03 is selected as the threshold to calculate the computation time. With 15 random searches, the computation time of pcPIE is 5.21 times that of our method, and the final reconstruction quality of our method is improved by 21.25%. Meanwhile, the reconstruction quality of our method with 15 random searches is almost consistent with that of ccPIE with a refinement rate of 1000. When the number of random searches reaches 30, the reconstruction quality of our method is improved by 16.74% compared with ccPIE with a refinement rate of 1000. The experimental results show that, compared with pcPIE, our method achieves the same correction accuracy in less time, and compared with ccPIE, our method sacrifices some computational efficiency in exchange for improved correction accuracy.

Fig. 5. Reconstruction results of the rat tail. (a)-(d) phases retrieved by pcPIE and our method with 15 and 30 random searches; (e) and (f) phases retrieved by ccPIE with the position refinement rates of 1000 and 10000; (g) comparisons of MSE of three methods.

4. Discussion

A numerical simulation of the ptychographic reconstruction is carried out with an original complex image composed of the amplitude “cameraman” and the phase “westconcordorthophoto” to further compare the reconstruction accuracy. In the simulation, the working wavelength is λ = 632.8 nm. The complex image is illuminated through an aperture with a radius of 209 pixels, the image is moved by a 2-D translation platform, and a grid of 7 × 7 positions with a scanning step of 42 pixels is used. The axial distance between the object and the CCD is set to 35 mm. Gaussian noise at different levels is added to the simulation. The diffraction patterns consist of 1400 × 1400 pixels with a pixel size of 2.4 μm × 2.4 μm. A random position error within 8 pixels is added to the simulation. In order to ensure that the actual coordinate is within the searching range, the parameter δ is set to 10 pixels. Other parameters in the simulations are consistent with those in the experiments.

In this section, the performance of the three methods under 20 dB, 30 dB, and 40 dB Gaussian noise is discussed. The comparisons of the reconstruction results under 30 dB Gaussian noise are presented in Table 1, and the error analyses under different Gaussian noise levels are displayed in Fig. 6. Table 1 shows that as the number of random searches increases, both the amplitude error and the phase error of pcPIE and of our method decrease. For the same number of random searches, the reconstruction quality of our method is higher than that of pcPIE. For ccPIE with a refinement rate of 1000, periodic grids are still clearly visible in the amplitude, which means the translation position errors are not effectively corrected in this situation. When the refinement rate is increased to 10000, the periodic grids are effectively eliminated. Figure 6 shows that under different Gaussian noise levels, the three methods can adjust their parameters (the number of random searches or the refinement rate) to effectively correct the translation position error. Under 20 dB Gaussian noise, pcPIE with 15 random searches fails to obtain an accurate translation position; only when the number of random searches reaches 30 is the average position error Δravg, calculated by Eq. (11), controlled within 2 pixels. In our method, whether the number of random searches is 15 or 30, an average position error of Δravg < 2 pixels can be achieved, although our method with 30 random searches achieves higher accuracy. For ccPIE with a refinement rate of 1000, the position error is not corrected effectively, which indicates that the subpixel offset is typically smaller than 0.01 pixels in our simulation. When the refinement rate reaches 10000, the average position error Δravg is greatly decreased. As the noise level decreases, the variation trend of Δravg hardly changes. Notably, when the noise level is equal to 40 dB, the correction accuracy of ccPIE with a refinement rate of 10000 is higher than that of our method with 30 random searches, whereas when the noise level surpasses 40 dB, the correction accuracy of ccPIE is worse than that of our method. In Figs. 6(d) and 6(f), the MSE of the three methods decreases continuously during the iteration, but in Fig. 6(b), the MSE increases after a short period of decrease. We consider that this is because the noise level is too high, so the noise is amplified during the iteration process. We select the average position error Δravg < 2 pixels as the threshold condition to compare the computation time of the three methods. Under 20 dB Gaussian noise, the computation times of pcPIE, our method and ccPIE are 1347.4 s, 633.4 s, and 17114.7 s, respectively; under 30 dB Gaussian noise, they are 1505.4 s, 857.9 s, and 12842.7 s; and under 40 dB Gaussian noise, they are 2179.5 s, 649.9 s, and 12147.7 s. The simulation results demonstrate that, compared with pcPIE, our method achieves higher accuracy in a shorter time. Compared with ccPIE under Gaussian noise at high dose levels, our method quickly obtains more accurate translation positions. For low dose levels, the correction accuracy of our method is not as good as that of ccPIE, but the correction speed is still faster.

$$\Delta {{\bf r}_{\textrm{avg}}} = {{\sum\limits_{j = 1}^J {({{{\bf G}^j} - {\bf r}_{\textrm{true}}^j} )} } / J}$$
where ${\bf r}_{\textrm{true}}^j$ is the actual coordinate of each scanning position.
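Implemented literally, Eq. (11) is the average deviation over the J scan positions; a NumPy sketch (function name ours) is:

```python
import numpy as np

def avg_position_error(G, r_true):
    """Eq. (11) as printed: mean deviation of the corrected positions
    G (J x 2 array) from the ground-truth positions r_true (J x 2 array)."""
    return np.mean(np.asarray(G) - np.asarray(r_true), axis=0)
```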

Fig. 6. Error analyses under Gaussian noise at different levels. (a)-(f) the variation of the position average error Δravg and MSE under 20 dB, 30 dB and 40 dB Gaussian noise.

Table 1. The comparisons of reconstruction results under 30 dB Gaussian noise

5. Conclusion

In this paper, we propose a method to improve the correction accuracy of the translation position error and to avoid trapping in local optima in ptychography. In our method, the correlation coefficient function serves as an evaluation index to find the global best position and the personal best positions. The QPSO algorithm is introduced to add randomness to the position update, which increases the possibility of the particles finding the global optimum in the whole feasible solution space. The comparisons of the experimental and simulation results demonstrate that our method improves the correction accuracy.

It is worth noting that compared with certain existing correction methods, our method still demands considerable computational time. The computational efficiency of our method is primarily influenced by the number of particles M and the coefficient a. Further investigations will be focused on implementing the parallel computing technique and optimizing parameters to improve the computational efficiency. In addition, many improvements to QPSO have been proposed to improve the ability of the global search, and we will further improve the accuracy of position correction by combining these new developments of QPSO.

Funding

National Natural Science Foundation of China (62205133); Natural Science Foundation of Jiangsu Province (BK20190954).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. B. Edo, D. J. Batey, A. M. Maiden, et al., “Sampling in x-ray ptychography,” Phys. Rev. A 87(5), 053850 (2013). [CrossRef]  

2. Z. Qin, Z. Xu, R. Li, et al., “Initial probe function construction in ptychography based on zone-plate optics,” Appl. Opt. 62(14), 3542–3550 (2023). [CrossRef]  

3. S. McDermott and A. Maiden, “Near-field ptychographic microscope for quantitative phase imaging,” Opt. Express 26(19), 25471–25480 (2018). [CrossRef]  

4. W. Xu, H. Xu, Y. Luo, et al., “Optical watermarking based on single-shot-ptychography encoding,” Opt. Express 24(24), 27922–27936 (2016). [CrossRef]  

5. C. C. Polo, L. Pereira, P. Mazzafera, et al., “Correlations between lignin content and structural robustness in plants revealed by X-ray ptychography,” Sci. Rep. 10(1), 6023 (2020). [CrossRef]  

6. L. Grote, M. Seyrich, R. Döhrmann, et al., “Imaging Cu2O nanocube hollowing in solution by quantitative in situ X-ray ptychography,” Nat. Commun. 13(1), 4971 (2022). [CrossRef]  

7. F. Allars, P. H. Lu, M. Kruth, et al., “Efficient large field of view electron phase imaging using near-field electron ptychography with a diffuser,” Ultramicroscopy 231, 113257 (2021). [CrossRef]  

8. Z. Chen, Y. Jiang, Y. T. Shao, et al., “Electron ptychography achieves atomic-resolution limits set by lattice vibrations,” Science 372(6544), 826–831 (2021). [CrossRef]  

9. Z. Ding, S. Gao, W. Fang, et al., “Three-dimensional electron ptychography of organic-inorganic hybrid nanostructures,” Nat. Commun. 13(1), 4787 (2022). [CrossRef]  

10. Y. W. Kim, D. G. Lee, S. Moon, et al., “Actinic patterned mask imaging using extreme ultraviolet ptychography microscope with high harmonic generation source,” Appl. Phys. Express 15(7), 076505 (2022). [CrossRef]  

11. P. D. Baksh, M. Ostrcil, M. Miszczak, et al., “Quantitative and correlative extreme ultraviolet coherent imaging of mouse hippocampal neurons at high resolution,” Sci. Adv. 6(18), eaaz3025 (2020). [CrossRef]  

12. L. Rong, F. Tan, D. Wang, et al., “High-resolution terahertz ptychography using divergent illumination and extrapolation algorithm,” Opt. Laser Eng. 147, 106729 (2021). [CrossRef]  

13. L. Valzania, T. Feurer, P. Zolliker, et al., “Terahertz ptychography,” Opt. Lett. 43(3), 543–546 (2018). [CrossRef]  

14. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

15. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

16. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

17. X. He, S. P. Veetil, Z. Jiang, et al., “Analysis of influence of object–detector distance error on the reconstructed object and probe in ptychographic imaging,” AIP Adv. 12(6), 065312 (2022). [CrossRef]  

18. L. Loetgering, M. Du, K. S. E. Eikema, et al., “zPIE: an autofocusing algorithm for ptychography,” Opt. Lett. 45(7), 2030–2033 (2020). [CrossRef]  

19. J. Dou, Z. Gao, J. Ma, et al., “Iterative autofocusing strategy for axial distance error correction in ptychography,” Opt. Laser Eng. 98, 56–61 (2017). [CrossRef]  

20. R. Ma, D. Yang, T. Yu, et al., “Sharpness-statistics-based auto-focusing algorithm for optical ptychography,” Opt. Laser Eng. 128, 106053 (2020). [CrossRef]  

21. A. M. Maiden, M. J. Humphry, M. C. Sarahan, et al., “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]  

22. F. Zhang, I. Peterson, J. Vila-Comamala, et al., “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef]  

23. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]  

24. Y. Chen, T. Xu, J. Zhang, et al., “Precise and independent position correction strategy for Fourier ptychographic microscopy,” Optik 265, 169481 (2022). [CrossRef]  

25. F. van den Bergh, “An Analysis of Particle Swarm Optimizers,” Ph.D. dissertation, University of Pretoria (2007).

26. J. Sun, B. Feng, and W. Xu, “Particle swarm optimization with particles having quantum behavior,” Congress on Evolutionary Comput. 1, 325–331 (2004).

27. J. Sun, W. Fang, X. Wu, et al., “Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection,” Evolutionary computation 20(3), 349–393 (2012). [CrossRef]  
