
Direct model-based wavefront sensorless method with a fixed number of measurements


Abstract

In wavefront sensorless (WFSL) adaptive optics, the intensity image in the observation plane, rather than a wavefront sensor, is utilised to estimate the input aberration. The number of intensity measurements is critical in applications with rapidly changing phase aberrations, such as astronomical imaging. This paper details two direct WFSL methods that need only a fixed number of intensity measurements to estimate the input aberration. The proposed methods adopt a zonal rather than a modal approach to estimate the phase aberration. Simulation results demonstrate that, with the proposed methods, the aberration correction percentage can rise by approximately 70% for large aberrations.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Adaptive optics (AO) employs a closed-loop control system which typically comprises a deformable mirror, a wavefront sensor and a controller [1]. In recent years, however, researchers have introduced a new variant of AO in which the wavefront sensor is absent, hence the name wavefront sensorless (WFSL) AO [2]. There are two typical set-ups for WFSL AO [3]. These configurations are shown in Fig. 1. In Fig. 1(a), the pinhole acts as a spatial filter with a radius of $1.22{\lambda z}/{D}$, where $\lambda$ is the light wavelength, $z$ is the distance to the aperture and $D$ is the aperture diameter, thereby passing only the central spot. The photo-diode then measures the power of this spot. In Fig. 1(b), the spatial filtering is performed in software and the image sensor performs both measurement and observation. In both cases, the light intensity is utilised as feedback to estimate the aberration and subsequently determine the controller’s output to the deformable mirror [4]. There are other methods, referred to as phase retrieval methods, which also estimate the phase aberration without a wavefront sensor [5–7]. Such methods, however, differ from WFSL methods in that the former utilise two planes, conventionally the image plane and the far-field plane, while the latter use only the far-field plane. Furthermore, WFSL methods utilise intensity image sensors, which are also used for observation, to acquire the intensity image, while phase retrieval methods need a light-field image sensor, also known as a plenoptic camera, to acquire the light field in the far-field plane [6,7].

Fig. 1. Two typical WFSL set-ups. The WFSL configuration in (a) is based on the combination of a pinhole and a photo-diode. The WFSL configuration in (b) is based on the image sensor, which is used for observation.

WFSL AO methods can be split into model-based and model-free methods [4]. The model-free methods are based on iterative search algorithms, such as the Fienup method [5], the genetic algorithm (GA) and hill-climbing [3,8,9]. Although such methods have low computational complexity per iteration, they require a large number of intensity measurements to correct the aberration [2]. This property makes them undesirable for real-time image processing [10,11]. Furthermore, they suffer from over-sampling and under-sampling of the design space [2]. The model-based approaches, on the other hand, are deterministic, as the search space, the objective function, or both, are parametrically modelled [12]. As a result, the number of measurements decreases, yet at the cost of higher computational complexity per iteration [10]. This paper presents two direct model-based WFSL methods, which estimate the input phase aberration using a fixed number of intensity measurements. To reach this goal, the original circular pupil is first partitioned into a number of smaller square segments/sub-pupils, and the phase within each sub-pupil is approximated by a piston element. It is then demonstrated that dividing the intensity image pertaining to the aberrated circular pupil by the diffraction-limited intensity image pertaining to a single sub-pupil results in a Fourier series whose coefficients are functions of the amplitudes of the piston elements. Equating the Fourier coefficients extracted from the measured intensity image with these functions leads to a set of non-linear equations, whose solution is an estimate of the piston amplitudes. Eventually, the input phase aberration can be estimated by interpolating the extracted piston amplitudes of the sub-pupils. The trade-off between the accuracy and the computational load of these methods can be adjusted by changing the number of segments. Model-based WFSL methods developed to date require at least $N+1$ intensity measurements to estimate an $N$-mode aberration [2]. The proposed methods, however, do not depend on the number of modes in an aberration: unlike the preceding methods, which utilise a modal representation of the aberration, they employ a zonal representation. As a result, a higher accuracy can be expected, since these methods aim to correct the aberration as a whole, not just a limited number of modes [13,14]. The proposed methods will be shown to require only two intensity measurements for aberrations with magnitudes less than 1 [rad] RMS, six measurements for magnitudes between 1 and 3 [rad] RMS, and ten measurements for magnitudes between 3 and 5 [rad] RMS.

This paper is structured as follows. In Section 2, the proposed WFSL methods are discussed in detail. In Section 3, the performance of the proposed methods is examined under different conditions. Section 4 concludes the article.

2. Theory

In Fourier optics, the light field in the observation plane is proportional to the Fourier transform of the light field in the pupil plane. Mathematically, it is expressed as [1]

$$U(x,y)= C\cdot\mathcal{F}\left\{p(\alpha,\beta)\exp\big(j\phi(\alpha,\beta)\big)\right\},$$
where $U(x,y)$ is the light field in the observation plane, $\mathcal {F}$ denotes the Fourier transform, $C$ is a complex constant, $p$ is the pupil function, and $(x ,y)$ and $(\alpha,\beta)$ denote the Cartesian coordinates in the observation and pupil planes, respectively.

In an imaging system, the total point spread function (PSF) is defined as the convolution of the impulse response of the imaging system and that of the light propagation medium. Assuming that the imaging system is linear and spatially invariant (LSI), the intensity image relates to the PSF as [1]

$$I(x,y) = O(x,y) \odot \Delta(x,y) + \Xi (x,y),$$
where $\odot$ is the convolution operator, $\Delta$ is the PSF, $O$ denotes the object, $I$ is the distorted intensity image and $\Xi$ represents the noise. The PSF can be obtained from the estimated light phase. The original image can then be extracted using deconvolution methods [15]. Ignoring the noise as well as assuming a point source as the object, the distorted intensity image becomes equivalent to the PSF. The light intensity is equivalent to a constant times the absolute square of the light field in the observation plane and thus is given by
$$I(x,y)= C'\cdot\lvert U(x,y) \rvert ^2.$$
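For illustration, the relation between the pupil-plane field and the observation-plane intensity in Eqs. (1) and (3) can be simulated numerically with a discrete Fourier transform. The following Python sketch is illustrative only; the grid size, pupil radius and test aberration are assumptions, not taken from the paper.

```python
import numpy as np

def observation_intensity(phi, pupil):
    """Sketch of Eqs. (1) and (3): I(x,y) is proportional to |F{p * exp(j*phi)}|^2.

    phi   : 2-D array, pupil-plane phase in radians
    pupil : 2-D array, pupil (aperture) function, same shape as phi
    """
    field = pupil * np.exp(1j * phi)                            # pupil-plane light field
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))   # Eq. (1), up to the constant C
    return np.abs(U) ** 2                                        # Eq. (3), up to the constant C'

# Example usage with an assumed 256x256 grid, circular pupil and smooth test aberration
n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
pupil = (x**2 + y**2 <= 0.3**2).astype(float)
phi = 1.5 * np.exp(-((x - 0.05)**2 + y**2) / 0.02)
I = observation_intensity(phi, pupil)
```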

2.1 Phase segmentation

The light phase can be approximated to consist of small segments of tip, tilt and piston [4]. That is, the phase within each segment can be expressed by

$$\phi_i(\alpha,\beta) = \left( b_i \alpha + c_i \beta + a_i \right)\cdot \mathrm{rect}\bigg(\frac{\alpha-\alpha_i}{\tau_0} \bigg) \mathrm{rect}\bigg(\frac{\beta-\beta_i}{\tau_0}\bigg),$$
where $\mathrm {rect}(\cdot)$ denotes the rectangular pulse, $\tau _0$ is the width of the segments, and $b_i \alpha$, $c_i \beta$ and $a_i$ are the tip element, the tilt element and the amplitude of the piston element of the $i^{\mathrm {th}}$ segment, respectively. For better visualisation, the segmentation of a random phase aberration with a magnitude of $1.5\pi$ [rad] is shown in Fig. 2, together with the segment numbering scheme used here. Segments with more than half of their area falling outside the original pupil are discarded; the remaining segments are treated as complete segments. Inserting Eq. (4) into Eq. (1) leads to a very complicated expression, so a further simplification is made by reducing each phase segment to a pure piston element ($a_i$). Each segment is then represented by
$$\phi_i(\alpha,\beta) = a_i \,\mathrm{rect}\bigg(\frac{\alpha-\alpha_i}{\tau_0}\bigg) \mathrm{rect}\bigg(\frac{\beta-\beta_i}{\tau_0}\bigg).$$

The slopes of tip and tilt, i.e. $b_i$ and $c_i$, can be derived by interpolating the piston elements.
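As an illustration of the piston-only segmentation of Eq. (5), the sketch below replaces the phase inside each square segment by its mean value and discards segments lying mostly outside the pupil, as in Fig. 2. It is an assumption-laden example, not the authors' implementation.

```python
import numpy as np

def piston_segmentation(phi, pupil, seg_px):
    """Approximate each seg_px x seg_px block of `phi` by a piston (its mean value).

    Blocks with more than half of their pixels outside the pupil are discarded,
    mirroring the segment-selection rule described in the text.
    """
    ny, nx = phi.shape
    segmented = np.zeros_like(phi)
    pistons = []
    for iy in range(0, ny - seg_px + 1, seg_px):
        for ix in range(0, nx - seg_px + 1, seg_px):
            blk = np.s_[iy:iy + seg_px, ix:ix + seg_px]
            inside = pupil[blk] > 0
            if inside.sum() < 0.5 * seg_px**2:        # mostly outside the pupil: discard
                continue
            a_i = phi[blk][inside].mean()             # piston amplitude a_i of Eq. (5)
            segmented[blk] = a_i
            pistons.append(a_i)
    return segmented, np.array(pistons)
```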

Fig. 2. A random phase with a magnitude of $1.5\pi$ [rad]. The segments are denoted by squares and numbered as shown. The dashed circle represents the pupil.

2.2 Extracting the output light field

Considering the pupil segments in Fig. 2 as the sub-pupils of the original pupil and applying the linearity property of the Fourier transform [16], the relation for $U(x,y)$ can be rewritten as

$$U(x,y)=C\cdot\mathcal{F}\big\{p(\alpha,\beta)\exp\big(j\phi (\alpha,\beta)\big)\big\} =C\cdot\sum_{i=1}^N \mathcal{F}\big\{p_i(\alpha,\beta)\exp\big(j\phi_i (\alpha,\beta)\big)\big\},$$
where $N$ is the total number of segments and $p_i(\alpha,\beta)$ is the rectangular pupil function of the $i^{\mathrm {th}}$ segment. Equation (6) states that the light field in the observation plane can be obtained by summing the light field segments in the observation plane.

Note that the number of segments is governed by the size of the aperture, the resolution of the phase and the down-sampling factor; these factors will be elaborated on in Section 2.4.1. Utilising the piston approximation of the phase segments from Eq. (5) and the shifting property of the Fourier transform [16], the $i^{\mathrm {th}}$ light field segment in the observation plane, $U_i(x,y)$, can be shown to follow

$$\begin{aligned} U_i(x,y) = C\cdot\mathcal{F}\Big\{ p_i(\alpha,\beta) \exp\big(j\phi_i(\alpha,\beta) \big) \Big\}&=C\cdot\mathcal{F}\Big\{\mathrm{rect}\bigg(\frac{\alpha-\alpha_i}{\tau_0}\bigg)\mathrm{rect}\bigg(\frac{\beta-\beta_i}{\tau_0}\bigg)\exp\big(j\phi_i \left(\alpha,\beta \right)\big)\Big\}\\ =C\cdot\exp(ja_i) \tau_0^2 &\exp \big({-}j2\pi(\alpha_i x + \beta_i y)\big)\mathrm{sinc}(\tau_0 x) \mathrm{sinc}(\tau_0 y). \end{aligned}$$

By inserting Eq. (7) into Eq. (6), and then exploiting Euler’s formula as well as the trigonometric relations for angle addition, the relation for the light field in the observation plane becomes

$$\begin{aligned} U(x,y) =C\cdot\tau_0^2 \mathrm{sinc}(\tau_0 x)\mathrm{sinc}(\tau_0 y)\Bigg\{\sum_{i=1}^{N} \Big[ \mathrm{cos}(a_i)\mathrm{cos}\big(2\pi(\alpha_i x + \beta_i y)\big)+\sin(a_i)\sin\big(2\pi(\alpha_i x + \beta_i y)\big)\Big]\\ +j\sum_{i=1}^{N}\Big[\mathrm{sin}(a_i)\mathrm{cos}\big(2\pi(\alpha_i x + \beta_i y)\big)-\mathrm{cos}(a_i)\mathrm{sin}\big(2\pi(\alpha_i x + \beta_i y)\big)\Big]\Bigg\}. \end{aligned}$$

In order to obtain more compact expressions, the simplifying notations in Eq. (9) are utilised.

$$c_i = \cos(a_i),\; d_i = \mathrm{sin}(a_i),\;\psi_i = \cos\big(2\pi(\alpha_i x + \beta_i y)\big),\;\gamma_i = \sin\big(2\pi(\alpha_i x + \beta_i y)\big),$$
$$H(x,y) = C' \cdot \tau_0^2 \mathrm{sinc}(\tau_0 x)\mathrm{sinc}(\tau_0 y).$$

Equation (8) can then be rewritten as

$$U(x,y) =\mathrm{exp}\big (j \measuredangle{C} \big )\cdot H(x,y) \bigg[ \sum_{i=1}^N \big( c_i \psi_i + d_i\gamma_i \big) + j \sum_{i=1}^N \big( d_i \psi_i -c_i \gamma_i\big) \bigg],$$
where $\measuredangle {C}$ is the phase angle of $C$.
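To make the structure of Eq. (10) concrete, the field (up to the constant phase factor $\exp(j\measuredangle C)$) can be evaluated directly from the segment centres $(\alpha_i,\beta_i)$ and piston amplitudes $a_i$. The sketch below simply implements Eqs. (9) and (10); the variable names and the unit constant are illustrative assumptions.

```python
import numpy as np

def field_from_pistons(a, centers, tau0, x, y, C_prime=1.0):
    """Evaluate Eq. (10) on the grid (x, y), up to the factor exp(j*angle(C)).

    a       : piston amplitudes a_i [rad]
    centers : iterable of segment centres (alpha_i, beta_i)
    tau0    : segment width
    """
    H = C_prime * tau0**2 * np.sinc(tau0 * x) * np.sinc(tau0 * y)   # Eq. (9b)
    re = np.zeros_like(x, dtype=float)
    im = np.zeros_like(x, dtype=float)
    for a_i, (alpha_i, beta_i) in zip(a, centers):
        psi = np.cos(2 * np.pi * (alpha_i * x + beta_i * y))         # psi_i of Eq. (9a)
        gam = np.sin(2 * np.pi * (alpha_i * x + beta_i * y))         # gamma_i of Eq. (9a)
        re += np.cos(a_i) * psi + np.sin(a_i) * gam
        im += np.sin(a_i) * psi - np.cos(a_i) * gam
    return H * (re + 1j * im)
```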

2.3 Deriving the light intensity

In Subsection 2.2, a compact expression for $U(x,y)$ was derived. In this subsection, the procedure for obtaining an expression for the light intensity is described. According to Eq. (3), the light intensity is obtained by taking the absolute square of the light field in the observation plane and scaling it [1]. Therefore, inserting Eq. (10) into Eq. (3) gives the light intensity as

$$I(x,y) = H^2(x,y) \Bigg[ \Big( \sum_{i=1}^N c_i \psi_i + d_i\gamma_i \Big)^2 + \Big( \sum_{i=1}^N d_i \psi_i -c_i \gamma_i \Big)^2 \Bigg].$$
By utilising the binomial expansion along with the Pythagorean and angle sum/difference trigonometric identities, Eq. (11) is reduced to
$$I(x,y)=H^2(x,y)\bigg[ N \cdot E(x,y) +S(x,y) \bigg],$$
where $E(x,y)=1$, and $S(x,y)$ is equal to
$$\begin{aligned} S(x,y) = \sum_{i=1}^N \sum_{j=i+1}^N & \mathrm{cos}(a_i - a_j)\mathrm{cos}\Big(2\pi\big( (\alpha_i - \alpha_j)x + (\beta_i-\beta_j)y\big)\Big)\\ &+ \mathrm{sin}(a_i - a_j)\sin\Big(2\pi\big((\alpha_i - \alpha_j)x + (\beta_i-\beta_j)y\big)\Big). \end{aligned}$$

According to this equation, $S(x,y)$ consists of sine and cosine harmonics. The first step in the proposed methods is to determine the same harmonics with respect to a specific segment numbering scheme. From Fig. 2, it can be inferred that some pairs of segments have the same harmonic and thus their coefficients should be summed in Eq. (13). For example, according to Fig. 2, the pairs of segments (1,2) and (2,3) with the corresponding harmonics of $\mathrm {cos}\Big (2\pi \big ( (\alpha _1 - \alpha _2)x + (\beta _1-\beta _2)y\big )\Big )$ and $\mathrm {cos}\Big (2\pi \big ( (\alpha _2 - \alpha _3)x + (\beta _2-\beta _3)y\big )\Big )$ share the same harmonic of $\mathrm {cos}(-2\pi \tau _0 x)$ and therefore the corresponding coefficients, i.e. $\mathrm {cos}(a_1 - a_2)$ and $\mathrm {cos}(a_2 - a_3)$, should be summed in Eq. (13). Accordingly, Eq. (13) can be rewritten as

$$\begin{aligned} S(x,y)=\sum_{k_1,k_2} &A_{k_1,k_2}(\mathbf{a})\cdot \mathrm{cos}\big(2\pi( k_1 \tau_0 x + k_2 \tau_0 y)\big)+ B_{k_1,k_2}(\mathbf{a})\cdot\mathrm{sin}\big(2\pi( k_1 \tau_0 x + k_2 \tau_0 y )\big),\\ &k_1,k_2 \in \mathbb{Z},\; \mathbf{a} = [a_1,a_2,\ldots,a_N ], \end{aligned}$$
where $A_{k_1,k_2}(\mathbf {a})$ and $B_{k_1,k_2}(\mathbf {a})$ are functions of the amplitudes of the piston elements.

By rearranging Eq. (12) and utilising the $I(x,y)$ acquired from the image sensor, the empirical $S(x,y)$, denoted by $\hat {S}(x,y)$, can be extracted by

$$\hat{S}(x,y)=\frac{I(x,y)}{H^2(x,y)}-N\cdot E(x,y),$$
where $H(x,y)$ can be obtained from Eq. (9b) provided that $C'$ in this equation is given. The procedure for extracting $C'$ from the acquired intensity image is explained as follows. From Parseval’s theorem [11], the total energy of the light intensity satisfies
$$\begin{aligned} \int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty} I(x,y) \, dx \, dy &=C'\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty}\big \lvert \mathcal{F}\{ p( \alpha ,\beta )\exp\left( j\phi ( \alpha,\beta )\right)\} \big \rvert ^2 dx \, dy ,\\ &= C'\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty}\big \lvert p( \alpha ,\beta )\exp \left({-}j\phi ( \alpha ,\beta )\right) \big \rvert ^2 d\alpha \, d\beta = C'\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty} p^2( \alpha ,\beta )\, d\alpha \, d\beta. \end{aligned}$$

Given that the segments are rectangular, we have

$$C'\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty} p^2(\alpha ,\beta )d\alpha \, d\beta = C'\cdot N\cdot \tau_0^2.$$

Inserting Eq. (17) into Eq. (16), $C'$ is obtained by

$$C' = \frac{\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty} I(x,y) \cdot dx \, dy }{\tau_0^2 N}.$$
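Given a measured intensity image, the constant $C'$ of Eq. (18) and the empirical $\hat{S}(x,y)$ of Eq. (15) can be estimated numerically. The sketch below approximates the integral in Eq. (18) by a Riemann sum; the pixel pitches and the small regularisation constant are assumptions for illustration.

```python
import numpy as np

def extract_S_hat(I, x, y, tau0, N, dx, dy):
    """Estimate C' via Eq. (18) and S_hat via Eq. (15) from a measured intensity image I.

    x, y : coordinate grids of the observation plane;  dx, dy : pixel pitches
    tau0 : segment width;  N : number of segments
    """
    C_prime = I.sum() * dx * dy / (tau0**2 * N)                       # Eq. (18)
    H = C_prime * tau0**2 * np.sinc(tau0 * x) * np.sinc(tau0 * y)     # Eq. (9b)
    eps = 1e-12                                                        # guard against the zeros of sinc
    S_hat = I / (H**2 + eps) - N                                       # Eq. (15) with E(x,y) = 1
    return C_prime, S_hat
```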

Utilising the extracted $C'$, $H(x,y)$ in Eq. (9b) is known. Thus, $\hat {S}(x,y)$ in Eq. (15) is fully determined, since every term on the right-hand side of that equation is known. In the following, the procedure for obtaining the underlying unknowns, i.e. the $a_i$, from the extracted $\hat {S}(x,y)$ is explained. The relation for $S(x,y)$ in Eq. (14) is precisely the definition of a truncated two-dimensional Fourier series [16]. In other words, $\hat {S}(x,y)$ can be represented as

$$\hat{S}(x,y) = \sum_{n,m\in \mathbb{Z}}\bigg [ \hat{A}_{n,m} \cos \Big( 2\pi ( n \tau_0 x- m \tau_0 y ) \Big) + \hat{B}_{n,m}\sin \Big( 2\pi ( n \tau_0 x + m \tau_0 y )\Big) \bigg ].$$

The coefficients of the Fourier series in Eq. (19), namely $\hat {A}_{n,m}$ and $\hat {B}_{n,m}$, can be obtained using the extracted $\hat {S}(x,y)$ from Eq. (15) by [16]

$$\begin{aligned}\hat{A}_{n,m} = \frac{\tau_0}{2}\int\limits_{\frac{1}{\tau_0}}\int\limits_{\frac{1}{\tau_0}}\hat{S}(x,y)\mathrm{cos}\Big( 2\pi ( n \tau_0 x - m \tau_0 y )\Big)dx\, dy, \end{aligned}$$
$$\begin{aligned}\hat{B}_{n,m} = \frac{\tau_0}{2}\int\limits_{\frac{1}{\tau_0}}\int\limits_{\frac{1}{\tau_0}}\hat{S}(x,y)\mathrm{sin} \Big(2\pi( n \tau_0 x + m \tau_0 y )\Big)dx\, dy. \end{aligned}$$

The integrals in Eq. (20) can be evaluated using numerical integration methods such as the trapezoidal rule [17]. Equating the coefficient functions of the series in Eq. (14), namely $A_{k_1,k_2}(\mathbf {a})$ and $B_{k_1,k_2}(\mathbf {a})$, with the Fourier coefficients obtained from Eq. (20), i.e. $\hat {A}_{n,m}$ and $\hat {B}_{n,m}$, leads to a system of non-linear equations. A symmetrically segmented pupil with $\Lambda$ segments along the diameter of the pupil results in $2\Lambda (\Lambda -1)$ harmonics and therefore $4\Lambda (\Lambda -1)$ equations, counting both the sine and cosine equations. For example, a 4-segment pupil results in

$$\begin{aligned}\mathrm{cos} \big( a_{1} - a_{2}\big)+ \mathrm{cos} \big(a_{3} -a_{4}\big)= A_{{-}1,0}, \end{aligned}$$
$$\begin{aligned}\mathrm{cos} \big( a_{1} - a_{4}\big) = A_{{-}1,-1}, \end{aligned}$$
$$\begin{aligned}\mathrm{cos} \big( a_{1} - a_{3}\big)+ \mathrm{cos} \big(a_{2} - a_{4}\big) = A_{0,-1}, \end{aligned}$$
$$\begin{aligned}\mathrm{cos} \big( a_{2} - a_{3}\big) = A_{1,-1}, \end{aligned}$$
$$\begin{aligned}\mathrm{sin} \big( a_{1} - a_{2}\big)+ \mathrm{sin} \big(a_{3} - a_{4}\big) = B_{{-}1,0}, \end{aligned}$$
$$\begin{aligned}\mathrm{sin} \big( a_{1} - a_{4}\big) = B_{{-}1,-1}, \end{aligned}$$
$$\begin{aligned}\mathrm{sin} \big( a_{1} - a_{3}\big)+ \mathrm{sin} \big(a_{2} - a_{4}\big) = B_{0,-1}, \end{aligned}$$
$$\begin{aligned}\mathrm{sin} \big( a_{2} - a_{3}\big)= B_{1,-1}. \end{aligned}$$

Note that the non-linear equations change if the segment size is altered. For example, in Fig. 2, if the segments are enlarged, there may be only two segments in the first row rather than four, and the third and fourth segments will move to the second row. As a result, the harmonic for the segment pair (3,4) changes relative to the arrangement with the smaller segment size. For small apertures, the mean value of the phase, i.e. the piston (the DC component of the phase), is constant and is usually ignored [1]. Hence

$$a_{1}+ a_{2}+ a_{3}+ a_{4}=0.$$
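For reference, the right-hand sides of Eqs. (21a)–(21h) are the Fourier coefficients of Eq. (20), which can be computed numerically from $\hat{S}(x,y)$. The sketch below uses the trapezoidal rule over one period $1/\tau_0$; the grid variables are assumptions and the prefactor simply follows Eq. (20) as written.

```python
import numpy as np
from scipy.integrate import trapezoid

def fourier_coefficients(S_hat, x, y, tau0, n, m):
    """Evaluate A_hat_{n,m} and B_hat_{n,m} of Eq. (20) by 2-D trapezoidal integration.

    x, y  : 1-D coordinate vectors spanning one period of length 1/tau0 in each direction
    S_hat : samples of the extracted S_hat(x, y) on the corresponding grid
    """
    X, Y = np.meshgrid(x, y, indexing="xy")
    arg_c = 2 * np.pi * (n * tau0 * X - m * tau0 * Y)      # cosine argument, Eq. (20a)
    arg_s = 2 * np.pi * (n * tau0 * X + m * tau0 * Y)      # sine argument,   Eq. (20b)
    A = 0.5 * tau0 * trapezoid(trapezoid(S_hat * np.cos(arg_c), x, axis=1), y)
    B = 0.5 * tau0 * trapezoid(trapezoid(S_hat * np.sin(arg_s), x, axis=1), y)
    return A, B
```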

Solving the system of non-linear equations consisting of Eqs. (21) and (22) leads to a number of solutions. There are numerous methods for solving a system of non-linear equations; here, the damped least-squares (DLS) method is used as the solver because of its robustness [18]. The number of solutions, however, can be drastically reduced by imposing appropriate constraints: among the solutions returned by the solver, only those that satisfy the constraints are accepted. A discussion of the number of solutions is presented in Section 2.5.2.
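As an illustration of the solution step, the 4-segment system of Eqs. (21) and (22) can be solved with a Levenberg–Marquardt (damped least-squares) routine. The sketch below uses SciPy's `least_squares` as a stand-in for the DLS solver of [18]; the ground-truth pistons used to generate self-consistent coefficients, and the initial guess, are placeholders rather than values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(a, A, B):
    """Residuals of Eqs. (21a)-(21h) plus the zero-mean constraint of Eq. (22)."""
    a1, a2, a3, a4 = a
    return np.array([
        np.cos(a1 - a2) + np.cos(a3 - a4) - A[(-1, 0)],
        np.cos(a1 - a4) - A[(-1, -1)],
        np.cos(a1 - a3) + np.cos(a2 - a4) - A[(0, -1)],
        np.cos(a2 - a3) - A[(1, -1)],
        np.sin(a1 - a2) + np.sin(a3 - a4) - B[(-1, 0)],
        np.sin(a1 - a4) - B[(-1, -1)],
        np.sin(a1 - a3) + np.sin(a2 - a4) - B[(0, -1)],
        np.sin(a2 - a3) - B[(1, -1)],
        a1 + a2 + a3 + a4,                                   # Eq. (22)
    ])

# Zero-mean ground-truth pistons, used here only to generate self-consistent coefficients;
# in practice A and B come from Eq. (20) applied to the measured S_hat(x, y).
a_true = np.array([0.4, -0.1, 0.3, -0.6])
d = lambda i, j: a_true[i] - a_true[j]
A = {(-1, 0): np.cos(d(0, 1)) + np.cos(d(2, 3)), (-1, -1): np.cos(d(0, 3)),
     (0, -1): np.cos(d(0, 2)) + np.cos(d(1, 3)), (1, -1): np.cos(d(1, 2))}
B = {(-1, 0): np.sin(d(0, 1)) + np.sin(d(2, 3)), (-1, -1): np.sin(d(0, 3)),
     (0, -1): np.sin(d(0, 2)) + np.sin(d(1, 3)), (1, -1): np.sin(d(1, 2))}

sol = least_squares(residuals, x0=0.1 * np.ones(4), args=(A, B), method="lm")
a_est = sol.x   # one candidate; the constraints |a_i| <= M and |a_i - a_{i+1}| <= l (see below) prune the rest
```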

Depending on their magnitudes, aberrations are usually categorised as small for magnitudes of less than 1 [rad] RMS, medium for between 1 and 3 [rad] RMS, large for between 3 and 5 [rad] RMS, and very large for greater than 5 [rad] RMS [19]. Thus, the amplitudes of the phase segments can be bounded by $\lvert a_i \rvert \le M,\, i=1,2, \ldots N$, where $M$ is the upper bound on the segment amplitudes. Furthermore, given that the light phase is a continuous function, the difference between the phase amplitudes of neighbouring segments must be limited. The numbering scheme used in this work is such that segments $i$ and $i+1$ are adjacent provided that they belong to the same row. Therefore, depending on the aberration strength, the difference between adjacent segments can be constrained by $\lvert a_i - a_{i+1} \rvert \le l$, where $l$ is an adjustable real number. In the following, two methods for estimating the segmented and original phase aberrations are presented. The first method is based on down-sampling without an anti-aliasing filter, while the second utilises an anti-aliasing filter. Furthermore, the first method is suitable only for small aberrations, whereas the second does not depend on the aberration magnitude.

2.4 Method 1

Thus far, the procedure for deriving the non-linear equations from the intensity image has been presented. The number of equations depends on the number/size of the segments: the more segments there are, the more equations, and as a result the higher the computational burden. Consequently, to reduce the computational burden, the number of segments needs to be decreased. This reduction, however, negatively affects the accuracy. In the following, the procedure for estimating the segmented phase using the original intensity image is presented.

2.4.1 Impact of phase segmentation on the intensity image

The process of reducing the number of segments consists of two steps: raw down-sampling of the original phase by a factor $N_d$ and zero-order-hold (ZOH) up-sampling by the same factor. For simplicity, the whole process is called ZOH segmentation. Figure 3 depicts a random aberration with a magnitude of $1.5$ [rad] RMS and the corresponding intensity image. Figure 4 illustrates the impact of the ZOH segmentation of the original phase in Fig. 3 on the intensity image for $N_d=2,4,6$.

Fig. 3. (a) and (b) depict a random aberration with a magnitude of $1.5$ [rad] RMS and the corresponding intensity image, respectively.

Fig. 4. The impact of the ZOH segmentation of the original phase in Fig. 3 on the intensity image for $N_d=2,4$ and 6.

2.4.2 Fourier analysis of ZOH segmentation

As discussed before, the ZOH segmentation consists of two major steps of raw down-sampling and ZOH up-sampling. The raw down-sampling is a down-sampling process in which the intermediate samples of the original signal are zeroed rather than being dropped.

Let $f_o[n]$ and $f_u[n]$ denote the original signal of length $N_o$ and the ZOH up-sampled signal, respectively. It can be shown that the magnitude of the Fourier transform of $f_u[n]$ is related to that of $f_o[n]$ by [16]

$$\Big\vert F_u [\omega] \Big\vert = \frac{1}{N_d} \bigg\vert\sum_{k = 0}^{N_d - 1} F_o \Big[\omega-2\pi k \frac{N_o}{N_d}\Big] \bigg\vert\cdot \bigg\vert \frac{\mathrm{sin}(0.5 \omega N_d)}{\mathrm{sin}(0.5 \omega)} \bigg\vert ,$$
where $F_o [\omega ]$ and $F_u [\omega ]$ denote the Fourier transforms of $f_o [n]$ and $f_u [n]$, respectively. This equation can be readily extended to two dimensions to give the relation between a segmented phase and the corresponding intensity image. Here, only the magnitude of the Fourier transform is of interest, since the intensity image is proportional to the squared magnitude of the Fourier transform of the light field.
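A one-dimensional numerical check of Eq. (23) can be made by comparing the DFT magnitude of a ZOH-segmented signal with the replicated and ZOH-weighted spectrum of the original signal. The sketch below is illustrative only: the test signal, its length and the down-sampling factor are assumptions, and DFT-bin shifts of $N_o/N_d$ are used for the replicas.

```python
import numpy as np

# Assumed test signal of length N_o and down-sampling factor N_d (N_d divides N_o)
N_o, N_d = 64, 4
n = np.arange(N_o)
f_o = np.sin(2 * np.pi * 3 * n / N_o) + 0.5 * np.cos(2 * np.pi * 5 * n / N_o)

# ZOH segmentation = raw down-sampling (zero the intermediate samples) + ZOH up-sampling
f_raw = np.where(n % N_d == 0, f_o, 0.0)        # intermediate step, shown only for clarity
f_u = np.repeat(f_o[::N_d], N_d)                # block-wise constant (ZOH-segmented) signal

# Left-hand side of Eq. (23): DFT magnitude of the ZOH-segmented signal
F_u = np.abs(np.fft.fft(f_u))

# Right-hand side of Eq. (23): aliased replicas of F_o weighted by the ZOH (Dirichlet) factor
F_o = np.fft.fft(f_o)
replicas = sum(np.roll(F_o, k * N_o // N_d) for k in range(N_d))
w = 2 * np.pi * n / N_o                          # discrete frequency of bin n
with np.errstate(divide="ignore", invalid="ignore"):
    zoh = np.abs(np.sin(0.5 * w * N_d) / np.sin(0.5 * w))
zoh[0] = N_d                                     # limiting value at w = 0
rhs = np.abs(replicas) / N_d * zoh

print(np.allclose(F_u, rhs))                     # True: the two sides of Eq. (23) agree
```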

2.4.3 Extraction of the intensity image pertaining to a segmented phase

In the previous subsection, the impact of the ZOH phase segmentation on the intensity image was explained. Now, the extraction of the intensity image corresponding to a segmented phase from the original intensity image is discussed. The intensity image corresponding to a segmented phase can be extracted by utilising Eq. (23). Figure 5 depicts the intensity images extracted using Eq. (23) corresponding to the segmented phases associated with Fig. 4(a), 4(b) and 4(c).

As can be seen, the larger the segment size (or, equivalently, the down-sampling factor), the less the extracted image resembles the corresponding true image in Fig. 4, which implies an increase in the inaccuracy of the proposed method 1. This inaccuracy arises from the fact that, as the size of the segments grows, the replicas of the original intensity image overlap more within one period ($2\pi$). This phenomenon, termed aliasing, intensifies when the sampling rate after down-sampling falls below the Nyquist rate [16]. Two cases in which aliasing occurs are shown in Fig. 5(b) and 5(c).

Fig. 5. (a), (b) and (c) show the extracted intensity images using Eq. (23) corresponding to the aberrations in Fig. 4(a), 4(b) and 4(c), respectively.

It is known from optics that the larger the aberration magnitude, the further the intensity image spreads in the observation plane (i.e. the larger its bandwidth), and vice versa [19,20]. As a result, between two aberrations with different magnitudes but the same segment size, the impact of the overlap between the replicas of the original intensity image is less significant for the smaller one. Therefore, larger segment sizes can be utilised for smaller aberrations. Overall, the proposed method 1 is preferred for small aberrations.

2.4.4 Estimation of the original phase

It is well known that the light phase is a continuous function [1]; that is, the phase does not change abruptly locally. The final step in the proposed method 1 is to estimate the original phase by interpolating the extracted piston elements. In this method, sinc interpolation is chosen for this task.
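A minimal sketch of this interpolation step on a separable Cartesian grid is given below; the sample-centring convention is an assumption, and any band-limited interpolation routine could be substituted.

```python
import numpy as np

def sinc_interpolate_2d(pistons, factor):
    """Sinc-interpolate a coarse 2-D grid of piston values up by an integer factor.

    `pistons` holds one value per segment; the result has factor-times as many
    samples per axis, approximating the phase at the original resolution.
    """
    ny, nx = pistons.shape
    # Fine-grid sample positions expressed in units of the coarse grid spacing
    yf = (np.arange(ny * factor) - (factor - 1) / 2) / factor
    xf = (np.arange(nx * factor) - (factor - 1) / 2) / factor
    Ky = np.sinc(yf[:, None] - np.arange(ny)[None, :])    # (ny*factor, ny) kernel
    Kx = np.sinc(xf[:, None] - np.arange(nx)[None, :])    # (nx*factor, nx) kernel
    return Ky @ pistons @ Kx.T                             # separable sinc interpolation
```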

2.5 Method 2

As discussed in the previous section, for large aberrations and a given segment size, the high information loss arising from aliasing means that the ZOH segmentation, even with the smallest segment size, leads to substantial inaccuracy. Therefore, in the following, an alternative method, which can mitigate the aliasing effect, is introduced.

2.5.1 Relation between the phase resolution and intensity image size

The aliasing effect, which occurred in method 1 for large aberrations, can be mitigated by applying an anti-aliasing filter before replicating the original intensity image [20]. The resulting image within the Nyquist frequency is then associated with the down-sampled/segmented phase. Note that the anti-aliasing filtering is performed here simply by multiplying the original intensity image by a low-pass filter, in this case a rectangular function, in the frequency domain (i.e. in the observation plane). This is equivalent to convolving the original (unsegmented) phase with the sinc function corresponding to this filter. Evidently, the higher the down-sampling factor, the smaller the Nyquist frequency and thus the smaller the resulting image [20]. This relation between the phase resolution and the corresponding image size is illustrated in Fig. 6. Note that method 1 used raw down-sampling, in which the number of samples does not actually change but the intermediate samples are zeroed by multiplication with a comb function [16]. Here, true down-sampling, which discards the intermediate samples, is used [16]. Therefore, to distinguish the two, the true down-sampling factor is denoted by $N_c$.
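A sketch of this anti-aliasing step is given below: the original intensity image is multiplied by a centred rectangular (low-pass) window in the observation plane and only the samples inside the new Nyquist band are kept, yielding the smaller image associated with the phase down-sampled by $N_c$. The centring and divisibility assumptions are illustrative, not the authors' exact implementation.

```python
import numpy as np

def antialias_and_crop(I, Nc):
    """Keep only the central 1/Nc x 1/Nc portion of the intensity image I.

    Assumes I is centred (zero frequency at the array centre) and that its
    side lengths are divisible by Nc.
    """
    ny, nx = I.shape
    wy, wx = ny // Nc, nx // Nc
    y0, x0 = (ny - wy) // 2, (nx - wx) // 2
    return I[y0:y0 + wy, x0:x0 + wx]    # rectangular window + crop = anti-aliased, smaller image
```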

Fig. 6. (a) and (d) show the original aberration and the corresponding intensity image, respectively. (e) and (f) show the intensity images corresponding to the down-sampled phases with $N_c=2,4$ in (b) and (c), respectively.

The resolution of the phase, and thus the number of segments within the original circular pupil, is determined by $N_c$. A larger $N_c$ leads to fewer segments within the circular pupil and, as a result, fewer equations. The phase with the original resolution, i.e. $N_c=1$, can be estimated by up-sampling and interpolating the down-sampled phase. Because the main information loss in this method arises from the interpolation, the choice of a suitable interpolation method is important. Both methods 1 and 2 can decrease the computational burden by reducing the number of segments (or, equivalently, increasing the down-sampling factor), which implies a reduction in the number of equations. However, this reduction leads to a partial loss of accuracy in the estimated phase due to the up-sampling and interpolation of the down-sampled phase. Therefore, $N_c$ is a parameter for adjusting the trade-off between computational complexity (speed) and accuracy.

2.5.2 Number of the solutions

Both methods 1 and 2 follow the same procedure to find the solution of the system of nonlinear equations; here, method 2 is chosen for the explanation. To solve the system of nonlinear equations resulting from equating the analytical $S(x,y)$ with the empirical one extracted using Eq. (15), $N_c$ is first initialised such that it leads to a few segments within the circular pupil and therefore a few equations. To better demonstrate the number of unique solutions of the proposed method, the phase is initially chosen to consist of two segments, which leads to the equations

$$\begin{aligned} \mathrm{cos} \big( a_{1} - a_{2} \big )=A_{{-}1,0},\end{aligned}$$
$$\begin{aligned} \mathrm{sin} \big( a_{1} - a_{2} \big )=B_{{-}1,0},\end{aligned}$$
$$\begin{aligned} a_{1} + a_{2} = 0,\end{aligned}$$
$$\begin{aligned}\lvert a_1 \rvert \le M,\; \lvert a_2 \rvert \le M, \;\; \lvert a_1 - a_{2} \rvert \le l. \end{aligned}$$

Obviously, Eq. (24) has $2\alpha$ solutions for $M=l= \alpha \cdot \pi$.

In the next step, $N_c$ is reduced to obtain a new segmented phase with a higher sampling rate than the initial phase, together with a new system of equations. Here, $N_c$ is halved, which doubles the sampling rate of the initial phase. Independently, an estimate of the new segmented phase, with new piston amplitudes, is obtained by up-sampling and interpolating the initial phase by the same factor, i.e. 2. Let the new amplitude vector resulting from the interpolation be denoted by $\mathbf {a}^{(1)}$. To solve the new system of equations, the initial guess of the DLS solver is set to $\mathbf {a}^{(1)}$. Since this initial guess is close to the solution of the new system of equations, the convergence speed of the solver increases as well. The solutions closest to $\mathbf {a}^{(1)}$ are taken as the initial guesses for the next system of equations. This process continues until the desired $N_c$ is reached. Evidently, the number of solutions is equal to the number of solutions at the beginning of the process, i.e. $2\alpha$ for $M=l= \alpha \cdot \pi$. The correct solution among the obtained solutions is then identified using traditional focus-quality metrics such as the fractional encircled energy (EE) [3], which is described shortly. As a result, the proposed methods need $2\alpha$ intensity measurements to output the correct solution. The steps of method 2 are outlined as pseudo-code in Algorithm 1.
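The coarse-to-fine progression described above can be summarised schematically in code. The sketch below is not a transcription of Algorithm 1; `solve_system` and `interpolate_up` are placeholder functions standing for the DLS solve and for the up-sampling/interpolation step, respectively.

```python
def coarse_to_fine(Nc_init, Nc_final, solve_system, interpolate_up, initial_guesses):
    """Schematic of the method-2 refinement loop.

    solve_system(Nc, a_guess) -> piston-amplitude solution of the system built for this Nc,
                                 obtained by the DLS solver started from a_guess
    interpolate_up(a)         -> pistons up-sampled/interpolated to twice the sampling rate
    initial_guesses           -> the 2*alpha candidate solutions of the initial coarse system
    """
    candidates = initial_guesses
    Nc = Nc_init
    while Nc > Nc_final:
        Nc //= 2                                        # halve Nc: double the phase sampling rate
        refined = []
        for a in candidates:
            a_guess = interpolate_up(a)                 # a^(k) used as the solver's initial guess
            refined.append(solve_system(Nc, a_guess))   # solution closest to a_guess
        candidates = refined
    return candidates    # still 2*alpha candidates; the EE metric (Section 2.5.3) selects one
```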

2.5.3 Final solution

Above, it was shown that solving the system of non-linear equations in the proposed methods leads to $2\alpha$ potential solutions for $l=M=\alpha \pi$. For example, in the case of small aberrations, i.e. $l=M=\pi$, the system of non-linear equations gives two potential solutions. To find the correct one, the two phases constructed from these solutions (i.e. from the corresponding segment amplitudes) are introduced sequentially by the DM in the WFSL configuration of Fig. 1(a) to the input aberration, and the fractional EE corresponding to each phase is measured. The phase that results in the higher fractional EE is chosen as the final solution. The fractional encircled energy (EE), or power-in-bucket (PIB), is a metric of AO performance, expressed as a percentage and defined as the proportion of the light energy enclosed within an assumed circle of radius $R_b$ to the total light energy received by the detector with a pupil radius of $R_a$ [21]. Conventionally, $R_b$ is set to the Airy disk radius. The fractional EE in polar coordinates is then given by [21]

$$\mathrm{EE} = \frac{\int\limits_{\rho = 0}^{R_{b}}\int\limits_{\varphi=0}^{2\pi} I(\rho,\varphi)dA}{\int\limits _{\rho = 0}^{R_a}\int\limits_{\varphi=0}^{2\pi} I\left(\rho,\varphi\right)dA} \xrightarrow{dA= \rho d\rho d\varphi} \frac{\int\limits_{\rho = 0}^{R_{b}}\int\limits_{\varphi=0}^{2\pi} I\left(\rho,\varphi\right)\left( \rho d\rho d\varphi \right)}{\int\limits _{\rho = 0}^{R_a}\int\limits_{\varphi=0}^{2\pi} I\left(\rho,\varphi\right)\left( \rho d\rho d\varphi \right)},$$
where $(\rho,\varphi )$ denote the polar coordinates in the image plane. Being a ratio, the fractional EE is robust to intensity fluctuations of point sources such as lasers and stars. The fractional EE can be computed directly from the intensity image in the observation plane. To conduct this computation in software, $R_b$ and $R_a$ should first be converted to numbers of pixels, which is done simply by dividing these parameters by the pixel size of the image sensor.
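A minimal numerical sketch of Eq. (25) on a pixel grid is given below (variable names are assumptions); on a uniform Cartesian grid the area element is constant, so summing pixel values already accounts for the $\rho\, d\rho\, d\varphi$ weighting.

```python
import numpy as np

def fractional_ee(I, Rb_px, Ra_px):
    """Fractional encircled energy of Eq. (25), computed directly on the intensity image.

    Rb_px, Ra_px : bucket radius and detector radius, already converted to pixels
    by dividing the physical radii by the pixel size of the image sensor.
    """
    ny, nx = I.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0      # assume the spot is centred on the array
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(y - cy, x - cx)
    ee = I[r <= Rb_px].sum() / I[r <= Ra_px].sum()
    return 100.0 * ee                             # expressed as a percentage
```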

Algorithm 1. The pseudo-code for direct WFSL method 2

3. Simulation results

In the following subsections, the performance of the proposed WFSL methods is examined for different cases.

3.1 Performance metric

To measure the performance of the methods, a performance metric, called the correction percentage ($\mathrm {CP}$), is defined, which is given by

$$\mathrm{CP} = \begin{cases} \frac{\sigma_o - \sigma_c}{\sigma_o} & \sigma_o \ge \sigma_c \\ 0 & \text{else} \end{cases},$$
where $\sigma _c$ is the RMS of the corrected (residual) phase and $\sigma _o$ is that of the input phase aberration. For $\sigma _c > \sigma _o$, $\mathrm {CP}$ is set to zero, since not only has no correction been achieved, but the intensity image has actually been further distorted.
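Equation (26) translates directly into code; a one-function sketch (variable names assumed, RMS taken over the piston-removed phase) is:

```python
import numpy as np

def correction_percentage(phi_in, phi_res):
    """CP of Eq. (26) from the input and residual (corrected) phase maps."""
    sigma_o = np.sqrt(np.mean(phi_in ** 2))
    sigma_c = np.sqrt(np.mean(phi_res ** 2))
    return (sigma_o - sigma_c) / sigma_o if sigma_o >= sigma_c else 0.0
```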

3.2 Direct WFSL method performance on a random phase

To examine the correction performance of the proposed methods for different aberration strengths, three ten-mode phases with magnitudes of 1, 3 and 5 [rad] RMS, representing the small, medium and large ranges, respectively, are introduced as the input aberration. Note that the first two aberration modes, i.e. tip and tilt, are assumed to have already been removed by a tip-tilt mirror. The Zernike coefficients of these phase aberrations are depicted in Fig. 7, and the phases and the associated intensity images are shown in Fig. 8.

Fig. 7. The Zernike representation of three ten-mode phases excluding tip and tilt modes, with magnitudes of 1, 3 and 5 [rad]. The OSA indexing scheme is used for the Zernike modes. To examine the performance of the proposed WFSL methods, they are applied to these phase aberrations, which represent the three ranges of aberration strengths.

Fig. 8. (a), (b) and (c) show the phases associated with the three sets of Zernike coefficients in Fig. 7, respectively. The corresponding intensity images are shown right below them in (d), (e) and (f).

3.2.1 Extracting the constraints

To determine suitable values for the constraints of the system of nonlinear equations, statistics of the segment amplitudes are needed. Figure 9 depicts the histograms of the segment amplitudes for 500 random aberrations with magnitudes of 1, 3 and 5 [rad] RMS. For each aberration, $N_c=4$, the pupil diameter $D=0.6$ and the desired segment size $\tau _0=0.02$. With this geometry, $N=43$ segments fit inside the pupil. Based on these histograms, a cautious choice is to set $M=\pi,3\pi,5\pi$ for small, medium and large aberrations, respectively. Figure 10 depicts the distribution of the adjacent segments’ amplitude difference, denoted by diff(a), for 500 random aberrations with magnitudes of 1, 3 and 5 [rad] RMS. According to these histograms, a reasonable choice is to set $l=\pi,3\pi,5\pi$ for small, medium and large aberrations, respectively. Evidently, setting smaller values would negatively affect the subsequent phase estimates. The proposed method 2 is applied to the aberrations in Fig. 8. The extracted piston amplitudes, together with the true ones, for $N_c= 4,8$ are depicted in Fig. 11. Furthermore, the segmented phases derived from the extracted piston amplitudes in Fig. 11 are illustrated in Fig. 12. The estimated phases and the corresponding corrected intensity images are shown in Fig. 13.

Fig. 9. (a), (b) and (c) show the histograms of the amplitudes of the segments for 500 random aberrations with magnitudes of 1, 3 and 5 [rad] RMS.

Fig. 10. (a), (b) and (c) show the distribution of the neighbouring segments’ amplitude difference for 500 random aberrations with magnitudes of 1, 3 and 5 [rad] RMS, respectively.

Fig. 11. (a), (c) and (e) depict the extracted piston amplitudes (in green) as well as the true ones (in red) after applying the proposed WFSL method 2 with $N_c= 4$ to the aberrations with magnitudes 1, 3 and 5 [rad] RMS shown in Fig. 8. Fig. (b), (d) and (f) depict the aforementioned for $N_c=8$. $a_i$ in these figures represents the amplitude of the $i^\mathrm {th}$ piston element.

Fig. 12. (a), (b) and (c) depict the derived segmented phases corresponding to the extracted piston amplitudes in Fig. 11(a), 11(c) and 11(e), respectively, and (d), (e) and (f) depict the derived segmented phases corresponding to Fig. 11(b), 11(d) and 11(f).

Fig. 13. (a), (b) and (c) show the estimated phases, and (d), (e) and (f) show the corresponding corrected intensity images after applying the proposed WFSL method 2 with $N_c=4$ to the three aberrations in Fig. 8.

3.3 Overall efficiency examination

To investigate the performance of the proposed methods 1 and 2, they are applied to several batches of 10 random 14-mode phases with magnitudes in the range $[1,5]$ [rad] RMS. To ensure identical conditions for all magnitudes, the total number of iterations of the DLS solver is limited to 3000 in all cases. Furthermore, for aberrations with a magnitude of 1 [rad] RMS, $M=l=\pi$; for magnitudes of 2 and 3 [rad] RMS, $M=l=3\pi$; and for magnitudes of 4 and 5 [rad] RMS, $M=l=5\pi$. Figure 14 depicts the average $\mathrm {CP}$ as a function of the RMS of the aberrations for $N_c=2,4$. As expected, method 1 performs relatively poorly except for small aberrations. The larger the aberration, the larger the search space, and thus the more iterations the solver requires to converge. To examine the effect of the number of iterations on the accuracy, the proposed WFSL methods 1 and 2 are applied to 10 random 14-mode phases with $\sigma _o=3$ and $\sigma _o=5$ [rad] RMS, respectively. The results are shown in Fig. 15. Evidently, the accuracy improves as the number of solver iterations increases.

Fig. 14. The average $\mathrm {CP}$ against the magnitude of the aberrations. Each point denotes the average $\mathrm {CP}$ after applying the proposed WFSL methods 1 and 2 with $N_c=2$ and 4 to 10 random aberrations of the corresponding magnitude.

Fig. 15. The average $\mathrm {CP}$ against the total number of iterations set for the solver. Each point denotes the average $\mathrm {CP}$ after applying the proposed WFSL methods 1 and 2 with $N_c=4$ to 10 random phases with $\mathrm {RMS}=3$ and 5 [rad], respectively.

3.4 Comparison with existing methods

To compare the proposed method 2 with existing methods, the well-regarded model-based method proposed by Booth in [2] is taken as the baseline. In that paper, random 6-mode small aberrations were used for the performance evaluation. To compare under the same conditions, the proposed method 2 is applied to batches of 10 random 6-mode small aberrations with exactly the same modes and magnitudes used by Booth. For the proposed method 2, the pupil diameter $D=0.6$, the desired phase resolution $\tau _0 = 0.02$, and the constraints for small aberrations, i.e. $M=l=\pi$, are set. Furthermore, no interpolation is applied; in other words, the final segment size is also assumed to be $\tau _0 = 0.02$. Thus, no information is lost through interpolation, and the error of the proposed method arises purely from the performance of the DLS solver. Figure 16 compares Booth’s method with the proposed method 2. In terms of accuracy, the proposed method 2 outperforms Booth’s method, and in terms of the number of intensity measurements, Booth’s method needs seven while the proposed method 2 needs only two.

Fig. 16. Comparison between the proposed method 2 and Booth’s method. In this plot, $\sigma _o$ denotes the RMS of the input phase aberration and $\sigma _c$ denotes the RMS of the residual phase after correction.

3.5 Computational complexity

The computational complexity of the proposed WFSL methods is examined using MATLAB by measuring the CPU time required by the methods to output the estimated phase. Table 1 lists the CPU time elapsed for the proposed methods for three different segment numbers. Evidently, there is a non-linear relation between the CPU time and the number of segments.

Table 1. The elapsed CPU time for the proposed WFSL methods against the number of segments.

3.6 Photon-counting noise impact on the performance

Studying the impact of noise on WFSL methods is vital, since they use intensity images as their feedback. Thanks to modern technology, the impact of common device-related noise sources, such as dark and read-out noise, has been dramatically reduced [22]. The dominant noise type in image sensors under low light levels is photon-counting shot noise [23]. Photon-counting noise is inherent to light and arises from its photon nature; it is signal-dependent and obeys the Poisson distribution [23]. Improved sensor technology cannot remove this type of noise, as it is associated with the nature of light itself. In some applications, such as astronomical imaging, the image sensor may well receive on average fewer than 100 photons per pixel per second [23]. A simulation is conducted to examine the performance of the proposed method 2 under different photon levels. Let $\mu$ denote the average number of photons per pixel collected by the sensor during an exposure. Figure 17(a) and 17(b) depict the average $\mathrm {CP}$ versus $\mu$ for the three aberration strengths and $N_c=2,4$. Each point denotes the average CP of 20 phases with a given magnitude but random directions after correction by method 2. According to these figures, the performance of the proposed method 2 declines as $\mu$ decreases.
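For reference, photon-counting shot noise at a mean level of $\mu$ photons per pixel can be simulated by scaling the noiseless intensity image and drawing Poisson variates. The sketch below is an illustrative assumption-based example, not the authors' simulation code.

```python
import numpy as np

def add_photon_noise(I, mu, rng=None):
    """Return a shot-noise-corrupted copy of the (non-negative) intensity image I.

    The image is scaled so that its mean equals `mu` photons per pixel per exposure,
    then each pixel is replaced by a Poisson random count with that expectation.
    """
    rng = np.random.default_rng() if rng is None else rng
    expected = I * (mu / I.mean())            # expected photon count per pixel
    return rng.poisson(expected).astype(float)
```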

Fig. 17. The impact of the photon-counting shot noise on the proposed method 2 performance. (a) depicts the average $\mathrm {CP}$ versus the average number of photons per pixel per exposure for three different aberration strengths and $N_c=2$. (b) depicts the aforementioned for $N_c=4$.

4. Conclusion

By splitting a circular pupil into smaller square sub-pupils/segments and utilising the relation between the light phase and the intensity image, a model for WFSL AO has been established. It was demonstrated that dividing the intensity image pertaining to an aberration by that pertaining to a single sub-pupil leads to a Fourier series whose coefficients are functions of the segment amplitudes. Equating the coefficients derived from the intensity image measured by the image sensor with these coefficient functions results in a system of non-linear equations whose solution gives an estimate of the input phase aberration. It was also shown that the number of equations, and therefore the computational load, can be tuned by down-sampling the phase: larger down-sampling factors lead to fewer equations but lower accuracy. The simulation results for different aberration strengths verified the functionality and effectiveness of the proposed methods. Furthermore, the performance of the proposed WFSL methods under photon-counting shot noise was studied.

Funding

University of Canterbury (UC Doctoral Scholarship).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. Tyson, Principles of Adaptive Optics, 3rd ed. (CRC Press, 2010).

2. M. Booth, “Wave front sensor-less adaptive optics: a model-based approach using sphere packings,” Opt. Express 14(4), 1339–1352 (2006). [CrossRef]  

3. Y. Liu, J. Ma, B. Li, and J. Chu, “Hill-climbing algorithm based on Zernike modes for wavefront sensorless adaptive optics,” Opt. Eng. 52(1), 016601 (2013). [CrossRef]  

4. H. Linhai and C. Rao, “Wavefront sensorless adaptive optics: a general model-based approach,” Opt. Express 19(1), 371–379 (2011). [CrossRef]  

5. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

6. R. Liu, S. Zhao, P. Zhang, H. Gao, and F. Li, “Complex wavefront reconstruction with single-pixel detector,” Appl. Phys. Lett. 114(16), 161901 (2019). [CrossRef]  

7. C. Wu, J. Ko, and C. C. Davis, “Lossy wavefront sensing and correction of distorted laser beams,” Appl. Opt. 59(3), 817–824 (2020). [CrossRef]  

8. S. Zommer, E. N. Ribak, S. G. Lipson, and J. Adler, “Simulated annealing in ocular adaptive optics,” Opt. Lett. 31(7), 939–941 (2006). [CrossRef]  

9. J. V. Sheldakova, A. L. Rukosuev, and A. V. Kudryashov, “Genetic and hill-climbing algorithms for laser beam correction,” Proc. SPIE 5333, Laser Resonators and Beam Control VII (2004).

10. Q. Tian, C. Lu, B. Liu, L. Zhu, X. Pan, Q. Zhang, L. Yang, F. Tian, and X. Xin, “DNN-based aberration correction in a wavefront sensorless adaptive optics system,” Opt. Express 27(8), 10765–10776 (2019). [CrossRef]  

11. H. Yang, O. Soloviev, and M. Verhaegen, “Model-based wavefront sensorless adaptive optics system for large aberrations and extended objects,” Opt. Express 23(19), 24587–24601 (2015). [CrossRef]  

12. H. Song, R. Fraanje, G. Schitter, H. Kroese, G. Vdovin, and M. Verhaegen, “Model-based aberration correction in a closed-loop wavefront-sensor-less adaptive optics system,” Opt. Express 18(23), 24070–24084 (2010). [CrossRef]  

13. M. Segel, E. Anzuola, S. Gladysz, and K. Stein, “Modal vs. zonal wavefront-sensorless adaptive optics for free-space laser communications,” Opt. Info Base Conf. Pap. (2016).

14. E. Anzuola, M. Segel, S. Gladysz, and K. Stein, “Performance of wavefront-sensorless adaptive optics using modal and zonal correction,” Proc. SPIE 10002, 100020J (2016). [CrossRef]  

15. D. A. Fish, J. G. Walker, A. M. Brinicombe, and E. R. Pike, “Blind deconvolution by means of the Richardson–Lucy algorithm,” J. Opt. Soc. Am. A 12(1), 58–65 (1995). [CrossRef]  

16. S. W. Smith, The Scientist and Engineer’s Guide to Digital Signal Processing, 2nd ed. (California Technical Publishing, 1999).

17. Q. I. Rahman and G. Schmeisser, “Characterization of the speed of convergence of the trapezoidal rule,” Numer. Math. 57(1), 123–138 (1990). [CrossRef]  

18. E. H. Bergou, Y. Diouane, and V. Kungurtsev, “Convergence and Complexity Analysis of a Levenberg-Marquardt Algorithm for Inverse Problems,” J. Optim. Theory Appl. 185(3), 927–944 (2020). [CrossRef]  

19. M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” Opt. Lett. 32(1), 5–7 (2007). [CrossRef]  

20. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE Press, 2011).

21. T. B. Andersen, “Accurate calculation of diffraction-limited encircled and ensquared energy,” Appl. Opt. 54(25), 7525–7533 (2015). [CrossRef]  

22. A. G. Basden, “Analysis of electron multiplying charge coupled device and scientific CMOS readout noise models for Shack-Hartmann wavefront sensor accuracy,” J. Astron. Telesc. Instruments, Syst. 1(3), 039002 (2015). [CrossRef]  

23. S. B. Howell, Handbook of CCD Astronomy (Cambridge University Press, 2006).
