Adaptive optics for dynamic aberration compensation using parallel model-based controllers based on a field programmable gate array

Open Access

Abstract

Adaptive optics (AO) is an effective technique for compensating the aberrations in optical systems and restoring their performance in applications such as image formation, laser processing, and beam shaping. To reduce the controller complexity and extend the compensation capability from static aberrations to dynamic disturbances, the present study proposes an AO system consisting of a self-built Shack-Hartmann wavefront sensor (SHWS), a deformable mirror (DM), and field programmable gate array (FPGA)-based controllers. The system is designed to track static and dynamic disturbances and tune the controller parameters as required to achieve rapid compensation of the incoming wavefront. In the proposed system, the FPGA estimates the coefficients of eight Zernike modes from the SHWS, which is interfaced via CameraLink and operated at 200 Hz. The estimated coefficients are then processed by eight parallel independent discrete controllers to generate the voltage vectors that drive the DM to compensate the aberrations. To obtain the DM model for the controller design, the voltage vectors are identified offline and optimized by closed-loop controllers. Furthermore, the controller parameters are tuned dynamically in accordance with the main frequency of the aberration as determined by a fast Fourier transform (FFT) process. The experimental results show that the AO system provides a low-complexity and effective means of compensating both static aberrations and dynamic disturbances up to 20 Hz.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Adaptive optics (AO) was first proposed by H. W. Babcock in 1953 to compensate the natural, time-varying atmospheric disturbances that blur astronomical images in telescopes [1,2]. Nowadays, AO plays a crucial role in improving the performance of many optical systems [2–4]. Over the past 20 years, AO has been integrated with optical microscopy to obtain higher quality images [4]. The refractive index variations in complex biological specimens can be compensated to improve the image contrast as a result. AO has been widely applied in the detection path of widefield microscopy systems to improve the image quality [4–6]. In point scanning-based microscopy applications, such as multiphoton-excited fluorescence microscopy, AO enables the aberrations in the excitation path to be compensated in such a way that the focusing spot approaches the diffraction limit in the specimen. The resulting improvement in the excitation intensity then allows the information of interest to be obtained at a much greater depth [7,8]. However, in addition to an improved excitation efficiency, scanning confocal microscopy also requires effective control of the emission fluorescence passing through the detection pinhole for the best collection efficiency in order to enhance the axial imaging resolution [9,10]. AO also assists super-resolution microscopy in maintaining resolution beyond the diffraction limit [11–13]. Studies have shown that by eliminating the aberrations to form high-contrast illumination patterns, AO enables the reconstruction of images with a resolution well beyond the diffraction limit. AO is also of great benefit in improving the axial resolution of advanced microscopy techniques such as temporal focusing [14] and light-sheet microscopy [15,16], particularly for thick specimens. AO is applied not only in imaging applications, but also in manufacturing processes such as laser ablation and modification. For example, by compensating spherical aberrations, AO makes possible the realization of uniform, smooth, and precise three-dimensional structures in a variety of materials, including technical glasses, diamond, and other high refractive index or birefringent crystals [17–19]. With these favorable properties, AO is employed to great effect in many applications nowadays, including ophthalmology [20], neuroscience [21,22], optical communication [23], and high-power laser amplifier design [24].

A typical AO system consists of an active compensator, a wavefront sensor, and a closed-loop controller. Among the various optical compensators available, liquid crystal on silicon (LCOS) spatial light modulators (SLMs) are among the most commonly used due to their high-resolution phase modulation ability. An SLM can be operated at frame rates of up to a few hundred Hz, and each pixel can be driven to retard the phase from 0 to 2π. Once the optical aberration has been measured, the inverse, wrapped phase pattern is applied to the SLM to compensate the aberration [25]. In contrast to SLMs, deformable mirrors (DMs) have a continuous reflective membrane driven by multiple electrostatic electrode actuators. Having measured the aberration of the incoming wavefront, the individual electrodes are actuated by a multichannel high voltage (HV) driver to compensate the incoming optical phase. Deformable mirrors provide response times of less than one millisecond and are thus well suited to real-time AO applications. However, even though the membrane can provide the necessary phase changes over multiple wavelengths, each channel response is coupled with that of its neighbors, and hence manipulating the DM in such a way as to generate the desired shape is challenging [26]. Multi-actuator adaptive lenses [27] and optofluidic wavefront modulators [28] provide a transmissive wavefront correction ability over a single-wavelength range and are readily integrated into microscope assemblies due to their low complexity. Moreover, fast-switchable digital micromirror devices (DMDs) are capable of performing both wavefront correction and laser scanning in two-photon microscopy [29]. However, the phase modulation process is complex, and the intensity modulation performed by DMDs causes undesirable power loss.

Besides active wavefront compensator mechanisms, AO systems also require an effective means of measuring the incoming aberration. In typical systems, the wavefront distribution is measured by phase-shifting interferometry, and the variation of the phase with respect to a plane wave is taken as a measure of the aberration quantity [7]. However, interferometry requires the collection of multiple interferogram images with different phase delays to reconstruct the wavefront, and is therefore too time-consuming for real-time AO applications. By contrast, partitioned aperture wavefront (PAW) sensors provide a single-shot wavefront sensing ability based on four oblique images captured simultaneously by a quatrefoil lens [30]. Shack-Hartmann wavefront sensors (SHWSs), based on a microlens array (MLA), also provide the means to perform single-shot wavefront measurement [31,32]. In a typical AO system, the centroid displacements of the focusing spots captured by the SHWS are used to determine the coefficients of the Zernike polynomials representing the aberrated wavefront via least-squares estimation [33] or deep learning [34].

A single-shot SHWS, in conjunction with a high-speed DM, provides an attractive means of realizing rapid AO systems with operation speeds of a few hundred Hz. However, such systems also require a fast and efficient closed-loop controller to actuate the active compensator in accordance with the measured wavefront aberration. The closed-loop controller can be designed according to an identified multichannel-input multichannel-output (MIMO) model of the optical system [35,36]. However, the MIMO model becomes large and complicated if the design involves many DM channels and higher-order Zernike modes. The influence functions of the DM actuators can also be identified and used for wavefront control in AO [37,38]. The matrix dimension can be reduced by adopting a subset of the influence functions selected by singular value decomposition (SVD), but the matrix still grows large if the DM has more actuators. Previous studies have shown that the coefficients of the Zernike modes vary linearly with the square of the driving voltage applied to an electrostatic membrane DM [26]. Furthermore, the Zernike polynomials form an orthogonal set, and hence the potential exists to implement the corresponding AO system as multiple independent linear control systems, with each system targeted at a particular Zernike mode. Field programmable gate array (FPGA) platforms provide large reconfigurable logic blocks for complex computation and enable precise hardware-based timing with nanosecond resolution. Consequently, they are ideally suited to the realization of real-time discrete controller designs [35,39]. CPU (central processing unit)-based computing platforms offer excellent floating-point performance and can achieve high-frequency control loops limited only by the wavefront sensor speed and the USB (universal serial bus) communication protocol [40]. However, general-purpose operating systems, such as Windows, provide only millisecond timing accuracy, which may be a limitation in developing fast and stable digital control systems.

Closed-loop AO controllers are usually designed to have a fast step response in order to achieve rapid convergence. However, real-world optical systems often suffer time-varying aberrations, such as wind turbulence, heat flow-induced fluctuations, and motion artifacts [39,41,42]. These aberrations result in an oscillating steady-state error, which severely degrades the control performance. The present study therefore proposes an AO system for dynamic aberration compensation consisting of a DM, a high-bandwidth CameraLink-based SHWS, and an efficient model-based closed-loop controller implemented on an FPGA. The individual voltage vectors with which the DM generates each of the eight Zernike modes are identified offline and further optimized by a PI (proportional-integral) control loop. The focusing-spot image obtained by the SHWS is transferred to the FPGA, which uses this information to estimate the eight Zernike coefficients of the wavefront aberration. The estimated coefficients are then processed by eight discrete linear control systems implemented in parallel on the FPGA in order to generate the driving voltages required to actuate the DM in such a way as to compensate each of the eight Zernike modes, thereby achieving a plane wavefront and improved laser focusing. The main frequency of the dynamic aberration is determined via a fast Fourier transform (FFT) operation and the controller parameters are tuned accordingly. The experimental results show that the proposed AO system running at 200 Hz is capable of compensating not only static aberrations, but also dynamic disturbances with frequencies of up to 20 Hz.

2. System setup and method

2.1 Overall system

Figure 1(a) shows the basic structure of the AO-based laser focusing system, and Fig. 1(b) is a photograph of the overall system. The light emitted by a He-Ne laser with a wavelength of 632.8 nm (25-LHP-691, MELLES GRIOT) is passed through a polarizer to control its power and is then incident on a beam expander consisting of a 10x objective and a collimating lens (L1). The beam expander additionally incorporates a 20 μm pinhole, which acts as a spatial filter and produces an output beam with a uniform Gaussian distribution. The light emerging from the beam expander passes through a beam splitter (BS1), where the wavefront is disturbed dynamically by a membrane DM (DM1; MDM1-61S-4, Active Optical Systems) driven by a self-built multichannel HV driver and an isolated FPGA (FPGA1; myRIO-1900, National Instruments). The disturbed beam is then incident on a second membrane DM (DM2; MDM1-32S-4, Active Optical Systems) with 32 channels, which serves as the active compensator in the AO system and is commanded by a second FPGA board (FPGA2; PCIe-1473R-LX110, National Instruments) with a Xilinx Virtex-5 FPGA. The beam reflected from the DM is further incident on a beam splitter (BS3), which splits the compensated beam into two paths. The reflected beam passes through a lens (L4) with a focal length of 75 mm, and the focusing spot is imaged through a 10x objective onto a CMOS camera (daA1920-15um, Basler). Meanwhile, the transmitted beam is reduced by two lenses (L2 and L3) with focal lengths of 250 mm and 100 mm, respectively, to match the aperture size of the SHWS. The SHWS then detects the wavefront, which is expanded into a combination of Zernike modes in FPGA2 to quantify the aberrations.


Fig. 1. Optical system with AO system. (a) Block diagram; (b) Photograph. (BS: beam splitter, DM: deformable mirror, L1: f = 400 mm lens, L2: f = 250 mm lens, L3: f = 100 mm lens, L4: f = 75 mm lens, SHWS: Shack-Hartmann wavefront sensor, DMA FIFO: direct memory access first-in-first-out).


The SHWS in Fig. 1(a) is a self-built assembly consisting of an MLA (#64-482, Edmund Optics) and a CMOS camera (acA2040-180km, Basler) with an exposure time of 70 μs and an imaging ROI (region of interest) of 1260×1260 pixels. The image captured by the camera is transmitted via a CameraLink interface to FPGA2 at a clock frequency of 85 MHz with a transmission time of 1587.6 μs per frame. (Note that the image covers 14×14 sub-lenses, and only the 132 focusing spots within the incident beam are used for calculation.) The FPGA calculates the centroid positions of the focusing spots to form a wavefront slope matrix indicating the derivative of the wavefront at each sub-lens position. The eight Zernike coefficients in the Wyant expansion scheme, corresponding to the x-tilt, y-tilt, defocus, x-astigmatism, y-astigmatism, x-coma, y-coma, and spherical aberration of the detected wavefront, respectively, are then estimated using a least-squares approximation technique [33]. The FPGA computation times for the centroid position calculation and the Zernike coefficient estimation are around 150 μs and 240 μs, respectively. However, to accelerate the compensation process, the FPGA starts to compute the centroid positions as soon as the first few rows of each sub-lens region of the image are received. Having determined the Zernike coefficients, the FPGA outputs a 32-channel voltage vector to drive DM2 in such a way as to compensate the detected aberrations. The processing time for passing the 32-channel command to the HV driver of DM2 is around 380.85 μs. FPGA2 runs the control loop at 200 Hz (i.e., a time step of 5 ms) to precisely trigger the SHWS and drive DM2.
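As an illustration, the processing chain from raw spot image to Zernike coefficients can be sketched as follows. This is a minimal floating-point NumPy sketch of the centroid, slope, and least-squares steps described above, not the fixed-point FPGA implementation; the sub-aperture windows (rois), reference centroids, pixel pitch, focal length, and Zernike derivative matrix Z_grad are illustrative placeholder inputs.

    import numpy as np

    def spot_centroid(sub_img):
        """Intensity-weighted centroid of one sub-lens region (row, col in pixels)."""
        total = sub_img.sum()
        rows, cols = np.indices(sub_img.shape)
        return (rows * sub_img).sum() / total, (cols * sub_img).sum() / total

    def estimate_zernike(image, rois, ref_centroids, Z_grad, pixel_pitch, focal_len):
        """Estimate eight Zernike coefficients from one SHWS frame.

        rois          : list of (r0, r1, c0, c1) windows, one per used sub-lens
        ref_centroids : (N, 2) reference centroids (row, col) for a plane wavefront
        Z_grad        : (2N, 8) matrix of Zernike x/y derivatives at the sub-lens centers
        """
        slopes = np.empty(2 * len(rois))
        for i, (r0, r1, c0, c1) in enumerate(rois):
            cy, cx = spot_centroid(image[r0:r1, c0:c1])
            # Local wavefront slope = centroid displacement * pixel pitch / MLA focal length
            slopes[2 * i] = (cx + c0 - ref_centroids[i, 1]) * pixel_pitch / focal_len
            slopes[2 * i + 1] = (cy + r0 - ref_centroids[i, 0]) * pixel_pitch / focal_len
        # Least-squares fit of the eight Zernike coefficients to the measured slopes
        coeffs, *_ = np.linalg.lstsq(Z_grad, slopes, rcond=None)
        return coeffs

In the actual system the equivalent computation is pipelined in hardware, so centroiding begins before the full frame has been received.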

FPGA2 has four main responsibilities within the proposed AO system: interfacing with the SHWS through the CameraLink protocol, estimating the Zernike coefficients of the aberration, hosting the discrete controllers used to perform AO compensation, and driving DM2. Since floating-point arithmetic operations consume significant logic block resources and increase the latency, part of the computational task performed by the FPGA is offloaded to a PC, where it is executed in LabVIEW (see Fig. 1(a)). In particular, the incoming dynamic aberration information received from the SHWS is transferred continuously from the FPGA to the PC over a DMA FIFO (direct memory access first-in-first-out) channel; the PC determines the main frequency of the aberration and returns the frequency value to the FPGA, which uses it to tune the controller parameters and thereby improve the compensation performance.
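The PC-side spectrum analysis can be illustrated by the following sketch, which returns the dominant non-DC frequency of one streamed error trace; the function name, the buffer handling, and the assumption of a 200 Hz sample rate are illustrative rather than a description of the LabVIEW code.

    import numpy as np

    def main_frequency(error_history, sample_rate=200.0):
        """Return the dominant frequency (Hz) of one Zernike-coefficient error trace."""
        x = np.asarray(error_history, dtype=float)
        x = x - x.mean()                               # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
        return freqs[np.argmax(spectrum)]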

2.2 Deformable mirror identification

The responses of the DM channels are coupled, and hence it is difficult to control the individual actuators of the DM so as to achieve the required compensation effect by mapping the compensation phase onto the DM channels directly. Accordingly, in the present study, individual voltage vectors for driving the DM are generated for each of the eight Zernike modes. In other words, the AO control system implemented on FPGA2 generates eight independent control voltage vectors (one vector per mode) rather than a single 32-channel control vector. In implementing the proposed AO scheme, the wavefront error between the measured Zernike modes and the desired modes is iteratively reduced by applying the following steepest-descent algorithm to each control vector [26]:

$${{\boldsymbol c}_{k,n + 1}} = {{\boldsymbol c}_{k,n}} - \mu \frac{{\partial E}}{{\partial {{\boldsymbol c}_{k,n}}}} = {{\boldsymbol c}_{k,n}} - 2\mu {{\boldsymbol B}^T}[{({{{\boldsymbol a}_n} - {{\boldsymbol a}_k}} )\ast {{\boldsymbol w}^2}} ]\textrm{, }k = 1,2, \ldots ,8,$$
where ${\boldsymbol c}_{k,n}$ is the control signal vector for the kth Zernike mode at time step n. The control signal is defined as the square of the control voltage, i.e., ${\boldsymbol c} = {[V_1^2,V_2^2, \cdots ,V_{32}^2]^T}$. Furthermore, B is the DM response characteristic matrix and is measured experimentally; each element $b_{ji}$ of B represents the slope between the ith DM channel control signal and the corresponding change in the jth Zernike coefficient. In addition, ${\boldsymbol a}_n$ is the measured Zernike coefficient vector at time step n and ${\boldsymbol a}_k$ is a vector containing a single non-zero element, $a_k$, representing the desired kth Zernike mode. ${\boldsymbol w}^2$ is a weighting factor vector with each element $w_k^2$ defined as the integral of the kth Zernike polynomial over the unit circle. The * operator denotes element-by-element multiplication of two vectors. Finally, μ is a scalar related to the convergence rate. In practice, $a_k$ is set to $3.5 \times 10^{-5}$ and $\mu w_k^2$ is set to $5 \times 10^{13}$.
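A minimal Python sketch of this identification loop, using the parameter values stated above, is given below; the helper functions apply_signal() and measure_modes() (driving DM2 and reading back the eight Zernike coefficients from the SHWS), the iteration count, and the non-negativity clipping are assumptions for illustration only.

    import numpy as np

    def identify_control_vector(k, B, apply_signal, measure_modes,
                                a_k=3.5e-5, mu_w2=5e13, n_iter=200):
        """Iterate the control signal c_k (32 squared voltages) for the kth Zernike mode."""
        c = np.zeros(B.shape[1])              # c = [V1^2, ..., V32^2]
        a_target = np.zeros(8)
        a_target[k] = a_k                     # desired amplitude of the kth mode only
        for _ in range(n_iter):
            apply_signal(c)                   # drive DM2 with the current control signal
            a_n = measure_modes()             # eight Zernike coefficients from the SHWS
            # Eq. (1): c <- c - 2*mu*B^T[(a_n - a_k) * w^2], with mu*w_k^2 = 5e13
            c = c - 2.0 * B.T @ ((a_n - a_target) * mu_w2)
            c = np.clip(c, 0.0, None)         # squared voltages must stay non-negative
        return c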

Once the minimal value of the wavefront error E in Eq. (1) is obtained, the converged control signal vector is applied to DM2 to generate the corresponding wavefront of the kth Zernike mode. However, in practice, residual Zernike modes inevitably remain in addition to the desired kth Zernike mode when ${\boldsymbol c}_k$ is applied. Hence, to improve the contrast of the generated Zernike modes, a closed-loop PI controller is further applied to optimize the individual control signal vectors, as shown in Fig. 2. In particular, the eight-channel PI controller updates the control signal vector obtained in Eq. (1) as follows:

$${{\boldsymbol c^{\prime}_{k,n + 1}}} = P[{{{\boldsymbol e}_n}{\boldsymbol C}} ]+ I\left[ {\left( {\sum\limits_{p = 1}^n T {{\boldsymbol e}_p}} \right){\boldsymbol C}} \right],$$
where ${{\boldsymbol e}_n} = {{\boldsymbol a}_k} - {{\boldsymbol a}_n}$, ${\boldsymbol C} = [{{\boldsymbol c}_1},{{\boldsymbol c}_2}, \ldots ,{{\boldsymbol c}_8}]$ is the control signal matrix, and T is the sampling period (50 ms). In addition, P and I are the proportional and integral gains of the controller, respectively; in practice, P and I·T are assigned values of 0.3 and 0.7, respectively. Once the norm of the error vector has converged stably, the optimized control signal used to generate the wavefront of the kth Zernike mode is taken as the product of the original control signal matrix C and a coefficient vector ${\boldsymbol d}_k$, i.e.,
$${{\boldsymbol c^{\prime}_k}} = {\boldsymbol C}{{\boldsymbol d}_k},\textrm{ }k = 1,2, \ldots ,8.$$
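The PI refinement of Eqs. (2) and (3) can be sketched in the same style: the eight-element error is accumulated and mapped through the control signal matrix C (taken here as a 32×8 matrix whose columns are the vectors from Eq. (1)) to yield the improved drive vector for the kth mode. The convergence tolerance and iteration limit are illustrative, and apply_signal()/measure_modes() are the same hypothetical helpers as above.

    import numpy as np

    def refine_control_vector(k, C, apply_signal, measure_modes,
                              a_k=3.5e-5, P=0.3, IT=0.7, n_iter=100, tol=1e-7):
        """Optimize the drive vector for the kth Zernike mode (Eqs. (2) and (3))."""
        a_target = np.zeros(8)
        a_target[k] = a_k
        e_sum = np.zeros(8)                   # running error sum; the factor T is folded into IT
        d_k = np.zeros(8)                     # coefficient vector in c'_k = C d_k
        for _ in range(n_iter):
            apply_signal(C @ d_k)             # apply c'_k = C d_k (Eq. (3))
            e_n = a_target - measure_modes()  # eight-element Zernike error
            e_sum += e_n
            d_k = P * e_n + IT * e_sum        # PI update of Eq. (2)
            if np.linalg.norm(e_n) < tol:     # stop once the error norm has converged
                break
        return C @ d_k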


Fig. 2. Block diagram of DM driving control signal vector optimization for each Zernike mode.


Figure 3(a) shows the measured Zernike coefficients obtained when applying the original control signal vectors identified by Eq. (1). It is seen that obvious residuals exist for every mode. However, when the optimized control signals in Eq. (3) are applied, the residuals are significantly reduced, as shown in Fig. 3(b). A detailed inspection shows that the application of the optimized control signals increases the signal-to-noise ratio (SNR) of the desired mode over the residual modes to more than 20 dB. For example, the x-coma (Z6) to x-tilt (Z1) aberration contrast has an SNR of just 6.88 dB in Fig. 3(a) when ${\boldsymbol c}_6$ is applied, but an SNR of 22.45 dB in Fig. 3(b) when ${\boldsymbol c}^{\prime}_6$ is applied.


Fig. 3. Measured Zernike coefficients when driving DM2 with: (a) original control signal vectors (Eq. (1)) and (b) optimized control signal vectors (Eq. (3)). Note that the columns marked in red, magenta, orange, dark green, green, blue, cyan, and purple represent Zernike modes Z1∼Z8, respectively.


2.3 Parallel model-based controller design

In the AO method proposed in the present study, the optimized control signals required to manipulate the individual Zernike modes are generated as described above, and a model-based controller can then be designed and implemented on FPGA2. As shown in Fig. 4(a), the controller consists of eight independent controllers (one for each Zernike mode) running in parallel on the FPGA at a frequency of 200 Hz. The command vector ${\boldsymbol a}_c$ contains the target Zernike coefficients of the individual controllers. For the AO system considered in the present study, the aim of DM2 is to achieve a plane wavefront in order to obtain optimal focusing. Consequently, the aim of each controller is to reduce the corresponding Zernike coefficient to zero (i.e., ${{\boldsymbol a}_c} = [0,0, \ldots ,0]$). In implementing the proposed AO system, the wavefront detected by the SHWS, which comprises the summation of the incoming disturbance and the wavefront produced by DM2, is compared with the target, and the resulting error vector e (expressed in terms of the eight Zernike coefficients) is provided to the eight parallel controllers as an input. Since the resonance frequency of DM2 is located at 2 kHz (as stated in the manufacturer's specification) and its response is relatively uniform below 200 Hz (i.e., the control loop rate), the controller transfer function for each channel can be designed as the following PI controller, Hk(s):

$${H_k}(s) = {P_{HK}} + \frac{{{I_{HK}}}}{s},$$
where k denotes the kth Zernike mode, and $P_{HK}$ and $I_{HK}$ are the proportional and integral gains of the kth controller, respectively.
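For reference, a minimal discrete realization of Eq. (4) at the 200 Hz loop rate (T = 5 ms) is sketched below, using the gains quoted later in Section 3.1; anti-windup and output saturation, which a practical implementation would likely need, are omitted.

    class DiscretePI:
        """Discrete PI controller of Eq. (4) for one Zernike channel at T = 5 ms."""
        def __init__(self, P=100.0, I=800.0, T=0.005):
            self.P, self.I, self.T = P, I, T
            self.integral = 0.0

        def update(self, error):
            """One control step: accumulate the error and return the channel command."""
            self.integral += error * self.T
            return self.P * error + self.I * self.integral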


Fig. 4. Block diagrams of: (a) parallel closed-loop model-based control system; and (b) control system for kth channel of parallel controller.


However, although the controller in Eq. (4) can be optimized to achieve a rapid step response for static aberrations, it produces a large fluctuating steady-state error for dynamic aberrations with a changing frequency. Thus, to compensate the frequency component of the aberration, the following controller transfer function is introduced:

$${G_k}(s) = \left( {{P_{GK}} + \frac{{{I_{GK}}}}{s}} \right)\left( {\frac{{s - {o_k}}}{{{s^2} + {\omega^2}}}} \right),$$
where ω is the angular frequency of the aberration and $o_k$ is an additional zero added to stabilize the kth controller. According to the final value theorem, the second term of Eq. (5) can be used to reduce the steady-state fluctuation at the frequency ω. Furthermore, $P_{GK}$ and $I_{GK}$ can be optimized to achieve a rapid rise time and eliminate the steady-state error. Having finalized the controller, the matched pole-zero (MPZ) method [43] can be used to convert the s-domain transfer function Gk(s) to the z-domain Gk(z), as follows:
$${G_k}(z) = \frac{{{a_{1k}}{z^{ - 1}} + {a_{2k}}{z^{ - 2}} + {a_{3k}}{z^{ - 3}}}}{{1 + {b_{1k}}{z^{ - 1}} + {b_{2k}}{z^{ - 2}} + {b_{3k}}{z^{ - 3}}}}.$$
According to the time-shift property of the z-transform, $z^{-1}$ indicates a delay of one time step. Thus, Eq. (6) can be transformed into the following difference equation:
$$\begin{array}{l} {r_k}[n] = {a_{1k}}{e_k}[n - 1] + {a_{2k}}{e_k}[n - 2] + {a_{3k}}{e_k}[n - 3]\\ \textrm{ } - ({{b_{1k}}{r_k}[n - 1] + {b_{2k}}{r_k}[n - 2] + {b_{3k}}{r_k}[n - 3]} ). \end{array}$$
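As an illustration, the per-channel recursion of Eq. (7) amounts to a third-order IIR update; the floating-point Python sketch below mirrors this structure, whereas the actual FPGA implementation realizes it in hardware logic.

    from collections import deque

    class ModeController:
        """Third-order recursion of Eq. (7) for one Zernike channel."""
        def __init__(self, a, b):
            self.a = a                                        # (a_1k, a_2k, a_3k)
            self.b = b                                        # (b_1k, b_2k, b_3k)
            self.e_hist = deque([0.0, 0.0, 0.0], maxlen=3)    # e[n-1], e[n-2], e[n-3]
            self.r_hist = deque([0.0, 0.0, 0.0], maxlen=3)    # r[n-1], r[n-2], r[n-3]

        def update(self, e_n):
            """Compute r[n] from the stored histories, then shift them by one step."""
            r_n = sum(ai * ei for ai, ei in zip(self.a, self.e_hist)) \
                  - sum(bi * ri for bi, ri in zip(self.b, self.r_hist))
            self.e_hist.appendleft(e_n)                       # e[n] becomes e[n-1] next step
            self.r_hist.appendleft(r_n)
            return r_n

Eight such controllers, one per Zernike mode, run side by side at the 200 Hz loop rate.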

The difference equation for the kth channel is implemented on FPGA2 in the form of the block diagram shown in Fig. 4(b), where ek is the kth element of the error vector e and rk is the output response of the kth channel controller, Gk. As described in Section 2.1, to alleviate the computational load on the FPGA, the error vector is sent to the PC via the DMA FIFO channel for continuous spectrum analysis by a floating-point FFT. The main frequency of the dynamic aberration is then returned to the FPGA to adjust the parameter ω in Eq. (5). The coefficient vector ${\boldsymbol b} = [{b_{1k}},{b_{2k}},{b_{3k}}]$ in Eq. (6) then changes in response to the movement of the poles of the controller in Eq. (5). For reasons of expediency, the values of the coefficient vector b were calculated in advance for default frequencies of 1, 5, 10, 15, and 20 Hz, and the coefficient values were fitted by a polynomial expression using MATLAB. The polynomial was then implemented on the FPGA such that the coefficients could be rapidly determined for any value of ω.
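The offline coefficient preparation described above can be sketched as follows: the b coefficients of Eq. (6) are computed at the default frequencies, fitted with a low-order polynomial, and the polynomial is evaluated at run time for the FFT-detected frequency. The fitting order and the placeholder discretize_b() (standing in for the MPZ discretization at a given ω) are assumptions; the original work performed this fit in MATLAB.

    import numpy as np

    DEFAULT_FREQS = np.array([1.0, 5.0, 10.0, 15.0, 20.0])   # Hz, as in the text

    def fit_b_polynomials(discretize_b, order=3):
        """Fit b1k, b2k, b3k as polynomials in the disturbance frequency (offline step)."""
        b_table = np.array([discretize_b(f) for f in DEFAULT_FREQS])   # shape (5, 3)
        return [np.polyfit(DEFAULT_FREQS, b_table[:, i], order) for i in range(3)]

    def b_at(frequency, polys):
        """Evaluate the fitted polynomials at the FFT-detected main frequency (run time)."""
        return np.array([np.polyval(p, frequency) for p in polys])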

3. Experimental results

3.1 Parallel model-based controller performance

To verify the performance of the proposed parallel model-based controllers, DM1 (see Fig. 1) was used to disturb the incoming laser beam at frequencies of 0∼20 Hz, and the model-based controller implemented on FPGA2 was used to compensate the resulting aberrations. In performing the experiments, the PI controller Hk(s) given in Eq. (4) was adopted for every channel, and the controller parameters $P_{HK}$ and $I_{HK}$ were set to 100 and 800, respectively. The experiments commenced by applying a static aberration (ω = 0 Hz) to the incoming laser beam in order to evaluate the step response of the controller. Figure 5(a) shows the resulting variation of the eight Zernike coefficients over time. (Note that the AO system was turned on after 500 ms.) Figure 5(d) shows the variation of the x-astigmatism aberration, i.e., the 4th Zernike mode, before and after AO correction. As shown, the rise time of the controller is 30 ms; that is, the control signal converges to a steady-state condition after approximately 6 time steps. In other words, given appropriate values of the proportional and integral gains, the PI controller can achieve a rapid step response.


Fig. 5. Variation of eight Zernike coefficients over time for case where AO system uses PI controllers Hk(s) and the applied aberration is: (a) static; (b) 1 Hz; and (c) 15 Hz. AO was turned on after 500 ms, 2500 ms, and 500 ms in (a), (b), and (c), respectively. Curves in (d), (e) and (f) show time-varying coefficients of x-astigmatism aberration (Z4) in Figs. 5(a), (b) and (c), respectively. Note that the red, magenta, orange, dark green, green, blue, cyan, and purple lines represent Zernike modes Z1∼Z8, respectively.


However, the same gains provide a poor compensation performance when applied to dynamic aberrations with a frequency component. For example, Figs. 5(b) and (c) show the variation of the eight Zernike coefficients over time for disturbance frequencies of 1 Hz and 15 Hz, respectively. It is seen that both the amplitude and the frequency of the Zernike coefficient variation increase with an increasing disturbance frequency. Figures 5(e) and (f) show the variation of the Z4 coefficient, which indicates the x-astigmatism aberration, in Figs. 5(b) and (c), respectively. It is evident that the PI controller reduces the amplitudes of the dynamic aberrations, but fails to eliminate them completely. Thus, the proposed controller Gk(z) in Eq. (6) was applied instead. Figures 6(a)–(c) show the corresponding results obtained for the change in the Zernike coefficients over time given disturbance frequencies of 0, 1, and 15 Hz, respectively. A close inspection of Fig. 6(a) shows that in the case of a static aberration, the average rise time is around 90 ms, i.e., 18 time steps. However, as shown in Fig. 6(d), the laser is successfully focused in the center with a Gaussian-like intensity distribution following the correction process. Moreover, the results presented in Figs. 6(b) and (c) show that the proposed controller in Eq. (6) yields an effective compensation of the eight Zernike coefficients for disturbance frequencies of 1 Hz and 15 Hz, respectively. Figures 6(e) and (f) show the variation over time of the x-astigmatism mode coefficients in Figs. 6(b) and (c), respectively. An inspection of the two figures shows that both coefficients converge in approximately 60 ms, i.e., 12 time steps of the control loop.


Fig. 6. Variation of eight Zernike coefficients over time for case where AO system uses parallel model-based controllers Gk(z) and the applied aberration is: (a) static; (b) 1 Hz; and (c) 15 Hz. (d) Focusing spot before and after AO correction. AO was turned on after 500 ms, 2500 ms, and 500 ms in (a), (b), and (c), respectively. Curves in (e) and (f) show time-varying coefficients of x-astigmatism aberration (Z4) in Figs. 6(b) and (c), respectively. Note that the red, magenta, orange, dark green, green, blue, cyan, and purple lines represent Zernike modes Z1∼Z8, respectively.



Fig. 7. Variation of eight Zernike coefficients over time for case where single channel controller is applied and other parallel controllers are turned off: (a) 3rd channel controller (G3(z)) for defocus mode (Z3); (b) 5th channel controller (G5(z)) for y-astigmatism mode (Z5). The controller was turned on after 500 ms. Note that the red, magenta, orange, dark green, green, blue, cyan, and purple lines represent Zernike modes Z1∼Z8, respectively.


3.2 Model-based controller independence

The eight model-based controllers run in parallel on the FPGA and act as eight independent linear systems, with each control system targeting a single Zernike mode. To verify the controller independence, we turned on the 3rd channel controller (G3(z)) to compensate the defocus aberration (Z3) while the other parallel controllers were turned off. Figure 7(a) shows the variation of the eight Zernike coefficients over time. The controller G3(z) was turned on after 500 ms. As shown, the defocus mode coefficient is eliminated while the other Zernike mode coefficients remain unchanged. Figure 7(b) shows the corresponding Zernike coefficient variations when only the 5th channel controller (G5(z)) was turned on. The y-astigmatism aberration is compensated without affecting the other channels. This natural independence allows each Zernike mode to be manipulated easily without mode coupling. For example, the coefficient of the defocus mode (Z3) can be commanded to maintain its value while the other Zernike modes are compensated to zero, in order to hold the axial focus position of the laser while preserving good focusing quality.

3.3 Dynamic disturbance compensation

A further investigation was performed to examine the compensation performance of the proposed AO system for the case where the frequency component of the aberration cycled continuously through 20 Hz, 15 Hz, 5 Hz, and static, with a duration of approximately 5 sec in every case. Figure 8(a) shows the corresponding amplitude variation of the defocus aberration (Z3) for the case of no AO correction (brown line) and AO correction (orange line), respectively. (Note that the AO system was turned on after 1.45 sec.) It is seen that the model-based parallel controllers effectively track the dynamic aberrations and rapidly restore them to zero in every case. A similar compensation performance is observed even for the phase jump at a time of around 12.6 sec. Figure 8(b) shows the compensation results obtained for the x-astigmatism aberration (Z4), where the green and brown curves show the amplitudes of the aberration with and without AO compensation, respectively. The results again confirm the effectiveness of the frequency-modulated controllers in suppressing the dynamic aberrations of the incoming wavefront.


Fig. 8. Effectiveness of AO system in compensating wavefront disturbance with changing frequency. The frequency cycles continuously through 20 Hz, 15 Hz, 5 Hz, and static. The AO system was turned on after 1.45 sec. (a) Amplitude of defocus aberration (Z3): brown curve: without AO correction; orange curve: with AO correction. (b) Amplitude of x-astigmatism aberration (Z4): brown curve: without AO correction; dark green curve: with AO correction. Focusing spot: (c) without aberration; (d) with dynamic disturbance at 15 Hz; and (e) with AO correction. (Real time focusing spot variation captured at 40 fps is shown in Visualization 1.)


As described in Section 2.1, the task of analyzing the disturbance spectrum is offloaded to a PC in order to ease the computational load on the FPGA. As a result, a short-term fluctuation with a duration of approximately 800 ms is seen in the orange and dark green curves in Figs. 8(a) and (b), respectively, immediately after the change in the aberration frequency due to the time incurred in performing the FFT analysis on the PC and transferring the frequency value back to the FPGA to tune the model parameters. Nonetheless, the results presented in Figs. 8(a) and (b) confirm the ability of the parallel model-based controller design to compensate both static and dynamic aberrations.

Figure 8(c) shows the focusing spot captured by the CMOS camera in the case of no aberration, i.e., a uniform Gaussian intensity distribution. Figure 8(d) shows the corresponding focusing spot for a dynamic disturbance of 15 Hz and no AO correction. It is seen that the peak intensity is significantly reduced compared to the case of an ideal wavefront with no aberration. Moreover, the intensity distribution exhibits obvious distortion. Finally, Fig. 8(e) shows the focusing spot when the proposed AO control system is applied. Comparing the images in Fig. 8(e) and (d), it is apparent that the model-based parallel controller improves the peak intensity by around 2-fold compared to the non-compensated case and recovers the intensity distribution to a uniform Gaussian distribution located in the center of the image region. (Visualization 1 shows the real time variation of the focus spot with the aberration disturbance with and without AO compensation, respectively. Note that both images are captured using a high-speed CMOS camera (acA2040-90um, Basler) operated at 40 fps.)

4. Conclusion and discussion

This study has presented an AO system consisting of a DM, a self-built SHWS, and an FPGA for the real-time detection and compensation of wavefront aberrations at a closed-loop rate of 200 Hz. In the proposed system, the images of the MLA focusing spots are transferred from the SHWS to the FPGA over CameraLink. The centroids of the focusing spots are used to estimate the coefficients of the eight Zernike polynomials of the wavefront aberration, and the coefficients are then used to determine the driving voltages required to actuate the DM in such a way as to restore all the coefficients to zero (i.e., a plane wavefront). For simplicity, the driving voltages are determined by eight independent controllers (one controller per Zernike mode) implemented in parallel on the FPGA. To enhance the contrast of the individual generated Zernike modes, the driving voltages are optimized by an iterative closed-loop PI controller. It is shown that this optimization improves the contrast of the measured Zernike modes by around 20 dB compared to the case where it is not applied. The experimental results have shown that the PI controllers provide a rapid and effective response for static aberrations. However, when the aberrations vary dynamically, the controller response fluctuates at the same frequency and exhibits a steady-state error, which increases with an increasing frequency. Accordingly, a modified controller has also been proposed in which the transfer function contains a frequency compensation term computed by a standalone PC interfaced to the FPGA. It has been shown that the resulting controller successfully compensates dynamic aberrations with frequencies of 0∼20 Hz. Moreover, the controller requires just 12 time steps to compensate a dynamic disturbance of 15 Hz and successfully compensates even phase jumps in the aberration. If the aberrations have significant x-tilt or y-tilt, the beam might shift away from the center of DM2 and the optimized driving voltages might lose accuracy. Previous research suggests adopting a tip-tilt mirror in the system to compensate the tilts, leaving the DM to compensate the other Zernike modes [44].

The closed-loop rate is mainly limited by the camera, specifically its data transmission time and maximum available frame rate. If the number of sub-lenses used for wavefront estimation is reduced to 8×8 (i.e., an image size of 750×750 pixels), the camera frame rate can be increased to 500 Hz. However, if the loop rate approaches 1 kHz, the frequency response of the membrane DM must be taken into account for a stable controller design. The present FPGA already utilizes 93.95% of its programmable slices to support the CameraLink protocol, wavefront analysis, and closed-loop control functions of the proposed AO system. The rapid growth of FPGA technology provides more programmable slices for higher aberration order correction. For example, the Kintex-7 FPGA (XC7K410T, Xilinx) has more than 3 times the programmable slices of our current FPGA. The proposed parallel model-based controllers would then be able to compensate Zernike modes up to the secondary and tertiary modes. (Note that the secondary and tertiary modes include 15 and 24 Zernike modes in Wyant's expansion scheme, respectively.) For the case where the disturbance frequency varies over time, the proposed controller successfully restores the incoming wavefront to a plane wave. However, a time delay of approximately 800 ms is incurred in analyzing the frequency of the aberration and returning the frequency value to the FPGA. With more programmable logic blocks, the disturbance spectrum analysis task could be performed directly on the FPGA, thereby improving the responsiveness of the controller to changes in the main frequency of the aberration.

Funding

Ministry of Science and Technology, Taiwan (109-2636-E-006-018, 110-2636-E-006-018).

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. N. Hubin and L. Noethe, “Active optics, adaptive optics, and laser guide stars,” Science 262(5138), 1390–1394 (1993). [CrossRef]  

2. R. K. Tyson, Principles of Adaptive Optics, 2nd ed. (CRC, 1998).

3. V. Marx, “Microscopy: hello, adaptive optics,” Nat. Methods 14(12), 1133–1136 (2017). [CrossRef]  

4. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014). [CrossRef]  

5. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017). [CrossRef]  

6. J.-H. Park, Z. Yu, K. Lee, P. Lai, and Y. Park, “Perspective: Wavefront shaping techniques for controlling multiple light scattering in biological tissues: Toward in vivo applications,” APL Photonics 3(10), 100901 (2018). [CrossRef]  

7. M. Rueckel, J. A. Mack-Bucher, and W. Denk, “Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing,” Proceedings of the National Academy of Sciences 103(46), 17137–17142 (2006). [CrossRef]  

8. D. Sinefeld, H. P. Paudel, D. G. Ouzounov, T. G. Bifano, and C. Xu, “Adaptive optics in multiphoton microscopy: comparison of two, three and four photon fluorescence,” Opt. Express 23(24), 31472–31483 (2015). [CrossRef]  

9. M. J. Booth, M. A. A. Neil, R. Juskaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proceedings of the National Academy of Sciences 99(9), 5788–5792 (2002). [CrossRef]  

10. P. Pozzi, D. Wilding, O. Soloviev, H. Verstraete, L. Bliek, G. Vdovin, and M. Verhaegen, “High speed wavefront sensorless aberration correction in digital micromirror based confocal microscopy,” Opt. Express 25(2), 949–959 (2017). [CrossRef]  

11. M. Booth, D. Andrade, D. Burke, B. Patton, and M. Zurauskas, “Aberrations and adaptive optics in super-resolution microscopy,” Microscopy 64(4), 251–261 (2015). [CrossRef]  

12. W. Zheng, Y. Wu, P. Winter, R. Fischer, D. D. Nogare, A. Hong, C. McCormick, R. Christensen, W. P. Dempsey, D. B. Arnold, J. Zimmerberg, A. Chitnis, J. Sellers, C. Waterman, and H. Shroff, “Adaptive optics improves multiphoton super-resolution imaging,” Nat. Methods 14(9), 869–872 (2017). [CrossRef]  

13. M. J. Mlodzianoski, P. J. Cheng-Hathaway, S. M. Bemiller, T. J. McCray, S. Liu, D. A. Miller, B. T. Lamb, G. E. Landreth, and F. Huang, “Active PSF shaping and adaptive optics enable volumetric localization microscopy through brain sections,” Nat. Methods 15(8), 583–586 (2018). [CrossRef]  

14. C.-Y. Chang, L.-C. Cheng, H.-W. Su, Y. Y. Hu, K.-C. Cho, W.-C. Yen, C. Xu, C. Y. Dong, and S.-J. Chen, “Wavefront sensorless adaptive optics temporal focusing-based multiphoton microscopy,” Biomed. Opt. Express 5(6), 1768–1777 (2014). [CrossRef]  

15. T. L. Liu, S. Upadhyayula, D. E. Milkie, V. Singh, K. Wang, I. A. Swinburne, K. R. Mosaliganti, Z. M. Collins, T. W. Hiscock, J. Shea, A. Q. Kohrman, T. N. Medwig, D. Dambournet, R. Forster, B. Cunniff, Y. Ruan, H. Yashiro, S. Scholpp, E. M. Meyerowitz, D. Hockemeyer, D. G. Drubin, B. L. Martin, D. Q. Matus, M. Koyama, S. G. Megason, T. Kirchhausen, and E. Betzig, “Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms,” Science 360(6386), 1392 (2018). [CrossRef]  

16. A. Hubert, F. Harms, R. Juvénal, P. Treimany, X. Levecq, V. Loriette, G. Farkouh, F. Rouyer, and A. Fragola, “Adaptive optics light-sheet microscopy based on direct wavefront sensing without any guide star,” Opt. Lett. 44(10), 2514–2517 (2019). [CrossRef]  

17. M. Jenne, D. Flamm, T. Ouaj, J. Hellstern, J. Kleiner, D. Grossmann, M. Koschig, M. Kaiser, M. Kumkar, and S. Nolte, “High-quality tailored-edge cleaving using aberration-corrected Bessel-like beams,” Opt. Lett. 43(13), 3164–3167 (2018). [CrossRef]  

18. P. S. Salter and M. J. Booth, “Adaptive optics in laser processing,” Light: Sci. Appl. 8(1), 110 (2019). [CrossRef]  

19. G.-L. Roth, S. Rung, C. Esen, and R. Hellmann, “Microchannels inside bulk PMMA generated by femtosecond laser using adaptive beam shaping,” Opt. Express 28(4), 5801–5811 (2020). [CrossRef]  

20. Z. Qin, S. He, C. Yang, J. S.-Y. Yung, C. Chen, C. K.-S. Leung, K. Liu, and J. Y. Qu, “Adaptive optics two-photon microscopy enables near-diffraction-limited and functional retinal imaging in vivo,” Light: Sci. Appl. 9(1), 79 (2020). [CrossRef]  

21. L. Kong and M. Cui, “In vivo neuroimaging through the highly scattering tissue via iterative multi-photon adaptive compensation technique,” Opt. Express 23(5), 6145–6150 (2015). [CrossRef]  

22. C. Rodríguez and N. Ji, “Adaptive optical microscopy for neurobiology,” Curr. Opin. Neurobiol. 50, 83–91 (2018). [CrossRef]  

23. Y. Wang, H. Xu, D. Li, R. Wang, C. Jin, X. Yin, S. Gao, Q. Mu, L. Xuan, and Z. Cao, “Performance analysis of an adaptive optics system for free-space optics communication through atmospheric turbulence,” Sci. Rep. 8(1), 1124 (2018). [CrossRef]  

24. M. Negro, M. Quintavalla, J. Mocci, A. G. Ciriolo, M. Devetta, R. Muradore, S. Stagira, C. Vozzi, and S. Bonora, “Fast stabilization of a high-energy ultrafast OPA with adaptive lenses,” Sci. Rep. 8(1), 14317 (2018). [CrossRef]  

25. G. D. Love, “Wave-front correction and production of Zernike modes with a liquid-crystal spatial light modulator,” Appl. Opt. 36(7), 1517–1524 (1997). [CrossRef]  

26. L. Zhu, P. C. Sun, D. U. Bartsch, W. R. Freeman, and Y. Fainman, “Wave-front generation of Zernike polynomial modes with a micromachined membrane deformable mirror,” Appl. Opt. 38(28), 6019–6026 (1999). [CrossRef]  

27. P. Pozzi, M. Quintavalla, A. B. Wong, J. G. G. Borst, S. Bonora, and M. Verhaegen, “Plug-and-play adaptive optics for commercial laser scanning fluorescence microscopes based on an adaptive lens,” Opt. Lett. 45(13), 3585–3588 (2020). [CrossRef]  

28. P. Rajaeipour, K. Banerjee, A. Dorn, H. Zappe, and Ç. Ataman, “Cascading optofluidic phase modulators for performance enhancement in refractive adaptive optics,” Adv. Photon. 2(06), 066005 (2020). [CrossRef]  

29. M. Ren, J. Chen, D. Chen, and S. Chen, “Aberration-free 3D imaging via DMD-based two-photon microscopy and sensorless adaptive optics,” Opt. Lett. 45(9), 2656–2659 (2020). [CrossRef]  

30. J. Li, D. R. Beaulieu, H. Paudel, R. Barankov, T. G. Bifano, and J. Mertz, “Conjugate adaptive optics in widefield microscopy with an extended-source wavefront sensor,” Optica 2(8), 682–688 (2015). [CrossRef]  

31. S.-H. Baik, S.-K. Park, C.-J. Kim, and B. Cha, "A center detection algorithm for Shack–Hartmann wavefront sensor," Opt. Laser Technol. 39(2), 262–267 (2007). [CrossRef]

32. K. Wang, W. Sun, C. T. Richie, B. K. Harvey, E. Betzig, and N. Ji, "Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue," Nat. Commun. 6, 7276 (2015). [CrossRef]

33. G.-M. Dai, “Model wave-front reconstruction with Zernike polynomials and Karhunen–Loève functions,” J. Opt. Soc. Am. A 13(6), 1218–1225 (1996). [CrossRef]  

34. L. Hu, S. Hu, W. Gong, and Ke Si, “Deep learning assisted Shack–Hartmann wavefront sensor for direct wavefront detection,” Opt. Lett. 45(13), 3741–3744 (2020). [CrossRef]  

35. C.-Y. Chang, B.-T. Ke, H.-W. Su, W.-C. Yen, and S.-J. Chen, “Easily implementable field programmable gate array-based adaptive optics system with state-space multichannel control,” Rev. Sci. Instrum. 84(9), 095112 (2013). [CrossRef]  

36. J. Mocci, M. Quintavalla, A. Chiuso, S. Bonora, and R. Muradore, “PI-shaped LQG control design for adaptive optics systems,” Control Engineering Practice 102, 104528 (2020). [CrossRef]  

37. C. Paterson, I. Munro, and J. C. Dainty, “A low cost adaptive optics system using a membrane mirror,” Opt. Express 6(9), 175–185 (2000). [CrossRef]  

38. B. Dong and M. J. Booth, “Wavefront control in adaptive microscopy using Shack-Hartmann sensors with arbitrarily shaped pupils,” Opt. Express 26(2), 1655–1669 (2018). [CrossRef]  

39. A. V. Kudryashov, A. L. Rukosuev, A. N. Nikitin, I. V. Galaktionov, and J. V. Sheldakova, “Real-time 1.5 kHz adaptive optical system to correct for atmospheric turbulence,” Opt. Express 28(25), 37546–37552 (2020). [CrossRef]  

40. J. Mocci, M. Quintavalla, C. Trestino, S. Bonora, and R. Muradore, “A Multi-platform CPU-Based Architecture for Cost-Effective Adaptive Optics Systems,” IEEE Trans. Ind. Inf. 14(10), 4431–4439 (2018). [CrossRef]  

41. O. Keskin, L. Jolissaint, and C. Bradley, “Hot-air optical turbulence generator for the testing of adaptive optics systems: principles and characterization,” Appl. Opt. 45(20), 4888–4897 (2006). [CrossRef]  

42. M. Paukert and D. E. Bergles, “Reduction of motion artifacts during in vivo two-photon imaging of brain through heartbeat triggered scanning,” J. Physiol. 590(13), 2955–2963 (2012). [CrossRef]  

43. G. F. Franklin, J. D. Powell, and M. Workman, Digital Control of Dynamic Systems, 3rd ed. (Ellis-Kagle, 1998).

44. X. Lei, S. Wang, H. Yan, W. Liu, L. Dong, P. Yang, and B. Xu, “Double-deformable-mirror adaptive optics system for laser beam cleanup using blind optimization,” Opt. Express 20(20), 22143–22157 (2012). [CrossRef]  

Supplementary Material (1)

Visualization 1: Real-time variation of the focus spot under the aberration disturbance, with and without AO compensation.
