
Data-driven estimation, tracking, and system identification of deterministic and stochastic optical spot dynamics


Abstract

Stabilization, disturbance rejection, and control of optical beams and optical spots are ubiquitous problems that are crucial for the development of optical systems for ground and space telescopes, free-space optical communication terminals, precise beam steering systems, and other types of optical systems. High-performance disturbance rejection and control of optical spots require the development of disturbance estimation and data-driven Kalman filter methods. Motivated by this, we propose a unified and experimentally verified data-driven framework for optical-spot disturbance modeling and tuning of covariance matrices of Kalman filters. Our approach is based on covariance estimation, nonlinear optimization, and subspace identification methods. Also, we use spectral factorization methods to emulate optical-spot disturbances with a desired power spectral density in an optical laboratory environment. We test the effectiveness of the proposed approaches on an experimental setup consisting of a piezo tip-tilt mirror, piezo linear actuator, and a CMOS camera.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Stabilization, disturbance rejection, and precise control of optical beams and optical spots are fundamental and ubiquitous problems that appear in a number of applications and optical systems. These problems are crucial for the development of sensing and control systems for space and ground optical telescopes [1–14], free-space optical communication terminals [15,16], turbulence compensation systems [15], precise laser beam steering and jitter compensation systems [17–23], and for other optical instruments and devices [24–29].

Optical spot and wave-front disturbances can originate from various sources and phenomena. In the case of ground telescopes, besides wavefront disturbances originating from the atmosphere, disturbances can originate from the wind and internal sources [7]. Internal disturbance sources include torque ripples in drive motors, friction in telescope axes, and the movement of mechanical components in instruments. According to [7], the power spectrum of wind disturbances contains significant energy in the lower frequency range (from 0.1 to 1 Hz). Since the resonance frequencies of the structure and active mirror systems are in or close to this range, the effect of wind disturbances can be significant. In the case of spacecraft or satellites carrying optical instruments, terminals, and sensors, the disturbances can originate from structural vibrations, actuator movement, and reaction wheel assemblies. These disturbances cause optical jitter. According to [3], the term optical jitter denotes both stochastic image motion and observed dynamic wave-front errors. Control of jitter in optical instruments of space telescopes and satellites is a very challenging task. To illustrate these challenges, according to the authors of [3], the exposure time for the James Webb Space Telescope can be up to 10,000 seconds. During this time interval, the tolerances for the wave-front error and line-of-sight motion are 14 nm and 4 mas, respectively. On the other hand, the target exposure time for the future Roman Space Telescope can be up to 30 hours, with a tolerance for the line-of-sight motion of 0.5 mas [30]. In this manuscript, we are mainly concerned with the data-driven estimation of optical jitter coming from non-atmospheric sources. However, the methods proposed in this manuscript can be generalized to the case of atmospheric turbulence.

Stringent requirements for rejecting optical disturbances and optical jitter call for the development of advanced model-based control and estimation algorithms [1,17,19,31–38]. To develop such algorithms, it is first necessary to develop models and algorithms capable of optimally estimating, tracking, and predicting the spot position one or several samples in the future. The main challenge in developing such models and algorithms comes from the fact that the disturbance dynamics is often a combination of deterministic (periodic) and stochastic components [39]. Widely used methods for estimating and predicting state trajectories and disturbances of dynamical systems in mixed deterministic-stochastic environments are different versions of the Kalman filter [40] and time series methods [41]. It can be shown that widely used time-series models, such as AutoRegressive Moving Average (ARMA) models, can be transformed into a Kalman filter state-space form [42], where the Kalman filter gain matrix is an arbitrary matrix. Consequently, due to its generality, the main focus of this paper is on the Kalman filter state-space model, where the Kalman filter gain is allowed to be suboptimal. Different versions and extensions of the Kalman filter are the backbones of many model-based algorithms, such as, for example, the Linear Quadratic Gaussian (LQG) regulator [43].

When designing the Kalman filter, we are faced with at least two challenges. The first challenge is to develop a sufficiently accurate model of the system. In many cases, the first-principle modeling approach might not produce sufficiently accurate models due to a lack of knowledge of the numerical values of model parameters or a complete lack of knowledge of the structure of equations describing the disturbance dynamics. One indirect approach for dealing with this problem is to develop Kalman filter models that approximate the disturbance dynamics as Newtonian systems. For example, such models assume that the velocity ($\boldsymbol {\alpha }-\boldsymbol {\beta }$ filter) or acceleration ($\boldsymbol {\alpha }-\boldsymbol {\beta }-\boldsymbol {\gamma }$ filter) is constant over a short time period [40]. However, in many real-life scenarios, these assumptions are not met in practice. Despite this, the fidelity of such Newtonian Kalman filter models can be improved by the proper selection and tuning of covariance matrices. However, tuning and estimation of covariance matrices are challenging nonlinear problems. Another approach for dealing with this problem is to employ data-driven techniques capable of updating the initial first-principle models or estimating the disturbance models from scratch by using the collected experimental data. System identification methods are often used for this purpose [44]. However, it is challenging to apply classical system identification methods to this problem since the inputs to disturbance models are often completely unknown.

The second challenge is that the measurement noise and disturbance covariance matrices are often not known a priori or they often change with operating conditions. Since the knowledge of covariance matrices is directly used during the design of Kalman filters, this implies that imprecise information about covariance matrices can significantly limit the performance of designed Kalman filters. This important limiting factor calls for the development of methods for the estimation of covariance matrices directly from the collected experimental data.

Motivated by these challenges, in this manuscript we propose and experimentally verify a unified data-driven framework and numerical tools for the estimation, tracking, and prediction of optical spots by using data-driven Kalman filters. We address the first challenge by adapting, tuning, and experimentally testing the subspace system identification algorithm [45–47] for estimating the Kalman filter models of stochastic disturbances. We address the second challenge by developing a method for estimating the covariance matrices of Kalman filter models. Our covariance estimation approach is partly inspired by the autocovariance least-squares method [48–50] and nonlinear optimization methods. Besides this, we demonstrate that the spectral factorization methods [42,50,51], originally developed in control theory and signal processing communities, can be effective methods for emulating and experimentally producing mechanically-induced jitter disturbances with a desired spectrum in the laboratory environment. We test the effectiveness of the proposed approaches on an experimental setup consisting of a piezo tip-tilt mirror, piezo linear actuator, and a CMOS camera.

Subspace identification methods [44] have been applied to the problem of estimating dynamical models of deformable mirrors used in adaptive optics systems, see for example [52–54] and follow-up works. Furthermore, these methods have been applied to the problem of estimating thermally induced wavefront aberrations in [46,47,55,56]. The viability of using subspace identification methods for estimating atmospheric turbulence is analyzed in [34,57]. To the best of our knowledge, little attention has been dedicated to investigating the viability of using subspace identification methods for building disturbance models of optical jitter coming from non-atmospheric sources.

Jitter disturbances can also be estimated by using adaptive filters, such as the Least Mean Squares (LMS) filter [28,58–60]. However, as clearly pointed out in [60] (see page 141), the convergence of the LMS algorithm is slow in the case of a colored disturbance process. Furthermore, it is difficult to implement the LMS filter for certain classes of stochastic disturbance processes. In sharp contrast, the Kalman state-space modeling approach pursued in this manuscript can deal with a fairly broad class of stochastic disturbances.

This manuscript is organized as follows. In Section 2, we describe the experimental setup and explain the spectral factorization approach for producing the disturbances in the laboratory environment. In Section 3, we present the method for estimating the covariance matrices of Kalman filters. In Section 4, we present the subspace identification method for estimating Kalman filter models directly from the observed experimental data. In Section 5, we present conclusions and briefly discuss future work.

2. Experimental setup and disturbance emulation using spectral factorization approach

In this section, we describe the experimental setup that is used to verify the presented methods. We also explain and adapt the spectral decomposition method [42,50,51] for emulating the desired disturbance spectrum in the laboratory environment.

2.1 Experimental setup

The experimental setup is shown in Fig. 1 below. The red light fiber source is attached to an L-bracket that is attached to a piezo stage. We use a Micronix PP-12 linear piezo stage with a movement range from $0$ to $4$ [mm], a maximal speed of $3$ [mm/s], and an encoder resolution of $20$ [nm]. The piezo stage is controlled by the MMC-10 piezo motor controller. We tuned a PID controller and developed a MATLAB interface for controlling the piezo stage. By moving the piezo stage with the fiber source attached to it, we introduce horizontal optical spot disturbances (jitter) in the system. The lens L1 with a focal length of $25$ [mm] is used to collimate the beam. The beam is reflected from the mirror M1 and the piezo tip-tilt mirror. The piezo tip-tilt mirror is composed of a mirror attached to a tip-tilt piezo stage. We use the nPoint RXY3-276 tip-tilt piezo stage, with a travel range of $\pm 1.5$ [mrad], a resolution of $0.05$ [$\mu$rad], and a resonance frequency of $2400$ [Hz] without the mirror. To control the mirror, we use the LC.402 controller. In this paper, the tip-tilt piezo mirror is also used to introduce disturbances. This active mirror can introduce both horizontal and vertical disturbances in the camera detector plane, as well as a combination of such disturbances. The lens L2 is used to focus the beam onto a CMOS camera that is used as a detector. We use a Thorlabs CMOS camera with the product number CS165MU.

Fig. 1. The experimental setup used to verify the presented methods.

We use both the linear stage and the piezo tip-tilt mirror to introduce disturbances that are detected by the camera as stochastic spot movement. Our goal is to produce disturbances whose spectrum matches the spectrum of optical jitter movement that can be observed in an optical instrument mounted on a spacecraft, satellite, or on a vibrating platform. For that purpose, we need to use the spectral factorization method that is explained in the sequel.

2.2 Disturbance emulation by using the spectral factorization approach

The effect of disturbances on deployed optical systems can be directly measured by extracting time series data from the observed spot positions. Then, the statistical properties of the observed time series can be estimated. Also, disturbance information can be extracted by directly measuring the mechanical vibrations of the structure and system components. During the process of developing new disturbance mitigation methods and tuning existing ones, it is necessary to reproduce such disturbances in a laboratory environment [7,39]. The main challenge is how to create disturbances in a laboratory environment whose statistical properties match the statistical properties of the disturbances affecting the deployed optical systems. Here, we address this important problem.

The statistical properties of disturbances are often described by power spectral density matrices. A power spectral density matrix can be estimated on the basis of the observed disturbance time series by using the methods summarized in [51]. Let $S_{d}(z)$ be a power spectral density matrix that represents the spectral properties of the observed disturbances. Here, $z$ is a complex variable coming from the Z transform [50]. Examples of power spectral densities of real-life ground and space telescopes can be found in Chapter 7 of [7] and in [39,6165]. Having an estimate of $S_{d}(z)$, our main goal is to emulate such a disturbance spectrum in the laboratory environment. Our idea for solving this problem is to use and adapt the spectral factorization method [42,50,51] originally developed in control theory and signal processing communities.

Spectral Decomposition Method. The goal of the spectral decomposition method is to find a transfer function of a linear time-invariant stable system (filter) such that when a white noise sequence is applied to this system, the power spectral density matrix of the output matches $S_{d}(z)$. From the mathematical point of view, we want to determine the transfer function matrix $H(z)$ of the system, such that

$$S_{d}(z)=H(z)S_{v}H^{T}(z^{-1}),$$
where $S_{v}$ is the covariance matrix of the white noise (a diagonal matrix, with the diagonal entries equal to the variances of the white noise inputs). That is, we want to decompose $S_{d}(z)$ in the form given by (1).

Once we determine $H(z)$, we can apply a white noise sequence $\mathbf {v}_{k}$ to compute the filtered output $\mathbf {d}_{k}$

$$\mathbf{d}_{k}= H(q)\mathbf{v}_{k},$$
where $q$ is a shift delay operator [66], $H(q)$ is obtained by substituting $z$ with $q$ in $H(z)$, $\mathbf {d}_{k}$ is the output of the system, and $k$ is a discrete-time instant. Here, it should be emphasized that the power spectral density matrix of the filtered signal $\mathbf {d}_{k}$ matches $S_{d}(z)$.

This implies that under the assumption that the dynamics of the disturbance-generating element (such as a fast steering mirror or a piezo stage) can be neglected, the scaled version of the signal $\mathbf {d}_{k}$ can be used as a reference signal to the disturbance generating element. In this way, we can produce optical jitter disturbances whose spectrum matches $S_{d}(z)$ in the laboratory environment. The scaling factor of $\mathbf {d}_{k}$ can easily be determined.

The dynamics of the disturbance-generating element can be neglected if the sampling period of the sensor in the focal plane is larger than the settling time of the actuator. This is true in our case, since we are using a camera as a detector and its sampling frequency is typically below $100$ [Hz], and the settling time of the disturbance-generating elements is in the several millisecond range.

However, in the case of a fast spot detector or camera, the dynamics of the disturbance-generating element cannot be neglected. In this case, the output signal detected in the detector plane can be modeled as follows $\mathbf {d}_{k}=H_{a}(q) H(q)\mathbf {v}_{k}$, where $H_{a}(q)$ is the transfer function (dynamical model) of the actuator. Consequently, the signal $\mathbf {d}_{k}$ will not have the desired spectrum. We can deal with this problem by either modeling the actuator dynamics $H_{a}(q)$ by using first-principle approaches, or by estimating the actuator dynamics by using system identification methods [44]. Then, we can invert this transfer function and multiply the reference signal by the inverse transfer function to obtain the output signal in the detector plane with the desired power spectral density matrix.
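
To make the inversion step concrete, the following minimal Python sketch pre-filters the reference signal by the inverse actuator dynamics. The coefficient arrays num_a and den_a are hypothetical placeholders for an identified actuator model; the sketch assumes $H_{a}(q)$ is stable and minimum-phase, so that its inverse is also a stable filter.

```python
import numpy as np
from scipy import signal

# Hypothetical identified actuator model H_a(q) = num_a(q)/den_a(q); assumed
# stable and minimum-phase, so that the inverse filter 1/H_a(q) is also stable.
num_a = np.array([0.5, 0.3])    # placeholder numerator coefficients
den_a = np.array([1.0, -0.2])   # placeholder denominator coefficients

rng = np.random.default_rng(0)
d = rng.standard_normal(2000)   # stands in for the desired sequence d_k

# Pre-filter the reference by the inverse actuator dynamics, so that the
# cascade H_a(q)(1/H_a(q))H(q) reproduces the desired spectrum at the detector.
d_ref = signal.lfilter(den_a, num_a, d)
```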

The procedure for emulating the disturbance in the laboratory environment is summarized below.

Step 1: Obtain an estimate of the power spectral density matrix. On the basis of the measurement of the disturbance affecting the deployed system or on the basis of the simulated model of the real system, obtain an estimate of the power spectral density matrix $S_{d}(z)$.

Step 2: Perform spectral factorization. Perform spectral factorization (1) to obtain the transfer function of the filter $H(z)$.

Step 3: Generate a reference signal for the disturbance-generating element. Apply the white noise signal with the covariance matrix $S_{v}$ to the transfer function $H(z)$ to generate the output that is used as the reference signal for the disturbance-generating element. If necessary, scale or additionally modify such a signal to eliminate the effect of the actuator dynamics.

In this manuscript, we use an artificially constructed transfer function to validate the above-presented approach for emulating disturbances. This transfer function in the Laplace domain has the following form

$$\begin{aligned} W& =W_{1}\cdot W_{2} \\ W_{1}& =\frac{10(s+\omega_{n1}^2)}{s^2+2\zeta_{1} \omega_{n1}s + \omega_{n1}^{2}},\;\; W_{2}=\frac{10(s+\omega_{n2}^2)}{s^2+2\zeta_{2} \omega_{n2}s + \omega_{n2}^{2}},\end{aligned}$$
where the parameters are given by $\omega _{n1}=2\pi \cdot 2$, $\zeta _{1}=0.05$, $\omega _{n2}=2\pi \cdot 10$, and $\zeta _{2}=0.05$. The parameters $\omega _{ni}$, $i=1,2$, are called the natural undamped frequencies. On the other hand, the parameters $\zeta _{i}$, $i=1,2$, are called the damping ratios. Figure 2(a) shows the Bode magnitude and the phase plots of the transfer function (3). The transfer function (3) can model the disturbances produced by an elastic structure, external wind disturbances, or internal disturbance sources.

Fig. 2. (a) The Bode diagram of the transfer function (3). (b) The power spectral density of the output $\mathbf {d}_{k}$ of the filter $H(z)$ defined in (4) when the white noise sequence is applied as an input.

Our next goal is to construct the filter $H(z)$. There are at least two approaches to perform this decomposition. The first approach is to discretize (3) and then simulate the discretized transfer function with a white noise input. Then, we can estimate the power spectral density of the output sequence. From the estimated power spectral density, we can compute the function $H(z)$ by using the MATLAB function spectralfact().

The second approach, which we pursue due to its simplicity, is described in the sequel. First, by using a discretization time step of $h=0.025$ seconds and a zero-order hold method [67], we discretize the transfer function (3). Let the discretized transfer function be denoted by $W_{d}(z)$. Then, we compute $S_{d1}=W_{d}(z)W_{d}(1/z)$. By using such $S_{d1}$, we perform the factorization (1) by using the MATLAB function spectralfact(). This produces the following filter $H(z)$ and variance $S_{v}$

$$H(z)=\frac{z^4 - 0.05229 z^3 - 0.3277 z^2 - 0.06164 z - 3.443 \cdot 10^{{-}10}}{z^4 - 1.876 z^3 + 1.831 z^2 - 1.604 z + 0.8282}, \; S_{v}=0.103.$$

Next, we simulate the computed filter (4) with the white noise sequence $\mathbf {v}_{k}$ applied as the input, to obtain the sequence $\mathbf {d}_{k}$ that is used as the reference signal for the linear piezo actuator and the piezo tip-tilt mirror (disturbance generating elements). Figure 2(b) shows the power spectral density of $\mathbf {d}_{k}$.
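
For a script-level view of this step, the following Python sketch (a stand-in for the MATLAB workflow described above) drives the filter (4) with white noise of variance $S_{v}=0.103$ and estimates the power spectral density of the resulting sequence $\mathbf {d}_{k}$ with Welch's method, for comparison against Fig. 2(b). The coefficients are copied from (4), and the sampling step is $h=0.025$ seconds.

```python
import numpy as np
from scipy import signal

h = 0.025    # discretization step [s], as used in the text
Sv = 0.103   # white-noise variance from Eq. (4)

# Filter H(z) from Eq. (4); coefficients in descending powers of z.
num = [1.0, -0.05229, -0.3277, -0.06164, -3.443e-10]
den = [1.0, -1.876, 1.831, -1.604, 0.8282]

# Step 3: drive the filter with a white noise sequence of variance Sv.
rng = np.random.default_rng(42)
v = np.sqrt(Sv) * rng.standard_normal(20000)
d = signal.lfilter(num, den, v)

# Estimate the power spectral density of d_k and compare with Fig. 2(b).
f, Pdd = signal.welch(d, fs=1.0 / h, nperseg=2048)
```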

3. Data-driven tuning of the tracking Kalman filter

In this section, we first present a data-driven approach for estimating the covariance matrices that are necessary for designing and tuning the Kalman filter. Then, we briefly summarize the approximate Kalman filter models for tracking the optical spot position. In the final part of this section, we experimentally verify the developed covariance estimation approach. The approach developed in this section is partly inspired by the autocovariance least-squares method [48] and nonlinear optimization methods.

3.1 Method for estimating the covariance matrices of the Kalman filter

Consider the following state-space model

$$\mathbf{x}_{k+1}=A\mathbf{x}_{k}+G\mathbf{w}_{k},$$
$$\mathbf{y}_{k}=C\mathbf{x}_{k} + \mathbf{v}_{k},$$
where $k$ is a discrete-time instant, $A\in \mathbb {R}^{n\times n}$, $G\in \mathbb {R}^{n\times s}$, and $C\in \mathbb {R}^{r\times n}$ are the system matrices, $\mathbf {x}_{k}\in \mathbb {R}^{n}$ is the state vector, $\mathbf {w}_{k}\in \mathbb {R}^{s}$ is the process disturbance (process noise) vector, $\mathbf {v}_{k}\in \mathbb {R}^{r}$ is the measurement noise vector, and $\mathbf {y}_{k}\in \mathbb {R}^{r}$ is the observed output vector.

The covariance matrix of $\mathbf {w}_{k}$ is denoted by $Q\in \mathbb {R}^{s\times s}$. The covariance matrix of $\mathbf {v}_{k}$ is denoted by $R\in \mathbb {R}^{r\times r}$. We assume that $\mathbf {v}_{k}$ and $\mathbf {w}_{k}$ are uncorrelated. The covariance matrices $Q$ and $R$ are important since they are directly used to design the Kalman filter for the system given by the Eqs. (5) and (6). However, these matrices are usually not known a priori. Our goal is to develop a method for estimating these covariance matrices. In the case of separately tracking the $x$ and $y$ projections of the optical spot center, the vector $\mathbf {y}_{k}$ contains only a single entry that is equal to the observed $x$ or $y$ projection at the discrete-time instant $k$. That is, for every projection, we design a separate Kalman filter. However, in the case of the system identification method that is developed in Section 4, the output vector $\mathbf {y}_{k}$ is two-dimensional and contains both $x$ and $y$ projections at the discrete-time instant $k$.

For the system given by Eqs. (5)–(6), we can design a (suboptimal) observer having the following form

$$\hat{\mathbf{x}}_{k+1|k} = A\hat{\mathbf{x}}_{k|k},$$
$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + L \underbrace{(\mathbf{y}_{k}-C\hat{\mathbf{x}}_{k|k-1})}_{\mathbf{e}_{k}},$$
where $L\in \mathbb {R}^{n\times r}$ is the observer gain. The vector $\hat {\mathbf {x}}_{k|k}$ is the state estimate that takes into account previous state estimates and observed outputs up to the time instant $k$. This vector is also called the a posteriori estimate. The vector $\hat {\mathbf {x}}_{k+1|k}$ is the state estimate at the time instant $k+1$ computed on the basis of the estimates and observed outputs up to the time instant $k$. This vector is called the a priori state estimate. The quantity $\mathbf {e}_{k}\in \mathbb {R}^{r}$
$$\mathbf{e}_{k}=\mathbf{y}_{k}-C\hat{\mathbf{x}}_{k|k-1},$$
that appears in (8), is called the innovation vector. After substituting (8) in (7), we obtain
$$\hat{\mathbf{x}}_{k+1|k} =A\hat{\mathbf{x}}_{k|k-1} + AL \mathbf{e}_{k} .$$

The estimation error is defined as follows

$$\boldsymbol{\varepsilon}_{k}=\mathbf{x}_{k}-\hat{\mathbf{x}}_{k|k-1}.$$

By propagating (11) one time step, and by substituting (5), (6), and (10) in the resulting equation, we obtain

$$\boldsymbol{\varepsilon}_{k+1} = \bar{A} \boldsymbol{\varepsilon}_{k} +\bar{G} \bar{\mathbf{w}}_{k},$$
$$\mathbf{e}_{k} = C \boldsymbol{\varepsilon}_{k}+\mathbf{v}_{k},$$
where
$$\bar{A}=A-ALC,\;\; \bar{G}=\begin{bmatrix} G & -AL \end{bmatrix},\;\; \bar{\mathbf{w}}_{k}=\begin{bmatrix} \mathbf{w}_{k} \\ \mathbf{v}_{k} \end{bmatrix}.$$

Next, we introduce the autocorrelation matrices

$$\mathcal{A}_{j}=E\big[\mathbf{e}_{k} \mathbf{e}_{k+j}^{T} \big], \;\; j=0,1,\ldots, N_{A},$$
where $\mathcal {A}_{j}\in \mathbb {R}^{r\times r}$, and $N_{A}$ is the total number of autocorrelation matrices. It can be shown that under a steady-state assumption [48], we have
$$\mathcal{A}_{0} =CPC^{T}+R,$$
$$\mathcal{A}_{j} =C\bar{A}^{j}PC^{T}-C\bar{A}^{j-1}ALR, \;\; j=1,2,\ldots,N_{A},$$
where $P$ is the steady-state covariance matrix of the estimation error $\boldsymbol {\varepsilon }_{k}$. This covariance matrix satisfies the following equation
$$P=\bar{A}P\bar{A}^{T}+GQG^{T}+ALRL^{T}A^{T}.$$

On the basis of (15), we can estimate the correlation matrices $\mathcal {A}_{j}$ as follows

$$\hat{\mathcal{A}}_{j}=\frac{1}{N_{A}}\sum_{i=0}^{N_{A}-j} \mathbf{e}_{i} \mathbf{e}_{i+j}^{T},$$
where $\hat {\mathcal {A}}_{j}$ denotes an estimate of $\mathcal {A}_{j}$.
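As a minimal illustration, the estimator (19) for a scalar innovation sequence (the case treated in our experiments) can be sketched in Python as follows; e is assumed to be the recorded innovation sequence (9).

```python
import numpy as np

def autocorr_estimates(e, N_A):
    """Estimate A_j = E[e_k e_{k+j}] for j = 0, ..., N_A via Eq. (19),
    for a scalar innovation sequence e (length at least N_A + 1)."""
    return np.array([np.dot(e[:N_A - j + 1], e[j:N_A + 1]) / N_A
                     for j in range(N_A + 1)])
```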

Our idea for estimating the covariance matrices is summarized in the sequel. First, we replace the "true" correlation matrices $\mathcal {A}_{j}$ with their estimates $\hat {\mathcal {A}}_{j}$ in (16) and (17). Then, (16), (17), and (18) represent a system of equations with the unknowns $P$, $Q$, and $R$. The first step in solving such a system is to use (18) to express $P$ as a function of $Q$ and $R$, and then to substitute $P$ in (16) and (17). The resulting equations are then solved by formulating a nonlinear optimization problem with the unknowns $Q$ and $R$.

Let $\otimes$ denote the Kronecker product and let $\text {vec}(\cdot )$ denote the vectorization operator [44]. If $M\in \mathbb {R}^{n\times m}$ is a matrix, then $\text {vec}(M)\in \mathbb {R}^{n\cdot m}$ is an $n\cdot m$ vector obtained by stacking the entries of the matrix $M$ column-wise on top of each other, starting from the first column of $M$. If $Z_{1},Z_{2}$, and $Z_{3}$ are arbitrary matrices of compatible dimensions, then we have

$$\text{vec}(Z_{1}Z_{2}Z_{3})=\big(Z_{3}^{T}\otimes Z_{1}\big) \text{vec}(Z_{2}).$$

By applying the vectorization operator to (18), we obtain

$$\text{vec}(P)=\big(I-\bar{A}_{1} \big)^{{-}1}G_{1}\text{vec}(Q)+\big(I-\bar{A}_{1} \big)^{{-}1}\bar{A}_{2}\text{vec}(R),$$
$$\bar{A}_{1}=\bar{A}\otimes\bar{A}, \;\; G_{1}=G\otimes G, \;\; \bar{A}_{2}=AL\otimes AL.$$

More details and examples of using the vectorization and the Kronecker operators can be found in [68,69]. Then, by applying the vectorization operator to (16) and (17), and by substituting $\text {vec}(P)$ from (21) in the resulting equations, we obtain

$$\hat{\mathbf{a}}_{0}=\text{vec}(\hat{\mathcal{A}}_{0})=C_{1}\big(I-\bar{A}_{1} \big)^{{-}1}G_{1}\text{vec}(Q)+\Big(I+C_{1}\big(I-\bar{A}_{1} \big)^{{-}1}\bar{A}_{2} \Big)\text{vec}(R),$$
$$C_{1}=C\otimes C,$$
and
$$\hat{\mathbf{a}}_{j}=\text{vec}(\hat{\mathcal{A}}_{j})=\bar{A}_{3,j} \big(I-\bar{A}_{1} \big)^{{-}1}G_{1}\text{vec}(Q)+\Big(\bar{A}_{3,j}\big(I-\bar{A}_{1} \big)^{{-}1}\bar{A}_{2}-\bar{A}_{4,j-1} \Big)\text{vec}(R)$$
$$\bar{A}_{3,j}=\big(C\otimes C\bar{A}^{j} \big),\;\; \bar{A}_{4,j-1}=I\otimes \big(C\bar{A}^{j-1}AL \big),\;j=1,2,\ldots, N_{A}.$$

The set of Eqs. (23) and (25) can be written compactly as

$$\hat{\mathbf{a}} = H \mathbf{s},$$
$$\hat{\mathbf{a}} = \begin{bmatrix}\hat{\mathbf{a}}_{0} \\ \hat{\mathbf{a}}_{1} \\ \vdots \\ \hat{\mathbf{a}}_{N_{A}} \end{bmatrix}, H=\begin{bmatrix} C_{1}\big(I-\bar{A}_{1} \big)^{{-}1}G_{1} & I+C_{1}\big(I-\bar{A}_{1} \big)^{{-}1}\bar{A}_{2} \\ \bar{A}_{3,1} \big(I-\bar{A}_{1} \big)^{{-}1}G_{1} & \bar{A}_{3,1}\big(I-\bar{A}_{1} \big)^{{-}1}\bar{A}_{2}-\bar{A}_{4,0} \\ \vdots & \vdots \\ \bar{A}_{3,N_{A}} \big(I-\bar{A}_{1} \big)^{{-}1}G_{1} & \bar{A}_{3,N_{A}}\big(I-\bar{A}_{1} \big)^{{-}1}\bar{A}_{2}-\bar{A}_{4,N_{A}-1} \end{bmatrix}, \mathbf{s}=\begin{bmatrix}\text{vec}(Q) \\ \text{vec}(R) \end{bmatrix}.$$

We determine the covariance matrices whose entries are the entries of the vector $\mathbf {s}$ by solving the following optimization problem

$$\min_{\mathbf{s}} \left\|\hat{\mathbf{a}} - H \mathbf{s} \right\|_{2}^{2},$$
$$\begin{aligned}&\text{subject to:}\\ & Q=Q^{T}, R=R^{T}, \end{aligned}$$
$$Q\ge 0, R \ge 0,$$
$$\mathbf{b}_{1} \le \text{vec}(Q)\le \mathbf{b}_{2},\mathbf{b}_{3} \le \text{vec}(R)\le \mathbf{b}_{4},$$
where $\mathbf {b}_{1}$ and $\mathbf {b}_{2}$ are the lower and upper bounds on the entries of the matrix $Q$, and $\mathbf {b}_{3}$ and $\mathbf {b}_{4}$ are the lower and upper bounds on the entries of the matrix $R$. The constraints in the above optimization problem are used to enforce positive semi-definiteness on the matrices $Q$ and $R$.

In the sequel, we explain how the above-stated optimization problem can be transformed into a form that can be numerically implemented and solved. The main challenge is how to transform the constraints $Q\ge 0$ and $R\ge 0$ into a numerically tractable form. In this manuscript, we decouple the tracking of optical spots in the $x$ and $y$ directions. Consequently, for each direction, the matrix $R$ is actually a scalar. Due to this, the positive semi-definite constraint $R\ge 0$ simply translates to the condition that the scalar $R$ is greater than or equal to zero.

On the other hand, in this paper, we consider three possible cases for the matrix $Q$, and we explain how to implement the constraint $Q\ge 0$; a numerical sketch of the resulting optimization problem follows the three cases below.

Case 1: $Q\in \mathbb {R}^{1\times 1}$. In this case, the disturbance vector $\mathbf {w}_{k}$ is one dimensional. Consequently, $Q$ is a scalar and the constraint $Q\ge 0$ is a scalar constraint that can be directly incorporated into the optimization solver.

Case 2: $Q\in \mathbb {R}^{2\times 2}$. In this case, the disturbance vector $\mathbf {w}_{k}$ is two-dimensional. It is well-known that a symmetric matrix is positive semi-definite if and only if all of its principal minors are greater than or equal to zero. This translates to the following conditions for the entries of the matrix $Q$:

$$q_{11}\ge 0, \;\; q_{22} \ge 0 , \;\; q_{11}q_{22}-q_{12}q_{21}\ge 0,$$
where $q_{11}$, $q_{12}$, $q_{21}$, and $q_{22}$ are the corresponding entries of the matrix $Q$.

Case 3: $Q\in \mathbb {R}^{3\times 3}$. In this case, the disturbance vector $\mathbf {w}_{k}$ is three-dimensional. Similar to the two-dimensional case, the positive semi-definite condition is that all principal minors should be greater than or equal to zero. This translates into the following conditions:

$$q_{11}\ge 0 , q_{22}\ge 0, q_{33}\ge 0,$$
$$q_{22}q_{33}-q_{32}q_{23}\ge 0, q_{11}q_{33}-q_{31}q_{13} \ge 0, q_{11}q_{22}-q_{21}q_{12}\ge 0,$$
$$\text{det}(Q)\ge 0,$$
where $q_{11},q_{12},\ldots, q_{33}$ are the corresponding entries of $Q$, and $\text {det}(Q)$ is the determinant of $Q$.
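
A minimal sketch of the resulting optimization problem for the scalar case (Case 1, which is also the case used in our experiments) is given below. Here, the semi-definiteness constraints reduce to nonnegativity bounds, so the problem (29)–(32) becomes a bound-constrained linear least-squares problem; the regressor matrix is assembled from (22), (24), and (26) with NumPy Kronecker products. For Cases 2 and 3, the minor conditions (33)–(36) would instead enter as nonlinear constraints of a general solver such as scipy.optimize.minimize, mirroring the fmincon() formulation in the text.

```python
import numpy as np
from scipy.optimize import lsq_linear

def build_regressor(A, G, C, L, N_A):
    """Assemble the matrix H of Eq. (28) from Eqs. (22), (24), and (26)
    for the scalar-output, scalar-disturbance case (r = s = 1)."""
    n = A.shape[0]
    Abar = A - A @ L @ C                               # Eq. (14)
    A1, G1 = np.kron(Abar, Abar), np.kron(G, G)        # Eq. (22)
    A2, C1 = np.kron(A @ L, A @ L), np.kron(C, C)      # Eqs. (22), (24)
    M = np.linalg.inv(np.eye(n * n) - A1)              # (I - A1)^{-1}
    rows = [np.hstack([C1 @ M @ G1, 1.0 + C1 @ M @ A2])]        # Eq. (23)
    for j in range(1, N_A + 1):                        # Eq. (25)
        A3 = np.kron(C, C @ np.linalg.matrix_power(Abar, j))
        A4 = C @ np.linalg.matrix_power(Abar, j - 1) @ A @ L    # scalar case
        rows.append(np.hstack([A3 @ M @ G1, A3 @ M @ A2 - A4]))
    return np.vstack(rows)

# Hypothetical usage: a_hat stacks the estimates from Eq. (19).
# H = build_regressor(A, G, C, L, N_A=200)
# sol = lsq_linear(H, a_hat, bounds=(0.0, np.inf))   # enforces Q >= 0, R >= 0
# Q_hat, R_hat = sol.x
```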

3.1.1 Summary of the procedure for estimating the covariance matrices

The procedure for estimating the covariance matrices is summarized below.

Step 1: Design the suboptimal observer gain $L$. Use the pole placement method to compute the suboptimal observer gain $L$ of the observer (10). This step can be performed in MATLAB by using the function place().

Step 2: Estimate the correlation values and solve the optimization problem. Use the designed observer to compute the innovation sequence (9). Estimate the autocorrelation matrices by using (19). Then, solve the optimization problem defined by the Eqs. (29) to (32). The optimization problem can be solved by using the MATLAB function fmincon(). In this step, the estimates of the matrices $Q$ and $R$ are computed.

Step 3: Compute the Kalman filter gain. On the basis of the estimated matrices $Q$ and $R$, compute the Kalman filter gain. This step is performed by solving the Riccati equation for the computed $Q$ and $R$. The Riccati equation can be solved by using the MATLAB Riccati solver dare(). Let the solution of the Riccati equation be denoted by $P_{r}$. Then, the data-driven Kalman filter gain is obtained as follows

$$L_{K}=P_{r}C^{T}\big(CP_{r}C^{T}+R\big)^{-1}.$$

Once the data-driven Kalman filter gain is computed, we can substitute $L_{K}$ instead of the suboptimal gain $L$ in the observer Eq. (10), to obtain an approximate Kalman filter. Here, it is important to emphasize that it is a good practice to iteratively perform steps 2 and 3 of the above-presented procedure. Namely, once we compute $L_{K}$ in step 3, we can use this matrix $L_{K}$ instead of the suboptimal gain $L$ in step 2. In this way, we can get even better estimation results. We follow this strategy to iteratively improve the estimates of the matrices $Q$ and $R$.
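
A hedged sketch of step 3 and the iterative refinement, using SciPy's discrete-time algebraic Riccati solver in its dual (filtering) form, is given below; innovations() and solve_covariances() are hypothetical helpers standing in for Eqs. (9) and (29)–(32).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def kalman_gain(A, G, C, Q, R):
    """Step 3: solve the filtering Riccati equation (dual DARE) for P_r and
    compute the data-driven Kalman filter gain of Eq. (37)."""
    Qt = G @ np.atleast_2d(Q) @ G.T          # process covariance G Q G^T
    Pr = solve_discrete_are(A.T, C.T, Qt, np.atleast_2d(R))
    return Pr @ C.T @ np.linalg.inv(C @ Pr @ C.T + np.atleast_2d(R))

# Iterating steps 2 and 3, each pass restarting from the improved gain:
# for _ in range(10):
#     e = innovations(y, A, C, L)                  # Eq. (9), hypothetical helper
#     a_hat = autocorr_estimates(e, N_A)           # Eq. (19)
#     Q, R = solve_covariances(a_hat, A, G, C, L)  # Eqs. (29)-(32), hypothetical
#     L = kalman_gain(A, G, C, Q, R)
```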

3.2 Tracking Kalman filter

In the previous subsection, we presented the procedure for estimating the covariance matrices. Here, we present equations describing a tracking Kalman filter model that is combined with the covariance estimation procedure. The presented Kalman filter is an adapted version of the three-dimensional $\alpha$-$\beta$-$\gamma$ filter [40].

Let $h>0$ be a small discretization constant that is at the same time the sampling period of the Kalman filter. The filter is defined by the following state-space matrices in (5) and (6):

$$A=\begin{bmatrix} 1 & h & \frac{h^2}{2} \\ 0 & 1 & h \\ 0 & 0 & 1 \end{bmatrix},\; C=\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}, \; G=\begin{bmatrix} \frac{h^{2}}{2} \\ h \\ 1 \end{bmatrix}, Q=\sigma_{w}^{2}, R=\sigma_{v}^{2},$$
where $\sigma _{w}$ and $\sigma _{v}$ are the standard deviations of the disturbance and measurement noise. The first, second, and third state variables of this filter are position, velocity, and acceleration. This filter assumes that the noisy position is measured. The filter matrix $A$ is obtained from simple kinematic equations of a particle. Note that without the disturbance $\mathbf {w}_{k}$, this model assumes that the acceleration is constant. However, this might not hold in practice. Consequently, the process disturbance noise is included to relax the model assumption. The structure of the matrix $G$ is a direct consequence of this assumption.

The matrix $G$ can be integrated into the covariance matrix. Namely, it has been shown that from the Kalman filter perspective, the model (38) is equivalent to the following model

$$\mathbf{x}_{k+1}=A\mathbf{x}_{k}+\tilde{\mathbf{w}}_{k},$$
$$\mathbf{y}_{k}=C\mathbf{x}_{k} + \mathbf{v}_{k},$$
with the covariance matrix of $\tilde {\mathbf {w}}_{k}$ given by
$$\tilde{Q}=GQG^{T},$$
where $Q$ is the covariance matrix of the original disturbance vector $\mathbf {w}_{k}$. By using this idea, we obtain the following covariance matrix
$$\tilde{Q}=\begin{bmatrix}\frac{1}{4}h^4 & \frac{1}{2}h^3 & \frac{1}{2}h^2 \\ \frac{1}{2}h^3 & h^2 & h \\ \frac{1}{2}h^2 & h & 1 \end{bmatrix}\sigma_{w}^{2},\; R=\sigma_{v}^2.$$

However, the form of the covariance matrix given by (42) is not favorable from the estimation perspective. The issue is that the dimension of the estimation problem is increased: 9 entries of the matrix $\tilde {Q}$ have to be estimated, compared to only a single entry of the matrix $Q$ in the case of the model (38). Consequently, in this manuscript, we keep the original formulation of the Kalman filter and the covariance matrices given by (38). That is, only the scalars $\sigma _{w}$ and $\sigma _{v}$ need to be estimated.
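
For completeness, a minimal sketch that assembles the matrices of (38) and runs the observer recursion (7)–(8) on a measured position sequence is shown below. The gain L is assumed to come from the procedure of Section 3.1.1, and $h=0.0177$ matches the camera sampling period used in the experiments.

```python
import numpy as np

h = 0.0177   # camera sampling period [s] used in the experiments
A = np.array([[1.0, h, h**2 / 2],
              [0.0, 1.0, h],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])
G = np.array([[h**2 / 2], [h], [1.0]])

def track(y, L, A=A, C=C):
    """Run the observer recursion (7)-(8) on a measured position sequence y;
    returns a posteriori estimates of position, velocity, and acceleration."""
    x = np.zeros((A.shape[0], 1))   # a priori estimate x_{k|k-1}
    out = []
    for yk in y:
        e = yk - C @ x              # innovation, Eq. (9)
        x_post = x + L @ e          # a posteriori update, Eq. (8)
        out.append(x_post.ravel().copy())
        x = A @ x_post              # a priori prediction, Eq. (7)
    return np.array(out)
```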

3.3 Experimental results

We present the experimental results of using the developed covariance estimation approach and the tracking Kalman filter.

Here, it is important to first test the developed approach on a deterministic signal applied to the disturbance generator. This is because the deterministic signal helps us to better understand and visualize the tracking properties of the developed data-driven Kalman filter. However, even though we apply a deterministic reference signal to the linear piezo stage, the spot observed by the camera is still partly stochastic, since the stage produces additional vibrations. In the next section, we test the identification algorithm on completely stochastic signals generated by the spectral factorization approach.

We apply a sinusoidal reference signal to the linear piezo stage. The reference signal is defined by

$$r(t)=0.1\sin(2t)+0.05\sin(6t)+2.$$

The spot positions are observed by the camera. The mean sampling period of the camera is $0.0177$ seconds. This sampling period slightly varies due to the fact that it is challenging to precisely control the sampling frequency on a Windows computer with MATLAB.

We use the well-known center-of-mass algorithm to detect the spot center. We apply a combination of two $\alpha$-$\beta$-$\gamma$ filters and the developed covariance estimation method to the observed $x$ and $y$ projections of the center point. The matrices $Q$ and $R$ are scalars. In the tracking Kalman filter and for the estimation of the covariance matrices, we use the discretization constant $h=0.0177$ and $N_{A}=200$ autocorrelation coefficients. We start with a suboptimal observer designed for the desired eigenvalues given by the set $\{0.3,0.4,0.5\}$ (step 1 of the procedure summarized in Section 3.1.1). We iteratively perform steps 2 and 3 of the estimation procedure, where in every iteration we start with the previously computed observer gains. We perform 10 iterations.

Figure 3(a) shows the comparison between the observed and estimated $x$ projections of the spot center. Figure 3(b) shows the estimated velocity and acceleration of the spot center. Here it should be kept in mind that we only observe the position. Figure 4 shows the autocorrelation coefficients of the innovation sequence. In an ideal case, the innovation sequence should be a white noise sequence. This means that the autocorrelation coefficient should be $1$ for the lag zero, and otherwise zero or very close to zero. This ideal case is achieved under the assumptions that (1) the covariance matrices are perfectly estimated, (2) the measurement noise and disturbances are white Gaussian noise sequences, and (3) the computed Kalman gain is optimal. Consequently, we can perform a white-noise hypothesis test on the innovation sequence in order to investigate the optimality of the Kalman filter and evaluate the performance of the covariance estimation procedure. The red dashed lines in Fig. 4 represent the limits of the autocorrelation coefficients. If 95 $\%$ of the autocorrelation coefficients are inside the region bounded by the red dashed lines, we can conclude that the innovation sequence is a white noise sequence. In total, 21 out of 200 autocorrelation coefficients exceed the bounds, which is roughly 10 $\%$ of the coefficients. Consequently, we conclude that the residual does not strictly have the white noise property. However, this number is still significantly smaller than the corresponding number for the initial suboptimal observer with the gain $L$. This is a good indication that the proposed data-driven method works in practice and that it is a viable tool for tuning Kalman filters.

Fig. 3. (a) Observed and estimated $x$ projection of the spot center point. (b) Estimated velocity and acceleration of the $x$ projection.

Fig. 4. Autocorrelation coefficients of the innovation sequences. The red dashed lines are limits used for white-noise hypothesis testing.

4. System identification of the Kalman filter state-space models

In the previous section, we used the Kalman filters that are based on approximate first-principle models of the spot dynamics. The previously used models are based on the Newtonian assumption on spot dynamics. By tuning the covariance matrices, we can improve the performance of the filters and partly compensate for the model inaccuracies. However, this approach has limitations originating from the modeling assumptions. In this section, we use a completely different approach for building the models and deriving the Kalman filter state-space models. Instead of assuming the model parameters a priori, we use a data-driven approach to estimate the system model and the (suboptimal) Kalman filter gain directly from the observed spot position time series. The approach used in this section relies upon the subspace identification approach [45,46,70,71].

In particular, we base our identification approach on a modified version of the subspace identification algorithm that is presented in [54]. The main modification is that we eliminate the exogenous inputs from the subspace identification method and we use past outputs as inputs.

4.1 Summary of the subspace identification method

To simplify the notation in (10), we substitute $\hat {\mathbf {x}}_{k|k-1}$ by $\hat {\mathbf {x}}_{k}$. Then, from (10) we obtain the following model

$$\hat{\mathbf{x}}_{k+1} = \bar{A}\hat{\mathbf{x}}_{k}+\tilde{L}\mathbf{y}_{k},$$
$$\mathbf{y}_{k} = C\hat{\mathbf{x}}_{k},$$
where $\bar {A}=A-ALC$ and $\tilde {L}=AL$. We assume that the output vector $\mathbf {y}_{k}$ is two-dimensional with the entries equal to $x$ and $y$ projections of the optical spot center.

Subspace Identification Problem: From the sequence $\{\mathbf {y}_{i}\}_{i=0,1,2,\ldots, N}$ of the observed projections of the spot center point, estimate the model order $n$ and the system matrices $\bar {A}$, $A$, $\tilde {L}$, and $C$ of the state-space model (44) and (45).

To summarize the modified subspace identification algorithm, we need to introduce the following notation. Let $i_{1}$, $i_{2}$, and $l$ be three positive integers corresponding to discrete-time instants. We introduce the following notation:

$$\mathbf{y}_{i_{1},i_{2}}=\begin{bmatrix} \mathbf{y}_{i_{1}} \\ \mathbf{y}_{i_{1}+1} \\ \vdots \\ \mathbf{y}_{i_{2}} \end{bmatrix}, \;\; Y_{i_{1},i_{2}}^{(l)}=\begin{bmatrix} \mathbf{y}_{i_{1},i_{2}} & \mathbf{y}_{i_{1}+1,i_{2}+1} & \ldots & \mathbf{y}_{i_{1}+l,i_{2}+l} \end{bmatrix}.$$

The subspace identification method is presented below.

Step 1: Estimation of the Markov matrices. Choose the past window $p$ and the parameter $l$ as $l=N-p-1$, and estimate the matrix of Markov parameters $M_{p}$ as follows:

$$\hat{M}_{p}= Y_{p,p}^{(l)}\big(Y_{0,p-1}^{(l)}\big)^{\dagger},$$
where the symbol $\dagger$ denotes the matrix pseudo-inverse. The past window $p$ is selected on the basis of the Akaike Information Criterion (AIC), for more details see the experimental results below and [54].

Step 2: Estimate the state sequence. Select the future window parameter $f$, such that $f \le p$, and form the matrix $\hat {\mathcal {M}}$ from the estimated Markov parameters as follows:

$$\hat{\mathcal{M}}=\begin{bmatrix} \hat{M}_{p} \\ 0_{r\times r} \;\;\;\; \hat{M}_{p}(:,1:(p-1)r) \\ 0_{r\times 2r} \;\;\;\; \hat{M}_{p}(:,1:(p-2)r) \\ \vdots \\ 0_{r\times (f-1)r} \;\;\;\; \hat{M}_{p}(:,1:(p-f+1)r) \end{bmatrix}.$$

Then, compute the singular value decomposition [44] of the matrix $\mathcal {D}$

$$\mathcal{D}=\mathcal{U}\Sigma \mathcal{V}^{T},\;\; \mathcal{D}=\hat{\mathcal{M}}Y_{0,p-1}^{(l)}.$$

Select the state order $n$, and compute an estimate of the state sequence as follows

$$\hat{X}_{p,p}^{(l)}=\Sigma(1:n,1:n)^{1/2}\mathcal{V}(1:n,:),$$
where the matrix $\hat {X}_{p,p}^{(l)}$ is defined in a similar manner to the matrix $Y_{i_{1},i_{2}}^{(l)}$ in (46) (the meaning of the subscripts and superscripts is identical), with the difference that the output $\mathbf {y}_{k}$ is replaced by the estimated state $\hat {\mathbf {x}}_{k}$.

Step 3: Estimate the system matrices. First, compute the following matrices

$$\mathcal{X}_{1}=\begin{bmatrix} \hat{X}_{p,p}^{(l-1)} \\ Y_{p,p}^{(l-1)} \end{bmatrix}, \; \mathcal{X}_{2}=\hat{X}_{p+1,p+1}^{(l-1)}\mathcal{X}_{1}^{{\dagger}}.$$

Then, estimate the system matrices as follows

$$\hat{\bar{A}}= \mathcal{X}_{2}(:,1:n),\;\; \hat{C}=Y_{p,p}^{(l)}\Big(\hat{X}_{p,p}^{(l)}\Big)^{{\dagger} },\;\; \hat{\tilde{L}}=\mathcal{X}_{2}(:,n+1:n+r),\; \hat{A}=\hat{\bar{A}}+\hat{\tilde{L}}\hat{C}.$$

In step $2$, we need to select the state order $n$ of the model. We estimate the state order on the basis of the singular value plot of the matrix $\mathcal {D}$. The number of dominant singular values can be used as a good estimate of the state order [44].
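
A compact Python sketch of Steps 1–3 is given below. It is a minimal reimplementation under the stated notation, not the exact code used in our experiments: hankel() builds the block-Hankel matrices of (46), pseudo-inverses are computed with np.linalg.pinv, and Y is assumed to be an $(N+1)\times r$ array of observed spot projections.

```python
import numpy as np

def hankel(Y, i1, i2, l):
    """Build Y_{i1,i2}^{(l)} of Eq. (46); Y is an (N+1) x r output array and
    each column stacks the outputs y_{i1+k}, ..., y_{i2+k} for k = 0, ..., l."""
    return np.hstack([Y[i1 + k : i2 + k + 1].reshape(-1, 1)
                      for k in range(l + 1)])

def subspace_id(Y, p, f, n):
    """Steps 1-3 of the subspace identification method; a minimal sketch."""
    N, r = Y.shape[0] - 1, Y.shape[1]
    l = N - p - 1
    # Step 1: Markov parameters, Eq. (47).
    Mp = hankel(Y, p, p, l) @ np.linalg.pinv(hankel(Y, 0, p - 1, l))
    # Step 2: block matrix of Eq. (48), SVD of D in Eq. (49), states in Eq. (50).
    Mcal = np.vstack([np.hstack([np.zeros((r, j * r)), Mp[:, : (p - j) * r]])
                      for j in range(f)])
    D = Mcal @ hankel(Y, 0, p - 1, l)
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Xhat = np.diag(np.sqrt(s[:n])) @ Vt[:n, :]
    # Step 3: system matrices, Eqs. (51) and (52).
    X1 = np.vstack([Xhat[:, :-1], hankel(Y, p, p, l - 1)])
    X2 = Xhat[:, 1:] @ np.linalg.pinv(X1)
    Abar, Ltil = X2[:, :n], X2[:, n : n + r]
    Chat = hankel(Y, p, p, l) @ np.linalg.pinv(Xhat)
    Ahat = Abar + Ltil @ Chat
    return Ahat, Abar, Ltil, Chat
```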

4.2 Experimental results of applying the subspace identification algorithm

Here, we present the experimental tests of the subspace identification algorithm.

In the first case, we apply two inputs to the piezo tip-tilt mirror in Fig. 1. These inputs are designed by using the spectral factorization method explained in Section 2. The power spectral density of these discrete-time signals is shown in Fig. 2(b). We collect 4000 images by using the camera. By using the center-of-mass algorithm, we extract the $x$ and $y$ coordinates of the spots. We split the collected data into two sequences of length $2000$ each. The first sequence is used to identify the model and is called the identification data set. The second sequence is used to validate the model and to properly choose the model parameters, and is called the validation sequence.

Figure 5(a) shows the identification and validation time series. We use the AIC value to estimate the past window $p$ that defines the Markov matrix $\hat {M}_{p}$ in (47). The final estimate is the value of $p$ for which the AIC value is smallest, for more details see [54]. Figure 5(b) shows the plot of AIC values. We select the past window $p=39$ and the future window $f=38$. Figure 5(c) shows the singular value plot of the matrix $\mathcal {D}$. We can observe that there is a gap in the singular values around $i=27$. Consequently, our state order estimate is $n=27$. We can also observe a significant gap around $i=74$. However, such a large state order increases the variance of the estimated model; that is, it overfits the data. Consequently, we selected the smaller state order of $n=27$ to prevent overfitting.

Fig. 5. Identification results for disturbances generated by the piezo tip-tilt mirror. (a) Identification and validation time-series used to identify the model. The output is the $x$ projection of the spot center. (b) The AIC value as the function of the past window $p$ used to estimate the Markov parameter matrix. (c) Singular values of the matrix $\mathcal {D}$ defined in (49).

After we estimate the model, we validate the model performance. Figure 6(a) shows the output predicted by the model ("Predicted output") and the observed output ("Real output"). We can observe that the identified Kalman filter is able to accurately track the output. Figure 6(b) shows the autocorrelation function of the error between the predicted output and the observed output. In an ideal case, this autocorrelation function should match the autocorrelation function of a white-noise sequence. That is, if the error sequence is a white noise sequence, the autocorrelation value for the lag of $0$ should be equal to $1$, and all other autocorrelation coefficients should be in the region limited by the red dashed lines (see the experimental part of Section 3 for more details on the white-noise hypothesis test). In our case, $26$ out of $100$ coefficients are outside the limits. This indicates that our model can still be improved. This can be achieved by changing the combination of the past window, future window, and state order parameters. Despite this, Fig. 6(a) shows that our model can accurately track the output. This is also confirmed by the high Variance Accounted For (VAF) value of $97 \%$ for the estimated model. Finally, Fig. 6(c) shows the eigenvalues of the estimated matrix $\hat {A}$ (open-loop matrix) and the eigenvalues of the estimated matrix $\hat {\bar {A}}$ (closed-loop matrix). From this eigenvalue plot, we can observe that both the open-loop and closed-loop Kalman observer systems are asymptotically stable, since all the eigenvalues are inside the unit circle, which is the stability boundary for discrete-time systems.

Fig. 6. Identification results for disturbances generated by the piezo tip-tilt mirror. (a) The output predicted by the estimated model ("Predicted output") and observed output ("Real output"). (b) The autocorrelation function of the identification error computed on the basis of the predicted output and observed output. (c) The eigenvalues of the estimated matrix $\hat {A}$ ("Open loop") and the eigenvalues of the estimated matrix $\hat {\bar {A}}$ ("Closed loop").

Next, we test the subspace identification algorithm on a data set generated by applying the disturbance signals to the linear piezo stage in Fig. 1. The system identification results are shown in Fig. 7 and Fig. 8. The figure captions are analogous to those of Figs. 5 and 6. In this case, the estimated parameters are: past window $p=40$, future window $f=23$, and model order $n=45$. The variance accounted for is $58\%$. We have $8$ autocorrelation coefficients out of $100$ that exceed the white-noise autocorrelation limits. We can observe that the linear piezo stage induces higher-order dynamics compared to the piezo tip-tilt mirror. This is because the linear piezo stage together with the bracket has less damped dynamics than the piezo tip-tilt mirror. The estimated model deliberately does not fit the higher-order random oscillations, since fitting them decreases the model quality verified on the validation data set.

Fig. 7. Identification results for disturbances generated by the linear piezo stage. (a) Identification and validation time-series used to identify the model. The output is the $x$ projection of the spot center. (b) The AIC value as the function of the past window $p$ used to estimate the Markov parameter matrix. (c) Singular values of the matrix $\mathcal {D}$ defined in (49).

Fig. 8. Identification results for disturbances generated by the linear piezo stage. (a) The output predicted by the estimated model ("Predicted output") and observed output ("Real output"). (b) The autocorrelation function of the identification error computed on the basis of the predicted output and observed output. (c) The eigenvalues of the estimated matrix $\hat {A}$ ("Open loop") and the eigenvalues of the estimated matrix $\hat {\bar {A}}$ ("Closed loop").

Overall, in both cases, the identification results are good and clearly demonstrate the great potential of the subspace identification method for directly estimating Kalman filter models of the stochastic spot dynamics.

5. Conclusions and future work

In this manuscript, we developed a unified data-driven Kalman filter approach for covariance estimation and system identification of the stochastic dynamics of the optical spot position. We experimentally demonstrated that after covariance matrices are estimated, approximate first-principle Kalman filter models can be an effective tool for tracking the spot dynamics. Then, we experimentally demonstrated the great potential of the subspace identification methods for directly estimating the Kalman filter models of spot dynamics from the observed time series. In future work, we will investigate and compare the performance of other types of system identification methods for estimating spot dynamics. Also, we will use the framework developed in this paper to develop optimal controllers for suppressing the disturbance dynamics.

Funding

Goddard Space Flight Center (80NSSC22PB172).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. Wie, Q. Liu, and F. Bauer, “Classical and robust $H_{\infty}$ control redesign for the Hubble Space Telescope,” J. Guid. Control. Dyn. 16(6), 1069–1077 (1993). [CrossRef]  

2. S.-M. Moon, D. G. Cole, and R. L. Clark, “Real-time implementation of adaptive feedback and feedforward generalized predictive control algorithm,” J. Sound Vib. 294(1-2), 82–96 (2006). [CrossRef]  

3. T. T. Hyde, K. Q. Ha, J. D. Johnston, J. M. Howard, and G. E. Mosier, “Integrated modeling activities for the James Webb Space Telescope: optical jitter analysis,” Proc. SPIE 5487, 588–599 (2004). [CrossRef]  

4. D. Schwartz, K. Feigum, P. M. Thompson, and C. Briggs, “Integrated modeling structural tools for the Giant Magellan Telescope design effort,” in Aerospace Conference (IEEE, 2018), pp. 1–14.

5. L. Meza, F. Tung, S. Anandakrishnan, V. Spector, and T. Hyde, “Line of sight stabilization of James Webb Space Telescope,” in 27th Annual AAS Guidance and Control Conference (2005), AAS-05-002, pp. 17–30.

6. S.-M. Moon, R. L. Clark, and D. G. Cole, “The recursive generalized predictive feedback control: theory and experiments,” J. Sound Vib. 279(1-2), 171–199 (2005). [CrossRef]  

7. P. Bely, The Design and Construction of Large Optical Telescopes (Springer, 2003).

8. G. E. Mosier, J. M. Howard, J. D. Johnston, K. A. Parrish, T. T. Hyde, M. A. McGinnis, A. M. Bluth, K. Kim, and K. Q. Ha, “The role of integrated modeling in the design and verification of the James Webb Space Telescope,” Proc. SPIE 5528, 96–107 (2004). [CrossRef]  

9. D. S. Acton, J. S. Knight, A. Contos, S. Grimaldi, J. Terry, P. Lightsey, A. Barto, B. League, B. Dean, J. S. Smith, C. Bowers, D. Aronstein, L. Feinberg, W. Hayden, T. Comeau, R. Soummer, E. Elliot, M. Perrin, and C. W. Starr Jr., “Wavefront sensing and controls for the James Webb Space Telescope,” Proc. SPIE 8442, 877–887 (2012). [CrossRef]  

10. D. S. Acton, J. S. Knight, T. Chonis, L. Coyle, K. Smith, E. Coppock, C.-P. Lajoie, M. Perrin, C. Wells, and J. B. Hadaway, “Wavefront sensing and controls demo during the cryo-vac testing of JWST,” Proc. SPIE 10698, 1091–1100 (2018). [CrossRef]  

11. D. C. Redding, S. A. Basinger, A. E. Lowman, A. Kissil, P. Y. Bely, R. Burg, R. G. Lyon, G. E. Mosier, M. Femiano, M. E. Wilson, G. Schunk, L. Craig, D. Jacobson, and J. Rakoczy, “Wavefront sensing and control for a next-generation space telescope,” Proc. SPIE 3356, 758–772 (1998). [CrossRef]  

12. C. Li, X. He, Q. Ji, X. Zhang, and K. Fan, “Theoretical and experimental study on the testing accuracy of the image stabilization system of a space astronomical telescope,” Appl. Opt. 59(22), 6658–6670 (2020). [CrossRef]  

13. V. Preda, J. Cieslak, D. Henry, S. Bennani, and A. Falcoz, “Robust microvibration mitigation and pointing performance analysis for high stability spacecraft,” Int. J. Robust Nonlinear Control. 28(18), 5688–5716 (2018). [CrossRef]  

14. D. J. Eaton, R. A. Whittlesey, B. W. Allen, R. Stoll, L. Abramowicz-Reed, and M. Margulies, “On-orbit performance of the Hubble Space Telescope fine guidance sensors,” Appl. Opt. 32(10), 1689–1695 (1993). [CrossRef]  

15. Y. Liang, X. Su, C. Cai, L. Wang, J. Liu, H. Wang, and J. Wang, “Adaptive turbulence compensation and fast auto-alignment link for free-space optical communications,” Opt. Express 29(24), 40514–40523 (2021). [CrossRef]  

16. H. Kaushal and G. Kaddoum, “Optical communication in space: Challenges and mitigation techniques,” IEEE Commun. Surv. Tutor. 19(1), 57–96 (2017). [CrossRef]  

17. N. Tsuchiya, S. Gibson, T.-C. Tsao, and M. Verhaegen, “Receding-horizon adaptive control of laser beam jitter,” IEEE/ASME Trans. Mechatron. 21(1), 227–237 (2016). [CrossRef]  

18. H. Yoon, B. E. Bateman, and B. N. Agrawal, “Laser beam jitter control using recursive-least-squares adaptive filters,” J. Guid. Control. Dyn. 133(4), 1–8 (2011). [CrossRef]  

19. M. J. Beerer, H. Yoon, and B. N. Agrawal, “Practical adaptive filter controls for precision beam pointing and tracking with jitter attenuation,” Control. Eng. Pract. 21(1), 122–133 (2013). [CrossRef]  

20. E. S. Ahn, R. W. Longman, J. J. Kim, and B. N. Agrawal, “Evaluation of five control algorithms for addressing cmg induced jitter on a spacecraft testbed,” J. Astronaut. Sci. 60(3-4), 434–467 (2013). [CrossRef]  

21. Q. Zhou, P. Ben-Tzvi, D. Fan, and A. A. Goldenberg, “Design of fast steering mirror systems for precision laser beams steering,” in International Workshop on Robotic and Sensors Environments (IEEE, 2008), pp. 144–149.

22. V. A. Skormin, M. A. Tascillo, and T. E. Busch, “Adaptive jitter rejection technique applicable to airborne laser communication systems,” Opt. Eng. 34(5), 1263–1268 (1995). [CrossRef]  

23. R. J. Watkins and B. N. Agrawal, “Use of least means squares filter in control of optical beam jitter,” J. Guid. Control Dyn. 30(4), 1116–1122 (2007). [CrossRef]  

24. Y. Kong and H. Huang, “Vibration isolation and dual-stage actuation pointing system for space precision payloads,” Acta Astronaut. 143, 183–192 (2018). [CrossRef]  

25. L. Li, L. Tan, L. Kong, D. Wang, and H. Yang, “The influence of flywheel micro vibration on space camera and vibration suppression,” Mech. Syst. Signal Process. 100, 360–370 (2018). [CrossRef]  

26. S.-B. Chen, M. Xuan, L. Zhang, S. Gu, X.-X. Gong, and H.-Y. Sun, “Simulating and testing microvibrations on an optical satellite using acceleration sensor-based jitter measurements,” Sensors 19(15), C1 (2019). [CrossRef]  

27. N. Chen, B. Potsaid, J. T. Wen, S. Barry, and A. Cable, “Modeling and control of a fast steering mirror in imaging applications,” in International Conference on Automation Science and Engineering (IEEE, 2010), pp. 27–32.

28. K. Patterson, J. Shields, X. Wang, H. Tang, A. Azizi, P. Brugarolas, M. Mandic, and F. Shi, “Control design for momentum-compensated fast steering mirror for WFIRST-AFTA coronagraph instrument,” Proc. SPIE 9605, 678–695 (2015). [CrossRef]  

29. E. Csencsics and G. Schitter, “System design and control of a resonant fast steering mirror for Lissajous-based scanning,” IEEE/ASME Trans. Mechatron. 22(5), 1963–1972 (2017). [CrossRef]

30. B. Mennesson, V. P. Bailey, R. Zellem, et al., “The Roman Space Telescope coronagraph technology demonstration: current status and relevance to future missions,” Proc. SPIE 12180, 659–671 (2022). [CrossRef]  

31. P. Massioni, H.-F. Raynaud, C. Kulcsár, and J.-M. Conan, “An approximation of the Riccati equation in large-scale systems with application to adaptive optics,” IEEE Trans. Control Syst. Technol. 23(2), 479–487 (2015). [CrossRef]

32. J. Tesch, S. Gibson, and M. Verhaegen, “Receding-horizon adaptive control of aero-optical wavefronts,” Opt. Eng. 52(7), 071406 (2013). [CrossRef]  

33. A. Haber and T. Bifano, “Dual-update data-driven control of deformable mirrors using Walsh basis functions,” J. Opt. Soc. Am. A 39(3), 459–469 (2022). [CrossRef]  

34. K. Hinnen, M. Verhaegen, and N. Doelman, “A data-driven H2-optimal control approach for adaptive optics,” IEEE Trans. Control Syst. Technol. 16(3), 381–395 (2008). [CrossRef]  

35. A. Polo, A. Haber, S. F. Pereira, M. Verhaegen, and H. P. Urbach, “Linear phase retrieval for real-time adaptive optics,” J. Eur. Opt. Soc.-Rapid 8, 13070 (2013). [CrossRef]  

36. A. Haber, A. Polo, C. S. Smith, S. F. Pereira, P. Urbach, and M. Verhaegen, “Iterative learning control of a membrane deformable mirror for optimal wavefront correction,” Appl. Opt. 52(11), 2363–2373 (2013). [CrossRef]  

37. A. Haber and M. Verhaegen, “Framework to trade optimality for local processing in large-scale wavefront reconstruction problems,” Opt. Lett. 41(22), 5162–5165 (2016). [CrossRef]  

38. A. Polo, A. Haber, S. F. Pereira, M. Verhaegen, and H. P. Urbach, “An innovative and efficient method to control the shape of push-pull membrane deformable mirror,” Opt. Express 20(25), 27922–27932 (2012). [CrossRef]  

39. K.-C. Liu, P. Maghami, and C. Blaurock, “Reaction wheel disturbance modeling, jitter analysis, and validation tests for Solar Dynamics Observatory,” in AIAA Guidance, Navigation and Control Conference and Exhibit (2008), p. 7232.

40. D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches (John Wiley & Sons, 2006).

41. H. Lütkepohl, Introduction to Multiple Time Series Analysis (Springer, 2013).

42. B. D. O. Anderson and J. B. Moore, Optimal Filtering (Courier Corporation, 2012).

43. F. L. Lewis, D. Vrabie, and V. L. Syrmos, Optimal Control (John Wiley & Sons, 2012).

44. M. Verhaegen and V. Verdult, Filtering and System Identification: A Least Squares Approach (Cambridge University Press, 2007).

45. A. Haber, “Subspace identification of temperature dynamics,” arXiv:1908.02379 (2019). [CrossRef]

46. A. Haber, J. E. Draganov, K. Heesh, J. Tesch, and M. Krainak, “Modeling and system identification of transient STOP models of optical systems,” Opt. Express 28(26), 39250–39265 (2020). [CrossRef]  

47. A. Haber, J. E. Draganov, and M. Krainak, “Subspace identification of low-dimensional structural-thermal-optical-performance (STOP) models of reflective optics,” Proc. SPIE 12215, 39–54 (2022). [CrossRef]

48. B. J. Odelson, M. R. Rajamani, and J. B. Rawlings, “A new autocovariance least-squares method for estimating noise covariances,” Automatica 42(2), 303–308 (2006). [CrossRef]  

49. M. R. Rajamani and J. B. Rawlings, “Estimation of the disturbance structure from data using semidefinite programming and optimal weighting,” Automatica 45(1), 142–148 (2009). [CrossRef]  

50. T. Kailath, A. Sayed, and B. Hassibi, Linear Estimation (Prentice Hall, 2000).

51. P. Stoica and R. L. Moses, Spectral Analysis of Signals (Prentice Hall, 2005).

52. A. Chiuso, R. Muradore, and E. Marchetti, “Dynamic calibration of adaptive optics systems: A system identification approach,” IEEE Trans. Control Syst. Technol. 18(3), 705–713 (2010). [CrossRef]  

53. H. Song, R. Fraanje, G. Schitter, G. Vdovin, and M. Verhaegen, “Controller design for a high-sampling-rate closed-loop adaptive optics system with piezo-driven deformable mirror,” Eur. J. Control 17(3), 290–301 (2011). [CrossRef]  

54. A. Haber and M. Verhaegen, “Modeling and state-space identification of deformable mirrors,” Opt. Express 28(4), 4726–4740 (2020). [CrossRef]  

55. A. Haber, A. Polo, S. Ravensbergen, H. P. Urbach, and M. Verhaegen, “Identification of a dynamical model of a thermally actuated deformable mirror,” Opt. Lett. 38(16), 3061–3064 (2013). [CrossRef]  

56. A. Haber, A. Polo, I. Maj, S. F. Pereira, H. P. Urbach, and M. Verhaegen, “Predictive control of thermally induced wavefront aberrations,” Opt. Express 21(18), 21530–21541 (2013). [CrossRef]  

57. K. Hinnen, M. Verhaegen, and N. Doelman, “Exploiting the spatiotemporal correlation in adaptive optics using data-driven H2-optimal control,” J. Opt. Soc. Am. A 24(6), 1714–1725 (2007). [CrossRef]  

58. F. Shi, J. Shields, T. Truong, B.-J. Seo, F. Fregoso, B. Kern, D. Marx, K. Patterson, C. M. Prada, J. Shaw, and C. Shelton, “WFIRST low order wavefront sensing and control (LOWFS/C) performance on line-of-sight disturbances from multiple reaction wheels,” Proc. SPIE 11117, 170–178 (2019). [CrossRef]

59. F. Shi, E. Cady, B.-J. Seo, et al., “Dynamic testbed demonstration of WFIRST coronagraph low order wavefront sensing and control (LOWFS/C),” Proc. SPIE 10400, 74–90 (2017). [CrossRef]  

60. B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications (John Wiley & Sons, 2013).

61. S. Mitani, Y. Kawakatsu, S. Sakai, N. Murakami, T. Yamawaki, T. Mizutani, K. Komatsu, H. Kataza, K. Enya, and T. Nakagawa, “Precision pointing control for SPICA: risk mitigation phase study,” Proc. SPIE 9143, 1253–1264 (2014). [CrossRef]  

62. N. Yoshida, O. Takahara, and K. Kodeki, “Spacecraft with very high pointing stability: Experiences and lessons learned,” in 19th IFAC Symposium on Automatic Control in Aerospace, vol. 46 (Elsevier, 2013), pp. 547–552.

63. C. Dennehy and O. S. Alvarez-Salazar, “Spacecraft micro-vibration: a survey of problems, experiences, potential solutions, and some lessons learned,” Tech. rep., NASA (2018).

64. N. Pedreiro, “Spacecraft architecture for disturbance-free payload,” J. Guid. Control Dyn. 26(5), 794–804 (2003). [CrossRef]  

65. N. Pedreiro, “Next generation space telescope pointing stability,” in AIAA Guidance, Navigation, and Control Conference and Exhibit (2000), AIAA Paper 4543, pp. 1–12.

66. L. Ljung, System Identification (Springer, 1998).

67. N. S. Nise, Control Systems Engineering (John Wiley & Sons, 2020).

68. A. Haber and M. Verhaegen, “Sparse solution of the Lyapunov equation for large-scale interconnected systems,” Automatica 73, 256–268 (2016). [CrossRef]  

69. A. Haber and M. Verhaegen, “Sparsity preserving optimal control of discretized PDE systems,” Comput. Methods Appl. Mech. Eng. 335, 610–630 (2018). [CrossRef]

70. A. Haber and M. Verhaegen, “Subspace identification of large-scale interconnected systems,” IEEE Trans. Automat. Contr. 59(10), 2754–2759 (2014). [CrossRef]  

71. A. Haber, F. Pecora, M. U. Chowdhury, and M. Summerville, “Identification of temperature dynamics using subspace and machine learning techniques,” in Dynamic Systems and Control Conference, vol. 59155 (ASME, 2019), p. V002T24A003.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (8)

Fig. 1. The experimental setup used to verify the presented methods.

Fig. 2. (a) The Bode diagram of the transfer function (3). (b) The power spectral density of the output $\mathbf {d}_{k}$ of the filter $H(z)$ defined in (4) when a white-noise sequence is applied as the input.

Fig. 3. (a) Observed and estimated $x$ projection of the spot center point. (b) Estimated velocity and acceleration of the $x$ projection.

Fig. 4. Autocorrelation coefficients of the innovation sequences. The red dashed lines are the limits used for white-noise hypothesis testing.

Fig. 5. Identification results for disturbances generated by the piezo tip-tilt mirror. (a) Identification and validation time series used to identify the model. The output is the $x$ projection of the spot center. (b) The AIC value as a function of the past window $p$ used to estimate the Markov parameter matrix. (c) Singular values of the matrix $\mathcal {D}$ defined in (49).

Fig. 6. Identification results for disturbances generated by the piezo tip-tilt mirror. (a) The output predicted by the estimated model ("Predicted output") and the observed output ("Real output"). (b) The autocorrelation function of the identification error, computed from the predicted and observed outputs. (c) The eigenvalues of the estimated matrix $\hat {A}$ ("Open loop") and of the estimated matrix $\hat {\bar {A}}$ ("Closed loop").

Fig. 7. Identification results for disturbances generated by the linear piezo stage. (a) Identification and validation time series used to identify the model. The output is the $x$ projection of the spot center. (b) The AIC value as a function of the past window $p$ used to estimate the Markov parameter matrix. (c) Singular values of the matrix $\mathcal {D}$ defined in (49).

Fig. 8. Identification results for disturbances generated by the linear piezo stage. (a) The output predicted by the estimated model ("Predicted output") and the observed output ("Real output"). (b) The autocorrelation function of the identification error, computed from the predicted and observed outputs. (c) The eigenvalues of the estimated matrix $\hat {A}$ ("Open loop") and of the estimated matrix $\hat {\bar {A}}$ ("Closed loop").

Equations (52)


$S_d(z) = H(z)\, S_v\, H^T(z^{-1})$ (1)

$\mathbf{d}_{k} = H(q)\, \mathbf{v}_{k},$ (2)

$W = W_1 W_2, \quad W_1 = \frac{10 (s + \omega_{n1}^2)}{s^2 + 2 \zeta_1 \omega_{n1} s + \omega_{n1}^2}, \quad W_2 = \frac{10 (s + \omega_{n2}^2)}{s^2 + 2 \zeta_2 \omega_{n2} s + \omega_{n2}^2},$ (3)

$H(z) = \frac{z^4 - 0.05229 z^3 - 0.3277 z^2 - 0.06164 z - 3.443 \times 10^{-10}}{z^4 - 1.876 z^3 + 1.831 z^2 - 1.604 z + 0.8282}, \quad S_v = 0.103.$ (4)
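To emulate such a disturbance in software, white noise can be filtered through the spectral factor (4). Below is a minimal Python sketch of this step; the sequence length, random seed, and unit sampling rate are illustrative assumptions, and the coefficient signs follow the reconstruction of (4) above.

```python
import numpy as np
from scipy import signal

# Coefficients of the spectral factor H(z) in Eq. (4) (signs as reconstructed above).
num = [1.0, -0.05229, -0.3277, -0.06164, -3.443e-10]
den = [1.0, -1.876, 1.831, -1.604, 0.8282]
Sv = 0.103                                      # white-noise variance from Eq. (4)

rng = np.random.default_rng(0)                  # seed chosen arbitrarily
v = rng.normal(scale=np.sqrt(Sv), size=20000)   # white input v_k
d = signal.lfilter(num, den, v)                 # colored disturbance d_k, Eq. (2)

# Sanity check: the estimated PSD of d should approximate S_d(z) in Eq. (1).
f, Pdd = signal.welch(d, fs=1.0, nperseg=2048)  # fs = 1 is a placeholder sampling rate
```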
$x_{k+1} = A x_k + G w_k,$ (5)

$y_k = C x_k + v_k,$ (6)

$\hat{x}_{k+1|k} = A \hat{x}_{k|k},$ (7)

$\hat{x}_{k|k} = \hat{x}_{k|k-1} + L \underbrace{\left( y_k - C \hat{x}_{k|k-1} \right)}_{e_k},$ (8)

$e_k = y_k - C \hat{x}_{k|k-1},$ (9)

$\hat{x}_{k+1|k} = A \hat{x}_{k|k-1} + A L e_k.$ (10)

$\varepsilon_k = x_k - \hat{x}_{k|k-1}.$ (11)

$\varepsilon_{k+1} = \bar{A} \varepsilon_k + \bar{G} \bar{w}_k,$ (12)

$e_k = C \varepsilon_k + v_k,$ (13)

$\bar{A} = A - A L C, \quad \bar{G} = \begin{bmatrix} G & -A L \end{bmatrix}, \quad \bar{w}_k = \begin{bmatrix} w_k \\ v_k \end{bmatrix}.$ (14)
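Equations (7)–(10) define the filter recursion that generates the innovation sequence used below. As an illustration, a minimal numpy sketch of one recursion step, assuming a fixed gain $L$:

```python
import numpy as np

def kalman_innovation_step(x_pred, y, A, C, L):
    """One step of the recursion in Eqs. (7)-(10).

    x_pred : a priori estimate x_hat_{k|k-1}
    y      : measurement y_k
    L      : Kalman gain (held fixed here for simplicity)
    """
    e = y - C @ x_pred        # innovation e_k, Eq. (9)
    x_filt = x_pred + L @ e   # a posteriori estimate, Eq. (8)
    x_next = A @ x_filt       # one-step prediction, Eq. (7)
    return x_next, x_filt, e
```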
$\mathcal{A}_j = E\!\left[ e_k e_{k+j}^T \right], \quad j = 0, 1, \ldots, N_A,$ (15)

$\mathcal{A}_0 = C P C^T + R,$ (16)

$\mathcal{A}_j = C \bar{A}^j P C^T - C \bar{A}^{j-1} A L R, \quad j = 1, 2, \ldots, N_A,$ (17)

$P = \bar{A} P \bar{A}^T + G Q G^T + A L R L^T A^T.$ (18)

$\hat{\mathcal{A}}_j = \frac{1}{N_A} \sum_{i=0}^{N_A - j} e_i e_{i+j}^T,$ (19)

$\operatorname{vec}(Z_1 Z_2 Z_3) = (Z_3^T \otimes Z_1) \operatorname{vec}(Z_2).$ (20)

$\operatorname{vec}(P) = (I - \bar{A}_1)^{-1} G_1 \operatorname{vec}(Q) + (I - \bar{A}_1)^{-1} \bar{A}_2 \operatorname{vec}(R),$ (21)

$\bar{A}_1 = \bar{A} \otimes \bar{A}, \quad G_1 = G \otimes G, \quad \bar{A}_2 = A L \otimes A L.$ (22)

$\hat{a}_0 = \operatorname{vec}(\hat{\mathcal{A}}_0) = C_1 (I - \bar{A}_1)^{-1} G_1 \operatorname{vec}(Q) + \left( I + C_1 (I - \bar{A}_1)^{-1} \bar{A}_2 \right) \operatorname{vec}(R),$ (23)

$C_1 = C \otimes C,$ (24)

$\hat{a}_j = \operatorname{vec}(\hat{\mathcal{A}}_j) = \bar{A}_{3,j} (I - \bar{A}_1)^{-1} G_1 \operatorname{vec}(Q) + \left( \bar{A}_{3,j} (I - \bar{A}_1)^{-1} \bar{A}_2 - \bar{A}_{4,j-1} \right) \operatorname{vec}(R),$ (25)

$\bar{A}_{3,j} = \left( C \otimes C \bar{A}^j \right), \quad \bar{A}_{4,j-1} = I \otimes \left( C \bar{A}^{j-1} A L \right), \quad j = 1, 2, \ldots, N_A.$ (26)

$\hat{a} = \mathcal{H} s,$ (27)

$\hat{a} = \begin{bmatrix} \hat{a}_0 \\ \hat{a}_1 \\ \vdots \\ \hat{a}_{N_A} \end{bmatrix}, \quad \mathcal{H} = \begin{bmatrix} C_1 (I - \bar{A}_1)^{-1} G_1 & I + C_1 (I - \bar{A}_1)^{-1} \bar{A}_2 \\ \bar{A}_{3,1} (I - \bar{A}_1)^{-1} G_1 & \bar{A}_{3,1} (I - \bar{A}_1)^{-1} \bar{A}_2 - \bar{A}_{4,0} \\ \vdots & \vdots \\ \bar{A}_{3,N_A} (I - \bar{A}_1)^{-1} G_1 & \bar{A}_{3,N_A} (I - \bar{A}_1)^{-1} \bar{A}_2 - \bar{A}_{4,N_A - 1} \end{bmatrix}, \quad s = \begin{bmatrix} \operatorname{vec}(Q) \\ \operatorname{vec}(R) \end{bmatrix}.$ (28)
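In practice, the autocovariances in (15)–(17) are replaced by the sample estimates (19), which are then stacked into the vector $\hat{a}$ of (27)–(28). A short numpy sketch of this estimation step; the innovation sequence is assumed to contain more than $N_A$ samples:

```python
import numpy as np

def empirical_autocovariances(e, N_A):
    """Sample autocovariances A_j of Eq. (19) from innovations e (shape (N, r),
    with N > N_A), stacked into the vector a_hat of Eqs. (27)-(28)."""
    A_hat = [
        sum(np.outer(e[i], e[i + j]) for i in range(N_A - j + 1)) / N_A
        for j in range(N_A + 1)
    ]
    # vec() in Eqs. (23) and (25) denotes column-major vectorization.
    a_hat = np.concatenate([Aj.flatten(order="F") for Aj in A_hat])
    return A_hat, a_hat
```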
$\min_{s} \; \left\| \hat{a} - \mathcal{H} s \right\|_2^2,$ (29)

$\text{subject to: } Q = Q^T, \; R = R^T,$ (30)

$Q \succeq 0, \; R \succeq 0,$ (31)

$b_1 \leq \operatorname{vec}(Q) \leq b_2, \quad b_3 \leq \operatorname{vec}(R) \leq b_4,$ (32)

$q_{11} \geq 0, \quad q_{22} \geq 0, \quad q_{11} q_{22} - q_{12} q_{21} \geq 0,$ (33)

$q_{11} \geq 0, \quad q_{22} \geq 0, \quad q_{33} \geq 0,$ (34)

$q_{22} q_{33} - q_{32} q_{23} \geq 0, \quad q_{11} q_{33} - q_{31} q_{13} \geq 0, \quad q_{11} q_{22} - q_{21} q_{12} \geq 0,$ (35)

$\det(Q) \geq 0,$ (36)
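The problem (29)–(32) is a convex least-squares problem with semidefinite constraints. As an illustration, the sketch below solves it with cvxpy, whose PSD variable declaration enforces the symmetry and positive-semidefiniteness conditions (30)–(31) directly, and thereby the principal-minor tests (33)–(36); the box constraints (32) can be appended to the constraint list if needed.

```python
import cvxpy as cp

def solve_als(a_hat, H, nq, nr):
    """Sketch of the constrained least-squares problem (29)-(31) with cvxpy.

    a_hat : stacked autocovariance vector of Eq. (28)
    H     : coefficient matrix of Eq. (28)
    nq,nr : dimensions of Q and R
    """
    Q = cp.Variable((nq, nq), PSD=True)    # symmetric and Q >= 0: Eqs. (30)-(31)
    R = cp.Variable((nr, nr), PSD=True)
    s = cp.hstack([cp.vec(Q), cp.vec(R)])  # s = [vec(Q); vec(R)], Eq. (28)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(a_hat - H @ s)))
    prob.solve()
    return Q.value, R.value
```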
$L_K = P_r C^T \left( C P_r C^T + R \right)^{-1}.$ (37)

$A = \begin{bmatrix} 1 & h & \frac{h^2}{2} \\ 0 & 1 & h \\ 0 & 0 & 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}, \quad G = \begin{bmatrix} \frac{h^2}{2} \\ h \\ 1 \end{bmatrix}, \quad Q = \sigma_w^2, \quad R = \sigma_v^2,$ (38)

$x_{k+1} = A x_k + \tilde{w}_k,$ (39)

$y_k = C x_k + v_k,$ (40)

$\tilde{Q} = G Q G^T,$ (41)

$\tilde{Q} = \begin{bmatrix} \frac{1}{4} h^4 & \frac{1}{2} h^3 & \frac{1}{2} h^2 \\ \frac{1}{2} h^3 & h^2 & h \\ \frac{1}{2} h^2 & h & 1 \end{bmatrix} \sigma_w^2, \quad R = \sigma_v^2.$ (42)

$r(t) = 0.1 \sin(2t) + 0.05 \sin(6t) + 2.$ (43)
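Constructing the constant-acceleration model (38) and its equivalent process-noise form (39)–(42) is mechanical; a small numpy helper, taking the sampling period $h$ and the noise standard deviations as inputs:

```python
import numpy as np

def constant_acceleration_model(h, sigma_w, sigma_v):
    """Matrices of Eq. (38) and the equivalent noise covariances of Eqs. (41)-(42)."""
    A = np.array([[1.0, h, h**2 / 2],
                  [0.0, 1.0, h],
                  [0.0, 0.0, 1.0]])
    C = np.array([[1.0, 0.0, 0.0]])
    G = np.array([[h**2 / 2], [h], [1.0]])
    Q_tilde = (G @ G.T) * sigma_w**2   # Eq. (42): G Q G^T with Q = sigma_w^2
    R = np.array([[sigma_v**2]])
    return A, C, G, Q_tilde, R
```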
$\hat{x}_{k+1} = \bar{A} \hat{x}_k + \tilde{L} y_k,$ (44)

$y_k = C \hat{x}_k,$ (45)

$\mathbf{y}_{i_1, i_2} = \begin{bmatrix} y_{i_1} \\ y_{i_1 + 1} \\ \vdots \\ y_{i_2} \end{bmatrix}, \quad Y_{i_1, i_2}^{(l)} = \begin{bmatrix} \mathbf{y}_{i_1, i_2} & \mathbf{y}_{i_1 + 1, i_2 + 1} & \cdots & \mathbf{y}_{i_1 + l, i_2 + l} \end{bmatrix}.$ (46)

$\hat{M}_p = Y_{p,p}^{(l)} \left( Y_{0, p-1}^{(l)} \right)^{\dagger},$ (47)

$\hat{M} = \begin{bmatrix} \hat{M}_p \\ 0_{r \times r} \;\; \hat{M}_p(:, 1:(p-1)r) \\ 0_{r \times 2r} \;\; \hat{M}_p(:, 1:(p-2)r) \\ \vdots \\ 0_{r \times (f-1)r} \;\; \hat{M}_p(:, 1:(p-f+1)r) \end{bmatrix}.$ (48)

$\mathcal{D} = U \Sigma V^T, \quad \mathcal{D} = \hat{M} Y_{0, p-1}^{(l)}.$ (49)

$\hat{X}_{p,p}^{(l)} = \Sigma(1:n, 1:n)^{1/2}\, V^T(1:n, :),$ (50)

$X_1 = \begin{bmatrix} \hat{X}_{p,p}^{(l-1)} \\ Y_{p,p}^{(l-1)} \end{bmatrix}, \quad X_2 = \hat{X}_{p+1, p+1}^{(l-1)} X_1^{\dagger}.$ (51)

$\hat{\bar{A}} = X_2(:, 1:n), \quad \hat{C} = Y_{p,p}^{(l)} \left( \hat{X}_{p,p}^{(l)} \right)^{\dagger}, \quad \hat{\tilde{L}} = X_2(:, n+1:n+r), \quad \hat{A} = \hat{\bar{A}} + \hat{\tilde{L}} \hat{C}.$ (52)
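The steps (46)–(52) translate almost line by line into numpy. The sketch below is a simplified rendering under two stated assumptions: the shifted state sequence $\hat{X}_{p+1,p+1}^{(l-1)}$ in (51) is approximated by the time-shifted columns of $\hat{X}_{p,p}^{(l)}$, and no window or model-order selection (the AIC step of Figs. 5 and 7) is included.

```python
import numpy as np

def data_matrix(y, i1, i2, l):
    """Y_{i1,i2}^{(l)} of Eq. (46); column j stacks y_{i1+j}, ..., y_{i2+j}. y is (N, r)."""
    cols = [y[i1 + j:i2 + j + 1].reshape(-1) for j in range(l + 1)]
    return np.column_stack(cols)

def subspace_id(y, p, f, n, l):
    """Sketch of Eqs. (47)-(52). y: (N, r) outputs; p, f: past/future windows;
    n: model order; l: number of data columns (requires N > p + l and f <= p)."""
    r = y.shape[1]
    Y_past = data_matrix(y, 0, p - 1, l)              # Y_{0,p-1}^{(l)}
    Y_p = data_matrix(y, p, p, l)                     # Y_{p,p}^{(l)}
    M_p = Y_p @ np.linalg.pinv(Y_past)                # Markov parameters, Eq. (47)

    # Zero-padded, shifted copies of M_p form the block matrix of Eq. (48).
    M_hat = np.vstack([np.hstack([np.zeros((r, i * r)), M_p[:, :(p - i) * r]])
                       for i in range(f)])

    D = M_hat @ Y_past                                # Eq. (49)
    U, S, Vt = np.linalg.svd(D, full_matrices=False)
    X_pp = np.diag(np.sqrt(S[:n])) @ Vt[:n, :]        # state sequence, Eq. (50)

    # Eq. (51); here X_{p+1,p+1}^{(l-1)} is approximated by shifting the
    # columns of X_{p,p}^{(l)} one step in time.
    X1 = np.vstack([X_pp[:, :-1], Y_p[:, :-1]])
    X2 = X_pp[:, 1:] @ np.linalg.pinv(X1)
    A_bar, L_tilde = X2[:, :n], X2[:, n:]             # Eq. (52)
    C = Y_p @ np.linalg.pinv(X_pp)
    A = A_bar + L_tilde @ C
    return A, A_bar, C, L_tilde
```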