
Increasing optical pose estimation accuracy via freeform design and its application to hand-eye calibration


Abstract

For robot-assisted assembly of complex optical systems, the alignment is facilitated by an accurate pose estimation of its components. However, wavefront-based pose estimation is typically ill-conditioned due to the inherent geometry of conventional industrially manufactured optical components. Therefore, we propose a novel approach in this paper to increase wavefront-based pose estimation accuracy via the design of freeform optics. For this purpose, an optimization problem is derived that parameterizes the component’s surfaces by a predetermined freeform surface model. To show the efficacy of our approach, we provide simulation results to compare the pose estimation accuracy for a variety of optical designs. As an application example for the resulting improved pose estimation, a hand-eye calibration of a wavefront sensor is performed. This calibration originates from the field of robotics and represents the identification of a sensor coordinate system with respect to a global reference frame. For quantitative evaluation, the calibrating results are first presented with the aid of simulation data. Finally, the practical feasibility is demonstrated using a conventional industrial robot and additively manufactured freeform lenses.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In recent years, increased efforts have been made to automate the assembly of optical components based on wavefront sensing [1–5]. Some approaches even estimate the poses of individual components during the assembly process, which makes them general and universally applicable [6–10]. However, for automated, high-precision assembly, the definition of a suitable reference coordinate system is essential. The placement of individual optical components at their previously defined nominal positions can only be guaranteed if the exact position of a suitable reference coordinate frame is determined in advance.

A common procedure in the field of robotics for defining such a reference coordinate system is the so-called hand-eye calibration. It provides the possibility to precisely determine the transformations between imaging systems and the robot, as well as between the imaging systems and the end effector coordinate system of the robot. An overview of popular algorithms for solving the hand-eye calibration can be found in [11]. However, these conventional approaches are mostly applicable to imaging systems. For wavefront sensors (e.g. Shack-Hartmann sensors), which have been successfully investigated in the past for pose identification of optical components [9,12], these approaches are not directly applicable. In addition, both wavefront-based and image-based [13,14] assembly processes of optical components are mathematically ill-conditioned due to the inherent geometry of conventional, industrially manufactured optical components [6,7,10,15]. This makes the identification of the reference coordinate system a non-trivial problem.

In the past, various wavefront-based methods have been developed for estimating the state of individual components in optical systems with respect to a previously defined reference frame. The most intuitive and simple approach is a linear one, which is valid for small step sizes and incremental placement [16–18]. Unfortunately, due to the nonlinear nature of the pose estimation of optical components, this method is very sensitive to local minima and likely to diverge. Other approaches use nonlinear and global optimization techniques to address the problem of local minima and local operating points by minimizing a suitable cost function [1,2]. However, these methods do not account for uncertainties caused by, e.g., the positioning system or measurement noise. Further research on robust pose estimation of optical components, which takes the previously mentioned uncertainties into account, was undertaken by applying well-known nonlinear Bayesian filtering methods such as the Extended Kalman filter (EKF), the Unscented Kalman filter (UKF), and the particle filter (PF) [6,10,15].

Although these approaches proved to be suitable for pose identification and assembly, they all suffer from mathematical ill-conditioning, which typically stems from the optical components’ geometry (e.g. rotational symmetry) and degrades the quality of the identification result. It should be noted that this problem is universal in nature and therefore affects all the mentioned identification techniques and consequently most conventional optical components. This, together with the fact that the rotational symmetry of optical components entails a lack of excitation in the estimation process, is why hand-eye calibration could not be applied to wavefront sensors in the past.

To remedy this situation, this paper presents a novel approach, whose workflow is shown in Fig. 1. It aims to perform a hand-eye calibration of a wavefront sensor by improving the previously mentioned ill-conditioning using optimized freeform optics (see Sec. 2.). Although the approach described in this paper is also applicable to other optical components such as mirrors, the freeform parameterization of lenses is described here. The workflow of the new approach for hand-eye calibration of wavefront sensors is divided into three parts (see Fig. 1). In order to follow the basic structure of this paper, these parts are briefly outlined below in the sequential order shown in Fig. 1.

  • 1 Freeform design (Sec. 2.2/2.3)

    Lens surfaces are parameterized and subsequently optimized in such a way that all six degrees of freedom are equally excited with respect to wavefront-based pose estimation tasks. For this purpose, the surfaces are parameterized by models derived from the well-known SAG equation [19]. Furthermore, the condition number of the sensitivity matrix (introduced in Sec. 2.1) of the freeform lens is used as a quality measure for quantitative evaluation during optimization.

  • 2 Pose estimation (Sec. 2.1)

    Due to the change in geometry, the optimized lens is designed to provide more robust and accurate estimates of all six spatial degrees of freedom in wavefront-based pose estimation.

  • 3 Hand-eye calibration (Sec. 3.)

    Since the pose estimation now yields unique information on all six spatial degrees of freedom, a hand-eye calibration of the wavefront sensor can be performed. This means that the coordinate system of the wavefront sensor is known with respect to the positioning system, allowing the wavefront sensor to be used as a reference system in the subsequent assembly process.

After the last step, the freeform lens is no longer needed and the result is accurate knowledge of the coordinate system of the wavefront sensor, which can be used for initial positioning of optical components in the assembly process. This allows for more accurate initial positioning of optical components in the subsequent assembly process, which addresses the problem of local minima in linear and nonlinear approaches, improves the convergence of filtering methods (EKF, UKF, PF, etc.), and generally increases the accuracy within the assembly process itself. Each of the steps mentioned above is explained in detail in the following sections.

Fig. 1. Workflow and integration of hand-eye calibration of wavefront sensors within the assembly process of optical systems

2. Pose estimation and freeform design

This section will present the prerequisite knowledge required for hand-eye calibration of wavefront sensors. For this purpose, the fundamental concepts of pose estimation of optical components and the design of freeform lenses are depicted.

2.1 Pose estimation

First, we assume that an optical system consists of a laser light source illuminating a single component (associated with a pose vector $\mathbf {x} \in \mathbb {R}^6$ comprising translation and rotation) in the optical train, and a detector with output $\mathbf {z}$. This setup is shown schematically in Fig. 2. In this paper, we choose the well-known Zernike coefficients as the sensor output, which can be obtained by wavefront sensors. The pose of the component maps to the output of the detector via

$$\mathbf{z} = \mathbf{h}(\mathbf{x}, \mathbf{p}),$$
where $\mathbf {h}$ is (in general) a nonlinear mapping and $\mathbf {p}$ is a parameter vector, which describes the geometric properties of the optical component. Linearizing the mapping $\mathbf {h}$ w.r.t. the component’s pose yields
$$\frac{\partial \mathbf{z}}{\partial \mathbf{x}} = \frac{\partial \mathbf{h}(\mathbf{x}, \mathbf{p})}{\partial \mathbf{x}} \approx \mathbf{S}(\mathbf{p}) \hspace{0.5cm} \Rightarrow \hspace{0.5cm} \partial \mathbf{z} = \mathbf{S}(\mathbf{p}) \cdot \partial \mathbf{x},$$
where $\mathbf {S} \in \mathbb {R}^{n_z \times n_x}$ is the Jacobian, commonly known as the sensitivity matrix in the field of optics. The field that studies the impact of input changes (here: the optical component’s pose) on output changes of an arbitrary measurement quantity (here: Zernike coefficients) is commonly known as sensitivity analysis. Since there is in general no closed-form solution for $\mathbf {h}$, the sensitivity matrix needs to be calculated by numerical differentiation and can be obtained either experimentally or by simulation [10]. For an experimental evaluation of the sensitivity matrix, the optical components in the system have to be incrementally displaced in each spatial degree of freedom, and the Zernike coefficients $\mathbf {z}$ are then obtained from wavefront sensor measurements, e.g. by using a Shack-Hartmann sensor. For a simulative evaluation, the displacement is performed virtually, so that a decomposition of the wavefront into Zernike coefficients can be readily obtained from the simulation environment. Although experimental evaluation may more accurately reflect the inherent sensitivity of the optical parameters by accounting for uncertainties and deviations, the initial position of the optical component is often unknown in advance. For this reason, and for performance reasons, the simulative approach is utilized within this paper.
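To make the numerical differentiation concrete, the following minimal sketch (in Python) assembles the sensitivity matrix by central differences. The callable `h` stands in for an arbitrary mapping from component pose to Zernike coefficients, whether simulated or experimental; its name and the default step size are illustrative assumptions rather than part of the original implementation.

```python
import numpy as np

def sensitivity_matrix(h, x0, p, delta=1e-4):
    """Approximate S(p) = dh/dx at the nominal pose x0 by central differences.

    h     : callable (x, p) -> vector of Zernike coefficients (n_z,)
    x0    : nominal pose (6,), translations and rotations
    p     : geometric parameters of the optical component
    delta : displacement step per degree of freedom (assumed value)
    """
    z0 = np.asarray(h(x0, p), dtype=float)
    n_z, n_x = z0.size, len(x0)
    S = np.zeros((n_z, n_x))
    for j in range(n_x):
        dx = np.zeros(n_x)
        dx[j] = delta
        z_plus = np.asarray(h(x0 + dx, p), dtype=float)
        z_minus = np.asarray(h(x0 - dx, p), dtype=float)
        S[:, j] = (z_plus - z_minus) / (2.0 * delta)
    return S
```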

Fig. 2. Schematic layout of wavefront-based pose estimation

The general goal of the pose estimation is to determine the inverse relationship $\mathbf {h}^{-1}$, i.e. to infer the pose $\mathbf {x}$ from the measurement $\mathbf {z}$. The sensitivity matrix forms the basis for many linear, nonlinear gradient-based, and Bayesian filter-based state estimation tasks in optical assembly processes [6,8–10,15] and is therefore needed for accurate pose identification of optical components. Most importantly, $\mathbf {S(p)}$ forms the basis for the freeform surface optimization in this work and will be used extensively in subsequent sections.
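As an illustration of this inverse relationship in its simplest, linearized form, the hedged sketch below recovers a pose correction from a measured Zernike deviation via the Moore-Penrose pseudo-inverse of $\mathbf{S}$. It is only meant to show why the rank and conditioning of the sensitivity matrix matter; it is not the filtering approach used later in this work.

```python
import numpy as np

def pose_increment(S, dz):
    """Least-squares pose correction dx from the Zernike deviation dz = z_meas - z_nom.

    Solves dz ~ S @ dx in the least-squares sense; a rank-deficient or
    ill-conditioned S leaves degrees of freedom unobservable or highly
    sensitive to measurement noise.
    """
    dx, _, rank, singular_values = np.linalg.lstsq(S, dz, rcond=None)
    return dx, rank, singular_values
```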

2.2 Optimization problem

A good correspondence between changes of the optical component’s pose and changes of the Zernike coefficients is crucial for an accurate pose identification. To avoid the above mentioned ill-conditioning of the sensitivity matrix or even prevent its rank-loss, optical components with individually parameterized freeform surfaces can be employed. In order to adapt these underlying parameters to the alignment problem, the following requirements for the sensitivity matrix $\mathbf {S(p)}$ should hold:

  • 1. Full rank

    Rotational symmetry and surface ambiguities of simple optical components may result in rank loss of the sensitivity matrix. This in turn violates the criterion of observability, so that successful state estimation cannot be guaranteed [20,21]. Ambiguities must therefore be avoided, e.g. by applying astigmatism to the freeform surfaces, so that the constraint for the optimization

    $$\text{rank}(\mathbf{S(p)}) = 6$$
    holds.

  • 2. Small condition number

    The aforementioned ill-conditioning leads to numerical issues during the identification process. Therefore, a small condition number is sought via the optimization criterion

    $$\min_{\mathbf{p}} \kappa(\mathbf{S(p)}),$$
    which leads to more robust identification results and improves the overall estimation accuracy. Intuitively speaking, every spatial degree of freedom of $\mathbf {x}$ is equally excited with respect to the measured values $\mathbf {z}$, if the condition number equals one. This results from the fact that the condition number of a matrix represents the quotient of the maximum singular value $\sigma _{max}$ to the minimum singular value $\sigma _{min}$.

  • 3. Large singular values

    Besides the criterion of equal order of magnitude, the absolute size of all singular values of $\mathbf {S}$ also has a decisive influence on the accuracy of the identification result. Uniformly large singular values indicate a good excitation of the measurement quantities caused by stimulation of each spatial degree of freedom. This behavior can be achieved by introducing

    $$\min_{\mathbf{p}} \frac{1}{\sigma_{min}(\mathbf{S(p)})},$$
    which minimizes the reciprocal of the smallest singular value $\sigma _{min}(\mathbf {S(p)})$ and extends the optimization criterion of Eq. (2).

The chosen requirements for the optimization of freeform surfaces are summarized in Eqs. (1), (2), and (3). As mentioned in Sec. 2.1, $\mathbf {p}$ denotes a vector of the geometric properties of the optical component used. In this case, it contains the parameters of the arbitrarily chosen freeform surfaces, which must be tuned in the optimization procedure in such a way that the said requirements are sufficiently fulfilled; it is thus decisively responsible for the geometric appearance of the surfaces. To keep the parameters within feasible bounds, the inequality constraints
$$\mathbf{p}_l \leq \mathbf{p} \leq \mathbf{p}_u$$
are introduced in Eq. (4), where $\mathbf {p}_l$ and $\mathbf {p}_u$ represent the lower and upper parameter limits, respectively. It should be mentioned that these boundaries directly depend on the chosen mathematical model of the freeform surface.

In order to determine a suitable parameter vector $\mathbf {p}^*$, an optimization criterion

$$\mathbf{p}^* = \underset{\mathbf{p}}{\arg\min}\;J(\mathbf{S(p)}),$$
$$\text{s.t. rank} (\mathbf{S(p)}) = 6$$
can be formulated taking into account Eqs. (1)–(4). Here, the cost function is defined as
$$J(\mathbf{S}) = \mathbf{w}^T \begin{pmatrix} \kappa(\mathbf{S(p)}) \\ \frac{1}{\sigma_{min}(\mathbf{S(p)})} \end{pmatrix} \text{,}$$
where $\mathbf {w}$ represents the weighting vector of the different quality measures for the optimization process. In addition, the objective function was modified so that rank loss is penalized via the additional constraint depicted in Eq. (1). Since the gradient of the objective function Eq. (7) is not analytically available, derivative-free optimization techniques or methods relying on numerically approximated gradients need to be employed, e.g. Nelder-Mead simplex search, particle swarm optimization, genetic algorithms, Levenberg-Marquardt, or damped least-squares. The interested reader is referred to [22] for a comparative study on such solvers. In this paper, the simplex search method of [23] is used, which is a direct search method that does not use numerical or analytic gradients. However, it should be noted that the algorithm is not guaranteed to converge to a global minimum.
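A minimal sketch of this optimization loop is given below. It assumes a user-supplied function `build_S` that maps a surface parameter vector to the (simulated) sensitivity matrix, e.g. via the central-difference sketch of Sec. 2.1, and uses SciPy's Nelder-Mead implementation purely as a stand-in for the simplex search of [23]. The weights, penalty value, and solver options are assumptions, not the values used in this work.

```python
import numpy as np
from scipy.optimize import minimize

def cost(p, build_S, w=(1.0, 1.0), rank_penalty=1e6):
    """Cost J(S(p)) of Eq. (7): weighted sum of the condition number and the
    reciprocal of the smallest singular value; rank loss (Eq. (6)) is
    discouraged by returning a large penalty value."""
    S = build_S(p)
    if np.linalg.matrix_rank(S) < 6:
        return rank_penalty
    sigma = np.linalg.svd(S, compute_uv=False)
    return w[0] * (sigma.max() / sigma.min()) + w[1] / sigma.min()

# Hypothetical usage: p0 is an initial surface parameter vector (e.g. p_poly)
# and build_S evaluates the sensitivity matrix for a candidate design.
# res = minimize(cost, p0, args=(build_S,), method="Nelder-Mead",
#                options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-6})
# p_star = np.clip(res.x, p_l, p_u)   # crude handling of the bounds of Eq. (4)
```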

2.3 Freeform surface design

Currently, there are many parametric models that have been used to describe freeform surfaces of optical components. Freeform surfaces that can be described with only a few parameters include, among others, the Cubic Spline, Polynomial, Extended Polynomial, Cylinder Fresnel, Grid Gradient, and Zernike Polynomial surfaces [24]. However, some of these models are not suitable for this application. The Cubic Spline exhibits second-order differentiability, which leads to good manufacturing properties; however, its individual parameters can only manipulate the surface in segments and are not able to break the required rotational symmetry, which in turn leads to poor identification results. For this reason, three surface models were chosen in this paper, which meet the said requirements but can also be described by a reasonable number of parameters (a short evaluation sketch of these models is given after the list):

  • 1. Polynomial

    In Cartesian space, polynomial freeform surfaces are defined by

    $$z(x, y) = \sum_{i=0}^n \zeta_i x^i + \gamma_i y^i \text{,}$$
    where $x$ and $y$ represent the indeterminates of a two-dimensional polynomial, $\zeta$ and $\gamma$ denote the respective coefficients, and $n$ is the polynomial order of the surface. However, for odd exponents $i$, the symmetry of the surface is destroyed, which leads to large deviations from the optical path and therefore significantly deteriorates the optical properties of the lens. Furthermore, it is reasonable to limit the number of parameters in order to reduce the problem of local minima. Therefore, a modified polynomial model
    $$z_f(x,y) = \zeta_{2,f}x^{2} + \zeta_{4,f}x^4 + \gamma_{2,f}y^2 + \gamma_{4,f}y^4$$
    $$z_b(x,y) = \zeta_{2,b}x^2 + \zeta_{4,b}x^4 + \gamma_{2,b}y^2 + \gamma_{4,b}y^4$$
    is used, where $z_f$ and $z_b$ denote the height of the front and the back surface of the lens, respectively. The resulting parameter vector for the entire lens optimization finally reads
    $$\mathbf{p}_{poly} = (l, \zeta_{2,f}, \zeta_{4,f}, \dots, \gamma_{4,b}) \text{,}$$
    where $l$ is the distance between $z_f$ and $z_b$ and therefore represents the thickness of the lens.

  • 2. Extended polynomial

    The Extended Polynomial surface is similar to the Polynomial surface, but introduces more coefficients. Its general form can be written as

    $$z(x,y) = \frac{cr^2}{1+\sqrt{1-(1+k)c^2r^2}} + \sum_{i=0}^n \gamma_ie_i(x,y) \text{,}$$
    where $e_i$ represents the polynomial expansion series of $x$ and $y$ with corresponding coefficients $\gamma _i$. The first term of Eq. (12) is part of the well-known SAG equation for aspheric optical surfaces [19] with conic constant $k$, radial distance from the optical axis $r$, and the reciprocal of the radius of curvature $c$. Similar to Eqs. (9) and (10), the reduced surface equations for the front and back are obtained in the same manner
    $$z_f(x,y) = \gamma_{0,f}x + \gamma_{1,f}y + \gamma_{2,f}x^2 + \gamma_{3,f}xy + \gamma_{4,f}y^2$$
    $$z_b(x,y) = \gamma_{0,b}x + \gamma_{1,b}y + \gamma_{2,b}x^2 + \gamma_{3,b}xy + \gamma_{4,b}y^2 \text{,}$$
    which leads to the parameter vector
    $$\mathbf{p}_{ext} = (l, \gamma_{0,f}, \gamma_{1,f}, \dots, \gamma_{4,b}) \text{.}$$

  • 3. Zernike standard

    Due to the orthogonality of Zernike polynomials, this Zernike-based extension of the SAG equation is well suited for the design of freeform optics. It can be expressed in polar coordinates as

    $$z(\rho, \varphi) = \frac{cr^2}{1+\sqrt{1-(1+k)c^2r^2}} + \sum_{i=1}^8 \alpha_ir^{2i} + \sum_{i=1}^n \gamma_i z_i(\rho, \varphi) \text{,}$$
    where $z_i$ is the $i^{th}$ Zernike polynomial with corresponding coefficient $\gamma _i$. The remaining part of Eq. (16) corresponds to a truncated version of the SAG equation of order 8. In this paper, only the first three Zernike terms are used to parameterize each surface. Therefore the parameter vector simplifies to
    $$\mathbf{p}_{zer} = (l, \gamma_{1,f}, \gamma_{2,f}, \dots, \gamma_{3,b}) \text{.}$$
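As referenced above, the following sketch evaluates the three reduced surface descriptions for a given set of parameters. The Noll ordering of the first three Zernike polynomials and the handling of the even-asphere terms are assumptions made for illustration; the optical design software used in this work may apply a different normalization.

```python
import numpy as np

def z_poly(x, y, zeta2, zeta4, gamma2, gamma4):
    """Reduced polynomial surface of Eqs. (9)/(10)."""
    return zeta2 * x**2 + zeta4 * x**4 + gamma2 * y**2 + gamma4 * y**4

def z_ext(x, y, g):
    """Reduced extended-polynomial surface of Eqs. (13)/(14), g = (g0, ..., g4)."""
    return g[0] * x + g[1] * y + g[2] * x**2 + g[3] * x * y + g[4] * y**2

def z_zernike(rho, phi, c, k, alpha, gamma):
    """Zernike-standard sag of Eq. (16): conic base, even asphere terms, and the
    first three Zernike polynomials (Noll ordering assumed: piston, tilts)."""
    r2 = rho**2
    base = c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r2))
    asphere = sum(a * rho**(2 * (i + 1)) for i, a in enumerate(alpha))
    z1 = np.ones_like(rho)            # piston
    z2 = 2.0 * rho * np.cos(phi)      # tilt in x
    z3 = 2.0 * rho * np.sin(phi)      # tilt in y
    return base + asphere + gamma[0] * z1 + gamma[1] * z2 + gamma[2] * z3
```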

3. Application: hand-eye calibration

Within the scope of this study, a unified notation for coordinate frames and rigid body transformations is used. Each coordinate frame is enclosed in curly brackets $\{\bullet \}$. Furthermore, the homogeneous transformation from coordinate frame $\{F_1\}$ to $\{F_2\}$ is denoted as $^{F_1}\boldsymbol {\mathcal {T}}_{F_2} \in \mathbb {R}^{4\times 4}$.

The problem definition of hand-eye calibration originates from the field of robotics and plays an important role for the visual perception of robotic systems [11]. Generally speaking, hand-eye calibration represents the identification of unknown rigid body transformations for vision based navigation tasks of robotic systems. In the past, two different approaches to the mathematical formulation of the hand-eye problem have been established. The first and most basic representation of the hand-eye (HE) calibration is formulated as

$$\begin{aligned} \boldsymbol{\mathcal{AX}} &= \boldsymbol{\mathcal{XB}}\text{.}\\ ^{S_i}\boldsymbol{\mathcal{T}}_{S_j}\;^{S}\boldsymbol{\mathcal{T}}_{E} &= ^{S}\boldsymbol{\mathcal{T}}_{E}\; ^{E_i}\boldsymbol{\mathcal{T}}_{E_j} \end{aligned}$$
Here, $\boldsymbol {\mathcal {A}}$ and $\boldsymbol {\mathcal {B}}$ are sets of relative poses of the sensor frame $\{S\}$ and the end effector frame $\{E\}$, respectively. The solution to the HE problem is $\boldsymbol {\mathcal {X}}$, which describes the previously unknown transformation between the end effector and the sensor mounted on the robot. However, as described in [25], the solution of Eq. (18) may suffer from numerical instabilities when small relative poses are used, which directly deteriorates the calibration result. More importantly, the transformation $^W\boldsymbol {\mathcal {T}}_T$ is not determined. Since $^W\boldsymbol {\mathcal {T}}_T$ is needed in this work to calibrate the wavefront sensor, Eq. (18) is only mentioned for the sake of completeness; our approach can be applied to it as well. Within the scope of this paper, the extended representation of the hand-eye problem is used, which is described in the literature as the robot-world-hand-eye (RWHE) formulation [11]. The general RWHE problem can be expressed as
$$\begin{aligned} \boldsymbol{\mathcal{AX}} &= \boldsymbol{\mathcal{ZB}} \\ ^{W}\boldsymbol{\mathcal{T}}_{E_i}\; ^{E}\boldsymbol{\mathcal{T}}_{S} &= ^{W}\boldsymbol{\mathcal{T}}_{T}\; ^{T}\boldsymbol{\mathcal{T}}_{S_i}\text{.} \end{aligned}$$

Unlike in Eq. (18), $\boldsymbol {\mathcal {A}}$ and $\boldsymbol {\mathcal {B}}$ are no longer relative poses, but sets of absolute end effector and sensor poses, respectively. Figure 3 shows that $\boldsymbol {\mathcal {A}}$ corresponds to the forward kinematics $^{W}\boldsymbol {\mathcal {T}}_{E_i}$ of the serial chain for an arbitrary pose $i$ and $\boldsymbol {\mathcal {B}}$ is a set of poses of the sensor w.r.t. the calibration target, $^{T}\boldsymbol {\mathcal {T}}_{S_i}$. In addition to $\boldsymbol {\mathcal {X}}$, the quantity $\boldsymbol {\mathcal {Z}}$ is the second part of the solution to the hand-eye Eq. (19). It describes the target w.r.t. the global inertial frame $\{W\}$. The geometric representation of this alternative form of hand-eye calibration is shown in Fig. 3. This alternative approach is more intuitive because it uses absolute rather than relative poses and is thus also less susceptible to numerical instabilities. Furthermore, it includes the identification of the transformation $\boldsymbol {\mathcal {Z}}$, which is required in the context of this paper for referencing the wavefront sensor with respect to the inertial frame. For Eq. (18) as well as for Eq. (19), there are numerous algorithms and approaches that deal with the solution of the described problem; the interested reader is referred to the literature for an exhaustive listing (e.g. [11]). Within this work, the approach presented in [26] was used. It parametrizes a stochastic model and presents a metric on SE(3) for nonlinear optimization. Since stochastic analyses of the uncertainties had to be performed anyway for the application of filtering methods within the context of this paper, these models can be directly applied to this approach.
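To illustrate the structure of Eq. (19), the following hedged sketch sets up the RWHE problem as a plain nonlinear least-squares fit over the two unknown transformations, parameterized by rotation vectors and translations. It is deliberately simpler than the stochastic-model approach of [26] that is actually used in this work; the residual choice, function names, and the zero initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def to_T(v):
    """6-vector (rotation vector, translation) -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(v[:3]).as_matrix()
    T[:3, 3] = v[3:]
    return T

def residuals(params, A_list, B_list):
    """Stacked entries of A_i X - Z B_i over all pose pairs (cf. Eq. (19))."""
    X, Z = to_T(params[:6]), to_T(params[6:])
    return np.concatenate([(A @ X - Z @ B)[:3, :].ravel()
                           for A, B in zip(A_list, B_list)])

def solve_rwhe(A_list, B_list):
    """A_i: forward-kinematic poses W->E_i, B_i: sensor poses T->S_i.
    Returns X = E->S and Z = W->T as 4x4 homogeneous transforms."""
    sol = least_squares(residuals, np.zeros(12), args=(A_list, B_list))
    return to_T(sol.x[:6]), to_T(sol.x[6:])
```

Starting from the identity (zero rotation vectors and translations) can of course end up in a local minimum; in practice a rough initial guess, e.g. from the CAD model of the setup, would be used.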

Fig. 3. Schematic representation of the hand-eye problem of the type $\boldsymbol {\mathcal {AX}}=\boldsymbol {\mathcal {ZB}}$

4. Simulation results

In the following section, different simulation results are presented. First, the results of the freeform lens designs are shown. Subsequently, the impact on the lens pose estimation is pointed out. In the last part of this section, the application example, hand-eye calibration, is highlighted.

4.1 Freeform designs

This section presents the results of the freeform lens design, for which different surface models were proposed in Sec. 2.3 and a suitable cost function was developed in Sec. 2.2. For optimization, the lenses of the different models were first placed in the optical path in such a way that the Shack-Hartmann wavefront sensor received sufficient signal strength. The initial parameters of the individual surface models were randomly selected. The central difference quotient was then used to obtain the partial derivatives that form the sensitivity matrix presented in Sec. 2.1. Taking into account the defined quality criteria, the cost function from Eq. (7) was then minimized using the simplex search method [23]. The obtained cost function values for each surface model can be seen in Table 1. First, the trivial observation can be made that the spherical lens with a focal length of 50 mm experiences rank loss and therefore exhibits a minimum singular value of 0. This originates from the fact that the rotatory degree of freedom along the optical axis is not observable, and the lens is therefore not suited for state estimation (see Eq. (6)). Looking at the two polynomial models, it can be seen that the simple polynomial model has a higher condition number $\kappa (\mathbf {S(\mathbf {p})})$ than the extended polynomial model. The opposite is true for the smallest singular value $\sigma _{min}$, where the simple polynomial model has a larger and thus, from the viewpoint of the cost function, better minimum singular value. From a cost function point of view, the Zernike approach performs best, with the smallest condition number and the largest minimum singular value compared to the other models. For reproducibility, the parameter sets

$$\mathbf{p}_{poly} = \begin{pmatrix} 6.07\\ -3.42\times10^{{-}2}\\ -4.24\times10^{{-}6}\\ 1.96\times10^{{-}2}\\ -7.64\times10^{{-}7}\\ -4.16\times10^{{-}2}\\ -9.15\times10^{{-}7}\\ 1.24\times10^{{-}2}\\ -7.45\times10^{{-}7}\end{pmatrix}, \mathbf{p}_{ext} = \begin{pmatrix} 5.06\\ 2.94\times10^{{-}3}\\ 6.43\times10^{{-}2}\\ 7.73\times10^{{-}4}\\ 3.54\times10^{{-}3}\\ -7.13\times10^{{-}3}\\ -9.27\times10^{{-}3}\\ 9.81\times10^{{-}2}\\ -1.20\times10^{{-}2}\\ 1.19\times10^{{-}2}\\ -1.20\times10^{{-}2} \end{pmatrix}, \mathbf{p}_{zer} = \begin{pmatrix} 6.94\\ 1.95\times10^{{-}2}\\ 4.95\times10^{{-}2}\\ -1.24\times10^{{-}5}\\ -4.99\times10^{{-}2}\\ 1.57\times10^{{-}2}\\ 1.59\times10^{{-}5}\end{pmatrix},$$
for each individual surface model are provided (see Eqs. (11), (15), and (17)).

Table 1. Optimization specific cost function values of the lens design of different surface models

Figure 4 shows a rendered 3D representation of the corresponding optical lenses. It can be observed that the simple polynomial approach (SubFig. 4(a)) has a rather distorted geometry relative to the other two. Among other things, this could result in signal loss at the wavefront sensor in the case of small movements of the lens due to strong beam deflection. The other two freeform models (SubFigs. 4(b) and 4(c)) feature a shape similar to that of conventional optical lenses. Finally, it can be clearly seen from the grid view in SubFigs. 4(d), 4(e), and 4(f) that all freeform models have broken the rotational symmetry that would otherwise lead to the loss of rank in the sensitivity matrix. As mentioned earlier, this is a necessary condition for full observability of all spatial degrees of freedom of the optical lens in the state estimation.

Fig. 4. 3D Illustration of the optimized freeform lens models

4.2 Comparison of pose estimation accuracy

After the optimization of the freeform surfaces has been performed, the subsequent step is to verify whether the defined criteria of the cost function have a positive impact on the wavefront-based pose estimation. For this purpose, the pose estimation is performed several times for each lens in order to statistically quantify the estimation accuracy. From the potential algorithms presented in Sec. 1, the UKF was chosen, since it has been shown in the past to provide a good compromise between accuracy and computation time. For initial positioning, poses were selected from the CAD geometry of the scene and were randomly perturbed based on previously estimated serial kinematic chain and grasping uncertainties. For each lens, 100 estimates were obtained in simulation using the UKF. The standard spherical lens was omitted, since the identification of all degrees of freedom requires breaking the rotational symmetry around the optical axis. Figure 5 shows the statistical evaluation of the pose estimation accuracy for all lenses used. When examining the individual spatial degrees of freedom in detail, it can be seen that the error of the translation along the optical axis $z$ and also the error of the rotation around the optical axis $\Theta _z$ could be identified robustly and with high accuracy for all chosen freeform models. Furthermore, it can be observed that the error dispersion of the polynomial model is much larger than that of the other two freeform models. The Zernike model exhibits by far the lowest uncertainties together with the highest robustness. This observation is supported by Table 2, where the median and the standard deviation are given as the vector norm of the rotatory and translatory components, respectively.
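For readers unfamiliar with the unscented filter, the following generic sketch shows a single unscented measurement update as it might be applied here, with `h` denoting the simulated mapping from lens pose to Zernike coefficients. The sigma-point parameters and the noise covariance are placeholders; the filter actually used in this work additionally incorporates the previously identified kinematic and grasping uncertainties.

```python
import numpy as np

def ukf_update(x, P, z_meas, h, R, alpha=1e-3, beta=2.0, kappa=0.0):
    """One unscented measurement update for pose mean x (6,), covariance P (6x6),
    measurement z_meas (Zernike coefficients), nonlinear model h, and noise R."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    # sigma points around the current mean
    U = np.linalg.cholesky((n + lam) * P)
    sigmas = np.vstack([x, x + U.T, x - U.T])            # (2n+1, n)
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # propagate the sigma points through the measurement model
    Z = np.array([h(s) for s in sigmas])                 # (2n+1, n_z)
    z_pred = Wm @ Z
    dz = Z - z_pred
    dx = sigmas - x
    S = dz.T @ (Wc[:, None] * dz) + R                    # innovation covariance
    Pxz = dx.T @ (Wc[:, None] * dz)                      # cross covariance
    K = Pxz @ np.linalg.inv(S)                           # Kalman gain
    return x + K @ (z_meas - z_pred), P - K @ S @ K.T
```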

Fig. 5. Error obtained from 100 pose estimation results of individual freeform lenses using an UKF

Table 2. Euclidean norm of standard deviation and median of translation and rotation error

4.3 Hand-eye calibration

A realistic use case of the resulting more accurate pose identification is the hand-eye calibration of wavefront sensors. As already discussed in Sec. 3, all six spatial degrees of freedom are needed to solve the hand-eye problem. Furthermore, the quality of the calibration result depends significantly on the accuracy of the provided poses. Given the improved pose estimation accuracy using freeform optics (see Sec. 4.2), the hand-eye problem introduced in Sec. 3 can be solved. Since it is difficult to quantify the results of hand-eye calibration on a real system, we first demonstrate the applicability of this approach in simulation. For this purpose, the pose estimation of the Zernike polynomial model is utilized, as this approach provided the most robust and accurate estimation results (see Sec. 4.2).

For this purpose, an existing real-world setup (Fig. 6) was translated into a simulation environment. The setup consists of an Agilus KR10R1100 serial robot supplied by KUKA AG (Augsburg, Germany) with a custom-built gripper attached to its end effector for manipulating optical components. In the workspace of the robot, a CPS532-C2 laser supplied by Thorlabs Inc. (Newton, New Jersey, USA) with a wavelength of 532 nm and a Shack-Hartmann wavefront sensor WFS40-7AR, also from Thorlabs, are rigidly aligned with each other, so that a lens can be placed between them for state estimation. Figure 6 shows the relationship of the transformations of the individual coordinate systems in the setup. Here, the robot’s end effector frame $\{E\}$ results from the transformation through the serial kinematics of the robot rooted in the world frame $\{W\}$. The object frame $\{O\}$, associated with the optical component, is found via state estimation w.r.t. the sensor frame $\{S\}$ of the wavefront sensor.

Fig. 6. Kinematic description of a robot-assisted alignment scenario.

As discussed in Sec. 3, the algorithm introduced in [26] was used to solve the hand-eye problem. To acquire the most accurate poses possible for the hand-eye calibration, 12 filter iterations of the UKF were first performed to ensure its convergence; subsequently, an additional 12 poses were approached with the robot and estimated, and these were used for the hand-eye calibration. The ground-truth homogeneous transformation matrix $\mathbf{Z}$, which represents the pose of the wavefront sensor in the coordinate system of the robot base, is given in Eq. (21). This matrix was adopted from the real setup and rounded to whole numbers for better interpretability. The translational part in the last column of the matrix is given in millimeters.

$$\mathbf{Z} = \left( \begin{array}{cccc} 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & -500\\ 1 & 0 & 0 & 1000 \\ 0 & 0 & 0 & 1 \end{array} \right)$$

The identified matrix from the solution of the hand-eye calibration can be seen in Eq. (22).

$$\mathbf{Z}_{sim}^* = \left( \begin{array}{cccc} 8.1\times10^{{-}3} & 1.8\times10^{{-}5} & -1.0 & -1.4021\times10^{{-}1} \\ -5.2\times10^{{-}6} & 1.0 & 1.9\times10^{{-}5} & -4.993\times10^{2}\\ 1.0 & 5.2\times10^{{-}6} & 8.1\times10^{{-}3} & 9.990 \times10^{2}\\ 0.0 & 0.0 & 0.0 & 1.0 \end{array} \right)$$

Despite taking into account previously identified uncertainties and measurement noise, the identified homogeneous transformation matrix closely resembles the ground truth from the simulation.
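As a small, hedged illustration of how such a result can be quantified, the helper below compares two homogeneous transforms in terms of rotation angle and translation distance; applied to the printed matrices of Eqs. (21) and (22), the translational deviation amounts to roughly 1.2 mm.

```python
import numpy as np

def transform_error(T_true, T_est):
    """Rotation error (deg) and translation error (same unit as the transforms)."""
    R_err = T_true[:3, :3].T @ T_est[:3, :3]
    cos_angle = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), np.linalg.norm(T_true[:3, 3] - T_est[:3, 3])

# Ground truth of Eq. (21); T_est would be the identified matrix of Eq. (22).
Z_true = np.array([[0., 0., -1., 0.],
                   [0., 1., 0., -500.],
                   [1., 0., 0., 1000.],
                   [0., 0., 0., 1.]])
```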

5. Experimental results

In the previous section, it was shown in a simulation environment that it is possible to reference the coordinate system of a wavefront sensor with respect to the base of an industrial robot using optimized freeform lenses. This section now demonstrates the applicability in the real world. For this purpose, the freeform lens of the Zernike polynomial model was manufactured on a Form 2 stereolithographic 3D printer from Formlabs Inc. (Somerville, MA, USA) using the Clear Resin V4 from Formlabs. The optical properties of the printing material, such as the refractive index, were already investigated in [27] and adopted within the scope of this work. After printing with a layer thickness of 25 $\mu$m, the surface of the lens is matte and opaque. For this reason, further post-processing steps were carried out manually: the lens was first treated with fine sandpaper and then polished. In Fig. 7(a) and 7(b), the resulting lens is shown placed on printed text in two different rotational orientations to illustrate the broken rotational symmetry and to give a rough impression of the optical properties. Despite great care being taken to remove as little material as possible from the component, it should be noted that these post-processing steps alter the geometry and thus also the optical properties.

Fig. 7. Picture of the additively manufactured Zernike polynomial freeform lens for qualitative representation of the broken rotational symmetry

It should be noted at this point that the quality of the hand-eye calibration on the real system is difficult to quantify with respect to the optimized lens. This is partly because the uncertainties of the kinematic chain can only be statistically recorded with the aid of a suitable reference system such as a laser tracker. A much greater influencing factor, however, is the geometric deviation of the lens and the resulting change in optical properties. With the real setup (see Fig. 8) and the printed freeform lens, all spatial degrees of freedom of the lens could be identified using an UKF. For this purpose, the freeform lens was first positioned so that the laser light was directed onto the sensor surface of the wavefront sensor. Subsequently, 12 randomly selected, valid poses were approached and the corresponding filter iterations of the UKF were performed. In each iteration step, the UKF compared the residual between the simulated Zernike coefficients and the measured ones acquired by the Shack-Hartmann wavefront sensor and tried to minimize it in the following iteration step. The results of these 12 poses were not included in the hand-eye calibration data set, but were intended to ensure convergence of the UKF. Subsequently, another 12 poses were approached, whose identified pose information was directly used to solve the hand-eye equation (see Sec. 3.). However, large deviations of the Zernike coefficients between the wavefront sensor and the simulation environment could be observed during the estimation process. The visual impression of the pose identified from the hand-eye calibration is nevertheless realistic from a qualitative point of view: after manual measurement with a tape measure, the translational deviation of the identified pose from the CAD model lies within $\approx 4.5\,\mathrm{cm}$. However, due to these deviations, the accuracy of the calibration cannot be quantified, because when the inaccurate calibration result is used for the initial positioning of the lens, the laser light is deflected such that no light reaches the sensor surface and no signal can be obtained.

Fig. 8. Kinematic description of a robot-assisted alignment scenario.

6. Conclusion and discussion

Within the scope of this work, a new approach for the optimization of freeform lenses with respect to accuracy and robustness in the pose estimation process was developed. In order to identify all six spatial degrees of freedom of the lens, one of the goals was to break the rotational symmetry of conventional optical lenses so that the rotation about the optical axis can be identified. For this, Sec. 2.3 introduced three different freeform surface models (polynomial, extended polynomial, and Zernike polynomial) to fulfill the given requirements. In order to optimize these models, different quality criteria were defined in Sec. 2.2 based on the sensitivity matrix of each freeform lens. These criteria were then combined into a cost function for surface optimization. The resulting freeform lenses and a conventional spherical lens with a focal length of 50 mm were then compared with respect to the previously defined quality criteria. For the spherical lens, the trivial observation could be made that its sensitivity matrix experiences rank loss due to the rotational symmetry about the optical axis, and thus not all of its spatial degrees of freedom are identifiable. Regarding the freeform lenses, the Zernike polynomial model fulfilled the set criteria best, followed by the extended polynomial and finally the polynomial model. In order to assess whether the chosen quality criteria actually improve the pose estimation of the individual lenses, a statistical evaluation was carried out in a simulation environment with an Unscented Kalman filter and different starting positions. A correlation between the defined quality criteria and both the robustness and the accuracy of the pose estimation was found, so that the descending order of estimation quality of the individual models is again: Zernike polynomial, extended polynomial, and polynomial. Finally, as an application example for the optimized freeform lenses, a hand-eye calibration of a wavefront sensor w.r.t. the base of an industrial robot was performed. This was initially carried out in the simulation environment, which led to very accurate identification results. Subsequently, this calibration was performed as a feasibility study on the real system. For this purpose, a freeform lens was additively manufactured using a stereolithographic 3D printer and then used for pose estimation followed by hand-eye calibration. Due to the post-processing (sanding and polishing), which altered the geometry of the lens, the identified coordinate system was no longer in the optical path, so that the result could not be assessed quantitatively. Therefore, the hand-eye calibration on the real system can only be understood as a feasibility study. However, it could be qualitatively verified that the coordinate system of the sensor identified by the hand-eye calibration is located at a reasonable location in the robot’s workspace. It could thus be shown that this method can be used in practice with optics manufactured to a higher quality, using existing state-of-the-art techniques [28–30] that meet the desired accuracy requirements. Furthermore, the feasibility of this approach is supported by the work of other researchers, in which wavefront-based pose estimation was shown to be successful despite mathematically ill-conditioned optical components [7,10,31].
Subsequent steps in the wavefront-based assembly process could then rely on the calibrated reference coordinate system to achieve accurate initial positioning of the optical components in their target position.

7. Future work

For further work, it is crucial to manufacture the freeform lenses more accurately, since the lack of optical quality of the additively manufactured lenses is apparently the main cause of the poor identification results on the real system. Furthermore, other surface models could be used for the optimization of the lenses, and combinations of individual models would also be conceivable. To evaluate the suitability of this approach in the assembly process of optical systems in general, the next step would be to investigate whether the hand-eye calibration of the wavefront sensor improves the initial placement of individual optical components in the assembly process and thus the convergence speed of the pose estimation. This would reduce the time required for pose estimation and thus significantly shorten the overall assembly process.

Acknowledgments

The authors would like to thank Zimo Yang for his valuable contribution to this work.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the author upon reasonable request.

References

1. E. D. Kim, Y.-W. Choi, M.-S. Kang, and S. C. Choi, “Reverse-optimization alignment algorithm using zernike sensitivity,” J. Opt. Soc. Korea 9(2), 68–73 (2005). [CrossRef]  

2. S. Kim, H.-S. Yang, Y.-W. Lee, and S.-W. Kim, “Merit function regression method for efficient alignment control of two-mirror optical systems,” Opt. Express 15(8), 5059–5068 (2007). [CrossRef]  

3. H.-S. Yang, S.-H. Kim, Y.-W. Lee, J.-B. Song, H.-G. Rhee, H.-Y. Lee, J.-H. Lee, I.-W. Lee, and S.-W. Kim, “Computer aided alignment using zernike coefficients,” in Interferometry XIII: Applications, vol. 6293 (International Society for Optics and Photonics, 2006), p. 62930I.

4. X. He, J. Luo, J. Wang, X. Zhang, and Y. Liu, “Improvement of a computer-aided alignment algorithm for the nonsymmetric off-axis reflective telescope,” Appl. Opt. 60(8), 2127–2140 (2021). [CrossRef]  

5. M. Wen, C. Han, and H. Ma, “Active compensation for optimal rms wavefront error in perturbed off-axis optical telescopes using nodal aberration theory,” Appl. Opt. 60(6), 1790–1800 (2021). [CrossRef]  

6. J. Fang and D. Savransky, “Automated alignment of a reconfigurable optical system using focal-plane sensing and kalman filtering,” Appl. Opt. 55(22), 5967–5976 (2016). [CrossRef]  

7. J. Fang, “Online model-based estimation for automated optical system alignment and phase retrieval algorithm,” Ph.D. thesis (2018).

8. C. Schindlbeck, C. Pape, and E. Reithmeier, “Predictor-corrector framework for the sequential assembly of optical systems based on wavefront sensing,” Opt. Express 26(8), 10669–10681 (2018). [CrossRef]  

9. C. Schindlbeck, C. Pape, and E. Reithmeier, “Wavefront predictions for the automated assembly of optical systems,” (2018), pp. 10815–108157.

10. C. Schindlbeck, A Predictor-Corrector Framework for the Robot-Assisted and Automated Assembly of Optical Systems (TEWISS-Technik und Wissen GmbH, 2019).

11. I. Ali, O. Suominen, A. Gotchev, and E. R. Morales, “Methods for simultaneous robot-world-hand–eye calibration: A comparative study,” Sensors 19(12), 2837 (2019). [CrossRef]  

12. D. C. Redding, N. Sigrist, J. Z. Lou, Y. Zhang, P. D. Atcheson, D. S. Acton, and W. L. Hayden, “Optical state estimation using wavefront data,” in Current Developments in Lens Design and Optical Engineering V, vol. 5523 (International Society for Optics and Photonics, 2004), pp. 212–224.

13. A. N. Das, D. O. Popa, J. Sin, and H. E. Stephanou, “Precision alignment and assembly of a fourier transform microspectrometer,” J. Micro-Nano Mech. 5(1-2), 15–28 (2009). [CrossRef]  

14. J. Sin, W. H. Lee, and H. E. Stephanou, “Sensitivity analysis of an assembled fourier transform microspectrometer,” in Next-Generation Spectroscopic Technologies III, vol. 7680 (International Society for Optics and Photonics, 2010), p. 76800T.

15. J. Fang and D. Savransky, “Model-based estimation and control for off-axis parabolic mirror alignment,” in Photonic Instrumentation Engineering V, vol. 10539 (SPIE, 2018), p. 105390X.

16. C. Schindlbeck, C. Pape, and E. Reithmeier, “Sensitivity analysis of a two-lens system for positioning feedback,” in Proc. Appl. Math. Mech. (2018).

17. Z. Gao, L. Chen, S. Zhou, and R. Zhu, “Computer-aided alignment for a reference transmission sphere of an interferometer,” Opt. Eng. 43(1), 69–74 (2004). [CrossRef]  

18. R. Krappig and R. Schmitt, “Capabilities and performance of the wavefront-based alignment in multi element optical systems,” in Third European Seminar on Precision Optics Manufacturing, vol. 10009 (International Society for Optics and Photonics, 2016), p. 100090C.

19. G. W. Forbes, “Shape specification for axially symmetric optical surfaces,” Opt. Express 15(8), 5218–5226 (2007). [CrossRef]  

20. S. Skogestad and I. Postlethwaite, Multivariable feedback control: analysis and design, vol. 2 (Wiley New York, 2007).

21. K. S. Tsakalis and P. A. Ioannou, Linear time-varying systems: control and adaptation (Prentice-Hall, Inc., 1993).

22. L. M. Rios and N. V. Sahinidis, “Derivative-free optimization: a review of algorithms and comparison of software implementations,” J. Glob. Optim. 56(3), 1247–1293 (2013). [CrossRef]  

23. J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, “Convergence properties of the nelder–mead simplex method in low dimensions,” SIAM J. Optim. 9(1), 112–147 (1998). [CrossRef]  

24. M. Tricard and D. Bajuk, “Practical examples of freeform optics,” in Freeform Optics, (Optical Society of America, 2013), pp. FT3B–2.

25. R. Y. Tsai and R. K. Lenz, “A new technique for fully autonomous and efficient 3 d robotics hand/eye calibration,” IEEE Trans. Robot. Automat. 5(3), 345–358 (1989). [CrossRef]  

26. K. H. Strobl and G. Hirzinger, “Optimal hand-eye calibration,” in 2006 IEEE/RSJ international conference on intelligent robots and systems, (IEEE, 2006), pp. 4647–4653.

27. M. Reynoso, I. Gauli, and P. Measor, “Refractive index and dispersion of transparent 3d printing photoresins,” Opt. Mater. Express 11(10), 3392–3397 (2021). [CrossRef]  

28. L. Dick, “High precision freeform polymer optics: Optical freeform surfaces–increased accuracy by 3d error compensation,” Optik & Photonik 7(2), 33–37 (2012). [CrossRef]

29. T. Blalock, K. Medicus, and J. D. Nelson, “Fabrication of freeform optics,” in Optical Manufacturing and Testing XI, vol. 9575 (International Society for Optics and Photonics, 2015), p. 95750H.

30. C. Schindler, T. Köhler, and E. Roth, “Freeform optics: current challenges for future serial production,” in Optifab 2017, vol. 10448 (International Society for Optics and Photonics, 2017), p. 1044802.

31. D. Li and D. Savransky, “Automated reflective optical system alignment with focal plane sensing and optimal state estimation,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2020), pp. CF1C–3.
