High-precision calibration to zoom lens of optical measurement machine based on FNN

Open Access

Abstract

We propose a calibration method for the high-precision zoom lenses of optical measurement machines based on a Fully Connected Neural Network (FNN), using a five-layer neural network in place of an explicit camera calibration model to achieve continuous calibration of the zoom lens at any zoom setting from calibrations at typical zooms. In experimental verification, the average calibration error of this method is 9.83×10−4 mm and the average measurement error at any zoom setting is 0.01317 mm. The overall calibration precision is better than that of Zhang's calibration method and meets the application requirements of a high-precision optical measurement machine. The proposed method provides a new solution and a new idea for the calibration of zoom lenses and can be widely used in precision parts inspection and machine-vision measurement.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

1.1. Background

With the continuous development of the manufacturing industry, high-precision parts have increasingly important applications in aerospace, micro/nano processing and other fields. Optical measurement machines are widely used for the inspection of precision parts on production sites due to their high accuracy, speed and stability. Camera calibration is one of the most important aspects of optical measurement machines for 2D image measurement: it establishes the mapping between the pixel coordinate system of the imaging plane and the world coordinate system of the real object point by solving for the intrinsic and extrinsic parameters of the camera [1]. The accuracy of the camera calibration directly affects the detection accuracy of the 2D image measurement module of the optical measurement machine; therefore, high-precision calibration methods play a vital role in ensuring the accuracy of the optical measurement machine.

To improve the measurement accuracy and range of optical measurement machines, zoom lenses are widely used as their optical imaging module. Compared to fixed-focus lenses, zoom lenses can adapt to different fields of view and depths of field by adjusting the zoom, aperture and focus, which provides greater flexibility and adaptability [2]. However, changes in camera settings lead to changes in the camera calibration parameters, making zoom cameras more difficult to calibrate. Zoom lens calibration technology is not yet mature, which limits the application of zoom lenses in high-precision measurement and 3D reconstruction [3]. Given the current state of calibration technology and the practical requirements of optical measurement machines, calibration is generally performed with methods designed for fixed-focus lenses, such as Zhang's calibration method [4], applied at multiple typical zoom settings. These methods are simple and accurate, but users can then operate the machine only at the typical zoom settings, not at an arbitrary zoom. Therefore, the study of high-precision, continuous calibration methods for zoom lenses is of great importance for the development of optical measurement machines, as well as for the application of zoom lenses in the field of high-precision measurement.

1.2. Related work

Zoom lenses have a flexibility that fixed-focus lenses do not, owing to their variable lens setting parameters. However, the camera calibration parameters change as the lens settings change, and the calibration parameters are usually a complex non-linear function of the lens settings. This makes it difficult to characterise the relationship between the model parameters and the lens settings with an exact function and prevents high-precision calibration over a wide range of camera settings. At the same time, the calibration of a zoom lens involves far more data dimensions than that of a fixed-focus lens, which greatly increases the difficulty and time cost of calibration [3].

Since the last century, many scholars have worked on the calibration of zoom lenses to solve the above problems. Existing methods for calibrating zoom cameras can be divided into four categories: lookup tables, curve fitting, neural-network-based methods and self-calibration. Tarabanis et al. [5] calibrated a zoom lens at different camera settings and stored the results in a table for subsequent use, a method that is laborious and makes the calibration process tedious and complex, lacking continuity and flexibility. For this reason, Chen et al. [6] calibrated the lens at some zoom settings and interpolated these calibration parameters to obtain the camera parameters at other zooms, which still required a large number of repeated calibrations to ensure calibration accuracy. The lookup-table method is very laborious and its calibration results are discontinuous, making it difficult to obtain intrinsic and extrinsic parameters for every camera setting. Willson [7] calibrated the camera's intrinsic and extrinsic matrices based on the fixed-focus projection model of Tsai's two-step calibration method and then fitted the intrinsic parameters with quadratic polynomials, which allowed the camera to be calibrated under continuous changes in focal length. The curve-fitting method is more complex because the form and number of polynomials must be adjusted for different lenses, and different polynomials must be fitted for different intrinsic parameters, increasing the complexity of the calibration and introducing new errors. Ahmed et al. [3,8] proposed a neural network framework for the calibration of zoom cameras: after completing calibration at different camera settings, the framework was used to fit and optimise the model parameters for the remaining camera settings as a whole. This method fitted the existing pinhole imaging model with a neural network of fixed layers and unit nodes and required certain weights to be zeroed or set to one, reducing the fitting capability of the network. Nabil et al. [9] proposed a self-calibration method for zoom cameras that solves for the intrinsic parameters by taking two images of a real 3D scene and using the homography constraint between corresponding points in the two images. Li et al. [10] proposed an efficient camera self-calibration method using a micro-transceiver in conjunction with deep learning. Self-calibration is limited by the hardware setup of the camera itself and cannot be completed when the camera does not permit multi-degree-of-freedom movements, as in an optical measurement machine.

Although several methods exist to calibrate zoom lenses, all have their limitations, and the application of neural networks to camera calibration offers a solution. Owing to their powerful non-linear mapping capability, neural networks can describe the complex functional relationship between lens settings and camera calibration parameters, which suits the zoom lens calibration problem well. The most widely used neural networks today are the Convolutional Neural Network (CNN) and the Fully Connected Neural Network (FNN). CNNs have shown excellent performance in image processing, classification and recognition: convolutional and pooling layers added before the fully connected layers reduce data dimensionality and extract features, making CNNs well suited to complex, high-dimensional data such as images. FNNs have a simpler structure than CNNs. In the camera calibration problem, the aim is to solve for the map between pixel coordinates and world coordinates; the input and output data are the corresponding coordinates, which are low-dimensional and simple in structure. Thus, for camera calibration, an FNN can reduce the complexity of the method while preserving precision. Based on this, this paper proposes an FNN-based method for zoom lens calibration, which makes full use of the powerful learning ability of neural networks to fit the mapping between the pixel coordinate system and the world coordinate system through a multi-layer FNN, without considering a specific camera imaging model, and absorbs all the complex non-linear mapping relationships of the zoom process into the neural network. Finally, calibration and measurement experiments are conducted: the average calibration error is 9.83×10−4 mm, and the average measurement error at any zoom setting is 0.01317 mm. The results show that this method offers high calibration accuracy, adaptability and simple operation. As an implicit calibration method, it reduces calibration complexity while ensuring calibration accuracy compared with the methods above, and it can be widely used in precision parts inspection.

2. Materials and methods

2.1. Overview

This paper begins with a brief description of the pinhole model and distortion correction model commonly used in camera calibration, followed by the proposed FNN-based model for zoom camera calibration. Unlike the approach in Ref. [3], the neural network model proposed in this paper does not consider specific intrinsic and extrinsic calibration parameters; it takes the camera setting parameters and image pixel coordinates as inputs and fits the non-linear relationship between the camera settings and the imaging model. The method is data-driven and optimises the neural network to achieve continuous calibration of the zoom lens at any zoom setting by calibrating at typical zooms. Finally, the experimental equipment and the dataset setup used in this study are described.

2.2. Camera model

There are four coordinate systems in the camera calibration process: the two-dimensional image pixel coordinate system $u - v$, the image physical coordinate system $x - y$, the three-dimensional camera coordinate system ${X_c} - {Y_c} - {Z_c}$ and the world coordinate system ${X_w} - {Y_w} - {Z_w}$ [11]. The correspondence among the four coordinate systems is shown in Fig. 1.

Fig. 1. The conversion relationships among the coordinate systems in camera calibration.

The purpose of camera calibration is to establish a mapping between the 2D pixel coordinates $({u,v} )$ of the image and the 3D world coordinates $({{X_W},{Y_W},{Z_W}} )$. The pinhole model is commonly used to characterise this mapping when non-linearities such as distortion are not taken into account. The model is given in Eq. (1) in homogeneous coordinates [12].

$$s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \left[ {\begin{array}{cccc} {{f / {{d_x}}}}&0&{{u_0}}&0\\ 0&{{f / {{d_y}}}}&{{v_0}}&0\\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{cc} \textbf{R}&\textbf{T}\\ {{\textbf{0}^\textbf{T}}}&1 \end{array}} \right]\left[ {\begin{array}{c} {{X_W}}\\ {{Y_W}}\\ {{Z_W}}\\ 1 \end{array}} \right].$$
where f is the focal length of the imaging system, ${d_x}$ and ${d_y}$ are the physical dimensions of each pixel in the image physical coordinate system, and ${u_0}$ and ${v_0}$ are the coordinates of the principal point of the camera; these parameters form the intrinsic matrix of the imaging model. R is a 3×3 rotation matrix and T is a 3×1 translation vector, which together form the extrinsic matrix between the 3D camera coordinate system and the world coordinate system. Finally, s is the scale factor associated with each point's ${Z_W}$.
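To make Eq. (1) concrete, the following minimal sketch projects a world point to pixel coordinates. All numeric values (focal length, pixel pitch, principal point, pose) are illustrative assumptions, not parameters of the machine calibrated in this paper.

```python
import numpy as np

# Illustrative pinhole projection following Eq. (1); values are assumed.
f, dx, dy = 25.0, 3.45e-3, 3.45e-3        # focal length and pixel pitch [mm]
u0, v0 = 640.0, 512.0                     # principal point [pixels]
K = np.array([[f / dx, 0.0,    u0, 0.0],
              [0.0,    f / dy, v0, 0.0],
              [0.0,    0.0,    1.0, 0.0]])  # 3x4 intrinsic matrix

R = np.eye(3)                             # rotation (identity for simplicity)
T = np.array([[0.0], [0.0], [100.0]])     # translation [mm]
RT = np.block([[R, T], [np.zeros((1, 3)), np.ones((1, 1))]])  # 4x4 extrinsic

Pw = np.array([1.0, 2.0, 0.0, 1.0])       # homogeneous world point [mm]
s_uv = K @ RT @ Pw                        # s * [u, v, 1]^T
u, v = s_uv[:2] / s_uv[2]                 # divide out the scale factor s
print(u, v)
```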

Due to the inherent characteristics of the lens, distortion occurs during imaging, which requires introducing a distortion correction model into the camera calibration model [13]. Distortion can be divided into radial distortion, tangential distortion and thin-prism distortion. In general, only radial distortion is considered [14], so the imaging model is modified as shown in Eq. (2).

$$\left\{ {\begin{array}{l} {{x_1} = x[1 + {k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6} + o({k_n}{r^n})]}\\ {{y_1} = y[1 + {k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6} + o({k_n}{r^n})]} \end{array}} \right..$$
where ${k_1}$, ${k_2}$ and ${k_3}$ denote the first-, second- and third-order radial distortion coefficients, o denotes the higher-order infinitesimal term, $r = \sqrt {{x^2} + {y^2}} $, $({x,y} )$ is the ideal image coordinate and $({{x_1},{y_1}} )$ is the actual image coordinate considering the radial distortion.
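A minimal sketch of Eq. (2) follows; the distortion coefficients are made-up values for illustration, not the coefficients of the lens studied here.

```python
import numpy as np

# Radial distortion of Eq. (2): ideal coordinates (x, y) -> actual (x1, y1).
def distort(x, y, k1=-1e-2, k2=5e-4, k3=0.0):
    r2 = x**2 + y**2                       # r^2 = x^2 + y^2
    factor = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * factor, y * factor

x1, y1 = distort(0.5, 0.3)                 # example ideal image point
```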

From the above equations, it can be seen that when the setting parameters of the zoom lens change, the intrinsic parameters change; likewise, when the focal length changes, the radial distortion coefficients change [15]. Different calibration parameters follow different implicit functions of the lens settings, so the calibration of zoom cameras is more complex and variable than that of fixed-focus cameras.

2.3. FNN model

The FNN is a multi-layer feed-forward neural network that minimises the cost function by the Back Propagation algorithm and optimises the model by iteratively adjusting the weights between the layers, which is essentially a machine learning algorithm that needs to be trained with sample data to achieve better results [16]. The FNN consists of an input layer, an output layer and hidden layers in which the layers are connected by corresponding weights.

As can be seen from Section 2.2 above, in the zoom lens imaging model there is a complex mapping between the intrinsic parameters and distortion coefficients on the one hand and the lens settings on the other, which makes it difficult to build an accurate imaging model as can be done for a fixed-focus lens. The FNN has a powerful non-linear mapping capability and can establish a mapping between the pixel coordinates and the world coordinates of the zoom lens while ignoring the specific imaging model. When applying an FNN to the calibration of an optical measurement machine's zoom lens, the actual application requirements need to be considered. Optical measurement machines generally use a parfocal lens, which only requires the zoom to be set during use, and their calibration serves two-dimensional planar measurements that only require the $({{X_W},{Y_W}} )$ coordinates of the object point. The FNN model proposed in this paper is shown in Fig. 2 [17]. The network consists of an input layer, hidden layers and an output layer, with the layers connected by weights. The input layer has 3 neuron nodes, the input parameters being the image pixel coordinates $({u,v} )$ and the zoom setting F; the output layer has 2 neuron nodes, the output parameters being the actual physical coordinates $({{X_W},{Y_W}} )$. The total number of network layers is L, and the weights between layers are ${\boldsymbol{\theta }^{(\boldsymbol{l} )}}$.

Fig. 2. The Fully Connected Neural Network (FNN) model.

As shown in Fig. 2, the output matrix of the l-th hidden layer is noted as ${\boldsymbol{A}^{(\boldsymbol{l} )}}$, where $a_i^{(l )}$ is the output value of the i-th neuron in the l-th layer, and ${\boldsymbol{\theta }^{(\boldsymbol{l} )}}$ denotes the parameter matrix of the mapping from the l-th layer to the (l+1)-th layer. Let the input matrix of the neural network be X and the output matrix be Y. The input, output, hidden-layer and parameter matrices can be expressed by the following equations.

$$\textbf{X} = \left[ {\begin{array}{@{}cccc@{}} {x_1^{(1)}}&{x_2^{(1)}}& \cdots &{x_{{n_x}}^{(1)}}\\ {x_1^{(2)}}&{x_2^{(2)}}& \cdots &{x_{{n_x}}^{(2)}}\\ \vdots & \vdots & \ddots & \vdots \\ {x_1^{(m)}}&{x_2^{(m)}}& \cdots &{x_{{n_x}}^{(m)}} \end{array}} \right],\textbf{Y} = \left[ {\begin{array}{@{}cccc@{}} {y_1^{(1)}}&{y_2^{(1)}}& \cdots &{y_{{n_y}}^{(1)}}\\ {y_1^{(2)}}&{y_2^{(2)}}& \cdots &{y_{{n_y}}^{(2)}}\\ \vdots & \vdots & \ddots & \vdots \\ {y_1^{(m)}}&{y_2^{(m)}}& \cdots &{y_{{n_y}}^{(m)}} \end{array}} \right],{\textbf{A}^{(l)}} = \left[ {\begin{array}{@{}cccc@{}} {a_{11}^{(l)}}&{a_{12}^{(l)}}& \cdots &{a_{1{s_l}}^{(l)}}\\ {a_{21}^{(l)}}&{a_{22}^{(l)}}& \cdots &{a_{2{s_l}}^{(l)}}\\ \vdots & \vdots & \ddots & \vdots \\ {a_{m1}^{(l)}}&{a_{m2}^{(l)}}& \cdots &{a_{m{s_l}}^{(l)}} \end{array}} \right].$$
$${\boldsymbol{\theta }^{(l)}} = \left[ {\begin{array}{cccc} {\theta_{11}^{(l)}}&{\theta_{12}^{(l)}}& \cdots &{\theta_{1{s_l}}^{(l)}}\\ {\theta_{21}^{(l)}}&{\theta_{22}^{(l)}}& \cdots &{\theta_{2{s_l}}^{(l)}}\\ \vdots & \vdots & \ddots & \vdots \\ {\theta_{{s_{l + 1}}1}^{(l)}}&{\theta_{{s_{l + 1}}2}^{(l)}}& \cdots &{\theta_{{s_{l + 1}}{s_l}}^{(l)}} \end{array}} \right].$$
where m denotes the number of data samples, ${n_x}$ the number of features of the input data, ${n_y}$ the number of features of the output data, $x_i^{(j )}$ and $y_i^{(j )}$ the i-th feature value of the j-th sample of the input and output data respectively, ${s_l}$ the number of neuron nodes in the l-th layer, $a_{ij}^{(l )}$ the output of the j-th neuron in the l-th layer for the i-th sample, and $\theta _{ij}^{(l )}$ the mapping weight from the j-th node in the l-th layer to the i-th node in the (l+1)-th layer.

In an FNN, the feed-forward propagation of the network represents the mapping relationship from the input layer to the output layer, which can be represented in the following equations.

$${\textbf{A}^{(1)}} = {g_1}(\textbf{X} \bullet {\boldsymbol{\theta }^{(1)T}}).$$
$${\textbf{A}^{(l + 1)}} = {g_{l + 1}}({\textbf{A}^{(l)}} \bullet {\boldsymbol{\theta }^{(l)T}}).$$
$$\textbf{Y} = {g_L}({\textbf{A}^{(L - 1)}} \bullet {\boldsymbol{\theta }^{(L - 1)T}}).$$
where ${g_l}$ denotes the activation function of the l-th layer neurons, which can be, for example, the sigmoid or ReLU function.
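As an illustration of Eqs. (5)-(7), the following sketch implements the feed-forward pass for the 3-7-13-12-2 architecture selected later in Section 3.1, with tansig (tanh) hidden activations and a linear (purelin) output. The weights are randomly initialised here, whereas the paper trains them; like Eqs. (5)-(7), no bias terms are shown.

```python
import numpy as np

# Feed-forward pass of Eqs. (5)-(7) for an assumed 3-7-13-12-2 network.
rng = np.random.default_rng(0)
sizes = [3, 7, 13, 12, 2]                         # s_1 .. s_L
thetas = [rng.standard_normal((sizes[l + 1], sizes[l])) * 0.1
          for l in range(len(sizes) - 1)]         # theta^(l): s_{l+1} x s_l

tansig = np.tanh                                  # hidden-layer activation
purelin = lambda z: z                             # linear output activation

def forward(X, thetas):
    """X: m x 3 matrix of (u, v, F) samples; returns m x 2 (X_W, Y_W)."""
    A = X
    for theta in thetas[:-1]:
        A = tansig(A @ theta.T)                   # A^(l+1) = g(A^(l) theta^(l)T)
    return purelin(A @ thetas[-1].T)              # Y = g_L(A^(L-1) theta^(L-1)T)

X = rng.random((5, 3))                            # five dummy (u, v, F) samples
Y_hat = forward(X, thetas)                        # 5 x 2 predicted coordinates
```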

When the feed-forward propagation is complete, the cost function of the overall neural network is calculated. This paper uses the FNN to fit a zoom camera imaging model, which is a typical regression problem, so the mean squared error (MSE) is used as the cost function of the FNN. After feed-forward propagation and calculation of the cost function, an optimisation algorithm iteratively updates the parameters to minimise the cost, training a high-precision zoom camera imaging model that meets practical needs.
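A minimal sketch of this cost follows. The plain MSE is the regression cost named in the text; the Bayesian Regularization optimiser chosen in Section 3.1.1 effectively trades this data term off against a sum-of-squares penalty on the weights, shown here with assumed coefficients alpha and beta.

```python
import numpy as np

# MSE data cost, plus the regularised form implied by Bayesian Regularization
# (alpha and beta are assumed coefficients for illustration).
def mse_cost(Y_hat, Y):
    return np.mean((Y_hat - Y) ** 2)

def regularised_cost(Y_hat, Y, thetas, alpha=0.01, beta=1.0):
    penalty = sum(np.sum(t ** 2) for t in thetas)
    return beta * mse_cost(Y_hat, Y) + alpha * penalty
```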

2.4. Laboratory equipment

The main focus of this research is the high-precision calibration of the zoom camera in the optical measurement machine, so using a proven optical measurement machine as the image acquisition device minimises the introduction of other related errors. The OPTIV ADVANCE Series 4.5.2 from HEXAGON was used for this study; the experimental equipment is shown in Fig. 3(a) and the relevant parameters of the instrument are given in Table 1. The lens is a NAVISTAR motorised parfocal zoom lens, which essentially maintains the same image plane position as the zoom changes, so that no focus adjustment is required during use; minor image plane shifts are compensated for by moving the lens along the Z-axis. The unit is also equipped with a high-precision displacement stage, which allows quantitative movement along the X, Y and Z axes via software and a handle. The light source can be adjusted via the software for measurement or image acquisition. The unit provides three types of light source: coaxial light, ring light and backlight, whose brightness can be adjusted according to the needs of the user. The schematic diagram of the image acquisition experiment is shown in Fig. 3(b): the information on the surface of the object under the different light sources is captured by the camera through the zoom lens, and the data are transferred to the computer for further processing. The coaxial light is directed onto the surface of the object through a beam-splitter prism, the ring light strikes the surface at an oblique angle with the reflected light received by the camera, and the backlight carries the transmitted information from the object to the camera. The computer also controls the zoom motor and camera to change the zoom and other camera settings for different experimental conditions.

Fig. 3. (a) Optical measurement machine; (b) Schematic diagram of the image acquisition experiment.

Table 1. Parameters of the Hexagon Optiv Advance 4.5.2 optical measurement machine

The GPU used in this study was a GTX960M and the CPU was an Intel Core i5-6300HQ. The subsequent image processing, neural network construction and training were implemented in the Matlab environment.

2.5. Calibration plate

The Edmund 12-197 checkerboard calibration plate was used in this study. The checkerboard grid is drawn by chromium plating on a special glass plate. The nominal size of each checkerboard square is 0.2 mm with a dimensional deviation of ±0.002 mm; the dimensions are shown in Fig. 4(a) and an actual photograph in Fig. 4(b). As shown in Fig. 4(a), the checkerboard part of the calibration plate consists of equally sized black and white squares arranged alternately. When obtaining the pixel coordinates of the reference points, the corner pixel coordinates are found by detecting the corners of the checkerboard grid with a corner detection algorithm, while the ${X_W}$ and ${Y_W}$ coordinates on the checkerboard calibration plate follow from the size of the squares.

Fig. 4. (a) Dimensional drawing of the calibration plate [18]; (b) Physical view of the calibration plate.

2.6. Dataset acquisition and classification

The FNN is data-driven, so the accuracy of the dataset acquisition and setup directly affects the accuracy of the FNN model. In this study, the acquisition of the dataset is divided into two steps: image acquisition and corner extraction. The images were acquired with the optical measurement machine of Section 2.4. The zoom of this device can be set from 1.211 to 7.263, so to ensure calibration accuracy over the full zoom range, 2× (where 2× denotes a zoom setting of 2, and similarly below), 3×, 4×, 5×, 6× and 7× were selected as the typical calibration planes, and 2.5× and 6.5× as the test planes used to verify the calibration accuracy; calibration plate images were acquired at all of these zoom settings. The images were first acquired at 2×, so the calibration plane at 2× served as the initial plane. As the field of view of the zoom lens decreases with increasing zoom, the number of corner points at each zoom needs to be approximately equal to ensure the accuracy of FNN training, so multiple images were acquired at high zooms to meet this requirement.

The world coordinate system needs to be defined at the time of dataset acquisition. In this study, the calibration plane at 2× was used as the initial calibration plane: the lens was moved to a suitable position to find the sharpest imaging surface, and the mechanical coordinates of the displacement stage and lens $({{X_{{M_0}}},{Y_{{M_0}}},{Z_{{M_0}}}} )$ were recorded as the zero point [19]. The calibration plate was placed on the displacement stage, ensuring that the edge of the checkerboard grid was parallel to the direction of movement of the displacement stage and that the first square in the lower-left corner of the field of view was the first square in the lower-left corner of the checkerboard grid in Fig. 4. The origin of the world coordinate system was set to the lower-left corner of the first square in the lower-left corner of the checkerboard grid of the initial calibration plane, with the ${X_W}$ and ${Y_W}$ axes running horizontally to the right and vertically upwards along the checkerboard grid, and the ${Z_W}$ axis perpendicular to the plane, pointing upwards. As the lens zoom increases, the field of view decreases while the world coordinate system stays fixed, and relative movement of the calibration plate was achieved by moving the lens and displacement stage to acquire multiple images. After adjusting the zoom, the mechanical coordinates were first returned to zero so that the position of the calibration plate relative to the lens matched the initial calibration plane before the first image was acquired, making it easier to obtain the corner world coordinates of all subsequent images at that zoom setting. Figure 5(a) and 5(b) show the initial calibration plane at 2× and the first image acquired after adjusting the zoom to 7×, respectively. After adjusting the zoom to 7×, the mechanical coordinates were returned to zero, the sharpest imaging surface was found by moving the lens up and down, and the field of view was then panned to find the position of the first corner point in the lower-left corner of the field of view relative to the origin of the world coordinate system, from which the world coordinates of all corner points in the field of view were obtained. When further images were acquired at 7×, the world coordinate system remained fixed, and a large number of object points were created by moving the lens and the displacement stage to achieve relative movement of the calibration plate.

Fig. 5. (a) Initial calibration plane at 2×; (b) First calibration image acquired at 7×.

This study started with the acquisition of images at 2× and continued until 7×, acquiring images at 8 zoom settings in total. As the zoom increases, the field of view and the number of corner points in each image gradually decrease. In order to ensure that the number of corner points is similar at different zoom settings, multiple images were acquired at high zoom settings by moving the displacement stage and lens. The coordinates of the corner points of each image are mapped to the fixed world coordinate system by the distance of movement of the displacement stage and the coordinates of the first corner point of the lower-left corner in the field of view, thus enabling the construction and acquisition of a large amount of corner point data. The image acquisition settings at each zoom setting are shown in Table 2.

Table 2. Image acquisition settings at different zoom settings

Due to its reflectivity and material, the calibration plate used in this study does not show a black-and-white checkerboard pattern under coaxial light but rather a blue-and-white one. To improve the accuracy of corner detection, the original images were therefore pre-processed to significantly enhance their contrast, making them more conducive to subsequent corner detection. The specific pre-processing is as follows: the captured colour images are first converted into greyscale images, after which a linear greyscale transformation with suitably chosen coefficients is applied to widen the grey-level differences and improve the contrast. Taking the first image acquired at 7× as an example, the images before and after pre-processing are shown in Fig. 6(a) and Fig. 6(b), respectively.
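A minimal sketch of this pre-processing, assuming OpenCV is available; the gain and offset of the linear transformation are illustrative, as the paper only states that suitable coefficients were selected, and the file name is hypothetical.

```python
import cv2
import numpy as np

# Greyscale conversion followed by a linear grey-level stretch.
img = cv2.imread("calib_7x_01.png")               # hypothetical file name
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
a, b = 1.8, -60.0                                 # assumed gain and offset
stretched = np.clip(a * grey + b, 0, 255).astype(np.uint8)
cv2.imwrite("calib_7x_01_pre.png", stretched)
```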

Fig. 6. (a) The first image at 7× before pre-processing; (b) The first image at 7× after pre-processing.

When sub-pixel Harris corner detection is applied to the pre-processed image in Fig. 6(b), the detected corners often fall inside the white squares near the edges of the black squares rather than at the exact corner locations, due to greyscale mutations and other causes. This study therefore improves the existing sub-pixel Harris corner detection algorithm by lowering the detection threshold so that two initial corner points at diagonal positions are detected for each corner, and by averaging each pair of initial points to obtain the final corner position. A comparison of the detection results before and after the improvement is shown in Fig. 7(a) and Fig. 7(b). In Fig. 7(a), the red crosses are the corners detected by the traditional Harris algorithm, which suffers from inaccurate positions and undetected or repeatedly detected corners; in Fig. 7(b), the red crosses are the initial diagonal corner points and the green dots are the final sub-pixel corners obtained. The two figures show that the improved Harris corner detection yields more accurate and reliable results.
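The pairing-and-averaging step of this improvement could look like the following sketch. The initial detections are assumed to come from a standard sub-pixel Harris detector (e.g. cv2.cornerHarris followed by cv2.cornerSubPix) run with a lowered threshold; the pairing radius is an assumed value.

```python
import numpy as np

# Average each pair of diagonal initial detections into one final corner.
def pair_and_average(points, radius=5.0):
    points = np.asarray(points, dtype=float)      # n x 2 initial detections
    used = np.zeros(len(points), dtype=bool)
    corners = []
    for i in range(len(points)):
        if used[i]:
            continue
        d = np.linalg.norm(points - points[i], axis=1)
        d[i] = np.inf
        d[used] = np.inf
        j = int(np.argmin(d))
        if d[j] < radius:                         # diagonal partner found
            corners.append((points[i] + points[j]) / 2.0)
            used[i] = used[j] = True
        else:                                     # unpaired point kept as-is
            corners.append(points[i])
            used[i] = True
    return np.array(corners)
```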

Fig. 7. (a) Traditional Harris corner detection algorithm; (b) Improved Harris corner detection algorithm.

The dataset contains corner samples at the above-mentioned zoom settings: 3126 corner samples from the 2×, 3×, 4×, 5×, 6× and 7× calibration planes and 643 corner samples from the 2.5× and 6.5× test planes, for 3769 samples in total. The dataset was divided into a training set, a cross-validation set and a test set. The training set was used to train the FNN, the cross-validation set for hyper-parameter selection to prevent over-fitting and under-fitting, and the test set to assess accuracy. The ratio of training, cross-validation and test sets is roughly 0.7:0.15:0.15. Since this study aims to achieve high-precision continuous calibration over the full zoom range by calibrating several typical zoom planes, the training samples are all randomly drawn from the calibration planes, with 2600 samples selected (3769×0.7 = 2638.3 ≈ 2600). To verify the calibration accuracy of the network on both the calibration and test planes, the test set consists of two parts: 322 corner samples randomly selected from the test planes (643/2 = 321.5 ≈ 322) and 263 corner samples randomly selected from the calibration planes ((3126−2600)/2 = 263). To ensure that the trained network neither overfits nor underfits, the cross-validation set should follow the same distribution as the test set, so it consists of the remaining calibration- and test-plane samples. In summary, the training set contains 2600 samples, the cross-validation set 584 samples (3769−585−2600 = 584) and the test set 585 samples (263 + 322 = 585).
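A sketch of this split, using index permutations in place of the actual corner arrays (the group sizes are those stated above; the random seed is arbitrary):

```python
import numpy as np

# Split 3126 calibration-plane and 643 test-plane corner samples into
# train / cross-validation / test sets with the counts stated in the text.
rng = np.random.default_rng(42)
calib_idx = rng.permutation(3126)                 # calibration-plane samples
test_plane_idx = rng.permutation(643)             # test-plane samples

train = calib_idx[:2600]                          # calibration planes only
calib_rest = calib_idx[2600:]                     # 526 leftover samples
test = np.concatenate([calib_rest[:263], test_plane_idx[:322]])    # 585
cval = np.concatenate([calib_rest[263:], test_plane_idx[322:]])    # 584
```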

3. Experimental results

To verify the feasibility of the proposed method, experiments were conducted in three parts: first, the FNN was trained using the dataset described in Section 2.6, and the optimal network structure was constructed by adjusting the hyper-parameters; second, the calibration accuracy of the method was verified by comparison with the classical Zhang's calibration method and the BP neural-network-based calibration method of Ref. [16]; finally, the precision of the method was verified through repeated measurements of a target at multiple zoom settings. A brief description of the experiments and the analysis of the results are presented in the following subsections.

3.1. FNN structure determination and training

3.1.1. Choice of hyper-parameters

In the training process of neural networks, when the main structure of the neural network is determined, the selection of hyper-parameters will directly affect the effect of network convergence. Common hyper-parameters include the number of hidden layers, the number of neuron nodes in the hidden layers, the learning rate, the number of iterations, the activation function of the hidden layers and the output layer, and the optimisation method to minimise the cost function.

Among the above hyper-parameters, the number of hidden layers and the number of neuron nodes per hidden layer have the most obvious influence on network convergence; they are adjusted in this paper by building a hyper-parameter grid and applying a coarse-then-fine adjustment strategy. In an FNN, too many layers leads to problems such as vanishing or exploding gradients, and too many nodes leads to problems such as deactivation of redundant nodes, so more layers and nodes are not necessarily better. In this paper, the number of hidden layers is chosen from 1 to 3 and the number of nodes from 1 to 30, reducing the computational effort and avoiding the above problems.

In the process of coarse adjustment, the number of neurons in each hidden layer is set to be the same to reduce the computational effort, and a two-dimensional coarse adjustment hyper-parameter grid with horizontal and vertical coordinates indicating the number of hidden layers and nodes respectively is established as shown in Fig. 8. By traversing each point in the grid and calculating the testing effect on the test and cross-validation sets after training the corresponding network, a range of network layers and nodes is selected to complete the coarse adjustment.
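In outline, the coarse adjustment amounts to a grid traversal like the following sketch, where train_fnn and reconstruction_error are stand-ins for the training and scoring routines (the paper implements these in Matlab) and cval_set for the cross-validation data:

```python
from itertools import product

# Coarse hyper-parameter grid: same node count in every hidden layer.
best = None
for n_layers, n_nodes in product(range(1, 4), range(1, 31)):
    hidden = [n_nodes] * n_layers                 # e.g. [12, 12] for 2 layers
    net = train_fnn(hidden)                       # assumed training routine
    err = reconstruction_error(net, cval_set)     # assumed scoring routine
    if best is None or err < best[0]:
        best = (err, hidden)
```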

Fig. 8. The hyper-parameter grid of coarse adjustment.

The results of the coarse adjustment are shown in Fig. 9. Figures 9(a) and 9(b) show the training effect on each dataset when the purelin function and the tansig function, respectively, are used as the output-layer activation function. The training effect is characterised by the reconstruction error, i.e. the mean absolute difference between the world coordinates output by the network for each sample and the theoretical values. In each part of Fig. 9, the panels from left to right correspond to 1 to 3 hidden layers. As can be seen, the training effect with the purelin output layer is generally better than with tansig, and it is also better with 3 hidden layers; when the number of nodes per hidden layer is less than 4, the training effect is generally poor. Based on these results, the number of hidden layers was set to 3, the number of nodes restricted to the range 4 to 30, and the purelin function adopted for the output-layer activation.

Fig. 9. (a) The training effect of the output layer using the purelin activation function; (b) The training effect of the output layer using the tansig activation function.

After completing the coarse adjustment, the number of neuron nodes in each hidden layer needs to be finely adjusted to select the best network structure. A hyper-parameter grid similar to that of Fig. 8 is created: since the coarse adjustment fixed the number of hidden layers at 3 and the number of nodes per layer between 4 and 30, a 3D fine-adjustment grid is built whose three axes are the numbers of nodes in the first, second and third hidden layers. The number of nodes in each layer can take 27 values, so traversing the grid as in the coarse adjustment would require training 19,683 networks, which is computationally excessive and unnecessary. Instead, 100 points in the grid were randomly selected, with the node count of each hidden layer determined by the point's 3D coordinates. The parameters of six networks, including the best one, are listed in Table 3, with the reconstruction error again used to characterise model performance. Through this fine adjustment of the number of neurons per hidden layer, a five-layer network structure of 3-7-13-12-2 was finally selected.
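The random fine search can be sketched as follows, reusing the assumed train_fnn, reconstruction_error and cval_set stand-ins from the coarse step:

```python
import numpy as np

# Randomly sample 100 of the 27^3 = 19,683 node combinations (4..30 per layer).
rng = np.random.default_rng(7)
candidates = rng.integers(4, 31, size=(100, 3))   # upper bound is exclusive
best = None
for n1, n2, n3 in candidates:
    net = train_fnn([int(n1), int(n2), int(n3)])
    err = reconstruction_error(net, cval_set)
    if best is None or err < best[0]:
        best = (err, (int(n1), int(n2), int(n3)))
# The paper's search settles on hidden sizes (7, 13, 12), i.e. 3-7-13-12-2.
```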

Table 3. Training effect with different numbers of nodes in each hidden layer

Through extensive experiments, the tansig function was finally chosen as the hidden-layer activation function and the purelin function as the output-layer activation function. The optimisation method is Bayesian Regularization, with a learning rate of 0.000001 and 200 iterations.

3.1.2. FNN training

The network structure and hyper-parameters of the FNN were set according to the experimental results of Section 3.1.1, and the training of the neural network was completed. The trained network was tested on the 2.5× and 6.5× planes. The test results and other performance parameters of the network are shown in Table 4, where the error terms are the aforementioned reconstruction errors. The FNN model was also used to reconstruct the world coordinates predicted from the sample data in the 2.5× plane to visually illustrate the training accuracy; the difference between the predicted and ideal coordinates is shown in Fig. 10.

Fig. 10. Data reconstruction in the 2.5× plane.

Table 4. FNN model training results

From the above results, the trained FNN has a small mean squared error and reconstruction error on the training set, so the network fits the training set well; at the same time, the reconstruction error on the test set is small, its predictions are accurate for the data in the 2.5× and 6.5× test planes, and the difference between predicted and ideal values is small when reconstructing the samples in the 2.5× test plane. These results show that the FNN model has strong generalisation and prediction capability and can be used for high-precision zoom lens calibration.

3.2. Comparison of calibration methods

To verify the calibration precision of the method proposed in this paper, it was compared with the classical Zhang's calibration method and the BP neural-network-based calibration method of Ref. [16]. The optical measurement machine of Section 2.4 was used for image acquisition, and calibration was performed on the 2×, 2.5×, 3×, 4×, 5×, 6×, 6.5× and 7× planes, again using the aforementioned reconstruction error as the error criterion. The results of Zhang's calibration method, the method of Ref. [16] and the method of this paper for these planes are listed in Table 5.

Table 5. Comparison results of calibration methods.

For the method of this paper, the reconstruction errors at 2.5× and 6.5× in Table 5 are FNN predictions rather than true calibration errors. As the results in Table 5 show, the calibration precision of this method is slightly lower than that of Zhang's calibration method at low zoom settings, but slightly higher at high zoom settings. Compared with the method of Ref. [16], the calibration precision of our method is slightly higher; moreover, the BP neural-network-based method of Ref. [16] can only calibrate fixed-focus cameras, so calibrating the above eight zoom settings requires training eight separate neural networks, which complicates the calibration process. As the mean error shows, the overall calibration precision of this method is better than that of both methods above, reaching 0.98 µm.

3.3. Measurement accuracy verification

The ultimate goal of camera calibration is to achieve the mapping from the camera coordinate system to the world coordinate system, from which the actual object can be measured, so it is necessary to verify the measurement precision of the FNN model in the actual measurement process. This part of the experiment was divided into three parts. First, a line segment was randomly selected in a test plane included in the dataset, and its length was measured to verify the measurement precision of the model in the test plane; second, another object to be measured was selected and different lengths on its surface were measured at a randomly selected zoom setting to verify the repeatability of the method; finally, a line segment was randomly selected on the surface of the object and its length was measured at different zoom settings to verify the generalisability and precision of the method across zooms.
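In all three parts, a length measurement follows directly from the trained model: the two endpoint pixels of a segment, together with the zoom setting F, are mapped to world coordinates and the Euclidean distance is taken. A sketch, reusing the forward function and thetas from the Section 2.3 sketch:

```python
import numpy as np

# Segment length from two endpoint pixels at zoom setting F.
def measure_length(p1_uv, p2_uv, F, thetas):
    X = np.array([[*p1_uv, F], [*p2_uv, F]], dtype=float)
    W = forward(X, thetas)                        # 2 x 2 array of (X_W, Y_W)
    return float(np.linalg.norm(W[0] - W[1]))     # length in mm
```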

3.3.1. Test plane measurement

In this part of the experiment, a line segment was randomly selected in the test plane at 6.5× and measured using this method. To verify the measurement precision, the length of the same line segment was also measured with a ZEISS Axio Vert.A1 high-precision microscope; the results and the nominal dimensions of this line segment are shown in Fig. 11.

Fig. 11. Test plane measurement results and nominal dimensions.

Figure 11 shows, from left to right, the microscope measurement, the nominal size from the calibration plate drawing and the measurement obtained with the method of this paper; the nominal size of the measured line segment is 1.4 mm. The lengths of individual squares and line segments measured by the microscope were within the tolerance of the dimensional deviation indicated in the checkerboard drawing, so the microscope measurements are accurate and are used as the true values for comparison in subsequent experiments. As can be seen from Fig. 11, the value measured with the method of this paper deviates from the theoretical value by only around 0.28 µm, the same order of magnitude as the calibration error, although the difference between the measured value and the true value is magnified by the manufacturing error of the calibration plate to around 5 µm.

3.3.2. Repeatable measurement

To verify the measurement precision of this method in practice, an actual object to be measured, different from the calibration object, was selected for the measurement experiment. The repeatability of the method was first checked by measuring different line lengths on the surface of the object at a randomly selected zoom setting. The actual object to be measured and the microscope measurements are shown in Fig. 12. Six different line segments on the surface of the object were numbered, and the length of each was measured under the microscope and taken as the true value. The lengths of the six line segments were then measured using the method of this paper at 3.5×; the results are shown in Table 6.

Fig. 12. Results from microscopic measurements of different lengths on the surface of the object to be measured.

Table 6. Measurement of the length of the line segments on the surface of the object to be measured at 3.5×

The results in the table show that the average error in measuring the lengths of different line segments on the object surface at 3.5× is 6.7 µm. These results verify the repeatability of the method in the actual measurement process and demonstrate its high measurement accuracy. Analysis of the results shows that the error decreases with decreasing length of the measured feature, in line with basic error principles, except for line 6, which may be affected by random errors.

3.3.3. Generalization measurement

This method uses the FNN model to achieve continuous calibration of the zoom lens at any zoom setting by calibrating typical zooms, so it is necessary to verify its generalisation, i.e. its accuracy at multiple zoom settings. Therefore, repeated measurements of the length of a line segment on the surface of the object were carried out at multiple zoom settings, including not only settings used in training and hyper-parameter adjustment of the neural network but also 1.5×, 3.5×, 4.5× and 5.5×, which were not used. To ensure that the selected line segment was well defined and fully within the field of view at all zoom settings, line segment ② of Section 3.3.2 was selected for measurement; the results are listed in Table 7.

Table 7. Line ② measurement results at multiple zoom settings.

From the measurement results in the table above, the average absolute error of the measurements of line segment ② is 13 µm, which verifies the generalisation capability and high measurement accuracy of this method at multiple zoom settings in actual measurement. By measuring at 1.5×, 3.5×, 4.5× and 5.5×, zoom settings not used in the training of the neural network, the experiments also demonstrate that this method achieves high-precision continuous calibration of the zoom lens at all zoom settings and can be applied to high-precision optical measurement machines. Further analysis shows that, except at the low zoom of 1.5×, the measurement error for line segment ② at the other zoom settings is essentially stable at around ten to twenty microns; hence a higher zoom should be used for shorter line segments, and the accuracy of this method can be further improved if a suitable zoom range is chosen for the object to be measured.

4. Discussion

From the experimental results in Section 3 above, the calibration precision of the proposed FNN-based zoom lens calibration method reaches micron or even sub-micron magnitudes, while the measurement precision at different zoom settings only reaches micron to ten-micron magnitudes, so the causes of this difference need to be analysed. This section presents a qualitative analysis of the factors affecting the actual measurement precision of the method from the point of view of error analysis.

The causes of error in the actual measurement process are various, but this paper found that one of the most influential factors was the longitudinal deviation between the measurement plane and the calibration plane. The optical measurement machine uses monocular imaging, and Eq. (1) shows that when a single camera measures a two-dimensional target, the measurement plane and the calibration plane must either coincide or have a known longitudinal deviation; however, the camera used has a small depth of field, and the deviation is not known during actual measurement. In this study it was assumed that the measurement plane and the calibration plane coincide. Under this assumption, a longitudinal deviation between the two planes introduces a measurement error, as shown in Fig. 13.
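Under pinhole geometry this error can be estimated with similar triangles: a segment of true length L lying at working distance Z + Δz, but interpreted as lying in the calibration plane at distance Z, appears scaled by Z/(Z + Δz). A back-of-the-envelope sketch with illustrative numbers (not the machine's actual working distance):

```python
# Length error caused by a longitudinal deviation dz between measurement
# and calibration planes, assuming simple pinhole scaling (illustrative).
def longitudinal_error(L, Z, dz):
    L_measured = L * Z / (Z + dz)                 # apparent length at depth Z
    return L_measured - L

err = longitudinal_error(L=1.4, Z=100.0, dz=0.5)  # about -7 um of error
```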

Fig. 13. Measurement error due to longitudinal deviation.

As shown in Fig. 13, when there is a longitudinal deviation between the measurement plane and the calibration plane, measuring the same length produces a measurement error in the imaging plane. This longitudinal deviation has various causes, such as depth of field and the light source. Any plane within the depth of field is imaged sharply, so the measurement plane and the calibration plane may lie on different planes within the depth of field while both remain in focus, which introduces a longitudinal deviation and hence a measurement error. Although the depth of field of the optical measurement machine is small, it increases as the zoom setting decreases [20], so the effect is more pronounced at low zoom settings, as seen in the reconstruction results of Table 5, where the reconstruction error at 2.5× is significantly greater than at 6.5×.

Similarly, differences in light source settings can introduce longitudinal deviations during measurement, as shown in Fig. 14, which presents the original images of the checkerboard grid in the same plane at 6× under different light source settings. It is clear from Fig. 14 that different light settings in the same plane result in different levels of sharpness of the checkerboard grid, so the location of the sharpest image surface varies with the light setting. In this study, a combination of coaxial and ring light was used for acquiring the calibration plate images, with the light intensity adjusted at different zoom settings; for the measurement of the actual object, whose base structure is transparent, backlight illumination was used. The image taken of the object to be measured is shown in Fig. 15. Comparing Fig. 14 and Fig. 15 shows that the light source settings differ greatly between the images of the object and of the calibration plate, which leads to a large deviation between the calibration plane and the measurement plane even at the same zoom; this is why the measurement accuracy in the experiments of Section 3.3.3 is lower than the calibration accuracy.

Fig. 14. Checkerboard grid image with different light settings.

Fig. 15. Image of the object to be measured.

In addition to the measurement errors introduced by the longitudinal deviation between the measurement plane and the calibration plane, several other factors also affect the measurement precision. As shown in Fig. 11, the calibration plate itself has manufacturing errors, and deviations between its nominal dimensions and the true values affect the measurement precision. Other factors, such as variations in ambient temperature and humidity, the precision of the displacement stage and the accuracy of the corner detection algorithm, also play a role. As a combined result of these factors, the measurement experiments of Tables 6 and 7 show a measurement precision that is overall lower than the calibration precision.

5. Conclusions

Zoom lenses have a flexibility that fixed-focus lenses do not, owing to their variable lens setting parameters. However, the camera calibration parameters also change with the lens settings, which increases the complexity of zoom lens calibration. To achieve high-precision calibration of the zoom lens in optical measurement machines, this paper proposes an FNN-based calibration method, which constructs a five-layer FNN zoom camera calibration model and achieves continuous calibration of the zoom lens at any zoom setting by calibrating at six typical zooms. The method has the following features:

  • 1. The calibration model is simple. Compared with other zoom camera calibration methods, this method does not require knowledge of the specific zoom camera imaging model, replacing it with a simple five-layer FNN, and it does not need to fit specific calibration parameters during calibration, eliminating the need to choose different fitting polynomials for different parameters and different systems and reducing the complexity of the calibration.
  • 2. The calibration method is highly generalisable. The method proposed in this paper can be used not only for calibrating the zoom lens of the optical measurement machine, but also for calibrating any single- or even multi-camera zoom or fixed-focus system with known lens settings; calibration of other systems is achieved simply by modifying the model's input parameters accordingly, for example adding focus or aperture settings, or adjusting the input pixel coordinate parameters according to the number of cameras.
  • 3. The calibration process is simple. In this study, only 38 images, including the test samples, were acquired at 8 zoom settings to achieve continuous calibration at all zoom settings. With other calibration methods such as Zhang's method, far more images would need to be sampled to ensure calibration accuracy at the above 8 zoom settings; moreover, this method does not require rotation of the calibration plate, only translation using a displacement stage, which further reduces the complexity of the calibration operation.
Finally, the calibration and measurement experiments of Section 3 show that the average calibration error of the method is 9.83×10−4 mm and the average measurement error at any zoom is 0.01317 mm. These experiments validate the repeatability and generalisation of the method and prove that, with its high calibration and measurement precision, it can be used for high-precision optical measurement machines and mechanical parts measurement. The proposed method can also be widely applied in photogrammetry and 3D reconstruction, which is of great theoretical and practical significance.

Funding

Sichuan Province Science and Technology Support Program (2021JDRC0089, 2022YFG0223, 2022YFG0249); Bureau of Development and Planning, Chinese Academy of Sciences (YJKYYQ20200060, YJKYYQ20210041).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Chen, J. Xu, and H. Yang, “Camera calibration method based on double neural network,” Infrared and Laser Engineering 50, 294–302 (2021).

2. Z. Wang, J. Mills, W. Xiao, R. Huang, S. Zheng, and Z. Li, “A Flexible, Generic Photogrammetric Approach to Zoom Lens Calibration,” Remote Sens. 9(3), 244 (2017). [CrossRef]  

3. M. T. Ahmed and A. A. Farag, “A neural optimization framework for zoom lens camera calibration,” in Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No. PR00662), (IEEE, 2000), pp. 403–409.

4. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

5. K. Tarabanis, R. Y. Tsai, and D. S. Goodman, “Modeling of a computer-controlled zoom lens,” in Proceedings 1992 IEEE International Conference on Robotics and Automation, (IEEE Computer Society, 1992), pp. 1545–1551.

6. Y.-S. Chen, S.-W. Shih, Y.-P. Hung, and C.-S. Fuh, “Simple and efficient method of calibrating a motorized zoom lens,” Image Vis. Comput. 19(14), 1099–1110 (2001). [CrossRef]  

7. R. G. Willson, “Modeling and calibration of automated zoom lenses,” in Videometrics III, (SPIE, 1994), pp. 170–186.

8. M. T. Ahmed, E. E. Hemayed, and A. A. Farag, “Neurocalibration: a neural network that can tell camera calibration parameters,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, (IEEE, 1999), pp. 463–468.

9. N. El Akkad, M. Merras, A. Saaidi, and K. Satori, “Camera self-calibration with varying intrinsic parameters by an unknown three-dimensional scene,” Vis. Comput. 30(5), 519–530 (2014). [CrossRef]  

10. J. Li and Z. Liu, “Efficient camera self-calibration method for remote sensing photogrammetry,” Opt. Express 26(11), 14213–14231 (2018). [CrossRef]  

11. Y. Zhang, W. Liu, F. Wang, Y. Lu, W. Wang, F. Yang, and Z. Jia, “Improved separated-parameter calibration method for binocular vision measurements with a large field of view,” Opt. Express 28(3), 2956–2974 (2020). [CrossRef]  

12. Y. Cui, F. Zhou, Y. Wang, L. Liu, and H. Gao, “Precise calibration of binocular vision system used for vision measurement,” Opt. Express 22(8), 9134–9149 (2014). [CrossRef]  

13. H. Cai, Y. Song, Y. Shi, Z. Cao, Z. Guo, Z. Li, and A. He, “Flexible multicamera calibration method with a rotating calibration plate,” Opt. Express 28(21), 31397–31413 (2020). [CrossRef]  

14. C. Yin, X. Chu, S. Yang, L. Li, and G. Sui, “High-precision zoom camera calibration of stereo vision measurement system whit single camera,” Optical Technique 45, 668–676 (2019). [CrossRef]  

15. C. S. Fraser and S. Al-Ajlouni, “Zoom-dependent camera calibration in digital close-range photogrammetry,” Photogramm. Eng. Remote Sens. 72(9), 1017–1026 (2006). [CrossRef]  

16. T. Liang, C. Zhu, and H. Chen, “Research on Binocular Telecentric Lens Calibration Method based on Neural Network,” Journal of Jiangsu Normal University (Natural Science Edition) 38, 75–78 (2020).

17. J. Dou, C. Pan, and J. Liu, “Robustness of neural network calibration model for accurate spatial positioning,” Opt. Express 29(21), 32922–32938 (2021). [CrossRef]  

18. Edmund, “20×20mm-opal-checkerboard-target”, https://www.edmundoptics.cn/p/20-x-20mm-opal-checkerboard-target/40643/.

19. Q. Cheng, F. Pan, and Y. Yuan, “Hand-eye calibration method of gantry robot based on 3D vision sensor,” Opto-Electronic Engineering 48, 30–38 (2021). [CrossRef]  

20. S. Zhou, X. Chai, and F. Shao, “Stereoscopic zoom for visual optimization based on grid deformation,” Opto-Electronic Engineering 48, 15–29 (2021). [CrossRef]  
