
Monocular vision-based dynamic calibration method for determining the sensitivities of low-frequency tri-axial vibration sensors


Abstract

Low-frequency vibrations exist widely in the natural environment and in human activities. Low-frequency tri-axial vibration sensors are widely applied in seismic monitoring, building structural health monitoring, aerospace navigation, and other fields. The accuracy of their sensitivity calibration directly determines whether these applications can work reliably. Although the laser interferometry recommended by the International Organization for Standardization (ISO) is commonly used for vibration calibration, it suffers from a limited low-frequency range, high cost, low efficiency, and restricted applicable environments. In this study, a novel monocular vision-based dynamic calibration method is proposed, which determines all the sensitivities of a tri-axial sensor by using monocular vision to accurately measure the spatial input excitation. The method improves calibration performance by eliminating installation error and enhances calibration efficiency by reducing the number of reinstallations. Experimental results compared with laser interferometry demonstrate that the investigated method achieves similar calibration accuracy in the range of 0.16-2 Hz with higher efficiency. The corresponding maximum relative deviations of the X-, Y-, and Z-axial sensitivities were approximately 2.5%, 1.8%, and 0.4%, respectively. In addition, the maximum relative standard deviation of the investigated method was only about 0.3% in this range.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

To obtain displacement, velocity, and acceleration in the natural environment and human activities, low-frequency vibration sensors are used in various applications such as aerospace exploration, marine monitoring, unmanned driving, pose estimation, and precision control, with high efficiency and low cost [1-3]. According to the number of output axes, these sensors can be divided into single- and multi-axial sensors. Compared with single-axial sensors, multi-axial sensors are widely used in engineering applications because they can acquire more vibration information simultaneously. The sensitivity is the most important parameter of a vibration sensor; it is usually taken as a known value and directly determines the validity of the measurement data. Therefore, the sensors must be calibrated before use, and again after they have been used for a period, to ensure measurement accuracy.

At present, vibration calibration methods can be categorized into static and dynamic calibration. Static calibration determines the sensitivity of a sensor by placing it in different positions. The most representative method is the Earth gravity (EG) method recommended by ISO 16063-16, which uses gravitational acceleration as the input excitation: the sensor is placed at different angular positions with respect to the direction of the gravitational field, and its outputs are combined for sensitivity amplitude calibration [4]. The result obtained by this method is the static sensitivity of the sensor. Although static calibration can quickly and efficiently obtain the sensitivity, sensors mostly face dynamic scenarios in practical use; dynamic calibration is therefore more valuable. Dynamic calibration methods usually determine the sensitivity of the calibrated sensor at each frequency by using a shaker to excite the sensor with vibrations of different frequencies.

Many studies have focused on dynamic vibration calibration and achieved a certain accuracy over a wide frequency range, especially at medium and high frequencies. The calibration of multi-axial vibration sensors mostly uses the laser interferometry recommended by ISO 16063-11, which applies to dynamic sensitivity calibration in the frequency range from 1 Hz to 10 kHz and the acceleration range from 0.1 to 1000 m/s2 [5]. Conventional calibration methods usually utilize a linear shaker and calibrate the sensitivities of the sensor through multiple mountings. Dosch [6] used a rotary table combined with the Earth gravity method to calibrate a low-frequency sensor in the range of DC to 5 Hz. Dobosz et al. [7] carried out an in-depth study on the theory of laser interferometry for vibration measurements. Martens et al. [8] studied the traceability of laser interferometric vibration measurement with a long-stroke shaker to improve the resolution of the measurement system. Yu et al. [9] carried out an in-depth study of sinusoidal linear vibration measurement based on laser interferometry. Mende et al. [10] addressed the problems faced by laser interferometry in calibrating vibration sensors below 1 Hz. However, these methods cannot calibrate all axes of a sensor in one installation; the repeated installations reduce the calibration efficiency and increase the uncertainty of the sensitivities.

PTB in Germany [11] developed a three-component shaker with a frequency range of 1 Hz to 1 kHz and a maximum acceleration of 100 m/s2, which realized the calibration of tri-axial vibration sensors in one installation. AIST in Japan [12] utilized a tri-axial shaker with three laser interferometers to calibrate such sensors above 50 Hz. NIM in China [13] also developed a three-component shaker with a calibration frequency range of 5 Hz to 1.6 kHz, used with three laser interferometers to achieve sensitivity calibration. However, the conventional three-component shaker is limited by its stroke, so it cannot provide high-acceleration vibration excitation at low frequencies and cannot guarantee calibration with a high signal-to-noise ratio (SNR). The Stewart platform has the advantages of high accuracy and a large displacement space and is gradually gaining attention in the field of vibration testing. Liu et al. [14,15] used the Stewart platform for inclinometer testing and investigated a method of generating spatial trajectories to calibrate tri-axial vibration sensors.

In fact, the calibration accuracy of sensors mainly depends on the measurement accuracy of their input excitation acceleration. In low-frequency vibration calibration, it is common to increase the excitation displacement to obtain sufficient acceleration excitation, but laser interferometry struggles to meet the required measurement accuracy because of the low SNR and speckle noise at low frequencies [16]. In recent years, machine vision, with its wide dynamic range, high resolution and precision, flexibility, and efficiency, has been widely used in high-precision dynamic measurement. Yang et al. [17-20] applied monocular vision to low-frequency vibration dynamic metrology, obtained a calibration accuracy for linear vibration sensors in the low-frequency range similar to that of laser interferometry, and demonstrated the performance of the monocular vision method for low-frequency vibration calibration through uncertainty assessment. Zhang et al. [21] used monocular vision combined with a laser projection point method to decouple the vibration of a Stewart platform and achieved acceptable accuracy. However, that method does not account for installation errors, which can decrease its reliability.

To address the issues of low accuracy and efficiency of traditional calibration methods at low frequencies, this study proposes a dynamic calibration method based on monocular vision and laser projection. The method projects the spatial motion of the Stewart platform onto a two-dimensional panel using three laser beams, enabling efficient and accurate extraction of the motion feature centers through monocular vision, and thus accurately measures the input excitation that the Stewart platform provides to the sensor.

The remainder of this article is organized as follows: Section 2 describes the composition of the monocular vision-based calibration system and the error correction model. In Section 3, the motion information of the laser points is extracted by the monocular vision method to accurately reproduce the spatial motion, and the sensor calibration model is used to calibrate the sensor. Several comparison experiments and the analysis of their results are provided in Section 4. Section 5 concludes the article.

2. Monocular vision-based calibration system of low-frequency tri-axial vibration sensors

2.1 Monocular vision-based dynamic calibration system description

Figure 1 depicts the monocular vision-based tri-axial vibration sensor calibration system, which consists of a Stewart platform, a laser projection device, the calibrated tri-axial sensor, a projection panel, a CMOS camera, and a data acquisition (DAQ) card.

Fig. 1. Schematic diagram of monocular vision-based calibration system for low-frequency tri-axial vibration sensors.

The Stewart platform is composed of a base platform and a moving platform and can produce vibration along a spatial orbit; the calibrated tri-axial vibration sensor is firmly mounted at the center of the moving platform along its three motion axes. The spatial tri-axial vibration is generated by the moving platform, which provides the sinusoidal input excitation for the calibrated sensor. The most important problem is how to reproduce the motion of the Stewart platform efficiently and accurately. Figure 2 shows the relationship of each coordinate system.

Fig. 2. Coordinate system for monocular vision-based calibration system.

Firstly, the coordinate system {OM-xyz} is established at the center Omotion of the moving platform, with its three motion axes X, Y, and Z. The laser coordinate system {OL-xyz} is established with the center of the laser Olaser as its origin and the three rays A, B, and C projected by the laser as its axes. Once the laser is mounted, there are a fixed rotation matrix MRL and translation vector MTL that allow a point (Lx, Ly, Lz) in the laser coordinate system to be expressed in the moving platform coordinate system {OM}:

$$\left[ \begin{array}{l} {}^\textrm{M}x\\ {}^\textrm{M}y\\ {}^\textrm{M}z \end{array} \right] = {}^\textrm{M}{\boldsymbol{R}_\textrm{L}}\left[ \begin{array}{l} {}^\textrm{L}x\\ {}^\textrm{L}y\\ {}^\textrm{L}z \end{array} \right] + {}^\textrm{M}{\boldsymbol{T}_\textrm{L}},$$
where,
$${}^\textrm{M}{\boldsymbol{R}_\textrm{L}} = \left[ {\begin{array}{ccc} {{r_{11}}}&{{r_{12}}}&{{r_{13}}}\\ {{r_{21}}}&{{r_{22}}}&{{r_{23}}}\\ {{r_{31}}}&{{r_{32}}}&{{r_{33}}} \end{array}} \right],{}^\textrm{M}{\boldsymbol{T}_\textrm{L}} = \left[ \begin{array}{l} {h_1}\\ {h_2}\\ {h_3} \end{array} \right]$$
where h1, h2, and h3 are the translation distances between Omotion and Olaser.

The Stewart platform produces spatial linear vibration of varying frequencies and amplitudes along its X, Y, and Z axes through control of the moving platform. The motion information is projected onto a panel in front of the device by three mutually perpendicular laser beams, forming three laser spots. For convenience, a two-dimensional coordinate system is established on the panel plane with Opanel as the origin, with {OP-xyz} representing the panel coordinate system. Figure 3 illustrates how the rays OlaserL1, OlaserL2, and OlaserL3 intersect the panel plane at points L1, L2, and L3, respectively. The points O1, O2, and O3 are the midpoints of the segments L1L2, L1L3, and L2L3, respectively. All points in this plane are expressed in homogeneous coordinates.

Fig. 3. Schematic drawing of laser-based spatial excitation decoupling.

Three points O1(Px1, Pz1, 1), O2(Px2, Pz2, 1), and O3(Px3, Pz3, 1) are used as the centers of three spheres, with the lengths dL1L2, dL1L3, and dL2L3 of L1L2, L1L3, and L2L3 as the respective diameters. Because the three laser rays are mutually perpendicular, Olaser sees each chord LiLj at a right angle and therefore lies on the sphere whose diameter is LiLj (Thales' theorem). Taking the panel plane as the y = 0 plane of {OP}, the three spheres intersect in space at two points, as shown in formula (3), where one of the solutions (Pxl, Pyl, Pzl) is the origin Olaser of {OL}.

$$\left\{ \begin{array}{l} {({{}^\textrm{P}{x_l} - {}^\textrm{P}{x_1}} )^2} + {({{}^\textrm{P}{y_l}} )^2} + {({{}^\textrm{P}{z_l} - {}^\textrm{P}{z_1}} )^2} = {\left( {\frac{{{d_{{L_1}{L_2}}}}}{2}} \right)^2}\\ {({{}^\textrm{P}{x_l} - {}^\textrm{P}{x_2}} )^2} + {({{}^\textrm{P}{y_l}} )^2} + {({{}^\textrm{P}{z_l} - {}^\textrm{P}{z_2}} )^2} = {\left( {\frac{{{d_{{L_1}{L_3}}}}}{2}} \right)^2}\\ {({{}^\textrm{P}{x_l} - {}^\textrm{P}{x_3}} )^2} + {({{}^\textrm{P}{y_l}} )^2} + {({{}^\textrm{P}{z_l} - {}^\textrm{P}{z_3}} )^2} = {\left( {\frac{{{d_{{L_2}{L_3}}}}}{2}} \right)^2} \end{array} \right.$$
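For illustration, the following Python sketch (our own, not the authors' code) recovers Olaser by intersecting the three Thales spheres of formula (3). The function name is ours, and we assume the laser sits on the y > 0 side of the panel:

```python
import numpy as np

def laser_origin(centers, radii):
    """Intersect the three Thales spheres of formula (3). 'centers' is a
    (3, 3) array holding O1, O2, O3 in 3-D panel coordinates (x_i, 0, z_i);
    'radii' are the half chord lengths [d_L1L2, d_L1L3, d_L2L3] / 2."""
    c = np.asarray(centers, float)
    r = np.asarray(radii, float)
    # Subtracting sphere equations pairwise leaves two linear equations:
    # 2*(c_i - c_0) . p = (|c_i|^2 - r_i^2) - (|c_0|^2 - r_0^2)
    A = 2.0 * (c[1:] - c[0])
    b = (np.sum(c[1:] ** 2, axis=1) - r[1:] ** 2) \
        - (np.sum(c[0] ** 2) - r[0] ** 2)
    p0, *_ = np.linalg.lstsq(A, b, rcond=None)   # a point on the solution line
    n = np.linalg.svd(A)[2][-1]                  # direction of the line
    # Substitute p = p0 + t*n into the first sphere equation (quadratic in t).
    d = p0 - c[0]
    t = np.roots([n @ n, 2 * d @ n, d @ d - r[0] ** 2])
    candidates = [p0 + ti * n for ti in np.real(t)]
    return max(candidates, key=lambda p: p[1])   # keep the y > 0 solution
```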

The point Olaser gives the spatial coordinates of the laser center in the panel coordinate system {OP}. These coordinates can be transformed into the moving platform coordinate system {OM} through MRL and MTL in formulas (1) and (2). Spatial sinusoidal motion with different frequencies and amplitudes is generated by controlling the X, Y, and Z axes of the moving platform, and the motion can be expressed by

$$\left\{ \begin{array}{c} {}^\textrm{M}x(t) = {}^\textrm{M}\hat{x}\cos ({\omega_x}t + {\varphi_x})\\ {}^\textrm{M}y(t) = {}^\textrm{M}\hat{y}\cos ({\omega_y}t + {\varphi_y})\\ {}^\textrm{M}z(t) = {}^\textrm{M}\hat{z}\cos ({\omega_z}t + {\varphi_z}) \end{array} \right.$$
where $\hat{x}$, $\hat{y}$, and $\hat{z}$ are the amplitudes of x(t), y(t), and z(t), respectively; ωx, ωy, and ωz are the corresponding vibration angular frequencies; and φx, φy, and φz are the initial phases of x(t), y(t), and z(t), respectively.

2.2 Installation error correction in the calibration system

In this system model, two errors need to be taken into account. First, since it is not possible to guarantee that the panel axes are accurately parallel to those of the moving platform, the coordinate system {OM} deviates from {OP}, as shown in Fig. 4.

Fig. 4. Installation error of the low-frequency tri-axial vibration sensor calibration.

The actual panel coordinate system is expressed as {OP′-x′y′z′}, so the spatial coordinates of the moving platform calculated from the laser spots are biased. In other words, the vibration information of the laser center coordinates expressed in the panel coordinate system {OP} cannot accurately reflect the tri-axial vibration of the moving platform. Second, the mechanical components of the Stewart platform are affected by machining and assembly errors, so the moving platform cannot generate three completely orthogonal vibration axes; the real coordinate system of the moving platform is actually {OM′-x′y′z′}. Stewart platforms are calibrated after assembly to achieve high static accuracy, typically less than 10 µm. However, vibration calibration exploits the dynamic performance of the Stewart platform, and the dynamic accuracy of the moving platform is significantly reduced by the non-orthogonality of the vibration axes and mechanical structure errors. To reduce the resulting loss of precision, multiple independent sets of expected moving-platform position coordinates {Mxs, Mys, Mzs} and the corresponding coordinate sets {Pxs, Pys, Pzs} obtained by the monocular vision method are used, where s = 1, 2, …, n and n is the number of coordinate sets. The conversion relationship M′RP′ from {OP′} to {OM′} is calculated by least-squares fitting with the following formula:

$$\left( {\begin{array}{ccc} {{}^{\textrm{M}^{\prime}}{x_1}}& \cdots &{{}^{\textrm{M}^{\prime}}{x_s}}\\ {{}^{\textrm{M}^{\prime}}{y_1}}& \cdots &{{}^{\textrm{M}^{\prime}}{y_s}}\\ {{}^{\textrm{M}^{\prime}}{z_1}}& \cdots &{{}^{\textrm{M}^{\prime}}{z_s}} \end{array}} \right) = {}^{{}^{\textrm{M}^{\prime}}}{\boldsymbol{R}_{{}^{\textrm{P}^{\prime}}}}\left( {\begin{array}{ccc} {{}^{\textrm{P}^{\prime}}{x_1}}& \cdots &{{}^{\textrm{P}^{\prime}}{x_s}}\\ {{}^{\textrm{P}^{\prime}}{y_1}}& \cdots &{{}^{\textrm{P}^{\prime}}{y_s}}\\ {{}^{\textrm{P}^{\prime}}{z_1}}& \cdots &{{}^{\textrm{P}^{\prime}}{z_s}} \end{array}} \right)$$
where M′RP′ is a 3×3 coefficient matrix whose parameters are obtained by least-squares optimization with formula (6) as the objective function:
$$J = \min \sum\limits_{s = 1}^n {({{{({}^{\textrm{P}^{\prime}}{x_s} - {}^{\textrm{M}^{\prime}}{x_s})}^2} + {{({}^{\textrm{P}^{\prime}}{y_s} - {}^{\textrm{M}^{\prime}}{y_s})}^2} + {{({}^{\textrm{P}^{\prime}}{z_s} - {}^{\textrm{M}^{\prime}}{z_s})}^2}} )}$$
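As an illustration of the least-squares fit of formulas (5)-(6), a minimal numpy sketch follows; it assumes the corresponding point sets are stacked as n×3 arrays, and the function name is ours:

```python
import numpy as np

def fit_conversion_matrix(panel_pts, platform_pts):
    """Least-squares estimate of the 3x3 matrix mapping actual panel
    coordinates {O_P'} to moving-platform coordinates {O_M'}
    (formulas (5)-(6)). Both inputs are (n, 3) arrays of
    corresponding points with n >= 3."""
    # Solve panel_pts @ R.T ~= platform_pts; lstsq minimises the summed
    # squared residuals of formula (6) column by column.
    R_T, *_ = np.linalg.lstsq(panel_pts, platform_pts, rcond=None)
    return R_T.T
```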

After the transformation matrix M′RP′ is obtained, the center Olaser (P′xl, P′yl, P′zl) of the laser coordinate system in the actual panel coordinate system {OP′} can be converted to (M′x, M′y, M′z) in the actual moving platform coordinate system {OM′} by formula (7).

$$\left[ \begin{array}{l} {}^{\textrm{M}^{\prime}}x\\ {}^{\textrm{M}^{\prime}}y\\ {}^{\textrm{M}^{\prime}}z \end{array} \right] = {}^{\textrm{M}^{\prime}}{\boldsymbol{R}_{\textrm{P}^{\prime}}}\left[ \begin{array}{l} {}^{\textrm{P}^{\prime}}{x_l}\\ {}^{\textrm{P}^{\prime}}{y_l}\\ {}^{\textrm{P}^{\prime}}{z_l} \end{array} \right] + {}^\textrm{M}{\boldsymbol{T}_\textrm{L}}$$

The tri-axial vibration sensor to be calibrated is installed at the center of the moving platform so that the three axes of the sensor coincide with the coordinate system of the moving platform. The camera is fixed behind the panel to capture the motion of the three spots, which move on the projection plane following the motion generated by the Stewart platform. The camera collects the motion sequence images of the three spots via a trigger signal, and the data acquisition device collects the tri-axial output signal of the sensor.

3. Monocular vision-based calibration method for the sensitivities of low-frequency tri-axial vibration sensors

3.1 Determination of the relation between the image pixel and world coordinates

As displayed in the system diagram in Fig. 2, the conversion relationship from the panel coordinate system to the moving platform coordinate system is obtained. After the camera collects the spot motion sequence images, the panel coordinate system {OP} is used as the world coordinate system {OW} in camera calibration with Zhang's method [22], and the conversion relationship from the pixel coordinates in the camera frame {OC} to the world coordinate system is determined by

$$\left[ \begin{array}{c} {}^\textrm{C}x\\ {}^\textrm{C}y\\ 1 \end{array} \right] = \boldsymbol{H}\left[ \begin{array}{c} {}^\textrm{W}x\\ {}^\textrm{W}y\\ 1 \end{array} \right]$$
where (Cx, Cy, 1) is the homogeneous coordinate representation of the pixel point in {OC} and H is the homography matrix:
$$\boldsymbol{H} = \left[ {\begin{array}{ccc} {{h_{11}}}&{{h_{12}}}&{{h_{13}}}\\ {{h_{21}}}&{{h_{22}}}&{{h_{23}}}\\ {{h_{31}}}&{{h_{32}}}&{{h_{33}}} \end{array}} \right]$$
where h11, h12, h13, h21, h22, h23, h31, h32, and h33 jointly express the internal and external parameters of the camera.
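A short sketch of applying formula (8) in reverse, i.e., mapping a detected pixel back to panel (world) coordinates; it assumes H has already been estimated with Zhang's method (e.g., via cv2.findHomography on a planar target), and the function name is illustrative:

```python
import numpy as np

def pixel_to_world(H, px, py):
    """Invert the homography of formula (8): formula (8) maps world to
    pixel coordinates, so recovering panel coordinates solves H w = p."""
    w = np.linalg.solve(H, np.array([px, py, 1.0]))
    return w[0] / w[2], w[1] / w[2]   # de-homogenised (Wx, Wy)
```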

3.2 Spatial excitation measurement with the high-accuracy spot center extraction method

Consider the spot motion sequence images {Fj(x, y), j = 1, 2, …, G} acquired by the camera, where subscript j is the frame number and G is the number of acquired images. The image noise must be suppressed to ensure full extraction of the laser edge information. Firstly, a 5×5 mean filtering template g(u, v) is created, and the acquired spot motion images {Fj(x, y)} are filtered with g(u, v) to obtain the processed images {Mj(x, y)}, expressed by

$${M_j}(x,y) = {F_j}(x,y) \otimes g(u,v)$$

A 10×10 spot feature template T is matched against each frame Mj(x, y) to locate the region of interest; the correlation coefficient Rk(x, y) of Mj(x, y) is calculated as follows:

$${R_k}({x,y} )= \frac{{\sum\limits_{u = 1}^M {\sum\limits_{v = 1}^N {P({x + u,y + v} )Q({u,v} )} } }}{{\sqrt {\sum\limits_{u = 1}^M {\sum\limits_{v = 1}^N {{{[{P({x + u,y + v} )} ]}^2}} } \sum\limits_{u = 1}^M {\sum\limits_{v = 1}^N {{{[{Q({u,v} )} ]}^2}} } } }}$$
where
$$\left\{ {\begin{array}{c} {P({x + u,y + v} )= {M_j}({x + u,y + v} )- {{\bar{F}}_j}}\\ {Q({u,v} )= T({u,v} )- \bar{T}} \end{array}} \right.$$
where M and N are the numbers of rows and columns of template T, respectively; Rk(x, y) is the correlation coefficient of Mj(x, y) at point (x, y); and ${\mathrm{\bar{F}}_\textrm{j}}$ and $\bar{T}$ are the grayscale averages of Mj(x, y) and T, respectively.
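The filtering and matching steps of formulas (10)-(12) can be sketched with OpenCV, whose zero-mean normalized cross-correlation (TM_CCOEFF_NORMED) plays the role of Rk(x, y); note that OpenCV subtracts window means rather than the frame mean of formula (12), a minor deviation. Sizes follow the text (5×5 mean filter, 100×100 region), while the names and border handling are ours:

```python
import cv2

def locate_spot_roi(frame, template, half=50):
    """Mean-filter one frame (formula (10)), find the spot by zero-mean
    normalised cross-correlation (formulas (11)-(12)), and cut out the
    100x100 region of interest I around the correlation peak."""
    m = cv2.blur(frame, (5, 5))                       # 5x5 mean filtering
    r = cv2.matchTemplate(m, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(r)                   # peak of R_k(x, y)
    cx = loc[0] + template.shape[1] // 2              # template centre
    cy = loc[1] + template.shape[0] // 2
    x0, y0 = max(cx - half, 0), max(cy - half, 0)     # clamp at border
    return m[y0:y0 + 2 * half, x0:x0 + 2 * half]      # region I
```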

The maximum of the correlation coefficient Rk(x, y) in each frame Mj(x, y) is taken as the spot location of the jth frame, and the 100×100 region of interest (ROI) centered on this point is extracted and recorded as I. This improves the running speed of the program while eliminating background interference. Because the image area is small and the camera exposure time is short, the grayscale characteristics of the image are distinct. The threshold selection method [23] is used to determine the optimal grayscale threshold σ that accurately separates the spot information from the background information in region I.

$${S^2} = \frac{{{{({{m_G} \ast {p_1} - m} )}^2}}}{{{p_1}(1 - {p_1})}}$$
where p1 is the probability that the gray level of a pixel in region I is below the threshold σ, mG is the average gray level of region I, and m is the cumulative average gray level up to threshold σ:
$$m = \sum\limits_{i = 0}^\sigma {i{\mu _i}}$$
where μi is the probability that a pixel has gray level i. The optimal threshold is used to segment region I into two areas: I1, the spot information region above the grayscale threshold σ, and I2, the background below it. The adaptive threshold segmentation not only improves the extraction efficiency but also eliminates the influence of local noise within the region of interest. After the I1 region is obtained, the spot information in the ROI must be located more precisely. The traditional spot extraction method usually extracts the pixel-level edges of the spot and then finds the subpixel position of the spot center by fitting. However, spot images often contain multiple layers of edge information whose grayscale changes slowly, as shown in Fig. 5; such edges are easily mis-detected by edge detection algorithms, causing loss of the original information and thus inaccurate center extraction [24].
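A minimal sketch of the threshold-selection step described above, using OpenCV's built-in Otsu implementation, which maximizes the between-class variance of formula (13); the names are illustrative:

```python
import cv2

def segment_spot(region_i):
    """Otsu threshold [23] on the 8-bit region I: returns the threshold
    sigma of formula (13) and a binary mask whose nonzero pixels form
    the spot region I1 (zero pixels form the background I2)."""
    sigma, mask = cv2.threshold(region_i, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return sigma, mask
```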

Fig. 5. (a) Image of the laser spot, (b) 3D grayscale distribution of laser spots, (c) 2D grayscale distribution of laser spots.

An optical-center fitting algorithm can obtain the spot center position with higher precision than linear edge extraction. However, owing to perspective projection, the spot is not a standard circle on the projection panel or on the camera sensor, which decreases the accuracy of spot center extraction [25]. In fact, the diffusion function of the laser spot can be approximated by a Gaussian distribution [26]; the Gaussian fitting method uses as much of the spot information as possible, reduces the influence of noise and eccentricity error, and obtains the laser spot center position quickly and accurately [27].

Owing to the extremely high light intensity of the laser and the limited bit depth of the camera, the grayscale of the laser spot can saturate, causing inaccurate spot extraction. To eliminate the influence of gray-saturated pixels on center extraction, a new point set {I1′(x, y)} is obtained by removing the gray-saturated region from the pixel set {I1(x, y)} in I1; the spot center is then fitted by the Gaussian fitting method in {I1′(x, y)} [28,29].

$$I_1^{\prime}({x,y} )= K{e^{\left( { - \frac{{{{(x - {x_{center}})}^2}}}{{2\delta_x^2}} - \frac{{{{(y - {y_{center}})}^2}}}{{2\delta_y^2}}} \right)}}$$
where K is the peak grayscale of the spot, (xcenter, ycenter) are the coordinates of the fitted spot center, and δx and δy are the standard deviations along the image x and y axes, respectively.
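A sketch of the saturated-pixel rejection and Gaussian fit of formula (15): the logarithm of a Gaussian is quadratic in x and y, so the subpixel center follows from one linear least-squares solve. It assumes the background (I2) pixels of the ROI have been zeroed out, and the saturation cutoff is an assumed value:

```python
import numpy as np

def gaussian_spot_center(roi, sat_level=250):
    """Fit the 2-D Gaussian of formula (15) to the unsaturated spot
    pixels (the set I1') and return the subpixel centre."""
    ys, xs = np.nonzero((roi > 0) & (roi < sat_level))   # point set I1'
    g = np.log(roi[ys, xs].astype(float))
    x, y = xs.astype(float), ys.astype(float)
    # log I = c0 + c1*x + c2*y + c3*x^2 + c4*y^2
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, y ** 2])
    c, *_ = np.linalg.lstsq(A, g, rcond=None)
    # The centre is the vertex of each parabola: x_c = -c1/(2*c3), etc.
    return -c[1] / (2 * c[3]), -c[2] / (2 * c[4])
```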

For the I1′ region of each frame Mj(x, y), the spot centers of the three laser points (xjL1, yjL1), (xjL2, yjL2), and (xjL3, yjL3) are extracted using formula (15), and their corresponding world coordinates are obtained by formula (8). The coordinates (Mx, My, Mz) in the moving platform coordinate system for each frame Mj(x, y) are then obtained by formula (7). Subtracting the reference zero position (Wxl1, Wyl1, Wzl1), obtained from the first frame M1(x, y), from the (Wxlj, Wylj, Wzlj) obtained from each frame Mj(x, y) via formulas (3) and (8) yields the spatial vibration displacement (dxj, dyj, dzj) of the vibration generating device at the jth frame.

$$\left[ \begin{array}{l} {d_{xj}}\\ {d_{yj}}\\ {d_{zj}} \end{array} \right] = {}^{\textrm{M}^{\prime}}{\boldsymbol{R}_{\textrm{P}^{\prime}}}\left[ \begin{array}{c} {}^\textrm{W}{x_{lj}} - {}^\textrm{W}{x_{l1}}\\ {}^\textrm{W}{y_{lj}} - {}^\textrm{W}{y_{l1}}\\ {}^\textrm{W}{z_{lj}} - {}^\textrm{W}{z_{l1}} \end{array} \right]$$

So far, we have obtained the mathematical model of the monocular vision-based laser measuring system; the input acceleration excitation of the calibrated sensor can be obtained by extracting and solving the sequential spot image Mj(x, y) at any acquisition moment tj.

3.3 Spatial sensitivity calibration of tri-axial vibration sensors

The core parameters of the tri-axial vibration sensor are the sensitivities of the three sensitive axes and their phases. Calibrating the tri-axial vibration sensor requires the moving platform to provide vibration excitation in three directions. Since the calibrated sensor is an acceleration sensor, the accelerations aX(t), aY(t), aZ(t) of the tri-axial vibration excitation generated by the moving platform, as measured by the monocular vision method, are obtained by twice differentiating the displacements (dxj, dyj, dzj) of formula (16) over the j frames,

$$\left[ \begin{array}{l} {a_X}(t)\\ {a_Y}(t)\\ {a_Z}(t) \end{array} \right] = \left[ {\begin{array}{cc} {{{\hat{a}}_X}\cos ({\varphi_X})}&{ - {{\hat{a}}_X}\sin ({\varphi_X})}\\ {{{\hat{a}}_Y}\cos ({\varphi_Y})}&{ - {{\hat{a}}_Y}\sin ({\varphi_Y})}\\ {{{\hat{a}}_Z}\cos ({\varphi_Z})}&{ - {{\hat{a}}_Z}\sin ({\varphi_Z})} \end{array}} \right]\left[ \begin{array}{l} \cos (\omega t)\\ \sin (\omega t) \end{array} \right]$$
where ${{\hat{a}}_X}$, ${{\hat{a}}_Y}$, and ${{\hat{a}}_Z}$ are the input excitation acceleration amplitudes of the accelerometer. ISO 16063-11 recommends the sinusoidal approximation method to calculate the input excitation amplitude of the vibration sensor. To obtain the amplitudes of the tri-axial excitation acceleration measured by the monocular vision method, the sine approximation method shown in formula (18) is used to fit the j measured excitation accelerations axj(tj), ayj(tj), azj(tj) and the corresponding acquisition times tj,
$$\left[ \begin{array}{l} {a_{xj}}({t_j})\\ {a_{yj}}({t_j})\\ {a_{zj}}({t_j}) \end{array} \right] = \left[ {\begin{array}{cccc} {{A_X}}&{ - {B_X}}&{{C_X}}&{{D_X}}\\ {{A_Y}}&{ - {B_Y}}&{{C_Y}}&{{D_Y}}\\ {{A_Z}}&{ - {B_Z}}&{{C_Z}}&{{D_Z}} \end{array}} \right]\left[ \begin{array}{c} \cos ({\omega_v}{t_j})\\ \sin ({\omega_v}{t_j})\\ {t_j}\\ 1 \end{array} \right]$$
Taking axj(tj) as an example, AX and BX represent the sinusoidal acceleration components, from which the fitted amplitude and initial phase are found; CX is the linear drift component and DX is the bias component, which eliminate the effects of linear drift and bias. The values of the 12 parameters in formula (18) are obtained by least squares from the overdetermined system composed of the j×3 tri-axial excitation accelerations (axj, ayj, azj) and the corresponding times tj. The amplitudes ${\mathrm{\hat{a}}_\textrm{X}}$, ${\mathrm{\hat{a}}_\textrm{Y}}$, and ${\mathrm{\hat{a}}_\textrm{Z}}$ of the sensor input excitation acceleration and the initial phases φX, φY, φZ measured by the monocular vision method are obtained as follows:
$$\left\{ \begin{array}{c} {{\hat{a}}_X} = \sqrt {{A_X}^2 + {B_X}^2} , {\varphi_X} = \arctan ({{{{B_X}} / {{A_X}}}} )\\ {{\hat{a}}_Y} = \sqrt {{A_Y}^2 + {B_Y}^2} , {\varphi_Y} = \arctan ({{{{B_Y}} / {{A_Y}}}} )\\ {{\hat{a}}_Z} = \sqrt {{A_Z}^2 + {B_Z}^2} , {\varphi_Z} = \arctan ({{{{B_Z}} / {{A_Z}}}} )\end{array} \right.$$
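The per-axis sine approximation of formulas (18)-(19) reduces to a linear least-squares problem; a minimal numpy sketch follows (function name ours; the angular frequency ω is assumed known from the drive signal):

```python
import numpy as np

def sine_approximation(t, a, omega):
    """Four-parameter sine fit of one axis per formula (18):
    a(t) ~= A*cos(wt) - B*sin(wt) + C*t + D, solved by linear least
    squares; amplitude and initial phase follow formula (19)."""
    M = np.column_stack([np.cos(omega * t), -np.sin(omega * t),
                         t, np.ones_like(t)])
    (A, B, C, D), *_ = np.linalg.lstsq(M, a, rcond=None)
    return np.hypot(A, B), np.arctan2(B, A)   # amplitude, initial phase
```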
To facilitate the expression of the tri-axial sensor sensitivity, formula (17) is rewritten as:
$$\left[ \begin{array}{l} {a_X}(t)\\ {a_Y}(t)\\ {a_Z}(t) \end{array} \right] = \left[ {\begin{array}{cc} {{{\hat{a}}_X}\cos ({\varphi_X})}&{\textrm{i}{{\hat{a}}_X}\sin ({\varphi_X})}\\ {{{\hat{a}}_Y}\cos ({\varphi_Y})}&{\textrm{i}{{\hat{a}}_Y}\sin ({\varphi_Y})}\\ {{{\hat{a}}_Z}\cos ({\varphi_Z})}&{\textrm{i}{{\hat{a}}_Z}\sin ({\varphi_Z})} \end{array}} \right]\left[ \begin{array}{l} \cos (\omega t)\\ \textrm{i}\sin (\omega t) \end{array} \right]$$
where i is the imaginary unit; Euler's formula is used to further rewrite formula (20) as:
$$\left[ \begin{array}{l} {a_X}(t)\\ {a_Y}(t)\\ {a_Z}(t) \end{array} \right] = \left[ {\begin{array}{cc} {\textrm{Re} ({{\hat{a}}_X}{e^{\textrm{i}{\varphi_X}}})}&{\textrm{Im} ({{\hat{a}}_X}{e^{\textrm{i}{\varphi_X}}})}\\ {\textrm{Re} ({{\hat{a}}_Y}{e^{\textrm{i}{\varphi_Y}}})}&{\textrm{Im} ({{\hat{a}}_Y}{e^{\textrm{i}{\varphi_Y}}})}\\ {\textrm{Re} ({{\hat{a}}_Z}{e^{\textrm{i}{\varphi_Z}}})}&{\textrm{Im} ({{\hat{a}}_Z}{e^{\textrm{i}{\varphi_Z}}})} \end{array}} \right]\left[ \begin{array}{l} \textrm{Re} ({e^{\textrm{i}\omega t}})\\ \textrm{Im} ({e^{\textrm{i}\omega t}}) \end{array} \right]$$

Using the voltage signals VXj, VYj, VZj of the tri-axial output of the calibrated sensor collected by the DAQ card at the j moments, the voltage amplitudes ${\hat{V}_X}$, ${\hat{V}_Y}$, and ${\hat{V}_Z}$ of the three axis outputs and their initial phases φX0, φY0, φZ0 are likewise calculated using formula (18); the voltage output of the sensor can be expressed as:

$$\left[ \begin{array}{l} {V_X}(t)\\ {V_Y}(t)\\ {V_Z}(t) \end{array} \right] = \left[ {\begin{array}{cc} {\textrm{Re} ({{\hat{V}}_X}{e^{\textrm{i}{\varphi_{\textrm{X0}}}}})}&{\textrm{Im} ({{\hat{V}}_X}{e^{\textrm{i}{\varphi_{\textrm{X0}}}}})}\\ {\textrm{Re} ({{\hat{V}}_Y}{e^{\textrm{i}{\varphi_{\textrm{Y0}}}}})}&{\textrm{Im} ({{\hat{V}}_Y}{e^{\textrm{i}{\varphi_{\textrm{Y0}}}}})}\\ {\textrm{Re} ({{\hat{V}}_Z}{e^{\textrm{i}{\varphi_{\textrm{Z0}}}}})}&{\textrm{Im} ({{\hat{V}}_Z}{e^{\textrm{i}{\varphi_{\textrm{Z0}}}}})} \end{array}} \right]\left[ \begin{array}{l} \textrm{Re} ({e^{\textrm{i}\omega t}})\\ \textrm{Im} ({e^{\textrm{i}\omega t}}) \end{array} \right]$$

ISO 16063-11 specifies that the sensitivity of the sensor is the ratio of the axial output voltage to the axial input excitation, and the phase is the difference between the initial phase of the input excitation and that of the sensor output. Combining formulas (21) and (22), the spatial sensitivity matrix of the calibrated tri-axial vibration sensor is as follows:

$$\left[ \begin{array}{l} {S_X}\\ {S_Y}\\ {S_Z} \end{array} \right] = \left[ {\begin{array}{ccc} {\frac{1}{{{a_X}(t)}}}&0&0\\ 0&{\frac{1}{{{a_Y}(t)}}}&0\\ 0&0&{\frac{1}{{{a_Z}(t)}}} \end{array}} \right]\left[ \begin{array}{l} {V_X}(t)\\ {V_Y}(t)\\ {V_Z}(t) \end{array} \right] = \left[ \begin{array}{l} \frac{{{{\hat{V}}_X}}}{{{{\hat{a}}_X}}}{e^{\textrm{i}({\varphi_X} - {\varphi_{\textrm{X0}}})}}\\ \frac{{{{\hat{V}}_Y}}}{{{{\hat{a}}_Y}}}{e^{\textrm{i}({\varphi_Y} - {\varphi_{\textrm{Y0}}})}}\\ \frac{{{{\hat{V}}_Z}}}{{{{\hat{a}}_Z}}}{e^{\textrm{i}({\varphi_Z} - {\varphi_{\textrm{Z0}}})}} \end{array} \right]$$

To verify the repeatability of the monocular vision method for tri-axial vibration sensor calibration, the relative standard deviation SRStd of the sensitivity of each axis calibrated by the monocular vision method is evaluated with formula (24),

$${S_{\textrm{RStd}}} = \frac{{\sqrt {\frac{1}{{Q - 1}}\sum\limits_{i = 1}^Q {{{({S_i} - \bar{S})}^2}} } }}{{\bar{S}}} \times 100\%$$
where Q is the number of measurements, generally Q ≥ 10; Si is the sensitivity of the calibrated sensitive axis in the ith measurement, and $\bar{S}$ is the average of the calibrated sensitivities.
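Formulas (23)-(24) then combine into a few lines; a sketch with illustrative names:

```python
import numpy as np

def axis_sensitivity(V_amp, V_phase0, a_amp, a_phase):
    """Complex sensitivity of one axis per formula (23): amplitude ratio
    and the phase difference (excitation phase minus output phase)."""
    return (V_amp / a_amp) * np.exp(1j * (a_phase - V_phase0))

def relative_std(S_runs):
    """Repeatability S_RStd of formula (24) over Q >= 10 repeated
    calibrations of one axis, in percent."""
    S = np.abs(np.asarray(S_runs, dtype=complex))
    return np.std(S, ddof=1) / np.mean(S) * 100.0
```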

4. Experimental verifications and results analysis

To verify the performance of the investigated monocular vision (MV) method, a monocular vision-based low-frequency tri-axial vibration sensor calibration setup is constructed, as shown in Fig. 6. The Stewart platform, with a frequency range of DC to 2 Hz and a displacement amplitude range of 0-80 mm, provides the input excitation. The calibrated tri-axial vibration sensor (msv-3100A) is mounted at the center of the moving platform of the Stewart platform. The three light spots of the laser projection device are projected intact onto the laser receiving panel; a CMOS camera (OS10 V3-4K) with a maximum frame rate of 1000 fps and a resolution of 9 megapixels captures the laser spot motion sequence images on the projection panel. The DAQ card simultaneously acquires the output signal of the sensor being calibrated. The sensor was also calibrated using the laser interferometry (LI) method recommended by ISO 16063-11 for comparative verification.

Fig. 6. Monocular vision-based low-frequency tri-axial vibration sensor calibration setup. (I)-(III): laser interferometry, (IV): DAQ card, (V): tri-axial vibration sensor, (VI): laser projection device, (VII): camera, (VIII): laser points.

4.1 Measured excitation results

Using the setup shown in Fig. 6, sinusoidal displacement excitation with amplitudes of 3-80 mm is applied at 1/3-octave frequencies in the range of 0.16-2 Hz, with the excitation displacement amplitude decreasing as the frequency increases. The camera frame rate was set to 50 times the vibration frequency to ensure the accuracy of the excitation acceleration obtained from the motion sequence images. The displacement excitation at each frequency was measured 10 times by formulas (16) and (18) and by laser interferometry. Figure 7 shows the average displacement amplitude on each axis in the range of 0.16-2 Hz measured by the MV method after correction with the installation error model, by the original MV method without compensation, and by LI. The results of the corrected MV method are closer to those of LI. Table 1 lists the relative deviations of the MV method before and after correction compared with LI. For the X-axis, the maximum deviation between the original MV method and LI is approximately 2.4%, while that between the corrected MV method and LI is about 1% smaller. For the Y-axis, the measurement accuracy of the corrected MV method is also improved by more than 1%. For the Z-axis, except at 1.6 Hz, the accuracy is improved by about 0.3%. These results indicate that the proposed correction significantly reduces the relative measurement deviation between the MV method and LI.

Fig. 7. The measurement results by the original MV method, the MV method after correction, and LI: (a) X-axial displacement; (b) Y-axial displacement; (c) Z-axial displacement.

Table 1. The measured relative deviations between LI and the original MV method and the MV method after correction.

4.2 Calibrated sensitivity results

The measurement accuracy of the input excitation directly determines the calibration accuracy of the sensors. The three sensitive axes of the vibration sensor were calibrated in the range of 0.16-2 Hz using the corrected monocular vision method via formulas (17)-(23) and using laser interferometry; the sensitivity at each frequency was calibrated 10 times, and the repeatability of the two methods was evaluated using formula (24). At very low frequencies, the calibrated sensitivity was compensated using the bending guide correction method investigated by Bruns et al. [30].

Table 2 lists the average sensitivity amplitude and relative deviation over ten calibrations for the two methods in the range of 0.16-2 Hz. The maximum relative deviation between the monocular vision method and LI is about 2.5% for the X-axis, about 1.8% for the Y-axis, and about 0.4% for the Z-axis. Figure 8 shows the average sensitivity amplitude of each axis calibrated by the two methods in the range of 0.16-2 Hz and the repeatability of the calibration results. The SRStd of both methods is less than 0.2% for the X-axis calibration, less than 0.16% for the Y-axis calibration, and less than 0.35% for the Z-axis calibration.

Fig. 8. The SRStd of calibrated sensitivities of the tri-axial sensor by the monocular vision method and laser interferometry: (a) X-axial sensitivity; (b) Y-axial sensitivity; (c) Z-axial sensitivity.

Table 2. The calibration results by MV and LI and their relative deviations (RD).

4.3 Discussion

Figure 7 and Table 1 illustrate that the proposed MV method reduces the measurement error caused by installation errors. As the vibration frequency increases, the excitation displacement decreases, which reduces the relative measurement resolution of the camera and increases the relative measurement deviation between the MV method and LI. These effects can be mitigated by adjusting the distance between the camera and the object, enabling the MV method to achieve measurement frequencies and precision well beyond the current experiment; in certain situations, the MV method may even extend to quasi-static measurements. As illustrated in Table 2 and Fig. 8, for the calibrated tri-axial sensitivities in the frequency range of 0.16-2 Hz, the maximum relative deviation between the calibration results of the two methods was less than 2.5%, appearing in the X-axis results. For the X-axis, the excitation displacement becomes smaller as the frequency increases in the range of 0.16-0.8 Hz; since the dominant errors there arise from lens distortion and the slight bending of the laser receiving panel, the relative deviation between the monocular vision method and laser interferometry gradually decreases. As the frequency continues to increase, the relative resolution of the camera decreases, making the relative deviation of the sensitivity calibrated by the vision method gradually increase. For the Y-axis, the same trend as for the X-axis exists above 0.2 Hz, with the maximum deviation occurring at 2 Hz, where the relative deviation from laser interferometry is about 1.8%. For the Z-axis, due to the structure of the platform, the excitation displacement provided at each frequency point is smaller than that of the X- and Y-axes; it is less affected by the bending of the panel and lens distortion, so its calibration results deviate less from laser interferometry over the full frequency range, with the maximum relative deviation of about 0.4% occurring at 1.6 Hz.

Figure 8 also displays the relative standard deviations of the LI and MV methods. At low frequencies, the small excitation acceleration lowers the signal-to-noise ratio of the output signal of the calibrated sensor, resulting in a larger SRStd for both methods. As the frequency increases, the SRStd of both methods decreases significantly. For the X-axis calibration, the SRStd of both methods is less than 0.2% except at 0.63 Hz, and the results of the two methods are similar. For the Y-axis calibration, the SRStd of both methods is less than 0.16% except at 0.2 Hz, and the SRStd of the monocular vision method is significantly lower than that of laser interferometry at all frequencies. For the Z-axis calibration, the SRStd of the monocular vision method is slightly higher than that of laser interferometry, but it still meets the calibration requirements of vibration sensors in current practical engineering applications.

5. Conclusion

In this study, a novel monocular vision-based dynamic calibration method for low-frequency tri-axial vibration sensors is investigated. The method uses monocular vision to reliably measure the spatial input acceleration, which significantly improves measurement efficiency. All axial sensitivities of the sensor can be determined simultaneously with a single installation, which efficiently decreases the calibration uncertainty caused by reinstallation. The method requires only a simple, flexible, and low-cost vision measurement system, making it economical compared with the commonly used laser interferometry. Experimental results confirmed that the investigated monocular vision method achieves comparable calibration accuracy while greatly improving calibration efficiency. In the future, we will apply the investigated method to calibrate tri-axial sensors over a broader frequency range and evaluate its calibration uncertainty.

Funding

National Key Research and Development Program of China (2021YFF0600103, 2017YFF0205003); National Natural Science Foundation of China (52265066, 62203132, 52075512); Youth Science and Technology Talents Development Project of Guizhou Education Department (Qianjiaohe KY [2022]138); Doctor Foundation Project of Guizhou University (GuidaRenji Hezhi [2020] 30).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this study are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Chen, T. Chang, Q. Fu, J. Lang, W. Gao, Z. Wang, M. Yu, Y. Zhang, and H. Cui, “A Fiber-Optic Interferometric Tri-Component Geophone for Ocean Floor Seismic Monitoring,” Sensors 17(12), 47 (2017). [CrossRef]

2. A. George, A. Sunny, and J. Cyriac, “Shock response spectrum of launch vehicle using LabVIEW,” International Conference on Innovations in Information, Embedded and Communication Systems (2017). [CrossRef]  

3. J. Li, “Applications and Prospects of Mems Sensors in Automotive,” J. Phys.: Conf. Ser. 1884(1), 012010 (2021). [CrossRef]  

4. ISO 16063-16. “Methods for the calibration of vibration and shock transducers-Part 16: Calibration by Earth's gravitation,” (2014).

5. ISO 16063-11. “Methods for the calibration of vibration and shock sensors-Part 11: primary vibration calibration by laser interferometry,” (1999).

6. J. Dosch, “Low Frequency Accelerometer Calibration Using Earth's Gravity,” PCB Piezotronics Inc. (2007).

7. M. Dobosz, T. Usuda, and T. Kurosawa, “Methods for the calibration of vibration pick-ups by laser interferometry: I. Theoretical analysis,” Meas. Sci. Technol. 9(2), 232–239 (1998). [CrossRef]  

8. H. Martens, “Current state and trends of ensuring traceability for vibration and shock measurements,” Metrologia 36(4), 357–373 (1999). [CrossRef]  

9. M. Yu, “Measurement uncertainty of magnitude and phase shift for sensitivity of a low frequency vibration transducer with amplifier,” J. Vibration Shock 28(4), 106–109 (2009).

10. M. Mende and H. Nicklich, “Calibration of Very Low Frequency Accelerometers-A Challenging Task,” Sound & Vibration 45(5), 14–17 (2011).

11. H. Martens and C. Weissenborn, “Simultaneous multicomponent calibration-a new research area in the field of vibration and shock,” 1st Meeting of the Consultative Committee for Acoustics, Ultrasound and Vibration (1999).

12. A. Umeda, M. Onoe, K. Sakata, T. Fukushia, K. Kanari, H. Lioka, and T. Kobayashi, “Calibration of three-axis accelerometers using a three-dimensional vibration generator and three laser interferometers,” Sens. Actuators, A 114(1), 93–101 (2004). [CrossRef]  

13. Z. Liu, C. Cai, M. Yu, and M. Yang, “Applying Spatial Orbit Motion to Accelerometer Sensitivity Measurement,” IEEE Sens. J. 17(14), 4483–4491 (2017). [CrossRef]

14. Z. Liu, C. Cai, M. Yang, and M. Yu, “Development of a tri-axial primary vibration calibration system,” ACTA IMEKO 8(1), 33–39 (2019). [CrossRef]  

15. Z. Liu, C. Cai, and M. Yang, “Testing of a MEMS dynamic inclinometer using the Stewart platform,” Sensors 19(19), 4233 (2019). [CrossRef]  

16. M. Yang, C. Cai, Z. Liu, and Y. Wang, “Monocular Vision-Based Calibration Method for Determining Frequency Characteristics of Low-Frequency Vibration Sensors,” IEEE Sens. J. 21(4), 4377–4384 (2021). [CrossRef]

17. M. Yang, Z. Liu, C. Cai, Y. Wang, J. Yang, and J. Yang, “Monocular Vision-Based Calibration Method for the Axial and Transverse Sensitivities of Low-Frequency Triaxial Vibration Sensors with the Elliptical Orbit Excitation,” IEEE Trans. Ind. Electron. 69(12), 13763–13772 (2022). [CrossRef]  

18. M. Yang, Y. Wang, C. Cai, Z. Liu, H. Zhu, and S. Zhou, “Monocular vision-based low-frequency vibration calibration method with correction of the guideway bending in a long-stroke shaker,” Opt. Express 27(11), 15968–15981 (2019). [CrossRef]  

19. M. Yang, Z. Liu, Y. Wang, C. Cai, and J. Yang, “Monocular Vision-Based Multiparameter Dynamic Calibration Method Used for the Low-Frequency Linear and Angular Vibration Sensors,” IEEE Trans. Ind. Electron. 70(5), 5365–5374 (2022). [CrossRef]  

20. M. Yang, W. Liu, Z. Liu, C. Cai, Y. Wang, and J. Yang, “Binocular Vision-Based Method Used for Determining the Static and Dynamic Parameters of the Long-Stroke Shakers in Low-Frequency Vibration Calibration,” IEEE Trans. Ind. Electron. 70(8), 8537–8545 (2022). [CrossRef]  

21. Y. Zhang, C. Cai, Z. Liu, and D. Zheng, “Space-to-plane decoupling method for Stewart motion measurements,” Meas. Sci. Technol. 32(12), 125005 (2021). [CrossRef]  

22. Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

23. N. Otsu, “A Threshold Selection Method from Gray-Level Histograms,” IEEE Transactions on Systems, Man, and Cybernetics 9(1), 62–66 (1979). [CrossRef]

24. J. Zhu, Z. Xu, D. Fu, and C. Hu, “Laser Spot Center Detection and Comparison Test,” Photonic Sens. 9(1), 49–52 (2019). [CrossRef]  

25. Y. Shen, X. Zhang, C. W. Cheng, and L. Zhu, “Quasi-eccentricity error modeling and compensation in vision metrology,” Meas. Sci. Technol. 29(4), 045006 (2018). [CrossRef]  

26. C. Liebe, “Accuracy performance of star trackers - a tutorial,” IEEE Trans. Aerosp. Electron. Syst. 38(2), 587–599 (2002). [CrossRef]  

27. X. Yuan, G. Li, X. Tang, X. Gao, G. Huang, and Y. Li, “Centroid Automatic Extraction of Spaceborne Laser Spot Image,” Acta Geodaetica et Cartographica Sinica 47(2), 135–141 (2018). [CrossRef]  

28. J. Zhao and F. Zhou, “High-precision center location algorithm of small-scale focal spot,” Infrared and Laser Engineering 43(8), 2690–2693 (2014).

29. L. Wang, Z. Hu, and X. Hang, “Laser spot center location algorithm based on Gaussian fitting,” J. Applied Optics 33(5), 985–990 (2012).

30. T. Bruns and S. Gazioch, “Correction of shaker flatness deviations in very low frequency primary accelerometer calibration,” Metrologia 53(3), 986–990 (2016). [CrossRef]  
