## Abstract

A novel calibration method for the line-structured light vision sensor is proposed in this paper; it requires only the image of the light stripe on a movable parallel cylinder target. The equations relating the two ellipses obtained from the intersection of the light plane and the target to their projected images are established according to the perspective projection transformation, and the light plane equation is solved under the constraint that the minor axis of each ellipse is equal to the diameter of the cylinder. In the physical experiment, the field of view of the line-structured light vision sensor is about 500 mm × 400 mm, and the measurement distance is about 700 mm. A calibration accuracy of 0.07 mm is achieved using the proposed method, which is comparable to that obtained when planar targets are used.

© 2015 Optical Society of America

## 1. Introduction

Among the many vision measurement methods [1–5], the light-structured vision measurement method is one of the most widely applied in industrial environments because of its large measurement range, non-contact nature, speed, and high precision [6–8]. This technology is especially suitable for dynamic measurements. Depending on the form of the projected light, the light-structured vision measurement method can be divided into four categories, namely, the point-structured light method, the line-structured light method, the grating-structured light method, and the coded-structured light method. The point-structured light method enables the acquisition of 1D data but is incapable of 3D shape measurement. The coded-structured light method [9–13] usually involves the use of a camera and a projector to form a 3D optical sensor suited to both static and dynamic measurements. However, the limited power of the projector makes it unsuitable for dynamic measurements in complex industrial environments, especially for the 3D profiles of fast-moving or high-temperature objects. By combining a high-power laser and a camera to construct a 3D vision sensor, the line-structured light method and the grating-structured light method can be applied to measure the 3D profile of objects in complex industrial environments. These methods have been applied to on-line measurements in the rail transport industry, such as the measurement of train wheels, steel rails, and pantographs, and in the steel industry, such as the measurement of the geometric dimensions of hot steel.

Line-structured light vision sensors and grating-structured light vision sensors share similar calibration procedures, which consist of the calibration of the intrinsic parameters of the camera and of the light plane parameters. Many studies concerning the calibration of the intrinsic parameters of the camera have been published. For example, calibration methods that use 3D targets [14], 2D targets [15], 1D targets [16], and spherical targets [17,18] have been reported. We assume that the intrinsic parameters of the camera are known; hence, the present study focuses on the calibration of the light plane parameters, which has also been widely reported [19–23]. For this calibration, movable 3D targets [19], 2D targets [20,21], 1D targets [22], or a single spherical target [23] are usually used. Huynh et al. [19] utilize the principle of cross-ratio invariability to determine the calibration points of the light plane by using a 3D target. The main goal is to acquire at least three collinear points with accurate coordinates by using the target; the principle of cross-ratio invariability is then used to obtain the calibration points on the light plane with high precision. Zhou et al. [20] propose a method for the calibration of the light plane parameters based on a planar target. In this method, the calibration points on the light plane are also acquired using cross-ratio invariability. The planar target is repeatedly moved to obtain calibration points on the light plane, and the light plane equation is then fitted to these points. Liu et al. [21] also report the use of planar targets but adopt Plücker's equations to describe the line of the light stripe. Compared with the method in [20], where only a few characteristic points of the light stripes are used, the calibration precision is improved. Wei et al. [22] propose a method for calibrating the line-structured light vision system based on a 1D target. The 3D coordinates of the intersection between the light plane and the 1D target are solved using the distances between the characteristic points of the 1D target, and the light plane equation is solved by fitting the 3D coordinates of several such intersections. Liu et al. [23] propose a method based on a single ball target. The method extracts the image of the profile of the ball target to solve for the ball target position under the camera coordinate frame, and combines this with the cone determined by the laser stripe to solve the light plane equation. This method has the advantage that the profile feature of the ball target is unaffected by the placement angle of the target, but it still needs to extract the image of the profile of the ball target.

According to the above analyses, the traditional on-site calibration methods for the line-structured light vision sensor require information about both the characteristic points and the light stripe on the target to calculate the light plane equation. In some complex light environments, such as strong sunlight or at night, clear images of the light stripes and of the characteristic points on the target are difficult to obtain simultaneously. Thus, several auxiliary measures, such as awnings or auxiliary lighting, are used to help the vision sensor. All of these factors make calibrating the line-structured light vision sensor difficult in complex light environments. In addition, the line-structured light vision sensor is usually equipped with an optical filter to reduce the impact of complex light environments; however, the filter makes obtaining clear images of the characteristic points on the target impossible.

To solve the problem presented above, a novel calibration method for the line-structured light vision sensor that requires only the image of the light stripe on the target is proposed in this paper. The light plane of the laser projector intersects the parallel cylinder target to form two ellipses in space. Based on the perspective projection transformation, the equations relating the two ellipses in space to their projected images are established. The light plane equation is then solved under the constraint that the minor axis of each ellipse is equal to the diameter of the cylinder. The remainder of this paper is organized as follows: Section 2 is a detailed introduction of the basic principle of the proposed algorithm; Sections 3 and 4 present the simulation and physical experiments, respectively; and Section 5 concludes the study.

## 2. Principle of the algorithm

The procedures for calibrating the light plane parameters of the line-structured light vision sensor are shown in Fig. 1. Suppose ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$ represents the coordinate system of the camera; ${O}_{\text{u}}{x}_{\text{u}}{y}_{\text{u}}$ is the coordinate system of the image; $\pi $ is the light plane, the equation of which is written as $ax+by+cz+d=0$, where $\sqrt{{a}^{2}+{b}^{2}+{c}^{2}}=1$. The coordinate system of the line-structured light vision sensor is established upon ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$. ${Q}_{\text{1}}^{}=\left[\begin{array}{ccc}1/{\beta}_{1}^{2}& 0& 0\\ 0& 1/{\alpha}_{1}^{2}& 0\\ 0& 0& -1\end{array}\right]$ and ${Q}_{\text{2}}^{}=\left[\begin{array}{ccc}1/{\beta}_{2}^{2}& 0& 0\\ 0& 1/{\alpha}_{2}^{2}& 0\\ 0& 0& -1\end{array}\right]$ are the expressions of the two ellipses obtained from the intersections of the light plane and the target in space. ${\alpha}_{1}$ and ${\alpha}_{2}$ are the semi-major axes of ${Q}_{1}$ and ${Q}_{2}$, respectively; ${\beta}_{1}$ and ${\beta}_{2}$ are the semi-minor axes of ${Q}_{1}$ and ${Q}_{2}$, respectively. ${C}_{1}$ and ${C}_{2}$ are the images of ${Q}_{1}$ and ${Q}_{2}$, respectively.

As shown in Fig. 1, the two ellipses ${Q}_{1}$ and ${Q}_{2}$ are obtained from the intersection of the light plane and the target. Through ellipse fitting of the two light stripes in the image, the images of ${Q}_{1}$ and ${Q}_{2}$ are obtained as ${C}_{1}$ and ${C}_{2}$, respectively. The *y* axis of ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ is aligned with the major axis of ${Q}_{1}$, the *x* axis is aligned with the minor axis of ${Q}_{1}$, and the center of the ellipse is taken as ${O}_{\text{w1}}$. The coordinate system ${O}_{\text{w2}}{x}_{\text{w2}}{y}_{\text{w2}}{z}_{\text{w2}}$ is then established for ${Q}_{2}$ in a similar manner. ${T}_{1}=\left[\begin{array}{cc}{R}_{1}& {t}_{1}\\ 0& 1\end{array}\right]$ is the transformation matrix from ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ to ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$, where ${R}_{1}$ and ${t}_{1}$ are the rotation matrix and the translation vector from ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ to ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$, respectively.

The two cylinders, which have the same radius, are parallel to each other. ${Q}_{1}$ and ${Q}_{2}$ are obtained from the intersection of the light plane with the two cylinders of the target. Obviously, the coordinate frames of ${Q}_{1}$ and ${Q}_{2}$ are parallel to each other. According to [24], the lengths of the semi-major axes and the distances between the two foci of ${Q}_{1}$ and ${Q}_{2}$ are calculated as follows:

We have:

Based on Eq. (1) and Eq. (2), we have:

According to the above analyses, the following conclusions are obtained:

Conclusion 1: The two ellipses ${Q}_{1}$ and ${Q}_{2}$ have exactly the same size. The minor axes of the two ellipses ${Q}_{1}$ and ${Q}_{2}$ are both equal to the diameter of the cylinder.

Conclusion 2: The major and minor axes of the two ellipses ${Q}_{1}$ and ${Q}_{2}$ are correspondingly parallel to each other.

The above conclusions are based on the assumptions that the two cylinders are parallel to each other and have the same diameter. Hence, the diameter error and the parallelism error between the two cylinders will influence the calibration accuracy. Since a cylinder is easily machined, the machining accuracy of the proposed target is higher than that of complex targets under the same conditions, so the machining error of the proposed target has less effect on the calibration accuracy than that of other, more complex targets.
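Conclusion 1 can be checked numerically. The sketch below (our own minimal illustration, not the authors' code; the plane normal, offset, and radius are arbitrary) samples the intersection of a plane with a cylinder whose axis is the *z* axis and verifies that the semi-minor axis of the resulting ellipse equals the cylinder radius, while the semi-major axis equals $r/|u\cdot n|$ with $u$ the axis direction and $n$ the unit plane normal:

```python
import numpy as np

r = 30.0                             # cylinder radius (mm), axis along z
n = np.array([0.3, -0.2, 0.9])       # arbitrary light-plane normal (n_z != 0)
n = n / np.linalg.norm(n)
d = 100.0                            # plane: n . p = d

# sample the intersection curve: points on the cylinder forced onto the plane
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
x, y = r * np.cos(theta), r * np.sin(theta)
z = (d - n[0] * x - n[1] * y) / n[2]
pts = np.stack([x, y, z], axis=1)

center = np.array([0.0, 0.0, d / n[2]])   # where the cylinder axis meets the plane
dist = np.linalg.norm(pts - center, axis=1)
beta, alpha = dist.min(), dist.max()      # semi-minor and semi-major axes

assert abs(beta - r) < 1e-6               # minor axis = cylinder diameter (Conclusion 1)
assert abs(alpha - r / abs(n[2])) < 1e-4  # major axis depends only on the plane tilt
```

Because a parallel second cylinder differs only by a translation, it yields an ellipse with the same axes, which is why the two ellipses are congruent.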

The specific procedures of the proposed method are as follows:

Step 1: The target is placed at a proper position at least once. The light stripe on the target is captured by the line-structured light vision sensor, and the central points of the two light stripes in the image are extracted. After distortion correction, ${C}_{1}$ and ${C}_{2}$ are obtained by ellipse fitting.

Step 2: The equations relating ${C}_{1}$, ${C}_{2}$, ${Q}_{1}$ and ${Q}_{2}$ are established using the perspective projection model. ${T}_{\text{1}}$ is then obtained using the orthogonality of the rotation matrix.

Step 3: The linear and non-linear solutions of the light plane equation are obtained from ${T}_{\text{1}}$ and by non-linear optimization, respectively.

#### 2.1 Solving ${C}_{1}$ and ${C}_{2}$

The centers of the two light stripes in the image are extracted according to Steger's method [25]. Combined with ellipse fitting, ${C}_{1}$ and ${C}_{2}$ are solved, as shown in Fig. 2.
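The fitting step can be sketched as a direct least-squares conic fit (a generic sketch under our own conventions, not the authors' implementation): the stripe centers are stacked into a design matrix, and the conic coefficients are the smallest right singular vector.

```python
import numpy as np

def fit_conic(u, v):
    """Least-squares fit of a general conic a*u^2 + b*u*v + c*v^2 + d*u + e*v + f = 0
    to 2D stripe-center points, returned as a symmetric 3x3 matrix C with
    p^T C p = 0 for homogeneous p = [u, v, 1]^T."""
    D = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
    # the conic coefficients lie in the (near-)null space of D; the smallest
    # right singular vector is the least-squares solution under ||coef|| = 1
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    return np.array([[a, b / 2, d / 2],
                     [b / 2, c, e / 2],
                     [d / 2, e / 2, f]])

# synthetic stripe centers on an ellipse (semi-axes 120, 80, center (400, 300))
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
u = 400 + 120 * np.cos(t)
v = 300 + 80 * np.sin(t)
C = fit_conic(u, v)

# every sampled point satisfies p^T C p ~ 0
p = np.stack([u, v, np.ones_like(u)])
assert np.max(np.abs(np.einsum('in,ij,jn->n', p, C, p))) < 1e-6
```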

#### 2.2 Solving ${T}_{1}$

${C}_{1}$ and ${C}_{2}$ are 3 × 3 matrices whose expressions are written as Eq. (4), where ${C}_{j}$ denotes the *j*-th ellipse under ${O}_{\text{u}}{x}_{\text{u}}{y}_{\text{u}}$.

${Q}_{1}$ and ${Q}_{2}$ are 3 × 3 matrices whose expressions are written as Eq. (5), where ${Q}_{j}$ denotes the *j*-th ellipse under ${O}_{\text{w}j}{x}_{\text{w}j}{y}_{\text{w}j}$. According to the camera model,

where ${u}_{0}$ and ${v}_{0}$ are the coordinates of the principal point, ${a}_{x}$ and ${a}_{y}$ are the scale factors along the image axes *u* and *v*, and $\gamma $ is the skew of the two image axes. As known from Conclusion 2, the coordinate axes of ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ are completely parallel to those of ${O}_{\text{w2}}{x}_{\text{w2}}{y}_{\text{w2}}{z}_{\text{w2}}$, so ${R}_{1}={R}_{2}=\text{[}\begin{array}{ccc}{r}_{1}& {r}_{2}& {r}_{3}\end{array}\text{]}$. ${R}_{2}$ and ${t}_{2}$ are the rotation matrix and the translation vector from ${O}_{\text{w2}}{x}_{\text{w2}}{y}_{\text{w2}}{z}_{\text{w2}}$ to ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$, respectively.

Substituting Eq. (6) into Eq. (4),

Combining Eqs. (5) and (7),

where ${\rho}_{j}$ is a non-zero scale factor. By expanding Eq. (8), the equation relating ${C}_{1}$ and ${C}_{2}$ to ${Q}_{1}$ and ${Q}_{2}$ is obtained as Eq. (9):

If a target with one cylinder is used, the three rotation angles of ${R}_{1}$ contained in Eq. (9) are unknown quantities; ${t}_{1}$ contains three unknown quantities; $\alpha $ and the non-zero scale factor ${\rho}_{1}$ are unknown quantities; $\beta $ is a known quantity. Thus, there are a total of eight unknown quantities. However, Eq. (9) provides only six constraint equations, so calibrating the light plane parameters by using a target with one cylinder is impossible. When a target with two parallel cylinders is used, the three rotation angles of ${R}_{1}={R}_{2}$ are unknown quantities; ${t}_{1}$ and ${t}_{2}$ contain six unknown quantities; $\alpha $, ${\rho}_{1}$ and ${\rho}_{2}$ are also unknown quantities; $\beta $ is a known quantity. Thus, there are twelve unknown quantities in total. Given that Eq. (9) provides twelve equations when a target with two parallel cylinders is used, the system becomes solvable.

We can decompose Eq. (9) into twelve equations, as follows:

By combining the first six equations in Eq. (10) into the simultaneous equations of Eq. (11) and utilizing the orthogonality of ${r}_{1}$ and ${r}_{2}$,

By establishing simultaneous equations with the last six equations in Eq. (10) as Eq. (12), ${t}_{1}$ and ${t}_{2}$ can be solved.

#### 2.3 Solving the light plane equation

Given that ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ is established on the light plane, the coefficients of the light plane equation [*a*, *b*, *c*, *d*] under ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$ are set as follows:

$\tilde{p}$ is the undistorted homogeneous image coordinate of *P* under ${O}_{\text{u}}{x}_{\text{u}}{y}_{\text{u}}$. If [*a*, *b*, *c*, *d*] are known, the homogeneous coordinate ${q}_{\text{c}}={[{x}_{\text{c}},{y}_{\text{c}},{z}_{\text{c}},1]}^{\text{T}}$ of *P* under ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$ can be solved using Eq. (14):
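Since ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ lies on the light plane, the plane normal under the camera frame is the third column of ${R}_{1}$, and recovering a 3D point from its image is a ray-plane intersection. The sketch below is our own illustration of this step, not the authors' code: the intrinsic matrix reuses the values calibrated in Section 4.2, while `R1` and `t1` are placeholder values.

```python
import numpy as np

# intrinsic matrix from Section 4.2; R1, t1 are placeholders for the
# transformation from O_w1 to O_c solved in Section 2.2
K = np.array([[2733.80, 0.0, 684.23],
              [0.0, 2733.63, 524.69],
              [0.0, 0.0, 1.0]])
R1 = np.eye(3)                       # placeholder rotation O_w1 -> O_c
t1 = np.array([0.0, 0.0, 700.0])     # placeholder translation (mm)

# the z_w1 = 0 plane of O_w1 is the light plane, so its unit normal under O_c
# is the third column of R1, and d = -n . t1 in a*x + b*y + c*z + d = 0
nrm = R1[:, 2]
a, b, c = nrm
d = -nrm @ t1

def back_project(u, v):
    """Intersect the camera ray of undistorted pixel (u, v) with the light
    plane to recover the 3D point under O_c (an Eq. (14)-style solution)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, z = 1
    s = -d / (nrm @ ray)                             # n . (s*ray) + d = 0
    return s * ray

q = back_project(700.0, 500.0)
assert abs(a * q[0] + b * q[1] + c * q[2] + d) < 1e-9   # q lies on the plane
```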

#### 2.4 Non-linear optimization

To improve the calibration accuracy, the parallel cylinder target is placed at different positions, and the light plane parameters are optimized using the maximum likelihood criterion. The centers of the light stripes in the image of the target at the *i*-th position are extracted and undistorted. Suppose that the *m*-th undistorted homogeneous image coordinates of ellipses 1 and 2 are ${\tilde{p}}_{1i(m)}$ and ${\tilde{p}}_{2i(m)}$, respectively. The coordinates ${q}_{1i(m)}={[{x}_{1i(m)},{y}_{1i(m)},{z}_{1i(m)},1]}^{\text{T}}$ and ${q}_{2i(m)}={[{x}_{2i(m)},{y}_{2i(m)},{z}_{2i(m)},1]}^{\text{T}}$ of ${\tilde{p}}_{1i(m)}$ and ${\tilde{p}}_{2i(m)}$ under ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ are solved using Eq. (15).

The 3D coordinates of the centers of the light stripes in the image at each position of the target under ${O}_{\text{w1}}{x}_{\text{w1}}{y}_{\text{w1}}{z}_{\text{w1}}$ are solved using Eq. (15). The *z* coordinate components of these points are zero, indicating that these points are located on the light plane. ${Q}_{1i}$ and ${Q}_{2i}$ can then be obtained via ellipse fitting. From ${Q}_{1i}$ and ${Q}_{2i}$, the semi-major axes ${\alpha}_{1i}$ and ${\alpha}_{2i}$, the semi-minor axes ${\beta}_{1i}$ and ${\beta}_{2i}$, and the angles ${\phi}_{1i}$ and ${\phi}_{2i}$ between the major axes and the *x* axis can be calculated. The objective function is established as follows:

[*a*, *b*, *c*, *d*] can then be solved using Eq. (13).

## 3. Simulation experiment

The proposed method is verified by a simulation experiment. Generally, image noise and the dimensions of the target have a strong impact on the calibration, so the simulation experiment is designed to determine the effects of these two factors on the calibration accuracy. The conditions of the simulation experiment are as follows: camera resolution of 1380 pixels × 1080 pixels, focal length of 17 mm, and field of view of 400 mm × 300 mm. The light plane equation is expressed as $0.774x-0.126y+0.621z-276.398=0$. The calibration accuracy is evaluated by the relative errors of [*a*, *b*, *c*, *d*].

#### 3.1 Impact of the image noise on calibration accuracy

In the experiment, the diameter of the target is 60 mm. The target is placed at five different positions. Gaussian noise with zero mean and a standard deviation of 0.1 to 1 pixel with an interval of 0.1 pixels is added to the characteristic points. For each noise level, 100 experiments are carried out, and the relative errors of light plane parameters are computed. The relative errors of the calibration results at different noise levels are shown in Fig. 3(a).

As shown in Fig. 3(a), the calibration accuracy is improved with the reduction of noise. Thus, the calibration accuracy can be improved by increasing the image processing accuracy. The accuracy of the extraction of light stripe center in the image usually reaches 0.1 pixel. Based on the simulation results, the relative error of the light plane parameters calibration via the proposed method can reach 0.05%.

#### 3.2 Impact of the diameter of the cylinder on calibration accuracy

In this experiment, Gaussian noise with a level of $\sigma = 0.1$ pixel is added to the characteristic points. The diameter of the target is varied from 30 mm to 84 mm with an interval of 6 mm. For each diameter level, 100 experiments are carried out, and the RMS error is computed. The relative errors of the calibration results at the different diameter levels are shown in Fig. 3(b). From Fig. 3(b), the calibration accuracy is improved by increasing the diameter of the cylinder. When the diameter of the cylinder is larger than 50 mm, the extent of improvement of the calibration precision decreases with increasing diameter. Therefore, the diameter of the cylinder need not be increased indefinitely to improve the calibration accuracy. Good calibration accuracy is obtained when the ratio of the field range to the diameter of the cylinder is about 8 (400 mm/50 mm).

## 4. Physical experiment

The line-structured light vision sensor used in the physical experiment consists of one camera and one line laser projector, as shown in Fig. 4. An Allied Vision Technologies camera equipped with a 17 mm Schneider lens is used, with an image resolution of 1360 pixels × 1024 pixels, a field range of 500 mm × 400 mm, and a measuring distance of 700 mm. A single-line red laser projector with a power of 10 mW is used. The diameter of each cylinder in the target is 60 mm, and the machining accuracy of the target is 0.02 mm.

The physical experiment consists of the following steps. First, the performance of different targets is evaluated in complex light environments. Second, the intrinsic parameters of the camera are calibrated via the method in [15]. The light plane parameters are then calibrated using both the calibration method in [21] and the proposed method, and the calibration accuracies are compared using a planar target. Finally, the validity of the proposed method is tested by applying it to the measurement of a standard steel rail, a wheel, and a plaster cast.

#### 4.1 Performance of different targets in complex light environments

In this section, the advantages and disadvantages of the normal planar target, the LED planar target, the spherical target, and the parallel cylinder target are evaluated under complex light conditions, such as dim light, strong sunlight, a high-powered laser projector, and an optical filter. As shown in Figs. 5 to 10, the green line, the red line, and the yellow points denote the extracted light stripe, the extracted outline of the spherical target, and the extracted characteristic points, respectively.

Target images obtained in a good light environment with the normal planar target, the LED planar target, and the spherical target are shown in Fig. 5. As shown in Fig. 5, all of the characteristic points, the light stripes, and the outline of the spherical target can be extracted clearly.

Target images obtained in a dim light environment with the normal planar target, the LED planar target, and the spherical target are shown in Fig. 6. Despite increasing the exposure time of the camera, clear characteristic-point images of the normal planar target and clear outline images of the spherical target cannot be obtained. Clear characteristic points of the LED planar target can be obtained because the characteristic points are LEDs. Consequently, the LED planar target has certain advantages in a dim light environment.

Target images obtained in a strong sunlight environment are shown in Fig. 7. The characteristic points of the normal planar target and the LED planar target have image intensities similar to that of the target background, which reduces the extraction accuracy of the image features. In a strong sunlight environment, it is also difficult to obtain a clear outline image of the spherical target and a clear light stripe image. If the laser power of the vision sensor is low, a clear light stripe image cannot be obtained with any of the above three targets because the exposure time of the camera must be kept short under strong sunlight.

Images of the three targets obtained by the vision sensor with a high-powered laser projector are shown in Fig. 8. To obtain a clear image of the target characteristic points, the exposure time of the camera is increased, resulting in poor image quality of the light stripes, as shown in Fig. 8(a). To obtain a clear image of the light stripes, the exposure time of the camera is reduced, resulting in poor image quality of the target characteristic points and of the outline of the spherical target, as shown in Fig. 8(b). Obtaining clear images of the target characteristic points and the light stripe simultaneously is thus very difficult for a vision sensor with a high-powered laser projector. To work around this problem and obtain the characteristic points of the target, we fixed the target and turned off the laser projector, and then turned on the laser to obtain the light stripe images. However, this workaround is not applicable to the on-site calibration of a vision sensor in complicated field environments.

In addition, in order to reduce the impact of complex light, the line-structured light vision sensor is usually equipped with an optical filter. As shown in Fig. 9, a camera with an optical filter cannot capture the characteristic points of the targets or the outline image of the spherical target for any of the above three targets.

As shown in Figs. 5 to 9, the normal planar target and the spherical target are only suitable for the calibration of the line-structured light vision sensor in a good light environment; they perform poorly in complex light environments. The LED planar target has an advantage over the normal planar target and the spherical target in that it remains applicable in a dim light environment. However, all of these targets perform poorly in a strong sunlight environment and when the vision sensor uses a high-powered laser projector or an optical filter. Moreover, the LED planar target is costly and difficult to machine.

Furthermore, the light stripe on both the normal planar target and the LED planar target easily intersects the target characteristic points, resulting in the failure of characteristic-point extraction, as shown in Fig. 6 and Fig. 8. Consequently, special attention must be paid to avoiding the intersection of the light stripe and the characteristic points during calibration. The spherical target has no such intersection problem during calibration; however, extracting the outline of the spherical target in complex light environments is difficult. All of these factors complicate the on-site calibration of the vision sensor.

As shown in Fig. 10, clear images can be obtained with the parallel cylinder target in complex light environments, such as dim light, strong sunlight, a high-powered laser projector, and an optical filter. The proposed method needs only the image of the light stripe on the target to calibrate the line-structured light vision sensor, and the above experiments demonstrate that the proposed method has better adaptability than the current methods in complex light environments.

According to [21–23], the calibration methods using the planar target achieve better calibration accuracy than those using the 1D target and the spherical target. Therefore, we evaluate the calibration accuracy of the proposed method by comparing it with the calibration method in [21].

#### 4.2 Calibration of intrinsic parameters of camera

The intrinsic parameters of the camera are calibrated via the software in [27]. During calibration, the planar target is placed in front of the sensor 10 times. The machining accuracy of the target is 5 μm. All images used for calibration are shown in Fig. 11.

The calibration results for the intrinsic parameters of the camera are as follows: ${a}_{x}$ = 2733.80; ${a}_{y}$ = 2733.63; $\gamma $ = 0; ${u}_{0}$ = 684.23; ${v}_{0}$ = 524.69; ${k}_{1}$ = −0.23; ${k}_{2}$ = 0.31.

The uncertainties of the intrinsic parameters are: ${u}_{{a}_{x}}=1.05$; ${u}_{{a}_{y}}=1.11$; ${u}_{{u}_{0}}=1.08$; ${u}_{{v}_{0}}=1.09$.

#### 4.3 Results of calibration of the light plane parameters

The light plane parameters are calibrated via the method in [21]. The LED planar target is placed at five different positions in front of the sensor; the machining accuracy of the LED planar target is 0.02 mm. The images used in the calibration are shown in Fig. 12(a), and the calibration result is $0.7811x-0.2185y+0.5812z-424.2294=0$. The light plane parameters are then calibrated via the proposed method with the target placed at five different positions. Figure 12(b) shows the images used in the calibration, and the calibration result is $0.7822x-0.2195y+0.5816z-424.5595=0$.

#### 4.4 Analysis of experimental results

The LED planar target is placed at two positions. At each position, the 3D coordinates of the points of intersection between the light stripe and the horizontal grid lines of the LED planar target are calculated (these intersection points are called testing points). The distance between any two testing points is calculated as the measured distance *d*_{m}. Following the principle of cross-ratio invariability, the local coordinates of the testing points in the coordinate frame of the LED planar target are calculated, and the distance between any two testing points in this frame is taken as the ideal distance *d*_{t}. A total of 12 testing points are obtained, as shown in Table 1.

The distances between the first testing point and the remaining five testing points are calculated at each position via the two methods. The RMS errors of the distances between the testing points are obtained by evaluating the deviation $\Delta d$ between *d*_{m} and *d*_{t}. The RMS errors of the distances obtained via the two methods are shown in Table 2. As shown in Table 2, the RMS error of the calibration method using the LED planar target is about 0.05 mm, and that of the proposed method is 0.07 mm. Thus, the calibration accuracy of the proposed algorithm is comparable to that obtained when an LED planar target is used.
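The error metric of Table 2 can be written as a short routine. The sketch below uses made-up coordinates, not the actual testing-point data:

```python
import numpy as np

def rms_distance_error(pts_measured, pts_ideal):
    """RMS of the deviations between measured and ideal distances from the
    first testing point to each of the remaining testing points."""
    d_m = np.linalg.norm(pts_measured[1:] - pts_measured[0], axis=1)
    d_t = np.linalg.norm(pts_ideal[1:] - pts_ideal[0], axis=1)
    return np.sqrt(np.mean((d_m - d_t) ** 2))

# made-up example: three collinear points spaced 10 mm, measured with small errors
ideal = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
measured = ideal + np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]])
assert np.isclose(rms_distance_error(measured, ideal), 0.1)
```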

#### 4.5 Applications

The line-structured light vision sensor is applied to the measurements of standard rail, wheel, and plaster cast. The light stripe images are extracted via Steger's method [25]. After that, the 3D profiles are reconstructed using the above two calibration algorithms.

As shown in Fig. 13(a), the rail is a standard 60 kg/m rail. The rail wear measured by a high-precision three-coordinate measurement device (vertical wear of 2.04 mm and horizontal wear of 2.14 mm) is taken as the standard rail wear value. The measurement sites of the wheel and the plaster cast are shown in Figs. 13(b) and 13(c), respectively.

The corresponding reconstructed 3D profiles are shown in Fig. 14. The red lines denote the 3D profiles obtained using the calibration results with the LED planar target, and the blue lines denote the 3D profiles obtained via the proposed method. As shown in Fig. 14, the red line in each sub-image overlaps closely with the blue line, which means that the 3D profiles reconstructed using the two calibration results are similar to each other in all three applications. The results further verify that the proposed method can achieve a calibration accuracy comparable to that obtained when an LED planar target is used.

According to the standard value of the rail wear and the corresponding reconstructed 3D profile of the standard rail, the RMS errors of the rail wear using the two calibration methods are calculated. The RMS errors of the vertical wear and horizontal wear using the proposed method are 0.17 mm and 0.14 mm, respectively; those obtained when an LED planar target is used are 0.14 mm and 0.13 mm, respectively. The results prove that the proposed method can meet the need for high calibration accuracy of a line-structured light vision sensor in many industrial applications.

## 5. Conclusion

A novel calibration method for the line-structured light vision sensor using a parallel cylinder target is proposed in this paper. The advantages of the proposed method are as follows:

The essential difference between the proposed method and the existing on-site methods, including the method based on a single ball, is that the proposed method does not need any auxiliary information except the image of the light stripe on a target that can be moved freely. Thus, the proposed method is suitable for on-site calibration in complex light environments, even when an optical filter is used on the line-structured light vision sensor. Moreover, the parallel cylinder target is easily machined with high mechanical accuracy. A physical experiment is carried out with a field range of about 500 mm × 400 mm and a measuring distance of 700 mm. Under these conditions, the proposed method achieves a calibration accuracy of 0.07 mm, which is comparable to that of algorithms involving the use of planar targets.

## Acknowledgments

The authors acknowledge the support from National Natural Science Foundation of China (NSFC) under Grant No. 51175027, 51575033 and the Beijing Natural Science Foundation under Grant No. 3132029.

## References and links

**1. **S. Shirmohammadi and A. Ferrero, “Camera as the instrument: the rising trend of vision based measurement,” IEEE Instrum. Meas. Mag. **17**(3), 41–47 (2014). [CrossRef]

**2. **Z. Ren, J. Liao, and L. Cai, “Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision,” Appl. Opt. **49**(10), 1789–1801 (2010). [CrossRef] [PubMed]

**3. **W. Li and Y. F. Li, “Single-camera panoramic stereo imaging system with a fisheye lens and a convex mirror,” Opt. Express **19**(7), 5855–5867 (2011). [CrossRef] [PubMed]

**4. **L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express **21**(25), 30610–30622 (2013). [CrossRef] [PubMed]

**5. **E. N. Malamas, E. G. M. Petrakis, M. Zervakis, L. Petit, and J. D. Legat, “A survey on industrial vision systems, applications and tools,” Image Vis. Comput. **21**(2), 171–188 (2003). [CrossRef]

**6. **R. S. Lu, Y. F. Li, and Q. Yu, “On-line measurement of straightness of seamless steel pipe using machine vision technique,” Sens. Actuators A Phys. **94**(1-2), 95–101 (2001). [CrossRef]

**7. **A. Okamoto, Y. Wasa, and Y. Kagawa, “Development of shape measurement system for hot large forgings,” Kobe Steel Eng. Rep. **57**(3), 29–33 (2007).

**8. **Z. Liu, F. Li, B. Huang, and G. Zhang, “Real-time and accurate rail wear measurement method and experimental analysis,” J. Opt. Soc. Am. A **31**(8), 1721–1729 (2014). [CrossRef] [PubMed]

**9. **X. Zhang, Y. Li, and L. Zhu, “Color code identification in coded structured light,” Appl. Opt. **51**(22), 5340–5356 (2012). [CrossRef] [PubMed]

**10. **Y. Chen and Y. F. Li, “Self-recalibration of a colour-encoded light system for automated three-dimensional measurements,” Meas. Sci. Technol. **14**(1), 33–40 (2003). [CrossRef]

**11. **P. Griffin, L. Narasimhan, and S. Yee, “Generation of uniquely encoded light patterns for range data acquisition,” Pattern Recognit. **25**(6), 609–616 (1992). [CrossRef]

**12. **A. K. C. Wong, P. Niu, and X. He, “Fast acquisition of dense depth data by a new structured light scheme,” Comput. Vis. Image Underst. **98**(3), 398–422 (2005). [CrossRef]

**13. **T. P. Koninckx and L. Van Gool, “Real-time range acquisition by adaptive structured light,” IEEE Trans. Pattern Anal. Mach. Intell. **28**(3), 432–445 (2006). [CrossRef] [PubMed]

**14. **R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV camera and lenses,” IEEE J. Robot. Autom. **3**(4), 323–344 (1987). [CrossRef]

**15. **Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(11), 1330–1334 (2000). [CrossRef]

**16. **Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. **26**(7), 892–899 (2004). [CrossRef] [PubMed]

**17. **H. Zhang, K. Y. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Mach. Intell. **29**(3), 499–502 (2007). [CrossRef] [PubMed]

**18. **K. Y. Wong, G. Zhang, and Z. Chen, “A stratified approach for camera calibration using spheres,” IEEE Trans. Image Process. **20**(2), 305–316 (2011). [CrossRef] [PubMed]

**19. **D. Q. Huynh, R. A. Owens, and P. E. Hartmann, “Calibrating a structured light stripe system: a novel approach,” Int. J. Comput. Vis. **33**(1), 73–86 (1999). [CrossRef]

**20. **F. Q. Zhou and G. J. Zhang, “Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations,” Image Vis. Comput. **23**(1), 59–67 (2005). [CrossRef]

**21. **G. J. Zhang, Z. Liu, J. H. Sun, and Z. Z. Wei, “Novel calibration method for multi-sensor visual measurement system based on structured light,” Opt. Eng. **49**(4), 043602 (2010). [CrossRef]

**22. **Z. Z. Wei, L. J. Cao, and G. J. Zhang, “A novel 1D target-based calibration method with unknown orientation for structured light vision sensor,” Opt. Laser Technol. **42**(4), 570–574 (2010). [CrossRef]

**23. **Z. Liu, X. J. Li, F. J. Li, and G. J. Zhang, “Calibration method for line-structured light vision sensor based on a single ball target,” Opt. Lasers Eng. **69**(6), 20–28 (2015). [CrossRef]

**24. **A. R. Partridge, “Ellipses from a circular and spherical point of view,” Two-Year Coll. Math. J. **14**(5), 436–438 (1983).

**25. **C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Mach. Intell. **20**(2), 113–125 (1998). [CrossRef]

**26. **J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in *Numerical Analysis* (Springer, 1977), pp. 105–116.

**27. **J. Y. Bouguet, “The MATLAB open source calibration toolbox,” http://www.vision.caltech.edu/bouguetj/calib_doc/.