Abstract

The binocular vision galvanometric laser scanning (BGLS) system, which consists of a traditional galvanometric laser scanning (GLS) system and a stereo vision system, is widely applied in various advanced manufacturing fields such as 3-dimensional (3D) laser cutting and 3D laser projection positioning. The BGLS system usually needs to be recalibrated before being put to a specific use, since the camera parameters, as well as the relative pose between the camera and the GLS, may change across different applications. However, a full calibration of the BGLS system requires elaborate modeling and a large amount of calibration data, which considerably reduces the efficiency of the BGLS system. A rapid on-site recalibration method is proposed, which can substantially improve the efficiency and flexibility of the BGLS system. With this method, the BGLS system needs to be carefully calibrated off-site only once. The precise on-site recalibration can then be quickly realized by taking only two images of the laser spots projected by the GLS on a planar board. Moreover, an ingenious linear solving method is proposed to make the whole computation process more stable and time-saving. On-site recalibration and target shooting experiments are conducted to verify the proposed method.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Due to its superiority in scanning speed and precision, the galvanometric laser scanning (GLS) system is commonly applied in various fields such as material processing [1–5], laser projection [6,7], shape measurement [8,9], and medical imaging [10,11]. However, the GLS system alone can hardly project or process specific features on irregular three-dimensional (3D) objects. Since the relative pose between the GLS system and the 3D object is typically unknown, it is difficult to generate a specific 3D laser trajectory on the object surface. Binocular stereo vision, as a noncontact 3D sensing approach, has the ability to measure the geometry of the 3D object and align it with the GLS system in a common coordinate system. Therefore, binocular vision has been widely employed in various 3D laser applications, such as laser cutting/engraving [12,13], laser drilling [14] and laser projection positioning. The system consisting of a GLS and a binocular camera is called the binocular vision galvanometric laser scanning (BGLS) system in the context of this paper.

It is necessary for a BGLS system to be calibrated in advance to implement a specific application. Qi et al. [15] calibrate the BGLS system to cut duck feathers using the method proposed in [16], in which the GLS system is modeled as a pinhole camera based on the similarities between the distortion models of the GLS and the camera. However, a real GLS system does not have an optical center like a camera, so more optimization parameters are used to ensure the calibration accuracy. Too many parameters may lead to hard, non-convex optimization problems with a growing risk of local minima. There exist some other methods that can be used to calibrate the BGLS system, such as the physical-model-based method [17], the look-up-table (LUT) based method [18,19] and the data-driven method [19]. All these calibration methods establish the relationship between the input control signals of the GLS and the corresponding outgoing laser beams.

To keep the calibration result valid, the positional relation between the GLS system and the binocular camera must remain fixed once the calibration of the BGLS system has been completed. However, the fixed camera-GLS structure may greatly lower the flexibility of the BGLS system, since the binocular camera usually needs to adjust its position independently to guarantee that the 3D objects are within its field-of-view. In some situations, even the lenses of the binocular system need to be changed to adapt to 3D objects of different sizes and/or at different distances. The above changes will invalidate the calibration results of the BGLS system. No matter which existing method is used, a full BGLS system calibration process is quite time-consuming, since the calibration commonly requires a large amount of sampled data. The requirement for frequent and time-consuming calibration severely hinders the application of the BGLS system with an unfixed binocular camera. Most existing GLS calibration methods focus on eliminating the distortion and improving the accuracy of the GLS [20,21], but little attention has been paid to reusing the calibration results when the binocular camera changes.

In order to increase the efficiency and the flexibility of the BGLS system, we propose a rapid on-site recalibration method to support binocular camera adjustments while working on different 3D objects. To begin with, the mapping between the input control signals and the corresponding space vectors of the laser beams is obtained through a full off-site calibration process. The calibration result is saved and then reused in the subsequent on-site recalibrations. The recalibration can be quickly and precisely realized by taking only two images of the laser spots projected by the GLS on a planar board. It is worth noting that the proposed recalibration method works no matter which off-site full calibration method is used.

The rest of this paper is organized as follows. The flexible-structure BGLS system configuration, the off-site calibration method we used, and the proposed recalibration method are described in Section 2. Experiments on recalibration and target shooting are demonstrated in Section 3 to validate the proposed method. Finally, the paper is concluded in Section 4.

2. Method

2.1 BGLS system configuration

As shown in Fig. 1, the BGLS system to be recalibrated is composed of two parts: the GLS system and the binocular vision system. The GLS system consists of a laser transmitter, a double-mirror galvanometric scanner and a controller. The transmitter emits a laser beam, which is then deflected by the dual mirrors of the galvanometric scanner. The controller is responsible for switching the laser transmitter on/off and for setting the rotation angles of the dual mirrors. The binocular vision system consists of two calibrated cameras, which can realize 3D reconstruction of feature points in the field-of-view. The GLS and the two cameras are not rigidly fixed together.

Fig. 1 BGLS system configuration.

2.2 The full off-site calibration

For the sake of completeness, we first briefly introduce the off-site full calibration method we used. Let $O_oX_oY_oZ_o$ represent the coordinate system of the binocular vision system in the off-site calibration. Denote the digital voltage value applied to the motor of the first mirror as $D_x$, and that of the second mirror as $D_y$. The symbol $D$ represents $[D_x\ D_y]^T$. The outgoing laser beam corresponding to a specific $D$ is represented by $l$. In $O_oX_oY_oZ_o$, $V_o=[v_{o1}\ v_{o2}\ v_{o3}\ v_{o4}\ v_{o5}\ v_{o6}]^T$ represents the six-dimensional (6D) vector of $l$, where $[v_{o1}\ v_{o2}\ v_{o3}]^T$ represents the direction of $l$, and $[v_{o4}\ v_{o5}\ v_{o6}]^T$ represents a point on $l$. The mapping between the 2D digital control voltages $[D_x\ D_y]^T$ and the 6D vector $[v_{o1}\ v_{o2}\ v_{o3}\ v_{o4}\ v_{o5}\ v_{o6}]^T$ is denoted as $M_o: D \rightarrow V_o$.

It is mentioned in [19,22] that, given enough data, the data-driven method yields more accurate calibration results than the other methods. We therefore use a data-driven method [22] to calibrate the BGLS system off-site and obtain the mapping $M_o: D \rightarrow V_o$. The whole off-site calibration process is shown in Fig. 2, and it contains two main steps: 1) acquiring the training data set, and 2) fitting the regression model of $M_o: D \rightarrow V_o$ via statistical learning.

Fig. 2 The off-site full calibration process.

Firstly, we sample the training data in $O_oX_oY_oZ_o$. As shown in Fig. 2, control signals $D^k=[D_x^k\ D_y^k]^T, k=1,2,\ldots,Q$ are successively sent to the GLS system to emit $Q$ laser beams into the field-of-view of the binocular system. An opaque planar board is used to intercept the laser beams and form a grid of laser spots. The laser spot grid on the board is imaged by the binocular system. By means of the binocular stereo vision algorithm, the 3D coordinates $(x_o^k, y_o^k, z_o^k), k=1,2,\ldots,Q$ of the laser spots are obtained in $O_oX_oY_oZ_o$. We denote the 3D coordinates of the laser spots on one grid as $G$. As the planar board moves to $N$ equally spaced positions along the approximate direction of the outgoing laser beams, data sets $G_i, i=1,2,\ldots,N$ can be obtained. Denote the 3D coordinates of the $k$-th laser spot on the $i$-th grid as $p_o^{ik}$. The 6D vector $V_o^k=[v_{o1}^k\ v_{o2}^k\ v_{o3}^k\ v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ of the laser beam $l^k$ in $O_oX_oY_oZ_o$ is then estimated by fitting $\{p_o^{ik}, i=1,2,\ldots,N\}$. The fitting is performed by minimizing the error measure shown in Eq. (1)

$$E(V_o^k)=\sum_{i=1}^{N}\left(d_i^k\right)^2 \qquad (1)$$
where $d_i^k$ is the distance from the point $p_o^{ik}$ to the laser beam $l^k$. In this way, we obtain the 6D vectors $V_o^k, k=1,2,\ldots,Q$ of all the outgoing laser beams, and the training data set $(D^k, V_o^k), k=1,2,\ldots,Q$ is fully obtained.
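The line fit of Eq. (1) has a closed-form least-squares solution: the best-fit line passes through the centroid of the spots along their first principal direction. A minimal NumPy sketch is given below; the function name and array layout are our own conventions, not taken from the paper.

```python
import numpy as np

def fit_beam_6d(spots):
    """Fit the 6D vector V_o^k of one laser beam from its spot samples.

    spots : (N, 3) array of the 3D coordinates p_o^{ik}, i = 1..N,
            reconstructed at the N board positions.
    Returns [direction (unit vector), point on the beam].

    Minimizing the sum of squared orthogonal distances in Eq. (1) is
    solved in closed form: the line passes through the centroid of the
    spots along their first principal direction (first right-singular
    vector of the centered data).
    """
    spots = np.asarray(spots, dtype=float)
    centroid = spots.mean(axis=0)
    _, _, vt = np.linalg.svd(spots - centroid, full_matrices=False)
    direction = vt[0] / np.linalg.norm(vt[0])
    # Orient the direction from the first toward the last board position
    # so that all fitted beams share a consistent sign convention.
    if np.dot(direction, spots[-1] - spots[0]) < 0:
        direction = -direction
    return np.hstack([direction, centroid])
```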

Secondly, utilizing the training data set $(D^k, V_o^k), k=1,2,\ldots,Q$, we solve the mapping $M_o: D \rightarrow V_o$ by a statistical learning method. More specifically, a single hidden layer feedforward neural network (SLFN) [23], which has been applied in many applications for its good learning ability, is chosen to model the mapping $M_o: D \rightarrow V_o$. Taking the pair of control signals $[D_x^k\ D_y^k]^T$ as input and the parameters of the outgoing laser beam $[v_{o1}^k\ v_{o2}^k\ v_{o3}^k\ v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ as output, we establish an SLFN as shown in Fig. 2. The SLFN can be trained on the training data set with neural network learning methods such as BP [24] and ELM [25]. In terms of computational efficiency, we use the ELM to train the SLFN. The fully trained SLFN uniquely determines the mapping $M_o: D \rightarrow V_o$.
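The ELM training step admits a very compact implementation: the input-to-hidden weights are drawn at random and only the hidden-to-output weights are solved by linear least squares [25]. The sketch below is a generic ELM regressor rather than the authors' code; the hidden-layer size, the tanh activation and the input scaling are illustrative assumptions.

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer network trained by the ELM scheme [25]."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, D, V):
        # D: (Q, 2) control voltages, V: (Q, 6) beam vectors V_o^k.
        self.mu, self.sigma = D.mean(axis=0), D.std(axis=0)
        X = (D - self.mu) / self.sigma             # scale inputs for tanh
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random hidden layer
        # Output weights: least-squares solution of H @ beta = V.
        self.beta, *_ = np.linalg.lstsq(H, V, rcond=None)
        return self

    def predict(self, D):
        X = (np.atleast_2d(D) - self.mu) / self.sigma
        return np.tanh(X @ self.W + self.b) @ self.beta
```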

2.3 Recalibration strategy

With the mapping $M_o: D \rightarrow V_o$ obtained in Section 2.2, the BGLS system can be directly put to use under the precondition that the pose between the binocular system and the GLS system remains unchanged. However, the vision system usually needs to be adjusted on-site for the reasons mentioned in Section 1. Since the mapping $M_o: D \rightarrow V_o$ is defined relative to the coordinate system of the stereo camera $O_oX_oY_oZ_o$, any adjustment of the binocular camera will invalidate the original calibration result of the BGLS system. In order to avoid a full calibration in the case of camera adjustment, a rapid on-site recalibration strategy is designed in this section.

The relative position between the laser source and the galvanometric scanner is fixed, and the repeatability of the scanner is usually on the order of 1 μrad or even smaller. As a result, we can consider the outgoing laser beam corresponding to an arbitrary GLS control signal to be unchanged in the GLS's own coordinate system. Let $O_rX_rY_rZ_r$ represent the coordinate system of the binocular vision system in the on-site recalibration. It is noteworthy that the binocular system needs to be calibrated in advance of the BGLS system recalibration. Since stereo camera calibration is a well-studied problem, we do not detail this issue here. Compared with the calibration of the cameras, the calibration of a GLS system is more time-consuming and inconvenient to operate. As shown in Fig. 3, we manage to reuse the original calibration via a coordinate transformation $[R|T]$ between $O_oX_oY_oZ_o$ and $O_rX_rY_rZ_r$. Here, $R$ and $T$ represent the rotation matrix and the translation vector between $O_oX_oY_oZ_o$ and $O_rX_rY_rZ_r$, respectively. In $O_oX_oY_oZ_o$, the laser beams corresponding to the control digitals $D^k$ are denoted as $l_o^k, k=1,2,\ldots,Q$, whose 6D space vectors are $V_o^k=[v_{o1}^k\ v_{o2}^k\ v_{o3}^k\ v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$. In $O_rX_rY_rZ_r$, the laser beams corresponding to the same digitals are denoted as $l_r^k, k=1,2,\ldots,Q$, whose 6D space vectors are $V_r^k=[v_{r1}^k\ v_{r2}^k\ v_{r3}^k\ v_{r4}^k\ v_{r5}^k\ v_{r6}^k]^T$. In the recalibration process, a planar board is placed at two different positions so that two grids of laser spots are formed on the board. The images of the two laser spot grids are captured by the on-site binocular system for calculating $[R|T]$.

Fig. 3 The coordinate transformation between $O_oX_oY_oZ_o$ and $O_rX_rY_rZ_r$.

The two groups of space vectors $V_o^k=[v_{o1}^k\ v_{o2}^k\ v_{o3}^k\ v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ and $V_r^k=[v_{r1}^k\ v_{r2}^k\ v_{r3}^k\ v_{r4}^k\ v_{r5}^k\ v_{r6}^k]^T$ satisfy the following two equations:

$$\left\{\begin{array}{l}[v_{r1}^k\ v_{r2}^k\ v_{r3}^k]^T=R\,[v_{o1}^k\ v_{o2}^k\ v_{o3}^k]^T\\[2pt] [v_{r4}^k\ v_{r5}^k\ v_{r6}^k]^T=R\,[v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T+T\end{array}\right.,\quad k=1,2,\ldots,Q \qquad (2)$$

where $[v_{o1}^k\ v_{o2}^k\ v_{o3}^k]^T$ and $[v_{r1}^k\ v_{r2}^k\ v_{r3}^k]^T$ represent the direction vector of the laser beam corresponding to the control digital $D^k$ in $O_oX_oY_oZ_o$ and $O_rX_rY_rZ_r$, respectively, and $[v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ and $[v_{r4}^k\ v_{r5}^k\ v_{r6}^k]^T$ represent the 3D coordinates of the point on the laser beam in $O_oX_oY_oZ_o$ and $O_rX_rY_rZ_r$, respectively. Therefore, we can estimate the transformation matrix $[R|T]$ by aligning the two groups of space vectors $[v_{r1}^k\ v_{r2}^k\ v_{r3}^k\ v_{r4}^k\ v_{r5}^k\ v_{r6}^k]^T$ and $[v_{o1}^k\ v_{o2}^k\ v_{o3}^k\ v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$. The space vectors $[v_{o1}^k\ v_{o2}^k\ v_{o3}^k\ v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T, k=1,2,\ldots,Q$ in $O_oX_oY_oZ_o$ have been obtained in the off-site calibration. Utilizing the captured images, the two grids of 3D points $G_{r1}=\{p_{r1}^k\,|\,(x_{r1}^k, y_{r1}^k, z_{r1}^k), k=1,2,\ldots,Q\}$ and $G_{r2}=\{p_{r2}^k\,|\,(x_{r2}^k, y_{r2}^k, z_{r2}^k), k=1,2,\ldots,Q\}$ on the planar board are easily obtained by the binocular stereo vision algorithm. According to Eq. (2), the two groups of space vectors are aligned by minimizing the error measure in Eq. (3)
$$E(R,T)=\sum_{k=1}^{Q}\left((d_1^k)^2+(d_2^k)^2\right) \qquad (3)$$
where $d_1^k$ is the distance from the point $p_{r1}^k$ to the laser beam $l_r^k$ in $O_rX_rY_rZ_r$, and $d_2^k$ is the distance from the point $p_{r2}^k$ to $l_r^k$. The transformation matrix $[R|T]$ can be obtained by minimizing the objective function in Eq. (3). A good initial estimate $[R_0|T_0]$ is the key to preventing the optimization process from being trapped in local minima and to shortening the convergence time. In the following two subsections, we propose a two-step linear method to estimate the initial values $R_0$ and $T_0$.
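For reference, the objective of Eq. (3) can be evaluated directly from point-to-beam distances. A short NumPy sketch follows; the array layout is our own convention, not taken from the paper.

```python
import numpy as np

def point_to_beam_distance(p, direction, anchor):
    """Orthogonal distance from a 3D point p to the line through
    'anchor' with the given (not necessarily unit) direction."""
    u = direction / np.linalg.norm(direction)
    diff = p - anchor
    return np.linalg.norm(diff - np.dot(diff, u) * u)

def alignment_error(R, T, V_o, G_r1, G_r2):
    """Error measure E(R, T) of Eq. (3).

    V_o  : (Q, 6) off-site beam vectors [direction, point] in O_o.
    G_r1 : (Q, 3) spot grid at the first board position in O_r.
    G_r2 : (Q, 3) spot grid at the second board position in O_r.
    """
    dirs = V_o[:, :3] @ R.T        # beam directions transformed to O_r
    pts = V_o[:, 3:] @ R.T + T     # beam anchor points transformed to O_r
    return sum(point_to_beam_distance(p1, d, a) ** 2 +
               point_to_beam_distance(p2, d, a) ** 2
               for d, a, p1, p2 in zip(dirs, pts, G_r1, G_r2))
```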

2.3.1 Initial estimation of rotation matrix

As shown in Fig. 4(a), the initial rotation matrix $R_0$ is estimated first. To achieve this, the 3D coordinates $[v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ on the laser beam $l_o^k$ are replaced by the corresponding 3D coordinates $[x_{r1}^k\ y_{r1}^k\ z_{r1}^k]^T$. In this way, the group of laser beams $l_o^k, k=1,2,\ldots,Q$ is transformed into a new group of laser beams, denoted as $l_1^k, k=1,2,\ldots,Q$. Then, by further applying a proper pure rotation $R_0$ to $l_1^k$, it passes through the corresponding laser spots $p_{r1}^k$ and $p_{r2}^k$. So, we have Eq. (4).

Fig. 4 The two-step method for estimating the initial $[R_0|T_0]$. (a) The estimation of the rotation matrix $R_0$. (b) The estimation of the translation vector $T_0$.

$$R_0\begin{bmatrix}v_{o1}^k\\v_{o2}^k\\v_{o3}^k\end{bmatrix}=\begin{bmatrix}x_{r2}^k-x_{r1}^k\\y_{r2}^k-y_{r1}^k\\z_{r2}^k-z_{r1}^k\end{bmatrix} \qquad (4)$$

To reduce the number of parameters in $R_0$, we use the Rodrigues (Cayley) representation of the rotation matrix given by Eq. (5)

$$R_0=(I+S)(I-S)^{-1} \qquad (5)$$
where $S$ is the antisymmetric matrix $\begin{bmatrix}0 & -c & b\\ c & 0 & -a\\ -b & a & 0\end{bmatrix}$, $a, b, c$ are three independent parameters, and $I$ is the $3\times 3$ identity matrix. By substituting Eq. (5) into Eq. (4), we have
$$\begin{bmatrix}0 & -(z_{21}^k+v_{o3}^k) & y_{21}^k+v_{o2}^k\\ z_{21}^k+v_{o3}^k & 0 & -(x_{21}^k+v_{o1}^k)\\ -(y_{21}^k+v_{o2}^k) & x_{21}^k+v_{o1}^k & 0\end{bmatrix}\begin{bmatrix}a\\ b\\ c\end{bmatrix}=\begin{bmatrix}v_{o1}^k-x_{21}^k\\ v_{o2}^k-y_{21}^k\\ v_{o3}^k-z_{21}^k\end{bmatrix} \qquad (6)$$
where $x_{21}^k=x_{r2}^k-x_{r1}^k$, $y_{21}^k=y_{r2}^k-y_{r1}^k$, and $z_{21}^k=z_{r2}^k-z_{r1}^k$. As $k$ varies from 1 to $Q$, a linear equation system is constituted. The coefficient matrix of the linear system has full column rank when $Q\ge 2$. In other words, two laser beams are enough to determine the rotation matrix $R_0$ in theory. In order to improve the stability of the solution, we increase the sample number $Q$ to constitute an overdetermined linear system. In principle, more samples help alleviate the influence of the random errors in the sample data. On the other hand, too many samples could exceed the capacity of the GLS and increase the computational burden of the image feature extraction of the laser spots. The sample number $Q$ is a tradeoff between the two sides. It is acceptable in practice as long as $Q$ is large enough within the GLS capacity and the $Q$ laser beams form a high-quality laser spot array in the recalibration image. The parameters $a, b, c$ of the overdetermined linear system can be solved by the least squares method. With the parameters $a, b, c$, the rotation matrix $R_0$ is derived according to Eq. (5).
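A sketch of this linear step is given below. Because Eq. (4) equates a rotated direction with a finite spot displacement, the sketch normalizes both vectors to unit length before stacking the per-beam equations of Eq. (6); this normalization, like the NumPy layout, is our own assumption rather than a detail taken from the paper.

```python
import numpy as np

def estimate_initial_rotation(V_o, G_r1, G_r2):
    """Linear estimate of R0 from Eqs. (4)-(6).

    V_o  : (Q, 6) off-site beam vectors; columns 0..2 are directions,
           assumed oriented from the scanner toward the boards.
    G_r1, G_r2 : (Q, 3) spot grids reconstructed on-site.
    """
    v = V_o[:, :3]
    v = v / np.linalg.norm(v, axis=1, keepdims=True)   # unit directions
    x = G_r2 - G_r1
    x = x / np.linalg.norm(x, axis=1, keepdims=True)   # unit spot displacements

    rows, rhs = [], []
    for vk, xk in zip(v, x):
        w = xk + vk                                    # sum vector of Eq. (6)
        rows.append(np.array([[0.0, -w[2], w[1]],
                              [w[2], 0.0, -w[0]],
                              [-w[1], w[0], 0.0]]))
        rhs.append(vk - xk)
    a, b, c = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)[0]

    # Cayley (Rodrigues) transform of Eq. (5).
    S = np.array([[0.0, -c, b], [c, 0.0, -a], [-b, a, 0.0]])
    I = np.eye(3)
    return (I + S) @ np.linalg.inv(I - S)
```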

2.3.2 Initial estimation of translation vector

The initial translation vector $T_0=[X_0\ Y_0\ Z_0]^T$ is also obtained by solving a linear equation system, as shown in Fig. 4(b). By applying the estimated rotation matrix $R_0$, the laser beams $l_o^k, k=1,2,\ldots,Q$ are transformed into $l_2^k, k=1,2,\ldots,Q$. The direction vector of $l_2^k$ is $R_0[v_{o1}^k\ v_{o2}^k\ v_{o3}^k]^T$, and the 3D coordinates of a point $p^k$ on $l_2^k$ are $R_0[v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$. If $R_0$ were exactly accurate, a unique pure translation would exist that aligns the two groups of laser beams $l_2^k$ and $l_r^k, k=1,2,\ldots,Q$. Using the fact that the points $p_{r1}^k$ and $p_{r2}^k$ lie on the laser beam $l_r^k$, we have the following two linear equations:

$$\begin{bmatrix}b_1^k t_1^k\\ b_2^k t_1^k\\ b_3^k t_1^k\end{bmatrix}+\begin{bmatrix}b_4^k\\ b_5^k\\ b_6^k\end{bmatrix}+\begin{bmatrix}X_0\\ Y_0\\ Z_0\end{bmatrix}=\begin{bmatrix}x_{r1}^k\\ y_{r1}^k\\ z_{r1}^k\end{bmatrix} \qquad (7)$$
$$\begin{bmatrix}b_1^k t_2^k\\ b_2^k t_2^k\\ b_3^k t_2^k\end{bmatrix}+\begin{bmatrix}b_4^k\\ b_5^k\\ b_6^k\end{bmatrix}+\begin{bmatrix}X_0\\ Y_0\\ Z_0\end{bmatrix}=\begin{bmatrix}x_{r2}^k\\ y_{r2}^k\\ z_{r2}^k\end{bmatrix} \qquad (8)$$
where $[b_1^k\ b_2^k\ b_3^k]^T$ is the direction vector of $l_2^k$, $[b_4^k\ b_5^k\ b_6^k]^T$ are the 3D coordinates of the point $p^k$ on $l_2^k$, and $t_1^k$ and $t_2^k$ are the distances by which $p^k$ moves along the direction vector of $l_2^k$ to reach $p_{r1}^k$ and $p_{r2}^k$, respectively. As the index $k$ varies from 1 to $Q$, a linear system with $2Q$ vector equations ($6Q$ scalar equations) can be constituted as Eq. (9).

The coefficient matrix of Eq. (9) has full column rank $2Q+3$ when $Q>1$. We can therefore directly obtain the translation vector $T_0=[X_0\ Y_0\ Z_0]^T$ from the least squares solution of the linear system. Theoretically, two nonparallel laser beams are sufficient to uniquely determine the translation vector $T_0$. As mentioned above, the number of laser beams $Q$ is much larger than 2 in practice to suppress the random errors.

$$\begin{bmatrix}\mathbf{b}^1 & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} & I_3\\ \mathbf{0} & \mathbf{b}^1 & \cdots & \mathbf{0} & \mathbf{0} & I_3\\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{b}^Q & \mathbf{0} & I_3\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{b}^Q & I_3\end{bmatrix}_{6Q\times(2Q+3)}\begin{bmatrix}t_1^1\\ t_2^1\\ \vdots\\ t_1^Q\\ t_2^Q\\ X_0\\ Y_0\\ Z_0\end{bmatrix}=\begin{bmatrix}x_{r1}^1-b_4^1\\ y_{r1}^1-b_5^1\\ z_{r1}^1-b_6^1\\ x_{r2}^1-b_4^1\\ y_{r2}^1-b_5^1\\ z_{r2}^1-b_6^1\\ \vdots\\ x_{r1}^Q-b_4^Q\\ y_{r1}^Q-b_5^Q\\ z_{r1}^Q-b_6^Q\\ x_{r2}^Q-b_4^Q\\ y_{r2}^Q-b_5^Q\\ z_{r2}^Q-b_6^Q\end{bmatrix} \qquad (9)$$
where $\mathbf{b}^k=[b_1^k\ b_2^k\ b_3^k]^T$ and $I_3$ is the $3\times 3$ identity matrix.
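A direct implementation of this linear step might look as follows; again, this is a sketch with our own array conventions rather than the authors' code.

```python
import numpy as np

def estimate_initial_translation(V_o, R0, G_r1, G_r2):
    """Linear estimate of T0 by least-squares solution of Eq. (9).

    The unknowns are the 2Q line parameters t_1^k, t_2^k and the three
    translation components X0, Y0, Z0.
    """
    Q = V_o.shape[0]
    dirs = V_o[:, :3] @ R0.T      # [b1 b2 b3]: rotated beam directions
    pts = V_o[:, 3:] @ R0.T       # [b4 b5 b6]: rotated anchor points

    A = np.zeros((6 * Q, 2 * Q + 3))
    rhs = np.zeros(6 * Q)
    for k in range(Q):
        r1, r2 = 6 * k, 6 * k + 3
        A[r1:r1 + 3, 2 * k] = dirs[k]        # column of t_1^k
        A[r2:r2 + 3, 2 * k + 1] = dirs[k]    # column of t_2^k
        A[r1:r1 + 3, -3:] = np.eye(3)        # columns of X0, Y0, Z0
        A[r2:r2 + 3, -3:] = np.eye(3)
        rhs[r1:r1 + 3] = G_r1[k] - pts[k]
        rhs[r2:r2 + 3] = G_r2[k] - pts[k]
    solution, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return solution[-3:]                      # T0 = [X0, Y0, Z0]
```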

2.3.3 Transformation parameters refinement

As mentioned in Section 2.3.1, we use the point $[x_{r1}^k\ y_{r1}^k\ z_{r1}^k]^T$ in $G_{r1}$ to replace the corresponding point $[v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ on $l_o^k$. The point $[v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ is obtained from the straight-line fitting with many grids of laser spots in the off-site full calibration. Obviously, its positional accuracy is higher than that of $[x_{r1}^k\ y_{r1}^k\ z_{r1}^k]^T$, which comes from only one grid. Due to this replacement, the relative positional accuracy of $l_1^k, k=1,2,\ldots,Q$ is accordingly lower than that of $l_o^k, k=1,2,\ldots,Q$. To guarantee the recalibration accuracy, a global optimization of the transformation parameters is necessary. Taking $R_0$ and $T_0$ as the initial values, the transformation parameters are refined by minimizing the objective function in Eq. (3). This optimization task is solved by the Levenberg-Marquardt method [26].
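The refinement can be carried out with any Levenberg-Marquardt implementation. The sketch below uses SciPy and parameterizes the rotation by a rotation vector, which is an implementation choice of ours rather than a detail from the paper; the residuals are the point-to-beam distances $d_1^k$ and $d_2^k$ of Eq. (3).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_transformation(R0, T0, V_o, G_r1, G_r2):
    """Refine [R|T] by minimizing Eq. (3) with Levenberg-Marquardt."""

    def residuals(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        T = params[3:]
        dirs = V_o[:, :3] @ R.T
        dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
        pts = V_o[:, 3:] @ R.T + T
        res = []
        for grid in (G_r1, G_r2):
            diff = grid - pts
            proj = np.sum(diff * dirs, axis=1, keepdims=True) * dirs
            res.append(np.linalg.norm(diff - proj, axis=1))  # d_1^k or d_2^k
        return np.concatenate(res)

    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), T0])
    result = least_squares(residuals, x0, method='lm')
    return Rotation.from_rotvec(result.x[:3]).as_matrix(), result.x[3:]
```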

3. Experiments and analysis

Figure 5 illustrates the specific hardware of the BGLS system used in the following experiments. The GLS system uses an economical 520 nm semiconductor laser and a TSH8050A/D galvanometer (Century Sunny, Beijing, China). Both the laser transmitter and the galvanometric scanning head are controlled by a GT-400-Scan control board (GuGao, Shenzhen, China). The binocular system consists of a tripod, two MG 419B CMOS cameras (Schneider, Germany) and two lenses. The software for completing the whole experimental process runs on a personal computer with a 3.1 GHz CPU and 8 GB RAM.

Fig. 5 The hardware setup of the BGLS system in the full calibration.

3.1 Full system calibration

Two lenses with 12 mm focal length are used in the original full calibration. In the sampling process, 900 pairs of control digitals $D^k, k=1,2,\ldots,900$ are input to generate 900 outgoing laser beams in the field-of-view of the binocular system. The 900 pairs of digitals are uniformly spaced on the virtual digital plane, as shown in Fig. 6. A total of 100 laser spot grids $G_i, i=1,2,\ldots,100$ (as shown in Fig. 2) are used for fitting the straight lines of the laser beams. From the first grid $G_1$ to the last grid $G_{100}$, the planar board moved approximately 1 meter. The 3D coordinates of the laser spots in the 100 grids are obtained by the binocular system. The space vectors $V_o^k, k=1,2,\ldots,900$ of the outgoing laser beams, and subsequently the mapping $M_o: D \rightarrow V_o$, are computed according to the methods in Section 2.2.
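For illustration, such a uniformly spaced 30 × 30 grid of control digitals could be generated as follows; the voltage range of ±20000 is only a placeholder assumption, not the actual digital range of the GT-400-Scan board.

```python
import numpy as np

# 30 x 30 = 900 control digitals, uniformly spaced on the virtual
# digital plane (Fig. 6). The range [-20000, 20000] is an illustrative
# placeholder for the real digital voltage limits of the controller.
dx, dy = np.meshgrid(np.linspace(-20000, 20000, 30),
                     np.linspace(-20000, 20000, 30))
D_samples = np.column_stack([dx.ravel(), dy.ravel()])   # shape (900, 2)
```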

Fig. 6 Distribution of the 900 control digitals.

3.2 Recalibration experiment

To validate the proposed recalibration method, two common situations that require recalibration are simulated. In the first case, shown in Figs. 7(a) and 7(b), the binocular system is moved to positions different from the original place (see Fig. 5). In the second case, the camera lenses of the binocular system are changed, as shown in Fig. 7(c). In the recalibration experiment, we use the planar board shown in Fig. 8(a) to acquire the laser spot grids (Fig. 8(b)).

Fig. 7 The situations in which the binocular system is changed in the recalibration experiment. (a)-(b) The position between the binocular system and the GLS system is changed. (c) Camera lenses with different focal lengths are used.

Fig. 8 Data collection for recalibration. (a) The planar board for acquiring the laser spot grids. (b) An actual laser spot grid.

To evaluate the performance of the proposed recalibration method, we use the root mean squared error (RMSE) measure in Eq. (10) to estimate the error of the solved coordinate transformation [R|T]

$$E_d=\sqrt{\frac{1}{900}\sum_{k=1}^{900}\left((d_1^k)^2+(d_2^k)^2\right)} \qquad (10)$$
where $d_1^k$ is the distance from the point $p_{r1}^k$ to the corresponding laser beam $l_r^k$, and $d_2^k$ is the distance from the point $p_{r2}^k$ to $l_r^k$. The space vector $[v_{r1}^k\ v_{r2}^k\ v_{r3}^k\ v_{r4}^k\ v_{r5}^k\ v_{r6}^k]^T$ of $l_r^k$ is calculated by $[v_{r1}^k\ v_{r2}^k\ v_{r3}^k]^T=R[v_{o1}^k\ v_{o2}^k\ v_{o3}^k]^T$ and $[v_{r4}^k\ v_{r5}^k\ v_{r6}^k]^T=R[v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T+T$. The space vector $[v_{o1}^k\ v_{o2}^k\ v_{o3}^k\ v_{o4}^k\ v_{o5}^k\ v_{o6}^k]^T$ is obtained in the original full calibration, and the coordinate transformation $[R|T]$ is obtained by the proposed recalibration method. Obviously, the smaller the value of $E_d$, the better the performance of the recalibration.

The experimental results are listed in Table 1. The situations (a) and (b) in Table 1 correspond to the situations in Fig. 7(a) and Fig. 7(b), respectively. The situations (c) and (d) in Table 1 represent the situations in which the original 12 mm focal length camera lenses are replaced with 17 mm and 35 mm focal length lenses, respectively, as shown in Fig. 7(c).

Table 1. Performance of the recalibration method in the representative situations.

As shown in Table 1, the recalibration error index $E_d$ in each situation is less than 0.11 mm, and the running time of the recalibration process is about 5 s. Here, the running time includes not only the time of the global optimization but also the time for estimating the initial value $[R_0|T_0]$. The experimental results indicate that the proposed recalibration method is robust and works well in the four simulated situations.

The following issues should be taken into consideration in the recalibration process. 1) The inner structure of the GLS system must not change. If the relative position between the laser emitter and the two galvanometric mirrors is changed in the experiment, the BGLS system needs a full calibration again. 2) An overly strong lighting environment should be avoided when sampling the laser spots. 3) The field-of-view of the changed binocular system should be able to cover the effective scanning region of the GLS.

3.3 Target shooting experiments

To further verify the proposed recalibration method, we conduct target shooting experiments right after the recalibration of each simulated situation. A pattern of 49 target circles (shown in Fig. 9(a)) is placed within the depth of field of the binocular system. The 3D coordinates $p_n^{dst}=[x_n^{dst}\ y_n^{dst}\ z_n^{dst}]^T, n=1,2,\ldots,49$ of the circle centers are obtained by the binocular system. By using the originally calibrated mapping $M_o: D \rightarrow V_o$, the recalibrated coordinate transformation $[R|T]$, and the coordinates $p_n^{dst}, n=1,2,\ldots,49$, the digital voltages $D_n^{dst}=[D_{nx}^{dst}\ D_{ny}^{dst}]^T, n=1,2,\ldots,49$ for shooting the 49 target circles are obtained by

$$D_n^{dst}=\arg\min_{D}\ d_n(D) \qquad (11)$$
where $d_n(D)$ is the distance from the 3D point $p_n^{dst}=[x_n^{dst}\ y_n^{dst}\ z_n^{dst}]^T$ to the spatial beam $l_D$, which is the laser beam corresponding to the digital voltages $D=[D_x\ D_y]^T$. Utilizing these digital voltages, we control the GLS system to shoot the target circle centers, as shown in Fig. 9(b). The coordinates $p_n^{spot}, n=1,2,\ldots,49$ of the laser spot centers shot on the pattern are also obtained by the binocular system. Then we use the root mean squared deviation $S_d=\sqrt{\frac{1}{49}\sum_n\|p_n^{dst}-p_n^{spot}\|^2}$ to measure the shooting precision, as shown in Fig. 9(c). The shooting performances in the four simulated situations are shown in Fig. 10.
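Equation (11) amounts to a two-dimensional search over the control voltages. A possible sketch, built on a trained regressor for $M_o$ (e.g., the ELM regressor sketched in Section 2.2) and the recalibrated $[R|T]$, is given below; the derivative-free Nelder-Mead search and the initial-guess strategy are our own assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def shoot_target(p_dst, mapping, R, T, D_init):
    """Solve Eq. (11): find the control voltages whose beam passes
    closest to the 3D target point p_dst (given in O_r).

    mapping : trained regressor of M_o with a predict() method.
    D_init  : starting guess, e.g., the voltages of the training beam
              nearest to p_dst.
    """
    def beam_distance(D):
        v = np.asarray(mapping.predict(D.reshape(1, -1)))[0]  # 6D beam in O_o
        u = R @ v[:3]
        u = u / np.linalg.norm(u)
        anchor = R @ v[3:] + T                                 # beam point in O_r
        diff = p_dst - anchor
        return np.linalg.norm(diff - np.dot(diff, u) * u)      # d_n(D)

    return minimize(beam_distance, np.asarray(D_init, float),
                    method='Nelder-Mead').x
```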

Fig. 9 Target shooting experiment. (a) The pattern of target circles. (b) Experiment result of target shooting. (c) The distance between the centers of the laser spot and the target circle.

Fig. 10 The shooting performances in four different simulated situations. (a) and (b) The shooting performances in the situations shown in Figs. 7(a) and 7(b). (c) and (d) The shooting performances in the situations where the camera lenses of the binocular system are changed from 12 mm to 17 mm and 35 mm, respectively.

For comparison, the target shooting experiments are also implemented right after the original full system calibration. The only difference is that the coordinate transformation $[R|T]$ is no longer needed. Theoretically, the two grids of laser spots (as shown in Fig. 4) which are used to recalibrate the system can also be used to directly perform the original system calibration described in Section 2.2. Taking this into consideration, we recalculate the 900 space vectors $V_r'^k, k=1,2,\ldots,900$ with only the two laser spot grids used for the recalibration of situation (d) in Section 3.2. Then another mapping $M_r': D \rightarrow V_r'$ is obtained via the same statistical learning process. Using $M_r': D \rightarrow V_r'$ and the mapping $M_o: D \rightarrow V_o$ calibrated in Section 3.1, respectively, we perform two additional shooting experiments, the results of which are shown in Fig. 11. The root mean squared deviations $S_d$ of the two additional shooting experiments are listed in the first two rows of Table 2, and the $S_d$ values of the four recalibration situations mentioned above are listed in order in the last four rows of Table 2.

Fig. 11 The shooting performances in the two additional experiments. (a) The shooting performance with $M_r': D \rightarrow V_r'$ calibrated by the two laser spot grids used for recalibration. (b) The shooting performance with $M_o: D \rightarrow V_o$ obtained in Section 3.1.

Table 2. Performance measure $S_d$ in the six situations.

As listed in the last four rows of Table 2, the shooting accuracy in the four recalibration situations is around 0.3 mm, which demonstrates the good robustness and accuracy of the proposed recalibration method. By comparing the last four rows of Table 2 with the second row, we find that the accuracy loss of the recalibration relative to the original full calibration is very small.

The first row of Table 2 shows that the shooting accuracy (0.794 mm) based on the mapping $M_r': D \rightarrow V_r'$ is significantly worse than the accuracy (0.257 mm) based on the mapping $M_o: D \rightarrow V_o$ obtained in Section 3.1. This result reasonably illustrates that directly calibrating the system with only two laser spot grids is undesirable. In the original full calibration for establishing $M_o: D \rightarrow V_o$, 100 grids of laser spots within a range of about 1 m were collected and used to fit the laser beams. The large amount of redundant sampling data largely eliminated the influence of random errors. In contrast, when only two laser spot grids were used to calibrate the system from scratch, each laser beam was fitted with only two points. Therefore, the random errors in the 3D coordinates of the laser spots inevitably resulted in the accuracy decrease.

In the recalibration, the two laser spot grids are used for solving the coordinate transformation $[R|T]$ rather than for fitting the laser beams and then calculating the mapping $M_r': D \rightarrow V_r'$. The two grids provide up to $900\times 2$ 3D points to jointly determine the transformation matrix. This redundancy also effectively suppresses the random errors. That is the reason why two laser spot grids work well for the recalibration but are undesirable for the original calibration.

3.4 Projection positioning experiment

In some industrial fields, the 3D laser curve projection positioning technique is widely applied to guide workers in fulfilling related tasks, such as prepreg ply placement in composite material part fabrication, logo painting for product decoration, parts/components assembly, quality inspection and so on. With the BGLS system recalibrated in Section 3.2, we can implement the projection positioning of 3D contour curves on a freeform surface. The CAD model of a car door is shown in Fig. 12(a), and the two red contours are the designed target contours to be laser projected. Before the projection, the transformation matrix from the model system to the binocular system was calculated by utilizing some visual feature points placed on the surface of the car door. The 3D coordinates of these feature points in the model system, together with the car door surface, are measured in advance. In a real production process, the object to be laser projected is typically fixed with a positioning jig. By setting technical holes/bosses at known positions of the jig, the relative geometric relationship between the holes/bosses and the object can be strictly guaranteed by manufacturing process control. The centers of these technical holes/bosses can be used as the visual feature points for system alignment.

Fig. 12 The projection positioning experiment. (a) The CAD model of the car door. (b) Projection result in linear interpolation scan mode. (c) Projection result in point scan mode.

The two red contours are discretized into 137 sequential points and transformed into the binocular system, denoted as $p_j^{tgt}, j=1,2,\ldots,137$. Utilizing $p_j^{tgt}$, the originally calibrated mapping $M_o: D \rightarrow V_o$, and the recalibrated coordinate transformation $[R|T]$, the digital voltages $D_j^{tgt}=[D_{jx}^{tgt}\ D_{jy}^{tgt}]^T, j=1,2,\ldots,137$ are obtained by solving

$$D_j^{tgt}=\arg\min_{D}\ d_j(D) \qquad (12)$$
where $d_j(D)$ is the distance from the 3D point $p_j^{tgt}$ to the laser beam $l_D$ corresponding to the digital voltages $D=[D_x\ D_y]^T$. These digital signals are input into the BGLS system, and the laser projection results can then be observed on the door surface. The GLS system has two scan modes: linear interpolation scan and pointwise scan. In the linear interpolation scan mode, the laser transmitter stays on all the time and the marking trajectory is continuous, as illustrated in Fig. 12(b). In the pointwise scan mode, the laser transmitter is turned off between inputting any two adjacent pairs of digital signals, and the laser marking trajectory is pointwise, as shown in Fig. 12(c). To human visual observation, the laser projection trajectories closely overlay the designed contours on the door surface. To further quantitatively evaluate the 3D laser projection accuracy, the 3D coordinates of the 137 laser spots actually projected on the door surface, denoted as $p_j^{spot}, j=1,2,\ldots,137$, are obtained by the binocular system. An intuitive comparison between the 3D designed contour and the actual laser point contour is shown in Fig. 13(a). The Euclidean distances $\varepsilon_j, j=1,2,\ldots,137$ between $p_j^{tgt}$ and $p_j^{spot}$ are calculated and illustrated in Fig. 13(b). The average projection positioning error is $\bar{\varepsilon}=\sum_{j=1}^{137}\varepsilon_j/137=0.52$ mm, the maximum of $\varepsilon_j$ is 0.98 mm, and the standard deviation is $\sigma=\sqrt{\sum_{j}(\varepsilon_j-\bar{\varepsilon})^2/137}=0.44$ mm.

Fig. 13 Evaluation of the projection accuracy. (a) The theoretical target points (in red) and the actual laser projection points (in blue). (b) Projection errors of the 137 discrete points.

4. Conclusion

Based on the one-to-one mapping between the input control digital and the corresponding outgoing laser beam, we propose a novel recalibration strategy for the BGLS system. With this strategy, the originally calibrated mapping $M_o: D \rightarrow V_o$ can be rapidly reused when the vision system changes. The two-step linear method for successively estimating the initial rotation matrix $R_0$ and translation vector $T_0$ makes the whole solving process of the recalibration robust and efficient. The proposed recalibration strategy can greatly enhance the flexibility and efficiency of the BGLS system while keeping its accuracy high. The results of the recalibration, the target shooting experiments and the projection positioning experiment comprehensively demonstrate the superiority of the proposed method.

As mentioned in Section 3.2, a prerequisite of the proposed recalibration strategy is that the inner structure of the GLS system is fixed. The recalibration may fail if the relative positions between the laser emitter and the two galvanometric mirrors are changed. Fortunately, this structure usually remains unchanged once the installation of the GLS system is completed. Besides, an overly strong lighting environment could lower the image quality of the laser spots and reduce the recalibration accuracy.

Funding

National Natural Science Foundation of China (NSFC) (51575276).

References

1. P. M. Yang, Y. L. Lo, and Y. H. Chang, "Laser galvanometric scanning system for improved average power uniformity and larger scanning area," Appl. Opt. 55(19), 5001–5007 (2016).

2. S. Naoto and T. Yukio, "Processing system with galvano scanner capable of high speed laser scanning," U.S. patent 9,720,225 (August 1, 2017).

3. G. Cuccolini, L. Orazi, and A. Fortunato, "5 Axes computer aided laser milling," Opt. Lasers Eng. 51(6), 749–760 (2013).

4. M. Jofre, G. Anzolin, F. Steinlechner, N. Oliverio, J. P. Torres, V. Pruneri, and M. W. Mitchell, "Fast beam steering with full polarization control using a galvanometric optical scanner and polarization controller," Opt. Express 20(11), 12247–12260 (2012).

5. A. Kaldos, H. J. Pieper, E. Wolf, and M. Krause, "Laser machining in die making—a modern rapid tooling process," J. Mater. Process. Technol. 155(1), 1815–1820 (2004).

6. K. Tetterington, "Two-dimensional laser projection system," U.S. Patent Application 20050052720 (March 10, 2005).

7. J. P. Callison, J. S. Pease, and R. W. Pease, "Laser projection systems," U.S. Patent 7,142,257 (November 28, 2006).

8. M. A. Stafne, L. D. Mitchell, and R. L. West, "Positional calibration of galvanometric scanners used in laser Doppler vibrometers," Measurement 28(1), 47–59 (2000).

9. U. Haruo, O. Makoto, and O. Yasuomi, "Laser measurement apparatus," U.S. patent 7,196,795 (March 27, 2007).

10. Y. Chang, C. Wen, C. Gu, and S. Chen, "Synchronization-free light sheet microscopy based on a 2D phase mask," Optica 4(9), 1030–1033 (2017).

11. F. Ernst, R. Bruder, T. Wissel, P. Stüber, B. Wagner, and A. Schweikard, "Real time contact-free and non-invasive tracking of the human skull: First light and initial validation," Proc. SPIE 8856, 88561G (2013).

12. J. Diaci, D. Bračun, A. Gorkič, and J. Možina, "Rapid and flexible laser marking and engraving of tilted and curved surfaces," Opt. Lasers Eng. 49(2), 195–199 (2011).

13. J. M. Lee, J. H. Jang, and T. K. Yoo, "Scribing and cutting a blue LED wafer using a Q-switched Nd:YAG laser," Appl. Phys. A 70(5), 561–564 (2000).

14. M. F. Chen and Y. P. Chen, "Compensating technique of field-distorting error for the CO2 laser galvanometric scanning drilling machines," Int. J. Mach. Tools Manuf. 47(7), 1114–1124 (2007).

15. L. Qi, S. Wang, Y. X. Zhang, Z. Q. Tang, H. Yang, and X. P. Zhang, "Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system," Opt. Lasers Eng. 68, 180–187 (2015).

16. S. Cui, X. Zhu, W. Wang, and Y. Xie, "Calibration of a laser galvanometric scanning system by adapting a camera model," Appl. Opt. 48(14), 2632–2637 (2009).

17. A. Manakov, H. P. Seidel, and I. Ihrke, "A mathematical model and calibration procedure for galvanometric laser scanning systems," in Vision, Modeling, and Visualization (VMV), P. Eisert, K. Polthier, and J. Hornegger, eds. (Eurographics Association, 2011), pp. 207–214.

18. Y. I. Abdel-Aziz and H. M. Karara, "Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry," in Proc. Symp. Close-Range Photogrammetry (Falls Church, VA, USA, 1971), pp. 1–18.

19. T. Wissel, B. Wagner, A. Schweikard, P. Stüeber, and F. Ernst, "Data-driven learning for calibrating galvanometric laser scanners," Sensors (Basel) 15(10), 5709–5717 (2015).

20. M. F. Chen, Y. P. Chen, and W. T. Hsiao, "Correction of field distortion of laser marking systems using surface compensation function," Opt. Lasers Eng. 47(1), 84–89 (2009).

21. J. Xie, S. H. Huang, Z. C. Duan, Y. Shi, and S. Wen, "Correction of the image distortion for laser galvanometric scanning system," Opt. Laser Technol. 37(4), 305–311 (2005).

22. J. Tu and L. Zhang, "Effective data-driven calibration for galvanometric laser scanning system using binocular stereo vision," Sensors (Basel) 18(2), 197–214 (2018).

23. R. Setiono, "On the solution of the parity problem by a single hidden layer feedforward neural network," Neurocomputing 16(3), 225–235 (1997).

24. L. Holmstrom and P. Koistinen, "Using additive noise in back-propagation training," IEEE Trans. Neural Netw. 3(1), 24–38 (1992).

25. G. B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing 70(1), 489–501 (2006).

26. J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," Lect. Notes Math. 630, 105–116 (1978).


Y. I. Abdel-Aziz and H. M. Karara, “Direct linear transformation from comparator coordinates into object space coordinates in closerange photogrammetry,” in Proc. Symp. Close-Range Photogram., Falls Church, VA, USA, 1971, pp. 1–18.

S. Naoto and T. Yukio, “Processing system with galvano scanner capable of high speed laser scanning,” U. S. patent 9,720,225 (August 1, 2017).

U. Haruo, O. Makoto, and O. Yasuomi, “Laser measurement apparatus,” U. S. patent 7,196,795 (March 27, 2007).

K. Tetterington, “Two-dimensional laser projection system,” U. S. Patent Application 20050052720 (March 10, 2005).

J. P. Callison, J. S. Pease, and R. W. Pease, “Laser projection systems,” U. S. Patent 7,142,257 (November 28 2006).

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (13)

Fig. 1 BGLS system configuration.
Fig. 2 The off-site full calibration process.
Fig. 3 The coordinate transformation between $O_oX_oY_oZ_o$ and $O_rX_rY_rZ_r$.
Fig. 4 The two-step method for estimating the initial $[R_0\,|\,T_0]$. (a) The estimation of the rotation matrix $R_0$. (b) The estimation of the translation vector $T_0$.
Fig. 5 The hardware setup of the BGLS system in the full calibration.
Fig. 6 Distribution of the 900 control digitals.
Fig. 7 The situations in which the binocular system is changed in the recalibration experiment. (a)-(b) The position between the binocular system and the GLS system is changed. (c) Camera lenses with different focal lengths are used.
Fig. 8 Data collection for recalibration. (a) The planar board for acquiring the laser spot grids. (b) An actual laser spot grid.
Fig. 9 Target shooting experiment. (a) The pattern of target circles. (b) Experiment result of target shooting. (c) The distance between the centers of the laser spot and the target circle.
Fig. 10 The shooting performances in four different simulated situations. (a) and (b) The shooting performances in the situations shown in Figs. 7(a) and 7(b). (c) and (d) The shooting performances in the situations where the camera lenses of the binocular system are changed from 12 mm to 17 mm and 35 mm, respectively.
Fig. 11 The shooting performances in the two additional experiments. (a) The shooting performance with $M_r': D \to V_r'$ calibrated by the two laser spot grids used for recalibration. (b) The shooting performance with $M_o: D \to V_o$ obtained in Section 3.1.
Fig. 12 The projection positioning experiment. (a) The CAD model of the car door. (b) Projection result in linear interpolation scan mode. (c) Projection result in point scan mode.
Fig. 13 Evaluation of the projection accuracy. (a) The theoretical target point (in red) and the actual laser projection point (in blue). (b) Projection errors of the 137 discrete points.

Tables (2)

Table 1 Performance of the recalibration method in the representative situations.
Table 2 Performance measure $S_d$ in the six situations.

Equations (12)

$$E\!\left(V_o^k\right) = \sum_{i=1}^{N} \left(d_i^k\right)^2$$

$$\begin{cases}
\begin{bmatrix} v_{r1}^k & v_{r2}^k & v_{r3}^k \end{bmatrix}^T = R \begin{bmatrix} v_{o1}^k & v_{o2}^k & v_{o3}^k \end{bmatrix}^T \\[4pt]
\begin{bmatrix} v_{r4}^k & v_{r5}^k & v_{r6}^k \end{bmatrix}^T = R \begin{bmatrix} v_{o4}^k & v_{o5}^k & v_{o6}^k \end{bmatrix}^T + T
\end{cases}, \quad k = 1, 2, \ldots, Q$$

$$E(R, T) = \sum_{k=1}^{Q} \left( \left(d_1^k\right)^2 + \left(d_2^k\right)^2 \right)$$
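In practice, $E(R,T)$ is minimized with a standard nonlinear least-squares routine (e.g. Levenberg-Marquardt), starting from the linear initial estimate $[R_0\,|\,T_0]$ derived below. The following is a minimal sketch of such a refinement, assuming that $d_1^k$ and $d_2^k$ denote the distances between the two reconstructed laser spots of the $k$-th ray and the corresponding GLS ray transformed by $(R, T)$; this residual definition, the function names, and the data layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(x0, ray_dir_o, ray_org_o, p1, p2):
    """Refine [R|T] by minimizing E(R,T), taken here as the sum of squared
    distances between each transformed GLS ray and its two reconstructed
    laser spots (assumed residual definition).

    x0        : (6,) initial guess: rotation vector of R0 followed by T0
    ray_dir_o : (Q, 3) ray directions in the GLS frame
    ray_org_o : (Q, 3) ray origins in the GLS frame
    p1, p2    : (Q, 3) reconstructed spot positions in the camera frame
    """
    def point_to_ray(p, org, direc):
        # Distance from each point to the infinite line through org along direc.
        direc = direc / np.linalg.norm(direc, axis=1, keepdims=True)
        return np.linalg.norm(np.cross(p - org, direc), axis=1)

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        T = x[3:]
        dir_r = ray_dir_o @ R.T          # ray directions in the camera frame
        org_r = ray_org_o @ R.T + T      # ray origins in the camera frame
        return np.concatenate([point_to_ray(p1, org_r, dir_r),
                               point_to_ray(p2, org_r, dir_r)])

    sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```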
$$\begin{bmatrix} v_{o1}^k \\ v_{o2}^k \\ v_{o3}^k \end{bmatrix} = R_0 \begin{bmatrix} x_{r2}^k - x_{r1}^k \\ y_{r2}^k - y_{r1}^k \\ z_{r2}^k - z_{r1}^k \end{bmatrix}$$

$$R_0 = (I + S)(I - S)^{-1}$$

$$\begin{bmatrix}
0 & z_{21}^k + v_{o3}^k & -\left(y_{21}^k + v_{o2}^k\right) \\
-\left(z_{21}^k + v_{o3}^k\right) & 0 & x_{21}^k + v_{o1}^k \\
y_{21}^k + v_{o2}^k & -\left(x_{21}^k + v_{o1}^k\right) & 0
\end{bmatrix}
\begin{bmatrix} a \\ b \\ c \end{bmatrix}
=
\begin{bmatrix} v_{o1}^k - x_{21}^k \\ v_{o2}^k - y_{21}^k \\ v_{o3}^k - z_{21}^k \end{bmatrix}$$
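The linear system above determines the Cayley parameters $(a, b, c)$ of the skew-symmetric matrix $S$, from which $R_0$ follows. A minimal NumPy sketch of this step is given below, assuming $S$ is the standard cross-product matrix of $s = (a, b, c)$, $v_o^k$ is the known ray direction in the GLS frame, and $(x_{21}^k, y_{21}^k, z_{21}^k)$ is the direction vector reconstructed by the binocular system; the function name and array layout are illustrative.

```python
import numpy as np

def estimate_R0(v_o, d_r):
    """Linear initial estimate of the rotation via the Cayley form
    R0 = (I + S)(I - S)^(-1), where S is the skew-symmetric matrix of
    s = (a, b, c).  From v_o^k = R0 d^k it follows that
    S (v_o^k + d^k) = v_o^k - d^k, which is linear in (a, b, c).

    v_o : (Q, 3) laser-ray directions in the GLS frame
    d_r : (Q, 3) corresponding directions reconstructed by the cameras
    """
    rows, rhs = [], []
    for vo, d in zip(np.asarray(v_o, float), np.asarray(d_r, float)):
        w = vo + d
        # -[w]_x, so that  -[w]_x s = s x w = S w
        rows.append(np.array([[0.0,   w[2], -w[1]],
                              [-w[2], 0.0,   w[0]],
                              [ w[1], -w[0], 0.0]]))
        rhs.append(vo - d)
    A = np.vstack(rows)                  # (3Q, 3)
    b = np.hstack(rhs)                   # (3Q,)
    a, bb, c = np.linalg.lstsq(A, b, rcond=None)[0]
    S = np.array([[0.0, -c,  bb],
                  [ c,  0.0, -a],
                  [-bb,  a,  0.0]])
    I = np.eye(3)
    return (I + S) @ np.linalg.inv(I - S)
```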
$$\begin{bmatrix} b_1^k t_1^k \\ b_2^k t_1^k \\ b_3^k t_1^k \end{bmatrix} + \begin{bmatrix} b_4^k \\ b_5^k \\ b_6^k \end{bmatrix} + \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} = \begin{bmatrix} x_{r1}^k \\ y_{r1}^k \\ z_{r1}^k \end{bmatrix}$$

$$\begin{bmatrix} b_1^k t_2^k \\ b_2^k t_2^k \\ b_3^k t_2^k \end{bmatrix} + \begin{bmatrix} b_4^k \\ b_5^k \\ b_6^k \end{bmatrix} + \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} = \begin{bmatrix} x_{r2}^k \\ y_{r2}^k \\ z_{r2}^k \end{bmatrix}$$

$$\underbrace{\begin{bmatrix}
b_1^1 & 0 & \cdots & 0 & 0 & 1 & 0 & 0 \\
b_2^1 & 0 & \cdots & 0 & 0 & 0 & 1 & 0 \\
b_3^1 & 0 & \cdots & 0 & 0 & 0 & 0 & 1 \\
0 & b_1^1 & \cdots & 0 & 0 & 1 & 0 & 0 \\
0 & b_2^1 & \cdots & 0 & 0 & 0 & 1 & 0 \\
0 & b_3^1 & \cdots & 0 & 0 & 0 & 0 & 1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & b_1^Q & 1 & 0 & 0 \\
0 & 0 & \cdots & 0 & b_2^Q & 0 & 1 & 0 \\
0 & 0 & \cdots & 0 & b_3^Q & 0 & 0 & 1
\end{bmatrix}}_{6Q \times (2Q+3)}
\underbrace{\begin{bmatrix}
t_1^1 \\ t_2^1 \\ \vdots \\ t_1^Q \\ t_2^Q \\ X_0 \\ Y_0 \\ Z_0
\end{bmatrix}}_{(2Q+3) \times 1}
=
\begin{bmatrix}
x_{r1}^1 - b_4^1 \\ y_{r1}^1 - b_5^1 \\ z_{r1}^1 - b_6^1 \\
x_{r2}^1 - b_4^1 \\ y_{r2}^1 - b_5^1 \\ z_{r2}^1 - b_6^1 \\
\vdots \\
x_{r2}^Q - b_4^Q \\ y_{r2}^Q - b_5^Q \\ z_{r2}^Q - b_6^Q
\end{bmatrix}$$
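The stacked system above is sparse and can be solved in one shot by linear least squares for the $2Q$ ray parameters and the translation $(X_0, Y_0, Z_0)$. A minimal NumPy sketch follows, assuming $(b_1^k, \ldots, b_6^k)$ are the known direction and offset coefficients of the $k$-th ray and $p_1^k$, $p_2^k$ are its two reconstructed spot positions; the function name and data layout are illustrative.

```python
import numpy as np

def estimate_T0(b_coef, p1, p2):
    """Solve the stacked 6Q x (2Q+3) system for t_1^k, t_2^k and
    T0 = (X0, Y0, Z0) by linear least squares.

    b_coef : (Q, 6) ray coefficients (b_1..b_3 direction, b_4..b_6 offset)
    p1, p2 : (Q, 3) the two reconstructed laser-spot positions per ray
    """
    b_coef, p1, p2 = (np.asarray(x, float) for x in (b_coef, p1, p2))
    Q = b_coef.shape[0]
    A = np.zeros((6 * Q, 2 * Q + 3))
    rhs = np.zeros(6 * Q)
    for k in range(Q):
        d, o = b_coef[k, :3], b_coef[k, 3:]
        r = 6 * k
        A[r:r + 3, 2 * k] = d            # column of t_1^k
        A[r + 3:r + 6, 2 * k + 1] = d    # column of t_2^k
        A[r:r + 3, -3:] = np.eye(3)      # columns of X0, Y0, Z0
        A[r + 3:r + 6, -3:] = np.eye(3)
        rhs[r:r + 3] = p1[k] - o         # e.g. x_r1^k - b_4^k
        rhs[r + 3:r + 6] = p2[k] - o     # e.g. x_r2^k - b_4^k
    x = np.linalg.lstsq(A, rhs, rcond=None)[0]
    t1, t2, T0 = x[0:2 * Q:2], x[1:2 * Q:2], x[-3:]
    return t1, t2, T0
```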
$$E_d = \sum_{k=1}^{900} \left( \left(d_1^k\right)^2 + \left(d_2^k\right)^2 \right) / 900$$

$$D_n^{dst} = \arg\min_{D} \; d_n(D)$$

$$D_j^{tgt} = \arg\min_{D} \; d_j(D)$$
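For reference, the mean squared shooting error $E_d$ over the 900 sampled control digitals can be computed directly from the per-spot distances; a short sketch (array names are illustrative):

```python
import numpy as np

def mean_squared_shooting_error(d1, d2):
    """E_d = sum_k ((d_1^k)^2 + (d_2^k)^2) / K for K sampled control
    digitals (K = 900 in the paper's experiment)."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    return float(np.sum(d1 ** 2 + d2 ** 2) / d1.size)
```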
