
Flexible calibration and measurement strategy for a multi-sensor fringe projection unit

Open Access

Abstract

In this paper, a strategy for the calibration and the measurement process of a multi-sensor fringe projection unit is presented. The objective is the development of an easy-to-use calibration and measurement procedure. Only one simple geometrical calibration target is needed, and the calibration of the projection unit is not mandatory. To make the system ready for measurement tasks, a common world coordinate system is established. The geometrical camera calibration is derived with respect to the world frame. Note that the cameras of the system are under Scheimpflug condition, which is considered using a modified camera model. Furthermore, an additional optimization step of the extrinsic camera parameters is presented to compensate for the uncertainties of the calibration target. For completeness, a suitable calibration strategy for the projection unit is given, too. Additionally, the quality of the presented strategy is demonstrated by experimental data.

© 2015 Optical Society of America

1. Introduction

Fringe projection profilometry is, nowadays, a widely used optical measurement technique. Common industrial applications are quality control and reverse engineering [1, 2]. Traditionally, fringe projection units with one camera and one projection unit are the most popular system architectures [3, 4]. For that reason, the calibration process and the measurement principle for those kinds of systems are well known and already explained in detail in several publications [5, 6]. In [7, 8], for example, Zhang et al. present strategies for the simple and flexible calibration of basic fringe projection systems.

Largely neglected, however, are systems with multiple cameras and/or projectors [9, 10]. Those multi-sensor fringe projection units are ideally suited for measurement tasks with either complex shaped objects, to prevent the resulting shadowing, or with the need of an observation from multiple views [9–11]. An approach for an easy-to-use calibration strategy for a multi-sensor system is presented in this paper.

It must be noted that the cameras and/or the projection units in such a system are commonly under Scheimpflug condition to line up the focus planes of the projectors and cameras while aligned with a triangulation angle [12–14].

In general, two main types of multi-sensor systems can be distinguished. The first possible setup consists of one camera and multiple projection units. In fact, such a setup can be simplified and considered as several basic fringe projection units with only one camera and one projection unit. This means that the calculation of 3D points takes place separately for every single system and that there is no integrated calculation process involving all projection units. Consequently, it is necessary that the individually calculated points are simply transformed into one joint measurement output afterwards.

In contrast, the second system architecture consists of multiple cameras and one projection unit. The usage of such a system gives the advantage of a supplementary stereoscopic calculation of 3D points from multiple camera views as well as from the projection pattern. Furthermore, a compensation calculation between the individual cameras is possible for every 3D point which can be seen from at least two cameras. In most applications this leads to an improvement of the measurement accuracy. It is even possible to perform a whole measurement without calibrating the projection unit. The projection unit itself is, then, only used for the structured marking of the object surface, e.g. through phase shifting with sinusoidal fringes, which provides the necessary point correspondences between the cameras. Any further information from the projector is not required, which makes the measurement process completely independent of inaccuracies of the projection unit. This is relevant because the projector is typically calibrated with the help of the cameras, with the resulting error propagation [4].

Another advantage is the automatic creation of multiple view images of the measurement object which can be used to create a colored 3D mesh of the object’s surface.

Due to the many benefits of a multi-camera fringe projection unit compared to a setup with multiple projection units, a system consisting of four cameras under Scheimpflug condition and one projection unit is chosen for the presented calibration approach in this paper. The main focus lies on an easy-to-use calibration process with an associated system model which can be directly used for measurement tasks. Furthermore, the presented approach is suitable for general use with fringe projection systems incorporating any number of cameras.

In the following, a short description of the chosen multi-camera fringe projection setup is given. Afterwards, the complete geometrical calibration process for the cameras with Scheimpflug optics and an additional optimization step for the external camera parameters is presented in detail. This is followed by the explanation of the calibration process for the projection unit. In addition, a description of a proposed measurement principle is given which contains all mentioned advantages.

2. Setup of the multi-camera fringe projection system

In this section the setup of the example fringe projection unit is presented. The system consists of four identical cameras, which are under Scheimpflug condition, and one projection unit, a standard digital video projector with no further special features.

The assembly of the system can be seen in Fig. 1. The projection unit is arranged above the measurement object, while the cameras are placed on a circle around the object, which leads to an oblique view. The resulting viewing angle of the cameras motivates the use of Scheimpflug optics to compensate for the resulting loss of coinciding focus areas. Therefore the cameras are assembled considering the Scheimpflug condition.

Fig. 1. The assembly of the fringe projection system.

For a better understanding of the Scheimpflug characteristics of the cameras, their structure is shown in detail in Fig. 2. It can be seen that the Scheimpflug angle describes a rotation between the optical axis of the camera lens and the camera sensor. This leads directly to a change of the depth of field of the camera, which can then be aligned with the projector [13, 14].

Fig. 2. The structure of the Scheimpflug cameras.

3. Geometrical camera system calibration

The camera calibration, both the mathematical model and the calibration strategy, must consider the Scheimpflug condition of the cameras (Fig. 2), which results in additional parameters for the well-known standard pinhole camera model. In [15, 16] (Louhichi et al.) and [17] (Legarda et al.), approaches for the Scheimpflug calibration have already been presented. The calibration strategy in these papers is based on the simultaneous calculation of all parameters, which is only possible if a significant lens distortion is present. Otherwise, the extrinsic camera parameters and the additional Scheimpflug rotation cannot be separated, and the optimization process results in a mathematically correct but badly conditioned set of parameters, which leads to significant aberrations in the measurement process. This problem is also mentioned in [17] (Legarda et al.), with the suggestion of an extra consideration of the parameter sensitivity in the optimization process to get better estimates, which induces separated calculation steps.

Another approach for the Scheimpflug calibration is presented in [18] (Fasogbon et al.). Here, the sensitivity and dependency of the parameters are highlighted. The solution was a separated calibration process for the Scheimpflug, the distortion and the remaining parameters. Specifically, the optimization of the Scheimpflug parameters has been isolated, whereby only small changes in relation to the starting values are possible. This can lead to an insufficient estimation of these parameters if their starting values are not chosen precisely enough. In such a case the resulting parameter set would not satisfy the conditions of a reliable measurement.

In this paper, a Scheimpflug calibration, both model and strategy, is presented which takes into account the sensitivity and dependency of the parameters while allowing an adequate scope for the parameter values.

The geometrical camera calibration is performed using a calibration target. In general, this calibration target is a planar object, such as a checkerboard pattern or a circular dot pattern. The internal and external parameters of the mathematical camera model can, then, be calculated from acquired images of the calibration target.

Hereinafter, the complete calibration process, including the calibration target and the camera model, for the cameras of the used fringe projection system (Sec. 2) is presented. It is separated into the individual calibration of each camera and the subsequent calculation of a common world coordinate system by merging the previous results.

3.1. Calibration target

In this paper a planar circular dot pattern is used for the calibration of the cameras. A schema of the pattern is shown in Fig. 3. The pattern consists of a planar 5 × 5 dot grid with a center-to-center distance of 5 mm. For the calibration process, the information of the center coordinates, described in the object coordinate system, is required (Fig. 3). Therefore a matrix Pobj with the center coordinates is established:

$$P_{obj} = \begin{bmatrix} x_{1,obj} & x_{2,obj} & \cdots & x_{25,obj} \\ y_{1,obj} & y_{2,obj} & \cdots & y_{25,obj} \\ 0 & 0 & \cdots & 0 \end{bmatrix} = \begin{bmatrix} 0 & 5 & 10 & \cdots & 20 \\ 0 & 0 & 0 & \cdots & 20 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}. \tag{1}$$
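For illustration, the matrix of Eq. (1) can be generated with a few lines of Python; this is a minimal sketch with assumed variable names, not code from the original system:

```python
import numpy as np

# Build the 3 x 25 matrix P_obj of Eq. (1): a planar 5 x 5 dot grid
# with 5 mm center-to-center spacing, z = 0 for all points.
spacing_mm = 5.0
xs, ys = np.meshgrid(np.arange(5) * spacing_mm, np.arange(5) * spacing_mm)
P_obj = np.vstack([xs.ravel(), ys.ravel(), np.zeros(25)])  # shape (3, 25)
```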

Fig. 3. Schema of the calibration target.

3.2. Camera model

The Scheimpflug camera model presented in this paper is separated into two parts. The first part is the transformation of an object point pobj,OCS = [xobj,OCS,yobj,OCS,zobj,OCS]T described in the object coordinate system OCS to the perpendicular plane of the camera (Fig. 4). The resulting point pper,CCS = [xper,CCS,yper,CCS, f]T is described in the camera coordinate system CCS. For this purpose a simplified standard pinhole camera model is used:

$$s_1 \begin{bmatrix} x_{per,CCS} \\ y_{per,CCS} \\ f \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & f \end{bmatrix} \begin{bmatrix} R_{OC} & t_{CO} \end{bmatrix} \begin{bmatrix} x_{obj,OCS} \\ y_{obj,OCS} \\ z_{obj,OCS} \\ 1 \end{bmatrix}, \tag{2}$$
with s1 representing an arbitrary scale factor [19], the six extrinsic parameters α, β, γ, tx, ty, tz, which denote the rotation ROC = R(α) R(β) R(γ) and translation tCO = [tx, ty, tz]T between the camera coordinate system CCS and the object coordinate system OCS, and the intrinsic parameter f, which is the focal length in pixel units.
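To make Eq. (2) concrete, the following minimal Python sketch projects an object point onto the perpendicular plane; the function name and interface are illustrative assumptions:

```python
import numpy as np

def project_to_perpendicular_plane(p_obj_OCS, R_OC, t_CO, f):
    """Sketch of Eq. (2): map an object point (OCS) onto the camera's
    perpendicular plane at distance f, described in the CCS."""
    p_CCS = R_OC @ p_obj_OCS + t_CO   # rigid transform OCS -> CCS
    return f * p_CCS / p_CCS[2]       # rescale so z = f: [x_per, y_per, f]
```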

Fig. 4. Scheimpflug camera model, part 1.

The second part is the transformation of an image point pima,ICS = [xima,ICS, yima,ICS, 0]T described in the tilted image coordinate system ICS to the perpendicular plane of the camera (Fig. 5). The resulting point p̃per,CCS = [x̃per,CCS, ỹper,CCS, f]T is described in the camera coordinate system CCS. In this case the calculation is performed by:

$$s_2 \begin{bmatrix} \tilde{x}_{per,CCS} \\ \tilde{y}_{per,CCS} \\ f \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & f \end{bmatrix} \begin{bmatrix} R_{S,IC} & t_{S,CI} \end{bmatrix} \left( \begin{bmatrix} x_{ima,ICS} \\ y_{ima,ICS} \\ 0 \\ 1 \end{bmatrix} - \begin{bmatrix} C_x \\ C_y \\ 0 \\ 0 \end{bmatrix} \right), \tag{3}$$
with s2 representing an arbitrary scale factor and the coordinates of the principal point, Cx and Cy, described in pixel units. The vector formed by Cx and Cy gives the translation relation between the tilted image coordinate system ICS and the centered tilted image coordinate system ICScentered. In addition, the Scheimpflug rotation and translation given by RS,IC = R(αS)R(βS) and tS,CI = [0, 0, f]T denote the relation between the centered tilted image coordinate system ICScentered and the camera coordinate system CCS. Note that R(αS) is a rotation around the x-axis and R(βS) is a rotation around the y-axis of the tilted image coordinate system.
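A corresponding sketch of Eq. (3), mapping a sensor point through the Scheimpflug rotation onto the perpendicular plane, could look as follows (assumed names, minimal error handling):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def image_to_perpendicular_plane(p_ima_ICS, Cx, Cy, alpha_S, beta_S, f):
    """Sketch of Eq. (3): center the tilted sensor point, apply the
    Scheimpflug rotation R(aS)R(bS) and the translation [0, 0, f],
    then rescale onto the plane z = f."""
    p_centered = np.array([p_ima_ICS[0] - Cx, p_ima_ICS[1] - Cy, 0.0])
    q = rot_x(alpha_S) @ rot_y(beta_S) @ p_centered + np.array([0.0, 0.0, f])
    return f * q / q[2]               # [x~_per, y~_per, f]
```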

Fig. 5. Scheimpflug camera model, part 2.

Afterwards, the lens distortion is taken into account on the perpendicular plane:

$$\begin{aligned} \tilde{x}'_{per,CCS} &= \tilde{x}_{per,CCS}\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_1 \tilde{x}_{per,CCS}\,\tilde{y}_{per,CCS} + p_2 \left(r^2 + 2 \tilde{x}_{per,CCS}^2\right), \\ \tilde{y}'_{per,CCS} &= \tilde{y}_{per,CCS}\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_2 \left(r^2 + 2 \tilde{y}_{per,CCS}^2\right) + 2 p_1 \tilde{y}_{per,CCS}\,\tilde{x}_{per,CCS}, \end{aligned} \tag{4}$$
with $r^2 = \frac{1}{f}\left(\tilde{x}_{per,CCS}^2 + \tilde{y}_{per,CCS}^2\right)$ and the radial and tangential distortion coefficients k1, k2, k3, p1 and p2 [4, 18]. The result is the undistorted point p̃′per,CCS = [x̃′per,CCS, ỹ′per,CCS, f]T described in the camera coordinate system CCS.

If the lens distortion is negligibly small, it is possible to reduce the complexity of the model by ignoring the higher-order terms and using only the first radial distortion coefficient k1.
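A direct transcription of Eq. (4), following the paper's 1/f normalization of r², might read as follows (sketch with assumed names):

```python
def apply_distortion(x, y, f, k1, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Sketch of Eq. (4) on the perpendicular plane, transcribed as
    printed in the paper (note the 1/f normalization of r^2)."""
    r2 = (x**2 + y**2) / f
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_u = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_u = y * radial + p2 * (r2 + 2 * y**2) + 2 * p1 * x * y
    return x_u, y_u
```

The default arguments mirror the reduced model: with k2 = k3 = p1 = p2 = 0, only the first radial coefficient k1 remains active.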

The necessary minimization difference Δpopti = [xopti, yopti]T for the optimization process is formed on the perpendicular plane by the reference point pairs:

$$\begin{bmatrix} x_{opti} \\ y_{opti} \end{bmatrix} = \begin{bmatrix} x_{per,CCS} \\ y_{per,CCS} \end{bmatrix} - \begin{bmatrix} \tilde{x}'_{per,CCS} \\ \tilde{y}'_{per,CCS} \end{bmatrix}. \tag{5}$$

3.3. Camera calibration procedure

The geometrical calibration is performed individually for each camera with the purpose to determine the intrinsic parameters (f, Cx, Cy, αS, βS, k1, k2, k3, p1, p2). The process itself is separated into five subsequent steps.

The first step is the image acquisition and the registration of corresponding point pairs for the camera model (pima,ICS and pobj,OCS). Several images are required which show the calibration target in different orientations. More precisely, in order to estimate the five parameters of a standard camera model, at least three images are required to calculate a unique solution [19]. In general, however, the use of more images is recommended because the uncertainty of the calculated parameters decreases with the number of used images. Therefore, N = 40 images are acquired in which the target is seen from multiple views. In Fig. 6, four example views are shown. The center points of the dots are detected in each image by image processing. The resulting M = 25 points of an image k are Pk,ima,ICS = [p1,k,ima,ICS, p2,k,ima,ICS, …, pM,k,ima,ICS]. These points are defined as the image point pima,ICS of Eq. (3). The corresponding object point pobj,OCS is given by the matrix Pobj from Eq. (1).

Fig. 6. Example of acquired calibration images.

The second step is the determination of initial values for the intrinsic and extrinsic parameters, whereby an individual set of extrinsic parameters has to be calculated for each image. The initial Scheimpflug angles αS and βS have to be estimated or taken from the data sheet. Furthermore, the distortion coefficients can be assumed to be zero and, although not absolutely necessary but useful, the principal point, Cx and Cy, can be initialized at the center of the camera sensor. For the determination of the remaining parameters, the homography Hk is established for the reference point pairs of each image [19, 20]:

$$s \begin{bmatrix} x_{m,k,ima,ICS} \\ y_{m,k,ima,ICS} \\ 1 \end{bmatrix} = H_k \begin{bmatrix} x_{m,obj,OCS} \\ y_{m,obj,OCS} \\ 1 \end{bmatrix}, \tag{6}$$
with s representing an arbitrary scale factor, the corresponding point number m = 1,2,…,M and the image number k = 1,2,…,N. It describes the mapping between the object points Pobj,OCS and the associated image points Pk,ima,ICS from image k. Note that the z-coordinate of the object points can be assumed to be zero because the calibration target is planar; it is therefore not relevant in the homography mapping [19, 20]. Furthermore, the Scheimpflug condition and the lens distortion are neglected in this step. With Zhang's method, described in [19], the initial values for the intrinsic and extrinsic parameters (f, α, β, γ, tx, ty, tz) can be calculated from the homographies of all images.
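The homography of Eq. (6) can be estimated with a standard direct linear transform (DLT); the following is a minimal, unnormalized sketch (a normalized DLT [20] is advisable in practice), while the subsequent closed-form parameter extraction follows Zhang [19]:

```python
import numpy as np

def estimate_homography(p_ima, p_obj):
    """DLT sketch for Eq. (6): p_ima and p_obj are (2, M) arrays of
    corresponding image and planar object points (z = 0 dropped)."""
    rows = []
    for (x, y), (X, Y) in zip(p_ima.T, p_obj.T):
        rows.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        rows.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    # null-space vector of the stacked constraints = entries of H_k
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```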

Afterwards, the third step can be performed, which is the estimation of the Scheimpflug angles. The optimization process is based on the difference presented in Eq. (5), which is calculated for each image and reference point pair:

$$\min_{\rho_1} \sum_{k=1}^{N} \sum_{m=1}^{M} \left( x_{m,k,opti}^2 + y_{m,k,opti}^2 \right), \tag{7}$$
with the corresponding reference point pair number m = 1,2,…,M, the image number k = 1,2,…,N and the optimization parameter vector ρ1 = [f, αS, βS, αk, βk, γk, tx,k, ty,k, tz,k]. Note that the principal point and the distortion parameters are not part of this optimization. Furthermore, for this and the following minimization processes, the Levenberg-Marquardt algorithm implemented in MATLAB is used.

Subsequently, the fourth step, the estimation of the principal point, follows. Again, the optimization process is based on the difference presented in Eq. (5):

$$\min_{\rho_2} \sum_{k=1}^{N} \sum_{m=1}^{M} \left( x_{m,k,opti}^2 + y_{m,k,opti}^2 \right), \tag{8}$$
with the parameter vector ρ2 = [f, Cx, Cy, αk, βk, γk, tx,k, ty,k, tz,k]. This time the Scheimpflug angles and the distortion parameters are not part of the optimization.

The last step is the determination of the distortion coefficients by another minimization process of the difference presented in Eq. (5):

$$\min_{\rho_3} \sum_{k=1}^{N} \sum_{m=1}^{M} \left( x_{m,k,opti}^2 + y_{m,k,opti}^2 \right), \tag{9}$$
with the parameter vector ρ3 = [f, k1, k2, k3, p1, p2, αk, βk, γk, tx,k, ty,k, tz,k]. In this case the optimization is performed without altering the parameters of the principal point and the Scheimpflug angles.

This separated stepwise procedure ensures that the dependency between the Scheimpflug angles and the coordinates of the principal point has no influence on their estimation. Furthermore, it leaves enough scope for the parameter values in the optimization process, which is necessary for a reliable calculation.
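The staged procedure can be organized around a single residual function; the sketch below (with the assumed helper `residual_fn`, which maps a full parameter vector to the stacked differences of Eq. (5)) freezes the complementary parameters in each stage:

```python
import numpy as np
from scipy.optimize import least_squares

def run_stage(x0, free_mask, residual_fn):
    """One refinement stage: optimize only the parameters selected by
    free_mask with Levenberg-Marquardt, holding the rest of x0 fixed."""
    def packed_residuals(free_values):
        x = x0.copy()
        x[free_mask] = free_values
        return residual_fn(x)              # stacked Eq. (5) differences
    result = least_squares(packed_residuals, x0[free_mask], method="lm")
    x = x0.copy()
    x[free_mask] = result.x
    return x

# Stage order as in the text:
#   1) f, Scheimpflug angles, per-image extrinsics   (Eq. (7))
#   2) f, principal point,    per-image extrinsics   (Eq. (8))
#   3) f, distortion,         per-image extrinsics   (Eq. (9))
```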

3.4. Common world coordinate system

In this section the common world coordinate system is established. The objective is to describe the extrinsic parameters of every camera, position and orientation, in relation to the same coordinate system. For this purpose, the previously calculated results are merged in the following way.

The geometrical relation between the cameras is set up by an additional image acquisition. In this case, one image of the calibration target, taken by all cameras at the same time, contains enough information to determine the relation (Fig. 7). Again, the center points of the detected dots are defined as the image point pima,ICS, and the corresponding object point pobj,OCS is given by the matrix Pobj from Eq. (1). For each camera a minimization process of the difference presented in Eq. (5) is performed, but this time only the extrinsic parameters are optimized:

$$\min_{\rho_4} \sum_{m=1}^{M} \left( x_{m,opti}^2 + y_{m,opti}^2 \right), \tag{10}$$
with the corresponding reference point pair number m = 1,2,…,M, the parameter vector ρ4 = [α, β, γ, tx, ty, tz] and the resulting extrinsic parameters αw, βw, γw, tx,w, ty,w, tz,w for each camera. The description is in relation to the object coordinate system, which is now defined as the common world coordinate system WCS. Note that the previously determined intrinsic parameters are used for the calculation. In preparation for the easy-to-use measurement principle, the associated extrinsic rotation ROC,w(αw, βw, γw) and translation tCO,w = [tx,w, ty,w, tz,w]T of each camera are inverted by:

$$R_{CW,w}(\alpha_w, \beta_w, \gamma_w) = R_{OC,w}^{-1}(\alpha_w, \beta_w, \gamma_w), \qquad t_{CW,w} = -\left(R_{CW,w}\, t_{CO,w}\right), \tag{11}$$
with the transformed extrinsic rotation RCW,w(αw, βw, γw) and translation tCW,w = [tx,CW,w, ty,CW,w, tz,CW,w]T, which describe the transformation from the corresponding camera coordinate system CCS to the common world coordinate system WCS.
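The inversion of Eq. (11) is a one-liner once the rotation matrix is available; a minimal sketch with assumed names:

```python
import numpy as np

def invert_extrinsics(R_OC_w, t_CO_w):
    """Sketch of Eq. (11): world-to-camera pose -> camera-to-world pose."""
    R_CW_w = R_OC_w.T            # inverse of a rotation is its transpose
    t_CW_w = -(R_CW_w @ t_CO_w)
    return R_CW_w, t_CW_w
```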

Fig. 7. The common world coordinate system.

4. Measurement principle

The presented measurement principle is based on a global mapping of related pixel regions between the camera sensors. This mapping can be determined by the projection and acquisition of fringe patterns on a measurement object. Note that the calibration of the projection unit is not mandatory at this point, since the calculation is based exclusively on the information from the cameras. In [4] (Peng) and [21] (Reich et al.), fringe projection methods are presented to obtain such an unambiguous mapping between the sensor pixels of the individual cameras. Since the generation of such a mapping is not part of this paper, an already completed mapping is assumed in the following.

Figure 8 gives an overview of the measurement principle by the example of a 3D measurement point p1.

Note that the process is based on a bundle adjustment, which is why the number of involved cameras can vary from two up to four for the example system. At first, the related pixel regions of the cameras are identified by the mapping. The calculation of the associated 3D measurement point starts with the establishment of an optical ray for each related pixel region, described in the world coordinate system. For this purpose, the previously described system model (Eqs. (3) and (4)) and the extrinsic parameters determined in Eq. (11) are used to calculate the straight line equation of an optical ray r(l):

$$r(l) = v_{bp} + l\, v_{dv}, \qquad v_{bp} = t_{CW,w}, \qquad v_{dv} = R_{CW,w}\, \tilde{p}'_{per,CCS}, \tag{12}$$
with a variable parameter l, the base point vbp and the direction vector vdv.
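Assembling the ray of Eq. (12) from the calibrated quantities is then straightforward; a sketch with assumed names:

```python
import numpy as np

def camera_ray(p_per_undistorted, R_CW_w, t_CW_w):
    """Sketch of Eq. (12): optical ray in world coordinates through an
    undistorted point on the perpendicular plane."""
    v_bp = t_CW_w                      # base point: camera origin in the WCS
    v_dv = R_CW_w @ p_per_undistorted  # direction: rotated viewing vector
    return v_bp, v_dv                  # r(l) = v_bp + l * v_dv
```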

Fig. 8. Measurement principle.

After an optical ray is calculated for each involved camera, the 3D surface point of the measurement object can be determined. Ideally, the rays would meet precisely at the object surface point, but with real measurement data the rays turn out to be slightly skew, so the surface point has to be calculated as the point with the smallest distance to all of the rays. Therefore, a bundle adjustment has to be performed by:

$$\min_{\rho_5} \sum_{i=1}^{I} \left\| p_{adj,WCS} - r_i(l_i) \right\|, \tag{13}$$
with the parameter vector ρ5 = [li, padj,WCS], the camera number i, the number of involved cameras I ≥ 2, the according optical rays ri(li) and the adjustment point padj,WCS = [xadj,WCS, yadj,WCS, zadj,WCS]T, which describes a surface point of the measurement object.
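While the paper solves Eq. (13) with Levenberg-Marquardt, the least-squares intersection point of several skew rays also has a closed form when the summed squared point-to-ray distances are minimized; the following sketch uses that equivalent formulation (assumed names):

```python
import numpy as np

def triangulate_rays(base_points, directions):
    """Closed-form sketch for Eq. (13): the point minimizing the summed
    squared distances to I >= 2 rays; inputs are (I, 3) arrays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for bp, dv in zip(base_points, directions):
        d = dv / np.linalg.norm(dv)
        P = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal space
        A += P
        b += P @ bp
    return np.linalg.solve(A, b)        # p_adj,WCS
```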

The triangulation process is performed for all corresponding pixel regions of the mapping, which results in a surface description in the form of a 3D point cloud.

Note that the measurement accuracy is defined as the mean residual of Eq. (13), calculated over all measurement points.

5. Additional extrinsic optimization

In this section an additional extrinsic optimization for the world coordinate system is presented. Due to uncertainties in the image processing, the accuracy of the detected reference points is limited. This can lead to a slightly suboptimal determination of the rotation and position of the camera coordinate systems. Another optimization process can rectify this issue. For this purpose, a measurement, or more precisely the mapping of a measurement, is needed. Ideally, a well-measurable object is used, such as an optically cooperative cuboid, but any other object would also be sufficient. Again, the mapping defines related pixel regions between the camera sensors and the corresponding rays, which are calculated by Eq. (12). The subsequently executed minimization process is a modification of the measurement in Eq. (13). In this case, the extrinsic camera parameters (αw, βw, γw, tx,w, ty,w, tz,w) are part of the optimization:

$$\min_{\rho_6} \sum_{n=1}^{N} \sum_{i=1}^{I} \left\| p_{n,adj,WCS} - r_{i,n}(l_{i,n}) \right\|, \tag{14}$$
with the parameter vector ρ6 = [li,n, pn,adj,WCS, αw,i, βw,i, γw,i, tx,w,i, ty,w,i, tz,w,i], the camera number i, the measurement point number n, the number of involved cameras I ≥ 2, the adjustment point pn,adj,WCS, the total number of considered measurement points N ≥ 100 and the according optical rays ri,n(li,n). In general, due to the previous optimization steps, the adjustment should vary the parameters only slightly. The effect on the measurement accuracy is nevertheless decisive. Note that, for reasons of stability, it is recommended that the minimization process involves at least 100 related pixel regions from the mapping.
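The residual structure of Eq. (14) can be sketched as follows; `rays_per_point` must be rebuilt from the current extrinsic estimate in every optimizer iteration, and all names are illustrative assumptions:

```python
import numpy as np

def extrinsic_refinement_residuals(points_adj, rays_per_point):
    """Residual vector of Eq. (14): one point-to-ray distance per mapped
    camera ray. points_adj is (N, 3); rays_per_point holds, per point,
    a list of (base_point, direction) tuples from Eq. (12)."""
    res = []
    for p, rays in zip(points_adj, rays_per_point):
        for v_bp, v_dv in rays:
            d = v_dv / np.linalg.norm(v_dv)
            diff = p - v_bp
            res.append(np.linalg.norm(diff - (diff @ d) * d))  # ray distance
    return np.asarray(res)
```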

6. Calibration of the projection unit

The proposed measurement principle does not require a calibrated projection unit.

However, it can be useful to have another source of information. The main advantage of the projector is its independence from errors during the image acquisition: the information of the projection unit in relation to the mapping of pixel regions is always known and correct, which makes the projector a reliable source.

The calibration of the projection unit is performed with previously calculated 3D surface points of a measurement object Padj,WCS = [p1,adj,WCS, p2,adj,WCS, …, pm,adj,WCS, …, pM,adj,WCS] with the point number m and the total number of points M (see Sec. 4). On the basis of the mapping, each 3D point padj,WCS = [xadj,WCS, yadj,WCS, zadj,WCS]T can be assigned to a pixel region of the projector p̃PCS = [x̃PCS, ỹPCS, 1]T, whereby P̃PCS = [p̃1,PCS, p̃2,PCS, …, p̃m,PCS, …, p̃M,PCS]. This leads to reference point pairs which are used as the necessary constraints to determine the parameters of the projection unit model. Again, a well-measurable object should be used.

For the description of the projector a standard pinhole model is used [19]:

$$s \begin{bmatrix} x_{PCS} \\ y_{PCS} \\ 1 \end{bmatrix} = \begin{bmatrix} f_{p,x} & c_p & C_{x,p} \\ 0 & f_{p,y} & C_{y,p} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{p,WP} & t_{p,WP} \end{bmatrix} \begin{bmatrix} x_{adj,WCS} \\ y_{adj,WCS} \\ z_{adj,WCS} \\ 1 \end{bmatrix}, \tag{15}$$
with s representing an arbitrary scale factor, the parameter cp which describes the skewness of the two image axes [19], the 3D surface point padj,WCS, the associated pinhole model point pPCS = [xPCS, yPCS, 1]T and the remaining intrinsic and extrinsic projector parameters fp,x, fp,y, Cx,p, Cy,p, αp, βp, γp and tp,WP = [tp,x, tp,y, tp,z]T. The additional consideration of lens distortion is given by:
$$\begin{aligned} \tilde{x}'_{PCS} &= \tilde{x}_{PCS}\left(1 + k_{1,p} r^2 + k_{2,p} r^4 + k_{3,p} r^6\right) + 2 p_{1,p} \tilde{x}_{PCS}\,\tilde{y}_{PCS} + p_{2,p}\left(r^2 + 2 \tilde{x}_{PCS}^2\right), \\ \tilde{y}'_{PCS} &= \tilde{y}_{PCS}\left(1 + k_{1,p} r^2 + k_{2,p} r^4 + k_{3,p} r^6\right) + p_{2,p}\left(r^2 + 2 \tilde{y}_{PCS}^2\right) + 2 p_{1,p} \tilde{y}_{PCS}\,\tilde{x}_{PCS}, \end{aligned} \tag{16}$$
with $r^2 = \frac{2}{f_{p,x} + f_{p,y}}\left(\tilde{x}_{PCS}^2 + \tilde{y}_{PCS}^2\right)$, the projector pixel region from the mapping p̃PCS and the undistorted coordinates p̃′PCS = [x̃′PCS, ỹ′PCS, fp]T. The distortion is represented by the coefficients k1,p, k2,p, k3,p, p1,p and p2,p.

The determination of initial values for the intrinsic and extrinsic parameters can be performed by establishing a projection matrix H̃ for the reference point pairs of the mapping [19, 20]:

$$s \begin{bmatrix} \tilde{x}_{m,PCS} \\ \tilde{y}_{m,PCS} \\ 0 \\ 1 \end{bmatrix} = \tilde{H} \begin{bmatrix} x_{m,adj,WCS} \\ y_{m,adj,WCS} \\ z_{m,adj,WCS} \\ 1 \end{bmatrix}, \tag{17}$$
with s representing an arbitrary scale factor and the corresponding point number m = 1,2,…,M. It describes the mapping between the 3D surface points Padj,WCS and the belonging projector points p̃PCS. Note that in this case the calibration target consists of the 3D points, which makes the z-coordinate relevant. With the method described in [20], the initial values for the intrinsic and extrinsic parameters (fp,x, fp,y, cp, Cx,p, Cy,p, αp, βp, γp, tp,x, tp,y, tp,z) can be calculated from the projection matrix H̃. The distortion coefficients can be assumed to be zero.
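For the initialization, a standard DLT over the 3D-2D correspondences can be sketched as below; note that this sketch uses the common 3 × 4 form of the projection matrix, whereas Eq. (17) embeds the projector point in 3D with z = 0, analogous to Eq. (3). Normalization and the decomposition into intrinsic and extrinsic parameters [20] are omitted:

```python
import numpy as np

def estimate_projection_matrix(p_proj, p_world):
    """DLT sketch for Eq. (17): p_proj is (2, M) projector pixels,
    p_world is (3, M) triangulated surface points."""
    rows = []
    for (u, v), (X, Y, Z) in zip(p_proj.T, p_world.T):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)          # projection matrix up to scale
```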

The subsequent optimization process is performed by the difference:

$$\begin{bmatrix} x_{opti,p} \\ y_{opti,p} \end{bmatrix} = \begin{bmatrix} x_{PCS} \\ y_{PCS} \end{bmatrix} - \begin{bmatrix} \tilde{x}'_{PCS} \\ \tilde{y}'_{PCS} \end{bmatrix}, \tag{18}$$
which is calculated for each reference point pair, and the minimization over all summed differences:

$$\min_{\rho_7} \sum_{m=1}^{M} \left( x_{m,opti,p}^2 + y_{m,opti,p}^2 \right), \tag{19}$$
with the parameter vector ρ7 = [fp,x, fp,y, cp, Cx,p, Cy,p, k1,p, k2,p, k3,p, p1,p, p2,p, αp, βp, γp, tp,x, tp,y, tp,z] and the corresponding reference point pair number m = 1,2,…,M.

The transformation of the extrinsic parameters into the common world coordinate system is easily calculated by:

$$R_{p,PW}(\alpha_p, \beta_p, \gamma_p) = R_{p,WP}^{-1}(\alpha_p, \beta_p, \gamma_p), \qquad t_{p,PW} = -\left(R_{p,PW}\, t_{p,WP}\right), \tag{20}$$
with the transformed extrinsic rotation Rp,PW(αp, βp, γp) and translation tp,PW.

Afterwards, it is straightforward to consider the projection unit in the measurement principle presented in Sec. 4. The associated straight line equation of an optical ray r(l)p for the corresponding related pixel region of the projection unit is established by:

$$r(l)_p = v_{bp,p} + l\, v_{dv,p}, \qquad v_{bp,p} = t_{p,PW}, \qquad v_{dv,p} = R_{p,PW}\, \tilde{p}_{PCS,norm}, \tag{21}$$
with a variable parameter l, the base point vbp,p, the direction vector vdv,p and the normalized undistorted coordinates $\tilde{p}_{PCS,norm} = \left[\frac{\tilde{x}'_{PCS}}{f_{p,x}}, \frac{\tilde{y}'_{PCS}}{f_{p,y}}, 1\right]^T$.

7. Experimental results

In Table 1 the initial and optimized intrinsic parameter values of the geometrical calibration of Sec. 3 for the cameras of the example system are shown.


Table 1. Intrinsic parameter values of the cameras (initialized and optimized)

It can be seen that both the optimized principal point and the optimized Scheimpflug angles vary only slightly, which corresponds with the nearly identical setup of the cameras. The deviation of the focal length parameter results from the unequal distances of the individual cameras to the plane of the measurement object. This leads to an adjustment of the depth of field by changing the lens position in relation to the sensor, which directly impacts the focal length parameter.

The extrinsic translation and rotation for each camera of the system resulting from the geometrical calibration of Sec. 3 are shown in Table 2.


Table 2. Extrinsic parameters of the cameras in relation to the common world coordinate system

At this stage, the mean residual of Eq. (13), defined in Sec. 4 as the measurement accuracy, is determined from various performed fringe projection measurements to be 20 μm with a standard deviation of 12 μm. Note that the calculation is based on the resulting parameter values of the geometrical calibration listed in Tables 1 and 2. Furthermore, only the information of the cameras is considered, since the projector has not been calibrated yet and only provides the needed structured light.

The optimized extrinsic relations of the cameras to the world coordinate system (RCW,w,opti, tCW,w,opti), subsequently calculated as described in Sec. 5, are shown in Table 3.


Table 3. Optimized parameters of the cameras in relation to the common world coordinate system

With the newly determined extrinsic relations listed in Table 3, the measurement accuracy is reduced to 15 μm with a standard deviation of 9 μm, which is an improvement of roughly 25% compared to the previously listed results. Again, the measurement accuracy is determined on the basis of various performed fringe projection measurements. Note that the calculation is once more based only on the information of the cameras and that the measurement accuracy is defined as the mean residual of Eq. (13).

Finally, the initial and optimized intrinsic parameter values of the calibration of Sec. 6 for the projection unit are shown in Table 4.


Table 4. Intrinsic parameter values of the projection unit (initialized and optimized)

In Table 5 the resulting extrinsic parameters of the projection unit are listed.

Table 5. Extrinsic parameters of the projection unit in relation to the common world coordinate system

To provide a more quantitative and comparative result besides the given measurement accuracy, an additional experiment has been performed. Again, the calculation is based exclusively on the cameras. The objective is to show the achievable accuracy of the calculation of 3D object points based on the presented calibration procedure and the resulting parameter values of Tables 1 and 3.

For this purpose, a planar grid distortion target consisting of a circular dot pattern is used, whereby the manufacturer specifies the spacing as 1 mm with an accuracy of ±3 μm. An example image of the target can be seen in Fig. 9 on the left side.

Fig. 9. Measurement example: camera image (left) and calculated object points (right).

For the experiment, the target is captured simultaneously by all cameras and the center points of the dot pattern are detected by image processing. Afterwards, a global mapping of sensor coordinates between the cameras is determined for every single dot of the target. Based on this information, the 3D object points of the pattern can be calculated by Eq. (13). The acquired point cloud is shown in Fig. 9 on the right side.

Subsequently, the distances between the object points are determined and compared to the known spacing and tolerance. The calculated average deviation from the known point-to-point distance is ±3.1 μm with a standard deviation of 1.5 μm. Furthermore, the deviation from a plane fitted to the data points is calculated, with the result of an average of ±1.6 μm and a standard deviation of 1.2 μm.
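The two evaluation quantities can be reproduced with a short sketch (assumed names; `points` ordered row by row as an (M, 3) array):

```python
import numpy as np

def grid_evaluation(points, grid_shape, nominal_spacing=1.0):
    """Sketch of the evaluation: within-row neighbor spacing deviations
    and residuals of a best-fit plane (smallest singular vector)."""
    g = points.reshape(*grid_shape, 3)
    d = np.linalg.norm(np.diff(g, axis=1), axis=-1)   # within-row distances
    spacing_dev = d.ravel() - nominal_spacing
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]           # plane normal
    plane_dev = centered @ normal                     # signed plane residuals
    return spacing_dev, plane_dev
```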

It can be noted that the determined distance between the pattern dots is almost equal to the specified manufacturing accuracy. Also, the evenness of the calculated points is in an excellent range.

8. Conclusion

In this paper a strategy for the calibration of a multi-camera fringe projection system was proposed. It includes the determination of the camera model as well as the projector model, whereby the Scheimpflug condition of the cameras was successfully taken into account by the modified pinhole model. Additionally, the merging of the individual sensor information into one common world coordinate system has been presented, which made the system ready for measurement tasks. Although the proposed measurement principle does not require a calibrated projection unit, an optional calibration of the projector was given. The complete implementation was described step by step, and measurement results have shown the quality of the calibration. Furthermore, the presented additional extrinsic optimization had a great impact on the calibration result, and the measurement accuracy could be significantly increased. Therefore, the objective of an easy-to-use calibration process with an included measurement principle is completely fulfilled.

References and links

1. M. Kästner, Optische Geometrieprüfung präzisionsgeschmiedeter Hochleistungsbauteile (Shaker Verlag, 2008).

2. J. Burke, T. Bothe, W. Osten, and C. F. Hess, "Reverse engineering by fringe projection," in International Symposium on Optical Science and Technology (2002), pp. 312–324.

3. H.-J. Przybilla, "Streifenprojektion – Grundlagen, Systeme und Anwendungen," in Contributions of the 74th Society for Geodesy, Geoinformation and Land Management (DVW) Seminar, Terrestrial Laser Scanning (TLS 2007).

4. T. Peng, "Algorithms and models for 3-D shape measurement using digital fringe projections," Ph.D. thesis (University of Maryland, 2007).

5. F. Chen, G. M. Brown, and M. Song, "Overview of three-dimensional shape measurement using optical methods," Opt. Eng. 39, 10–22 (2000).

6. S. S. Gorthi and P. Rastogi, "Fringe projection techniques: whither we are?," Opt. Lasers Eng. 48, 133–140 (2010).

7. Z. Zhang, "Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques," Opt. Lasers Eng. 50, 1097–1106 (2012).

8. Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, "A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system," Opt. Express 21, 12218–12227 (2013).

9. Y. Cai and X. Su, "Inverse projected-fringe technique based on multi projectors," Opt. Lasers Eng. 45, 1028–1034 (2007).

10. C. Munkelt, I. Schmidt, C. Bräuer-Burchardt, P. Kühmstedt, and G. Notni, "Cordless portable multi-view fringe projection system for 3D reconstruction," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–2.

11. F. Chen, X. Chen, X. Xie, X. Feng, and L. Yang, "Full-field 3D measurement using multi-camera digital image correlation system," Opt. Lasers Eng. 51, 1044–1052 (2013).

12. A. K. Prasad and K. Jensen, "Scheimpflug stereocamera for particle image velocimetry in liquid flows," Appl. Opt. 34, 7092–7099 (1995).

13. H. M. Merklinger, Focusing the View Camera (Seaboard Printing Limited, 1996).

14. H. M. Merklinger, "Scheimpflug's patent," Photo Techniques 17, 56 (1996).

15. H. Louhichi, T. Fournel, J. M. Lavest, and H. B. Aissia, "Camera self-calibration in Scheimpflug condition for air flow investigation," in Advances in Visual Computing (Springer, 2006), pp. 891–900.

16. H. Louhichi, T. Fournel, J. M. Lavest, and H. B. Aissia, "Self-calibration of Scheimpflug cameras: an easy protocol," Meas. Sci. Technol. 18, 2616 (2007).

17. A. Legarda, A. Izaguirre, N. Arana, and A. Iturrospe, "A new method for Scheimpflug camera calibration," in Proceedings of the 10th International Workshop on Electronics, Control, Measurement and Signals (2011), pp. 1–5.

18. P. Fasogbon, L. Duvieubourg, P.-A. Lacaze, and L. Macaire, "Intrinsic camera calibration equipped with Scheimpflug optical device," in Proceedings of the International Conference on Quality Control by Artificial Vision (2015), p. 953416.

19. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).

20. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).

21. C. Reich, R. Ritter, and J. Thesing, "3-D shape measurement of complex objects by combining photogrammetry and fringe projection," Opt. Eng. 39, 224–231 (2000).
