Optica Publishing Group

Improved geometrical model of fringe projection profilometry

Open Access

Abstract

The accuracy performance of fringe projection profilometry (FPP) depends on accurate phase-to-height (PTH) mapping and system calibration. The existing PTH mapping is derived based on the condition that the plane formed by axes of camera and projector is perpendicular to the reference plane, and measurement error occurs when the condition is not met. In this paper, a new geometric model for FPP is presented to lift the condition, resulting in a new PTH mapping relationship. The new model involves seven parameters, and a new system calibration method is proposed to determine their values. Experiments are conducted to verify the performance of the proposed technique, showing a noticeable improvement in the accuracy of 3D shape measurement.

© 2014 Optical Society of America

1. Introduction

Fringe projection profilometry (FPP) has been considered an enabling technology for non-contact 3-D shape measurement owing to advantages such as a simple structure and fast measurement [1–4]. A typical FPP system consists of a digital projector, a camera, a computer and a reference plane. The digital projector generates a group of image patterns of periodic fringes, which are projected onto the reference plane and onto the object surface to be measured. The camera captures the patterns reflected from the reference plane and from the object surface, the latter being deformed versions of the former. The 3-D information of the object surface can then be extracted by analyzing the phases of the acquired patterns by means of a phase-to-height (PTH) mapping relationship. The PTH mapping is based on the triangulation relationship among the projector, the camera, and corresponding points on the patterns acquired from the reference plane and the object surface. The effectiveness of the PTH mapping depends on whether it matches the structure of the FPP system, that is, the positions of the camera, the projector and the reference plane. The early PTH mapping proposed by Takeda et al. [1] is based on a simple model in which the FPP system is assumed to have an ideal structure meeting three conditions: (1) the optical centers of the camera and projector are located at the same distance from the reference plane; (2) the optical axes of the camera and projector are coplanar, and this plane is perpendicular to the reference plane; and (3) the optical axis of the camera is perpendicular to the reference plane. However, these conditions are not always met in practice, and measurement error will occur if the PTH mapping of [1] is employed to recover the 3-D shape. To remedy this problem, an improved PTH mapping relationship was proposed by Mao et al. [2], which removes the first condition: the camera and projector can be positioned at different distances from the reference plane. However, the second and third conditions are still required. In 2012, a further effort was reported in [3], where a PTH mapping was proposed that removes the first and third conditions: the camera and projector are permitted to lie at different distances from the reference plane, and the optical axis of the camera is not required to be perpendicular to the reference plane. However, the second condition remains. To the best of our knowledge, no PTH mapping has been reported in the literature in which all three conditions are lifted, allowing flexible positioning of the camera and projector.

As the PTH mapping relationship is determined by the system structure, accurate evaluation of the parameters associated with that structure plays an important role; this is carried out by means of system calibration. These parameters include the intrinsic parameters of the camera and the projector, as well as the extrinsic parameters associated with the geometrical structure of the FPP system. Over the past decades a number of approaches to system calibration have been proposed, e.g., [5–11]. In 2013, a method to calibrate five essential parameters associated with the camera and the projector was presented by Song et al. [12]. However, all the existing work reported in the literature is based on an FPP structure meeting the three conditions. To the best of our knowledge, no work has been reported on system calibration with flexible positioning of the camera and projector, that is, without the restrictions of the three conditions.

In this paper, we first propose a model to describe a general structure of FPP in which the three conditions are not required. The model involves seven parameters related to the system structure, all of which must be determined; to achieve this, we propose a system calibration method. Before the seven parameters are calibrated, the camera and projector are calibrated simultaneously. Camera calibration has been widely studied for 3-D measuring systems; a highly accurate and robust method was proposed by Zhang [13], which this paper adopts. Since the projector cannot capture images as a camera does, the camera is used to acquire images on its behalf: the phase-shifting method proposed by Zhang [14] is adopted to transform points in the camera image into the projector image, and Zhang's method [13] is then used to calibrate the projector by treating it as an inverse camera, since the camera and projector share the same optical principle. The contributions of this paper are an improved PTH mapping and a calibration method for it. The experiments demonstrate that the accuracy and flexibility of the 3-D measuring system are improved by our method, making the improved algorithm well suited to practical applications.

2. Existing work and problem statement

In this section, we review, in order, the PTH mapping relationship proposed by Takeda et al. [1], the improved PTH mapping by Mao et al. [2], and the work by Xiao et al. [3], and then state the problem.

2.1 Geometric model proposed in [1]

Figure 1 shows the ideal geometric model proposed in [1]. The fringe patterns are projected onto the reference plane by the projector, and the reflected patterns are captured by the camera. Without loss of generality, consider a beam of light from pixel $p_1(u_p,v_p)$ of the projector that strikes the reference plane at point $p(x,y,z)$ and is captured by the camera at pixel $p_2(u_c,v_c)$. When the object is placed over the reference plane, the same light beam is projected onto the object surface instead, and its reflection is captured by the camera. The PTH mapping relationship is the following [1]:

$$h(x,y)=\frac{L_C\,\Delta\varphi_{DC}(x,y)}{2\pi f_0 d-\Delta\varphi_{DC}(x,y)}\tag{1}$$
where $f_0$ is the frequency of the fringe pattern on the reference plane; $L_C$ is the distance between point $O$ and the optical center of the camera; $d$ is the distance between the optical centers of the camera and projector; $\Delta\varphi_{DC}(x,y)$ is the difference between the phases of points $D$ and $C$; and $h(x,y)$ is the height of the object.
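As a quick numerical sketch, Eq. (1) can be evaluated directly; the parameter values below are purely illustrative and are not taken from the paper:

```python
import math

def height_eq1(dphi, L_C, d, f0):
    """Phase-to-height mapping of Eq. (1): h = L_C*dphi / (2*pi*f0*d - dphi)."""
    return L_C * dphi / (2 * math.pi * f0 * d - dphi)

# Illustrative values: L_C = 1000 mm, d = 200 mm, f0 = 0.05 fringes/mm.
h = height_eq1(dphi=0.1, L_C=1000.0, d=200.0, f0=0.05)
```

Note that a zero phase difference gives zero height, and small phase differences map almost linearly to height.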


Fig. 1 The ideal geometric model.


Eq. (1) is derived based on three conditions: (1) the line $O_PO_C$ is parallel to the reference plane; (2) the lines $O_pO$ and $OO_C$ are coplanar, and this plane is perpendicular to the reference plane; and (3) the line $O_CO$ is perpendicular to the reference plane. If the three conditions are not satisfied simultaneously, the PTH mapping will lead to measurement error.

2.2 Geometric model proposed in [2]

Figure 2 shows an improved geometric model proposed in [2], which allows the camera to move along the z-axis direction and is hence more flexible than the ideal one in [1]. The PTH mapping in [2] is as follows:

$$h(x,y)=\frac{\Delta\varphi_{DC}(x,y)\,L_C\,(L_C+S_1\sin\alpha_1)}{2\pi f_0 L_C r-L_C\,\Delta\varphi_{DC}(x,y)-\varphi_D(x,y)\,S_1\sin\alpha_1}\tag{2}$$
where, in addition to the parameters associated with the model in Eq. (1), three more parameters are introduced: $S_1$, the distance between the optical centers of the camera and projector; $\alpha_1$, the angle between the lines $O_pF$ and $O_PO_C$; and $r$, the distance between points $O$ and $K$.


Fig. 2 (a). An improved geometric model (b). Simplified geometric model.


When $\alpha_1=0$ and $r=d$, Eq. (2) becomes the same as Eq. (1), and hence the PTH mapping in [1] can be considered a special case of the one proposed in [2]. In fact, since $S_1\sin\alpha_1$ only represents the difference between the heights of the camera and projector relative to the reference plane, Fig. 2(a) can be simplified as Fig. 2(b). The geometric model proposed in [2] removes only the first condition, that the projector and camera must be located at the same height relative to the reference plane. Because the line $O_cO$ is perpendicular to the reference plane, the plane containing the lines $O_cO$ and $O_pO$ is perpendicular to the reference plane. Although the geometric model proposed in [2] improves on the ideal one, it is still limited by the second and third conditions; if these two conditions are not satisfied, the PTH mapping proposed in [2] will still lead to measurement error.
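The reduction of Eq. (2) to Eq. (1) when $\alpha_1=0$ and $r=d$ is easy to check numerically; the following sketch uses purely illustrative parameter values:

```python
import math

def height_eq2(dphi, phi_D, L_C, r, S1, alpha1, f0):
    """PTH mapping of Eq. (2)."""
    num = dphi * L_C * (L_C + S1 * math.sin(alpha1))
    den = 2 * math.pi * f0 * L_C * r - L_C * dphi - phi_D * S1 * math.sin(alpha1)
    return num / den

def height_eq1(dphi, L_C, d, f0):
    """PTH mapping of Eq. (1), for comparison."""
    return L_C * dphi / (2 * math.pi * f0 * d - dphi)

# With alpha1 = 0 the S1*sin(alpha1) terms vanish; with r = d the two mappings agree.
h2 = height_eq2(dphi=0.1, phi_D=0.5, L_C=1000.0, r=200.0, S1=50.0, alpha1=0.0, f0=0.05)
h1 = height_eq1(dphi=0.1, L_C=1000.0, d=200.0, f0=0.05)
```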

2.3 Geometric model proposed in [3]

Another improved geometric model, shown in Fig. 3, is proposed in [3], where the camera and projector are allowed to be at different heights relative to the reference plane, and the optical axis of the camera is not required to be perpendicular to the reference plane. Hence, compared to the simplified geometric model in Fig. 2(b), this model is more flexible. The PTH mapping in [3] is given as:

$$h(x,y)=\frac{\Delta\phi_{DC}(x,y)\,L_c\cos\theta_2\,(L_c\cos\theta_2+S_1\sin\alpha_1)}{2\pi f_0 L_c(r\cos\theta_2+S_1\sin\alpha_1\sin\theta_2)-\phi_D(x,y)\,S_1\sin\alpha_1-\Delta\phi_{DC}(x,y)\,L_c\cos\theta_2}\tag{3}$$
where, in addition to the parameters associated with the model in Eq. (2), one more parameter is introduced: $\theta_2$, the angle between the lines $O_CO$ and $O_CO_1$.


Fig. 3 Another improved geometric model in [3].


When $\theta_2=0$, Eq. (3) becomes the same as Eq. (2), indicating that the PTH mapping proposed in [2] is a special case of the one proposed in [3]. Although the geometric model proposed in [3] improves on the model in [2], it still requires that the lines $O_CO$ and $O_pO$ be coplanar and that this plane be perpendicular to the reference plane.
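The same kind of numerical check confirms that Eq. (3) collapses to Eq. (2) when $\theta_2=0$; all parameter values below are illustrative:

```python
import math

def height_eq3(dphi, phi_D, L_c, r, S1, alpha1, theta2, f0):
    """PTH mapping of Eq. (3)."""
    c2 = math.cos(theta2)
    num = dphi * L_c * c2 * (L_c * c2 + S1 * math.sin(alpha1))
    den = (2 * math.pi * f0 * L_c * (r * c2 + S1 * math.sin(alpha1) * math.sin(theta2))
           - phi_D * S1 * math.sin(alpha1) - dphi * L_c * c2)
    return num / den

def height_eq2(dphi, phi_D, L_C, r, S1, alpha1, f0):
    """PTH mapping of Eq. (2), for comparison."""
    num = dphi * L_C * (L_C + S1 * math.sin(alpha1))
    den = 2 * math.pi * f0 * L_C * r - L_C * dphi - phi_D * S1 * math.sin(alpha1)
    return num / den

# With theta2 = 0, cos(theta2) = 1 and sin(theta2) = 0, so Eq. (3) collapses to Eq. (2).
args = dict(dphi=0.1, phi_D=0.5, L_c=1000.0, r=200.0, S1=50.0, alpha1=0.2, f0=0.05)
h3 = height_eq3(theta2=0.0, **args)
h2 = height_eq2(dphi=0.1, phi_D=0.5, L_C=1000.0, r=200.0, S1=50.0, alpha1=0.2, f0=0.05)
```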

2.4 Problem statement

When the plane formed by the axes of the camera and projector is perpendicular to the reference plane, the fringe patterns captured by the camera are as shown in Fig. 4(a). However, it is difficult to satisfy this condition in practice. When the condition is not met, the fringe patterns captured by the camera take the form in Fig. 4(b), where the fringes are neither orthogonal to the $u_c$-axis nor parallel to the $v_c$-axis. If the PTH mapping in [3] is used for such fringe patterns, error will occur in the 3-D shape measurement.


Fig. 4 (a). Captured ideal fringes (b). Captured actual fringes.


3. Improved phase-to-height (PTH) mapping

Now let us introduce a new PTH mapping that requires none of the three conditions listed in the introduction. As shown in Fig. 5, the plane $O_PO_CO$ is not perpendicular to the reference plane, and hence $\theta_0$ is not 0.


Fig. 5 The proposed geometric model for FPP.


Considering the triangles $OO_3O_C$, $O_2O_3O_C$ and $O_pFO_C$, we have $\overline{O_3O_C}=L_C\cos\theta_0$, $\overline{O_2O_C}=L_C\cos\theta_0\cos\theta_2$, $\overline{O_2O_3}=L_C\cos\theta_0\sin\theta_2$ and $\overline{FO_C}=S_1\sin\alpha_1$.

Hence, KOp¯can be obtained as

$$\overline{KO_p}=\overline{O_1O_{CT}}+\overline{EO_{CT}}=L_C\cos\theta_0\cos\theta_2+S_1\sin\alpha_1\tag{4}$$
From triangles $AO_{CT}O_1$ and $KOO_p$, $\tan\delta$ and $\tan\eta$ can respectively be expressed as

$$\tan\delta=\frac{\overline{O_1O_{CT}}}{\overline{AO}+\overline{OO_1}}\quad\text{and}\quad\tan\eta=\frac{\overline{O_PK}}{\overline{KO}-\overline{CO}}\tag{5}$$
From triangles $ABD$ and $BCD$, $\overline{AB}$ and $\overline{BC}$ can respectively be described as

$$\overline{AB}=\frac{\overline{BD}}{\tan\delta}\quad\text{and}\quad\overline{BC}=\frac{\overline{BD}}{\tan\eta}\tag{6}$$
Hence, $\overline{CA}=\overline{AB}+\overline{BC}$ can be given as

$$\overline{CA}=\overline{BD}\left(\frac{1}{\tan\delta}+\frac{1}{\tan\eta}\right)\tag{7}$$
Then, $h(x,y)=\overline{BD}$ can be presented as

$$h(x,y)=\frac{\overline{CA}\,\tan\delta\tan\eta}{\tan\delta+\tan\eta}\tag{8}$$
Substituting Eq. (5) into Eq. (8) yields the following:

$$h(x,y)=\frac{\overline{CA}\,L_c\cos\theta_0\cos\theta_2\,(L_c\cos\theta_0\cos\theta_2+S_1\sin\alpha_1)}{L_c\cos\theta_0(r\cos\theta_2+S_1\sin\alpha_1\sin\theta_2)+\overline{AO}\,S_1\sin\alpha_1+\overline{CA}\,L_c\cos\theta_0\cos\theta_2}\tag{9}$$
According to [2], $\overline{CA}$ and $\overline{AO}$ can be presented as

$$\overline{CA}=\frac{\Delta\phi_{DC}(x,y)}{2\pi f_0}\quad\text{and}\quad\overline{AO}=\frac{\phi_D(x,y)}{2\pi f_0}\tag{10}$$
Substituting Eq. (10) into Eq. (9), we have

$$h(x,y)=\frac{\Delta\phi_{DC}(x,y)\,L_c\cos\theta_0\cos\theta_2\,(L_c\cos\theta_0\cos\theta_2+S_1\sin\alpha_1)}{2\pi f_0 L_c\cos\theta_0(r\cos\theta_2+S_1\sin\alpha_1\sin\theta_2)-\phi_D(x,y)\,S_1\sin\alpha_1-\Delta\phi_{DC}(x,y)\,L_c\cos\theta_0\cos\theta_2}\tag{11}$$
where, in addition to the parameters associated with the model in [3], a new parameter $\theta_0$ is introduced: the angle between the lines $OO_C$ and $O_3O_C$. When $\theta_0=0$, Eq. (11) is the same as Eq. (3); hence, the geometric model proposed in [3] is a special case of our geometric model.
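A minimal sketch of the proposed mapping, Eq. (11), again with purely illustrative parameter values, confirms both the reduction to Eq. (3) at $\theta_0=0$ and the fact that a nonzero $\theta_0$ changes the recovered height:

```python
import math

def height_eq11(dphi, phi_D, L_c, r, S1, alpha1, theta2, theta0, f0):
    """Proposed PTH mapping of Eq. (11)."""
    c0, c2 = math.cos(theta0), math.cos(theta2)
    num = dphi * L_c * c0 * c2 * (L_c * c0 * c2 + S1 * math.sin(alpha1))
    den = (2 * math.pi * f0 * L_c * c0 * (r * c2 + S1 * math.sin(alpha1) * math.sin(theta2))
           - phi_D * S1 * math.sin(alpha1) - dphi * L_c * c0 * c2)
    return num / den

def height_eq3(dphi, phi_D, L_c, r, S1, alpha1, theta2, f0):
    """PTH mapping of Eq. (3): the theta0 = 0 special case of Eq. (11)."""
    return height_eq11(dphi, phi_D, L_c, r, S1, alpha1, theta2, 0.0, f0)

h_tilted = height_eq11(0.1, 0.5, 1000.0, 200.0, 50.0, 0.2, 0.1, 0.05, 0.05)
h_flat   = height_eq11(0.1, 0.5, 1000.0, 200.0, 50.0, 0.2, 0.1, 0.0, 0.05)
```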

4. System calibration

With the introduction of $\theta_0$ in the model of Fig. 5, the system must be calibrated in order to determine all seven associated parameters: $L_C$, $r$, $f_0$, $\theta_0$, $S_1$, $\alpha_1$ and $\theta_2$. Before the seven parameters are estimated, the camera and projector should be calibrated. Hence, this section introduces camera calibration, projector calibration, and the calculation of the system parameters.

4.1 Calibration of camera and projector

The camera and projector can be calibrated using the system in Fig. 6 with the aid of a calibration plane. In order to describe the mapping relationships among the 2-D points on the DMD of the projector, the 3-D points on the calibration board, and the 2-D points on the CCD of the camera, a number of coordinate systems are required: the world coordinate system (WCS), camera coordinate system (CCS), camera image coordinate system (CICS), projector coordinate system (PCS) and projector image coordinate system (PICS). Let $P_1(u_p,v_p)$ denote a point on the projector DMD, $P(x,y,z)$ the corresponding point on the calibration board, and $P_2(u_c,v_c)$ the corresponding point on the camera CCD. The relationship between $P_2(u_c,v_c)$ and $P(x,y,z)$ can be described as:

$$s_c\underbrace{\begin{bmatrix}u_c\\v_c\\1\end{bmatrix}}_{\tilde{m}_c}=\underbrace{\begin{bmatrix}\alpha_c&\gamma_c&u_{c0}\\0&\beta_c&v_{c0}\\0&0&1\end{bmatrix}}_{A_c}\underbrace{\begin{bmatrix}r_{11}&r_{12}&r_{13}&t_1\\r_{21}&r_{22}&r_{23}&t_2\\r_{31}&r_{32}&r_{33}&t_3\end{bmatrix}}_{[R_c\;T_c]}\underbrace{\begin{bmatrix}x\\y\\z\\1\end{bmatrix}}_{\tilde{M}}\tag{12}$$
where $A_c$ is the matrix containing all the intrinsic parameters of the camera, in which $(u_{c0},v_{c0})$ are the coordinates of the principal point of the camera; $\alpha_c$ and $\beta_c$ are the focal lengths along the $u_c$-axis and $v_c$-axis of the CCD; $\gamma_c$ is the skewness of the $u_c$- and $v_c$-axes; $R_c$ and $T_c$ are the rotation matrix and translation vector containing the extrinsic parameters; and $s_c$ ($\mu$m/pixel) is the scale factor denoting the ratio of the physical dimension of an object (in microns) to its size (in pixels). Camera calibration has been studied extensively and many effective methods have been developed; in this paper we employ the technique proposed in [13] to calibrate the camera.
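A small sketch of the pinhole projection in Eq. (12); the intrinsic and extrinsic values below are made up for illustration and are not calibration results:

```python
import numpy as np

# Illustrative intrinsics A_c and extrinsics [R_c | T_c] (made-up values).
A_c = np.array([[1500.0,    0.0, 512.0],
                [   0.0, 1500.0, 384.0],
                [   0.0,    0.0,   1.0]])
R_c = np.eye(3)
T_c = np.array([[0.0], [0.0], [1000.0]])

def project(A, R, T, point):
    """Eq. (12): s_c * [u_c, v_c, 1]^T = A [R | T] [x, y, z, 1]^T."""
    X = np.append(np.asarray(point, dtype=float), 1.0).reshape(4, 1)
    m = A @ np.hstack([R, T]) @ X      # homogeneous image point, scaled by s_c
    return m[:2, 0] / m[2, 0]          # divide out the scale factor

# With these values the WCS origin projects to the principal point (512, 384).
uv = project(A_c, R_c, T_c, [0.0, 0.0, 0.0])
```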


Fig. 6 Schematic illustration of systematic calibration.


Since the projector can be considered an inverse camera, the pinhole model can be used to describe it. As shown in Fig. 6, the relationship between $P_1(u_p,v_p)$ and $P(x,y,z)$ can be described as

$$s_p\underbrace{\begin{bmatrix}u_p\\v_p\\1\end{bmatrix}}_{\tilde{m}_p}=\underbrace{\begin{bmatrix}\alpha_p&\gamma_p&u_{p0}\\0&\beta_p&v_{p0}\\0&0&1\end{bmatrix}}_{A_p}\underbrace{\begin{bmatrix}g_{11}&g_{12}&g_{13}&e_1\\g_{21}&g_{22}&g_{23}&e_2\\g_{31}&g_{32}&g_{33}&e_3\end{bmatrix}}_{[R_p\;T_p]}\underbrace{\begin{bmatrix}x\\y\\z\\1\end{bmatrix}}_{\tilde{M}}\tag{13}$$
where $A_p$ is the matrix containing all the intrinsic parameters of the projector, in which $(u_{p0},v_{p0})$ are the coordinates of the principal point of the projector; $\alpha_p$ and $\beta_p$ are the focal lengths along the $u_p$-axis and $v_p$-axis of the DMD; $\gamma_p$ is the skewness of the $u_p$- and $v_p$-axes; $R_p$ and $T_p$ are the rotation matrix and translation vector containing the extrinsic parameters; and $s_p$ ($\mu$m/pixel) is the scale factor denoting the ratio of the physical dimension of an object (in microns) to its size (in pixels).

As the projector cannot take a picture by itself, its calibration must be performed with the help of the camera, and the mapping relationship between the pixels on the DMD and those on the CCD must be determined. To establish this mapping, two sets of phase-shifted sinusoidal fringe patterns are generated by the projector, one set with vertical fringes and the other with horizontal fringes. Assuming that the resolution of the DMD is $N_p\times M_p$ pixels, and that each individual fringe spans $T_V$ and $T_H$ pixels in the vertical and horizontal patterns, respectively, these patterns can be expressed as follows:

$$I_V^l(u_p,v_p)=I_1+I_2\cos\!\left(\frac{2\pi u_p}{T_V}+\frac{(l-4)\pi}{3}\right),\quad l=1,2,\ldots,6,\;\; u_p=0,1,2,\ldots,M_p\tag{14}$$

and

$$I_H^l(u_p,v_p)=I_1+I_2\cos\!\left(\frac{2\pi v_p}{T_H}+\frac{(l-4)\pi}{3}\right),\quad l=1,2,\ldots,6,\;\; v_p=0,1,2,\ldots,N_p\tag{15}$$
where $I_1$ is the average intensity and $I_2$ is the intensity modulation. These patterns are projected onto the calibration board and captured by the camera. Assuming that the resolution of the camera CCD is $N_c\times M_c$ pixels, the captured vertical fringe patterns can be expressed as follows:
$$I_V^n(u_c,v_c)=I_1+I_2\cos\!\left(\phi_V(u_c,v_c)+\frac{(n-4)\pi}{3}\right),\quad n=1,2,\ldots,6,\;\; u_c=0,1,2,\ldots,M_c\tag{16}$$

Similarly, the captured horizontal fringe patterns are:

$$I_H^n(u_c,v_c)=I_1+I_2\cos\!\left(\phi_H(u_c,v_c)+\frac{(n-4)\pi}{3}\right),\quad n=1,2,\ldots,6,\;\; v_c=0,1,2,\ldots,N_c\tag{17}$$
where $\phi_V(u_c,v_c)$ and $\phi_H(u_c,v_c)$ are the phases, which can be retrieved by the following:

$$\phi_V(u_c,v_c)=\arctan\!\left[\frac{\sum_{n=1}^{6}I_V^n(u_c,v_c)\sin(2\pi n/6)}{\sum_{n=1}^{6}I_V^n(u_c,v_c)\cos(2\pi n/6)}\right]\quad\text{and}\quad\phi_H(u_c,v_c)=\arctan\!\left[\frac{\sum_{n=1}^{6}I_H^n(u_c,v_c)\sin(2\pi n/6)}{\sum_{n=1}^{6}I_H^n(u_c,v_c)\cos(2\pi n/6)}\right]\tag{18}$$
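To illustrate the six-step phase retrieval, the sketch below simulates the shifted intensities at a single camera pixel and recovers the wrapped phase. For simplicity the phase shift is written as $2\pi n/6$, which differs from the $(n-4)\pi/3$ shift above only by a constant offset that cancels in the phase difference; $I_1$, $I_2$ and the true phase are illustrative values, and the sign convention of the arctangent in Eq. (18) is made explicit with `arctan2`:

```python
import numpy as np

I1, I2 = 128.0, 100.0   # illustrative average intensity and modulation
phi_true = 1.3          # illustrative wrapped phase, in [0, 2*pi)

n = np.arange(1, 7)
I = I1 + I2 * np.cos(phi_true + 2 * np.pi * n / 6)   # six shifted samples

# Ratio of sine- to cosine-weighted sums, as in Eq. (18); arctan2 keeps the quadrant.
num = np.sum(I * np.sin(2 * np.pi * n / 6))
den = np.sum(I * np.cos(2 * np.pi * n / 6))
phi = (-np.arctan2(num, den)) % (2 * np.pi)          # wrapped phase estimate
```

The average intensity $I_1$ cancels because the sine and cosine weights sum to zero over a full period.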
Note that, due to the arctangent operation in Eq. (18), the values of $\phi_V(u_c,v_c)$ and $\phi_H(u_c,v_c)$ are wrapped into the range between $0$ and $2\pi$. In order to have a unique mapping between the DMD and CCD pixels, both phases must be unwrapped. To this end, two sets of Gray-code fringe images are projected, one vertical and the other horizontal. These Gray-code patterns have the same period as the sinusoidal fringe patterns, but their number is determined by $n=\log_2 M$, where $M$ is the number of fringes in the corresponding sinusoidal pattern. The sinusoidal patterns here have 64 fringes, and hence 6 Gray-code patterns are used. For a point $(u_c,v_c)$ on $\phi_V(u_c,v_c)$, the six vertical Gray-code patterns provide a code by means of the light intensity at the corresponding point, which allows unique determination of the fringe order $m_V(u_c,v_c)$; this can be used to unwrap $\phi_V(u_c,v_c)$ as follows:
$$\Phi_V(u_c,v_c)=\phi_V(u_c,v_c)+2\pi m_V(u_c,v_c)\tag{19}$$
In the same way, the six horizontal Gray-code patterns can be used to unwrap $\phi_H(u_c,v_c)$, yielding the following:

$$\Phi_H(u_c,v_c)=\phi_H(u_c,v_c)+2\pi m_H(u_c,v_c)\tag{20}$$
where $\Phi_V(u_c,v_c)$ and $\Phi_H(u_c,v_c)$ are monotonic with respect to $u_c$ and $v_c$, respectively. Comparing them with the phases in Eq. (14) and Eq. (15), a unique point-to-point mapping between the CCD pixels and the DMD pixels can be determined as follows:

$$u_p=\frac{\Phi_V(u_c,v_c)}{2\pi}T_V\quad\text{and}\quad v_p=\frac{\Phi_H(u_c,v_c)}{2\pi}T_H\tag{21}$$
Hence the mapping relationship between the CCD and the DMD can be established. The same procedure is applied to every circle on the calibration board, resulting in a set of corresponding points on the board and the DMD, which can be used to calibrate the projector with the method in [13]. Note that, in order to have a sufficient number of independent equations, the calibration board must be rotated twice, giving three views in total. Although the camera and projector are thereby calibrated simultaneously, the relationship between the system parameters and the intrinsic and extrinsic parameters of the camera and projector is not yet clear; this relationship is derived in the next subsection.
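The unwrapping and pixel-mapping steps of Eqs. (19)–(21) can be sketched as follows; the fringe period, wrapped phase and code word are illustrative, and `gray_to_binary` is a hypothetical helper decoding a standard reflected Gray code (the paper does not specify its decoder):

```python
import math

def gray_to_binary(g):
    """Decode a reflected-Gray-code word into the fringe order m_V."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

T_V = 16                         # fringe period on the DMD, in pixels (illustrative)
phi_wrapped = 1.3                # wrapped phase at (u_c, v_c), from Eq. (18)
m_V = gray_to_binary(0b000111)   # code word read from the six Gray-code patterns

Phi_V = phi_wrapped + 2 * math.pi * m_V   # Eq. (19): unwrapped phase
u_p = Phi_V / (2 * math.pi) * T_V         # Eq. (21): corresponding DMD column
```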

4.2 Calculation of systematic parameters

With the camera and projector calibrated, we are now able to determine the seven parameters associated with the model in Fig. 5: $L_c$, $S_1$, $f_0$, $\theta_0$, $\alpha_1$, $r$ and $\theta_2$. To achieve this, we take the last position of the calibration plane as the position of the reference plane. The relationship between the coordinates $(x_c,y_c,z_c)$ of a point in the CCS and the coordinates $(x_{wc},y_{wc},z_{wc})$ of the same point in the WCS can be described as

$$\begin{bmatrix}x_c\\y_c\\z_c\end{bmatrix}=R_c\begin{bmatrix}x_{wc}\\y_{wc}\\z_{wc}\end{bmatrix}+T_c\tag{22}$$
Similarly, the relationship between the coordinates $(x_p,y_p,z_p)$ of a point in the PCS and the coordinates $(x_{wp},y_{wp},z_{wp})$ of the same point in the WCS can be described as

$$\begin{bmatrix}x_p\\y_p\\z_p\end{bmatrix}=R_p\begin{bmatrix}x_{wp}\\y_{wp}\\z_{wp}\end{bmatrix}+T_p\tag{23}$$
where $R_c$ is the $3\times3$ rotation matrix of the camera and $T_c$ is its $3\times1$ translation vector; $R_p$ is the $3\times3$ rotation matrix of the projector and $T_p$ is its $3\times1$ translation vector. Setting $[x_c\;y_c\;z_c]^T=[x_p\;y_p\;z_p]^T=[0\;0\;0]^T$, the optical centers of the camera and the projector in the world coordinate system can be calculated as:

$$\begin{bmatrix}x_{wc}\\y_{wc}\\z_{wc}\end{bmatrix}=-R_c^{-1}T_c\quad\text{and}\quad\begin{bmatrix}x_{wp}\\y_{wp}\\z_{wp}\end{bmatrix}=-R_p^{-1}T_p\tag{24}$$
where $R_c^{-1}$ and $R_p^{-1}$ are the inverses of $R_c$ and $R_p$, respectively. From Eq. (24), the parameters $S_1$, $\alpha_1$ and $r$ can be expressed as

$$S_1=\sqrt{(x_{wp}-x_{wc})^2+(y_{wp}-y_{wc})^2+(z_{wp}-z_{wc})^2}\tag{25}$$

and

$$\alpha_1=\arcsin\!\left(\frac{|z_{wp}-z_{wc}|}{S_1}\right)\tag{26}$$

and

$$r=|x_{wp}-x_{wc}|\tag{27}$$
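Eqs. (24)–(27) translate directly into code; the rotation matrices and translation vectors below are illustrative stand-ins, not the calibration results of Section 5:

```python
import numpy as np

R_c, T_c = np.eye(3), np.array([0.0, 0.0, 1000.0])      # illustrative camera extrinsics
R_p, T_p = np.eye(3), np.array([-200.0, 0.0, 1050.0])   # illustrative projector extrinsics

c_cam  = -np.linalg.inv(R_c) @ T_c   # Eq. (24): camera optical center in the WCS
c_proj = -np.linalg.inv(R_p) @ T_p   # Eq. (24): projector optical center in the WCS

S1     = np.linalg.norm(c_proj - c_cam)               # Eq. (25)
alpha1 = np.arcsin(abs(c_proj[2] - c_cam[2]) / S1)    # Eq. (26)
alpha1 = float(alpha1)
r      = abs(c_proj[0] - c_cam[0])                    # Eq. (27)
```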
In order to determine $\theta_0$, $L_c$ and $\theta_2$, the coordinates $(x_O,y_O,z_O)$ of point $O$ in the WCS should be determined. Since point $O$ corresponds to the principal point $(u_{c0},v_{c0})$ in the camera image plane, $x_O$ and $y_O$ can be expressed in terms of $u_{c0}$ and $v_{c0}$ through Eq. (12), ignoring the lens distortion of the camera. Hence, by assuming $z=0$, we have the following:

$$s_c\tilde{m}_c=H\tilde{M}\tag{28}$$

where $\tilde{m}_c=[u_{c0}\;v_{c0}\;1]^T$, $\tilde{M}=[x_O\;y_O\;1]^T$ and $H=\begin{bmatrix}\alpha_c&\gamma_c&u_{c0}\\0&\beta_c&v_{c0}\\0&0&1\end{bmatrix}\begin{bmatrix}r_{11}&r_{12}&t_1\\r_{21}&r_{22}&t_2\\r_{31}&r_{32}&t_3\end{bmatrix}$.

Therefore, we have

$$\tilde{M}=s_cH^{-1}\tilde{m}_c\tag{29}$$

where $H^{-1}$ is the inverse of $H$.

When the coordinates of point $O$ are obtained, the parameters $L_c$, $\theta_2$ and $\theta_0$ can be expressed as:

$$L_c=\sqrt{(x_O-x_{wc})^2+(y_O-y_{wc})^2+z_{wc}^2}\tag{30}$$

and

$$\theta_2=\arctan\!\left(\frac{|x_O-x_{wc}|}{|z_{wc}|}\right)\tag{31}$$

and

$$\theta_0=\arctan\!\left(\frac{|y_O-y_{wc}|}{\sqrt{(x_O-x_{wc})^2+z_{wc}^2}}\right)\tag{32}$$
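Similarly, Eqs. (30)–(32) follow directly once the camera optical center and point $O$ are known; the coordinates below are illustrative values, not measured ones:

```python
import math

x_wc, y_wc, z_wc = 0.0, 0.0, -1000.0   # illustrative camera optical center (WCS)
x_O, y_O = 100.0, 50.0                 # illustrative point O on the reference plane (z = 0)

L_c    = math.sqrt((x_O - x_wc)**2 + (y_O - y_wc)**2 + z_wc**2)       # Eq. (30)
theta2 = math.atan(abs(x_O - x_wc) / abs(z_wc))                       # Eq. (31)
theta0 = math.atan(abs(y_O - y_wc) / math.hypot(x_O - x_wc, z_wc))    # Eq. (32)
```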
Finally, the frequency $f_0$ of the projected fringe patterns on the reference plane can be calculated using the method in [12].

5. Experiments

Experiments are conducted to verify the performance of the proposed geometrical model and the calibration approach presented in Section 4. The experimental setup in our lab is shown in Fig. 7, consisting of a computer, a camera, a projector, and a calibration board. The resolution of the projector is 768 × 1024 pixels, and that of the camera is 1024 × 1280 pixels. The calibration board is a black metal plane with 99 engraved circles, as shown in Fig. 7.


Fig. 7 System calibration equipment in our lab.


The calibration procedure is as follows. First, a white paper is affixed to the surface of the calibration board, and six vertical patterns and six horizontal patterns are projected onto the covered board and captured by the camera. Then six vertical Gray-code patterns and six horizontal Gray-code patterns are projected onto the covered board and likewise captured. After these 24 images are acquired, we remove the white paper and capture an image of the calibration board itself. Since the camera calibration in [13] needs at least three different views of the calibration board, the board is viewed from three different positions, and the Gray-code plus phase-shifting procedure is applied to every view. The captured CCD images of the calibration board are shown in Fig. 8.


Fig. 8 (a) Image of the calibration board at the first position; (b) at the second position; (c) at the third position.


When both the camera and projector are calibrated, the last position of the calibration board is chosen as the reference plane, and the corresponding extrinsic parameters are used to estimate the seven parameters with the method described in Section 4.2. All the obtained parameters of the new PTH mapping are shown in Table 1.


Table 1. Parameters of our Proposed PTH Mapping

Let us now look at the accuracy of the parameters obtained in Table 1. Because their true values are unknown, we employ an indirect method: we measure a cuboid with a flat top surface whose height is known a priori (14.23 mm), and hence we can compare the measurement result against the true value. Figure 9 shows the reconstruction results using the ideal geometrical model in [1], the model proposed in [3], and the model in Fig. 5 incorporating the parameter values in Table 1. The reconstructed results using the proposed method are much smoother than those of the models in [1] and [3], and hence the proposed method is the most accurate. Also, the standard deviation of the measurement associated with the proposed method is 0.1238 mm (or 0.87%), implying that the parameters obtained in Table 1 are also very accurate. In contrast, the standard deviations of the results in Fig. 9(a) and Fig. 9(b) are 0.49 mm and 0.29 mm, respectively, which are much higher. Therefore, a noticeable improvement in measurement accuracy is achieved by the proposed model.


Fig. 9 (a) Reconstruction based on ideal geometric model (b) Reconstruction based on model in [3] (c) Reconstruction based on proposed model.


6. Conclusion

In this paper, we proposed a new geometric model for FPP in which the plane formed by the axes of the camera and projector is not necessarily perpendicular to the reference plane, making an FPP system much easier to implement. Based on the new model, we presented a new PTH mapping relationship in order to improve the measurement accuracy. The new model involves seven parameters, for which we also proposed a new system calibration method. Experiments were conducted to verify the performance of the proposed technique, showing a noticeable improvement in the accuracy of 3-D shape measurement.

References and links

1. M. Takeda and K. Mutoh, “Fourier-transform profilometry for the automatic measurement of 3D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983).

2. X. F. Mao, W. J. Chen, and X. Y. Su, “Improved Fourier-transform profilometry,” Appl. Opt. 46(5), 664–668 (2007).

3. Y. S. Xiao, Y. P. Cao, and Y. C. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012).

4. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014).

5. A. Asundi and Z. Wensen, “Unified calibration technique and its applications in optical triangular profilometry,” Appl. Opt. 38(16), 3556–3561 (1999).

6. M. J. Baker, J. T. Xi, and J. F. Chicharo, “Neural network digital fringe calibration technique for structured light profilometers,” Appl. Opt. 46(8), 1233–1243 (2007).

7. B. M. Chung and Y. C. Park, “Hybrid method for phase-height relationship in 3D shape measurement using fringe pattern projection,” Int. J. Precis. Eng. Manuf. 15(3), 407–413 (2014).

8. Q. Hu, P. S. Huang, Q. Fu, and F. P. Chiang, “Calibration of a three-dimensional shape measurement system,” Opt. Eng. 42(2), 482–493 (2003).

9. H. Du and Z. Y. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett. 32(16), 2438–2440 (2007).

10. E. Zappa and G. Busca, “Fourier-transform profilometry calibration based on an exhaustive geometric model of the system,” Opt. Lasers Eng. 47(7–8), 754–767 (2009).

11. E. Zappa, G. Busca, and P. Sala, “Innovative calibration technique for fringe projection based 3D scanner,” Opt. Lasers Eng. 49(3), 331–340 (2011).

12. L. M. Song, C. M. Chen, Z. Chen, J. T. Xi, and Y. G. Yu, “Essential parameter calibration for the 3D scanner with only single camera and projector,” Optoelectron. Lett. 9(2), 143–147 (2013).

13. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

14. S. Zhang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).
