
Phase-3D mapping method developed from back-projection stereovision model for fringe projection profilometry

Open Access

Abstract

Two major methods for 3D reconstruction in fringe projection profilometry, phase-height mapping and stereovision, have their respective problems: the former has low flexibility in practical applications owing to system restrictions, and the latter requires time-consuming homologous point searching. Given these limitations, we propose a phase-3D mapping method developed from the back-projection stereovision model to achieve flexible and highly efficient 3D reconstruction for fringe projection profilometry. We show that all dimensional coordinates (X, Y, and Z) of a measured point, not just the height coordinate (Z), can be mapped from the phase directly and independently through corresponding rational functions. To determine the phase-3D mapping coefficients, we designed a flexible two-step calibration strategy. The first step, ray reprojection calibration, determines the stereovision system parameters; the second step, sampling-mapping calibration, fits the mapping coefficients using the calibrated stereovision system parameters. Experimental results demonstrate that the proposed method achieves flexible and highly efficient 3D reconstruction, eliminating the practical restrictions and dispensing with the time-consuming homologous point searching.

© 2017 Optical Society of America

1. Introduction

Fringe projection profilometry (FPP) is a widely used optical three-dimensional (3D) technology that offers high speed, high accuracy, and high resolution [1,2]. An FPP system generally adopts a projector-camera setup to project sinusoidal fringe patterns and then capture the distorted fringe images for phase computation and final 3D reconstruction. Existing reconstruction methods in FPP fall into two main classical categories: phase-height mapping (PHM) [3–25] and stereovision (SV) [26–40].

PHM, which has been studied since the 1980s, lends itself to efficient conversion of the height-modulated phase into height coordinates [3–22]. Takeda and Mutoh [3] derived a formula for converting phase difference into height relative to a reference plane by means of similar-triangle theory. Liu et al. [9] presented a mapping model representing the intersection of a line of sight with equiphase planes. Du and Wang [14] established another model mapping the phase to the relative height of objects through the line equations of acquisition and projection. However, PHM has some practical restrictions; for example, the system structure is restricted geometrically such that the optical axis of the camera or the projector is perpendicular to the reference plane, the line connecting the optical centers of the camera and the projector is parallel to the reference plane, etc. PHM generally requires a reference plane to compute the phase difference and relative height, which limits the measuring volume. Moreover, PHM needs a translation stage or gage blocks to obtain precise heights for calibration, which leads to a loss of accuracy for measured points lying outside the calibrated volume. Other studies reported alternative mapping methods that represent the mapping relation directly by an approximate polynomial [23–25]. For instance, Vargas et al. [23] described the relationship between the depth coordinate and the absolute phase by a monotonic polynomial. These methods achieve polynomial fitting based on camera calibration instead of a translation stage or gage blocks, and no longer need a reference plane. Nevertheless, the approximate polynomial lacks a strict derivation associated with the imaging models, so the degree of the polynomial cannot be determined reasonably. In particular, lens distortion has not yet been fully taken into account; in fact, the degree of the polynomial is closely related to the lens distortion model.

In contrast, an SV-based FPP system can be set up as a binocular framework with more degrees of freedom, relaxing the restrictions that PHM requires. By searching for homologous image points between the camera and the projector, the 3D coordinates of measured points can be reconstructed once the system parameters have been determined. The system parameters can be worked out through a flexible SV calibration. Legarda-Sáenz et al. [30] introduced a rigidity constraint between the camera and the projector to estimate the system parameters simultaneously, improving calibration reliability. Zhang and Huang [31] used a novel red/blue checkerboard instead of the conventional black/white one to make the calibration of the projector the same as that of the camera. In Chen's method [35], a plane target was moved along its normal direction to construct an accurate 3D lattice of benchmarks distributed uniformly within the measurement volume, which made the distribution of the calibration error as uniform as possible. Yin et al. [38] presented an FPP calibration with a bundle adjustment strategy to accomplish a more accurate result, even when using an imperfectly printed target pattern. SV using a linear model without lens distortion is efficient; however, the process of homologous point searching incurs a non-negligible time cost.

In this paper, we propose a novel phase mapping method that makes the best use of the advantages, and bypasses the disadvantages, of PHM and SV. We derive a phase-3D mapping (P3DM) model in strict accordance with the back-projection SV model. Based on P3DM, all dimensional coordinates (X, Y, and Z) of a measured point are associated only with the phase, so they can be reconstructed from the phase directly and independently, avoiding the time-consuming processes of coordinate transformation and homologous point searching required in the SV method. To determine the mapping coefficients of P3DM, we designed a flexible two-step calibration strategy. The first step, ray reprojection calibration, determines the system parameters of the back-projection SV model; the second step, sampling-mapping calibration, fits the mapping coefficients of the P3DM model using the calibrated system parameters. As a result, P3DM achieves flexible and highly efficient 3D reconstruction without the practical limitations.

2. Imaging model

The FPP system typically consists of a digital camera and a digital-micromirror-device (DMD) projector (Fig. 1). Since orthogonal absolute phases provide the correspondence between the camera and DMD image points, the projector can be treated as an inverse imaging process [31]; hence, the FPP system can be modeled as a binocular system. To facilitate the elaboration of the method, in this section we briefly describe the related imaging models.

Fig. 1. Schematic diagram of the FPP system.

2.1. Back-projection camera model

The imaging process of a camera is a perspective projection. An object point, denoted as $\mathbf{X}_w=(X_w,Y_w,Z_w)^T$ in the world coordinate system (WCS) and $\mathbf{X}_c=(X_c,Y_c,Z_c)^T$ in the camera coordinate system (CCS), is projected onto the image plane through the projection center. The process can be modeled as

$$\begin{cases}\mathbf{X}_c=\mathbf{R}_c\mathbf{X}_w+\mathbf{t}_c\\ \lambda_c\tilde{\mathbf{x}}_c=[\,\mathbf{I}\mid\mathbf{0}\,]\tilde{\mathbf{X}}_c\\ \tilde{\mathbf{m}}_c=\mathbf{K}_c\tilde{\mathbf{x}}_c\end{cases},\qquad \mathbf{K}_c=\begin{pmatrix}f_{u|c} & s_c f_{u|c} & u_{0|c}\\ 0 & f_{v|c} & v_{0|c}\\ 0 & 0 & 1\end{pmatrix}\tag{1}$$
where $\tilde{\ }$ denotes a homogeneous coordinate; $\mathbf{x}_c=(x_c,y_c)^T=(X_c/Z_c,\,Y_c/Z_c)^T$ is the projection of $\mathbf{X}_c$ on the image plane (the image point is denoted as $\mathbf{m}_c=(u_c,v_c)^T$ in the image coordinate system); $\mathbf{R}_c$ is a rotation matrix corresponding to the three rotation angles $\mathbf{r}_c=(\alpha_c,\beta_c,\gamma_c)^T$ and $\mathbf{t}_c$ is a translation vector, together representing the transformation from the WCS to the CCS; $\mathbf{K}_c$ is the camera parameter matrix comprising the equivalent focal lengths $(f_{u|c},f_{v|c})^T$ along the image coordinate axes, the image coordinates of the principal point $(u_{0|c},v_{0|c})^T$, and the skew factor $s_c$ of the image axes; and $\lambda_c$ is a scale factor.

In practice, aberrations of the imaging lens inevitably exist, so lens distortion should be considered in the camera model. The lens distortion can be modeled as

$$\mathbf{x}_c=\mathbf{x}'_c-\Delta(\mathbf{x}'_c),\quad \Delta(\mathbf{x}'_c)=\begin{pmatrix}x'_c(k_1r^2+k_2r^4+k_3r^6+\cdots)+\left[2p_1x'_cy'_c+p_2(r^2+2{x'_c}^2)\right](1+p_3r^2+\cdots)\\ y'_c(k_1r^2+k_2r^4+k_3r^6+\cdots)+\left[2p_2x'_cy'_c+p_1(r^2+2{y'_c}^2)\right](1+p_3r^2+\cdots)\end{pmatrix}\tag{2}$$
where $\mathbf{x}'_c=(x'_c,y'_c)^T$ denotes the distorted image point, $\Delta(\mathbf{x}'_c)$ is the distortion correction term accounting for radial and decentering distortion, $r=\sqrt{{x'_c}^2+{y'_c}^2}$ is the distance from the distorted image point to the principal point, and $(k_1,k_2,k_3,\cdots)$ and $(p_1,p_2,p_3,\cdots)$ are the radial and decentering distortion parameters, respectively. Let $\mathbf{k}_c$ be the vector of lens distortion parameters. Generally, three radial terms and two decentering terms are sufficient, i.e., $\mathbf{k}_c=(k_1,k_2,k_3,p_1,p_2)^T$. Thus the complete camera model involving lens distortion can be represented as
$$\begin{cases}\mathbf{X}_c=\mathbf{R}_c\mathbf{X}_w+\mathbf{t}_c\\ \lambda_c\tilde{\mathbf{x}}_c=[\,\mathbf{I}\mid\mathbf{0}\,]\tilde{\mathbf{X}}_c\\ \mathbf{x}_c=\mathbf{x}'_c-\Delta(\mathbf{x}'_c;\mathbf{k}_c)\\ \tilde{\mathbf{m}}_c=\mathbf{K}_c\tilde{\mathbf{x}}'_c\end{cases}\tag{3}$$
This is a back-projection camera model, which projects an image point to an object point up to a scale factor and thus accords with the process of 3D reconstruction.
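To make the model concrete, here is a minimal Python sketch (assuming NumPy; the parameter values in the example are made up, not calibrated ones) of the distortion correction in Eq. (2) with $\mathbf{k}_c=(k_1,k_2,k_3,p_1,p_2)^T$ and the higher-order factor $(1+p_3r^2+\cdots)$ dropped:

import numpy as np

def undistort(xd, yd, k):
    """Apply Eq. (2): x = x' - Delta(x'), with (k1, k2, k3, p1, p2) only."""
    k1, k2, k3, p1, p2 = k
    r2 = xd**2 + yd**2                           # squared distance to the principal point
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3   # radial distortion factor
    dx = xd * radial + 2 * p1 * xd * yd + p2 * (r2 + 2 * xd**2)
    dy = yd * radial + 2 * p2 * xd * yd + p1 * (r2 + 2 * yd**2)
    return xd - dx, yd - dy                      # undistorted normalized coordinates

# Example with made-up distortion parameters:
x, y = undistort(0.1, -0.05, k=(-0.2, 0.05, 0.0, 1e-4, -2e-4))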

2.2. Back-projection SV model

The projector can be treated as an inverse camera and thus adopts the above camera model, with the subscript c changed to p. When the camera and the projector are fixed with respect to each other, structural parameters $\mathbf{R}_s$ and $\mathbf{t}_s$ can be introduced to represent the rigid transformation from the CCS to the projector coordinate system (PCS):

$$\begin{cases}\mathbf{R}_s=\mathbf{R}_p\mathbf{R}_c^{-1}\\ \mathbf{t}_s=\mathbf{t}_p-\mathbf{R}_p\mathbf{R}_c^{-1}\mathbf{t}_c\end{cases}\tag{4}$$

Therefore, the back-projection SV model can be represented as

$$\begin{cases}\mathbf{X}_c=\mathbf{R}_c\mathbf{X}_w+\mathbf{t}_c\\ \lambda_c\tilde{\mathbf{x}}_c=[\,\mathbf{I}\mid\mathbf{0}\,]\tilde{\mathbf{X}}_c\\ \mathbf{x}_c=\mathbf{x}'_c-\Delta(\mathbf{x}'_c;\mathbf{k}_c)\\ \tilde{\mathbf{m}}_c=\mathbf{K}_c\tilde{\mathbf{x}}'_c\\ \lambda_p\tilde{\mathbf{x}}_p=[\,\mathbf{R}_s\mid\mathbf{t}_s\,]\tilde{\mathbf{X}}_c\\ \mathbf{x}_p=\mathbf{x}'_p-\Delta(\mathbf{x}'_p;\mathbf{k}_p)\\ \tilde{\mathbf{m}}_p=\mathbf{K}_p\tilde{\mathbf{x}}'_p\end{cases}\tag{5}$$

3. Phase-3D mapping

In this section, we prove that there exist mappings from the phase to all dimensional coordinates (X, Y, and Z) of a measured point, in strict accordance with the back-projection SV model. This means that an image point $\mathbf{m}_c$ with a given phase $\phi_c$ can be mapped to its corresponding 3D spatial point $\mathbf{X}_c$ directly, without any time-consuming coordinate transformations or homologous point searching. The derivation of P3DM follows.

Suppose vertical fringe patterns are projected in FPP. On the DMD image plane, the phase $\phi_p$ is proportional to the image coordinate $u_p$. Moreover, for the homologous image points $\mathbf{m}_c$ and $\mathbf{m}_p$, the absolute phases are equal, i.e., $\phi_c=\phi_p$. Thus there is a linear mapping from $\phi_c$ to $u_p$ such that

$$f^L_{u_p}:\ \phi_c\mapsto u_p\tag{6}$$
where the superscript L represents a linear mapping relationship.
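For instance, if the projected vertical fringes have a period of $p$ DMD pixels and a zero phase offset (assumptions made here purely for illustration), this mapping is simply $u_p=(p/2\pi)\,\phi_c$.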

The image point $\mathbf{m}_c$ defines an epipolar line $l_p$ on the DMD image plane according to the epipolar geometry constraint, as shown in Fig. 2. Because of lens distortion, the actual (i.e., distorted) epipolar line is a curve $l'_p$. The curve $l'_p$ is bounded, so it can be approximated by a polynomial curve by the Weierstrass approximation theorem. Since the homologous image point $\mathbf{m}_p$ related to $\mathbf{m}_c$ lies on $l'_p$, the image coordinate $v_p$ can be written as a polynomial function of $u_p$ such that

$$f^P_{v_p}:\ u_p\mapsto v_p\tag{7}$$
where the superscript P represents a polynomial mapping relationship.

Fig. 2. Schematic diagram of the mapping in the SV-based FPP system.

The image point $\mathbf{m}_p$ can be transformed into $\mathbf{x}'_p=(x'_p,y'_p)^T$ in the PCS. From Eq. (5), the transformation can be represented as

$$\tilde{\mathbf{x}}'_p=\mathbf{K}_p^{-1}\tilde{\mathbf{m}}_p\tag{8}$$
Equation (8) contains linear mappings from $u_p$ and $v_p$ to $x'_p$ and $y'_p$ such that

$$\begin{cases}f^L_{x'_p}:\ (u_p,v_p)\mapsto x'_p\\ f^L_{y'_p}:\ (u_p,v_p)\mapsto y'_p\end{cases}\tag{9}$$

The distorted image point $\mathbf{x}'_p$ is then corrected to the undistorted image point $\mathbf{x}_p=(x_p,y_p)^T$. According to the lens distortion model of Eq. (2), the coordinate corrections are polynomial mappings from $x'_p$ and $y'_p$ to $x_p$ and $y_p$ such that

$$\begin{cases}f^P_{x_p}:\ (x'_p,y'_p)\mapsto x_p\\ f^P_{y_p}:\ (x'_p,y'_p)\mapsto y_p\end{cases}\tag{10}$$

Finally, the two rays back-projected from the image points $\mathbf{x}_c$ and $\mathbf{x}_p$ intersect at a 3D spatial point $\mathbf{X}_c$. According to the back-projection SV model of Eq. (5), the ray intersection can be formulated as

$$\begin{cases}\lambda_c\tilde{\mathbf{x}}_c=[\,\mathbf{I}\mid\mathbf{0}\,]\tilde{\mathbf{X}}_c\\ \lambda_p\tilde{\mathbf{x}}_p=[\,\mathbf{R}_s\mid\mathbf{t}_s\,]\tilde{\mathbf{X}}_c\end{cases}\tag{11}$$
Considering the epipolar geometry constraint, i.e., $\tilde{\mathbf{x}}_p^T[\mathbf{t}_s]_\times\mathbf{R}_s\tilde{\mathbf{x}}_c=0$, where $[\mathbf{t}_s]_\times$ denotes the antisymmetric matrix associated with $\mathbf{t}_s$, Eq. (11) has three degrees of freedom, so the 3D coordinates of $\mathbf{X}_c$ can be worked out exactly:
$$\begin{cases}X_c=\dfrac{t_1x_c-t_3x_cx_p}{x_p(r_{31}x_c+r_{32}y_c+r_{33})-(r_{11}x_c+r_{12}y_c+r_{13})}\\[2mm] Y_c=\dfrac{t_1y_c-t_3y_cx_p}{x_p(r_{31}x_c+r_{32}y_c+r_{33})-(r_{11}x_c+r_{12}y_c+r_{13})}\\[2mm] Z_c=\dfrac{t_1-t_3x_p}{x_p(r_{31}x_c+r_{32}y_c+r_{33})-(r_{11}x_c+r_{12}y_c+r_{13})}\end{cases}\tag{12}$$
where $r_{ij}$ and $t_i$ are the elements of $\mathbf{R}_s$ and $\mathbf{t}_s$, respectively. For a fixed FPP system and a specific image point, the parameters $\mathbf{R}_s$ and $\mathbf{t}_s$ and the image coordinates $x_c$ and $y_c$ are determined; thus, the spatial coordinates $X_c$, $Y_c$, and $Z_c$ are functions of $x_p$ alone. We can rewrite Eq. (12) as
$$X_c=\frac{1}{f^L_X(x_p)}+c_X,\quad Y_c=\frac{1}{f^L_Y(x_p)}+c_Y,\quad Z_c=\frac{1}{f^L_Z(x_p)}+c_Z\tag{13}$$
where $f^L_X$, $f^L_Y$, and $f^L_Z$ are linear functions of $x_p$, and $c_X$, $c_Y$, and $c_Z$ are constants.
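A minimal Python sketch (assuming NumPy) of the closed-form ray intersection of Eq. (12); the inputs $(x_c,y_c)$ and $x_p$ are undistorted normalized coordinates, and Rs, ts hold the structural parameters:

import numpy as np

def intersect(xc, yc, xp, Rs, ts):
    """Closed-form triangulation of Eq. (12) for one homologous pair."""
    r11, r12, r13 = Rs[0]
    r31, r32, r33 = Rs[2]
    t1, t3 = ts[0], ts[2]
    denom = xp * (r31 * xc + r32 * yc + r33) - (r11 * xc + r12 * yc + r13)
    Zc = (t1 - t3 * xp) / denom        # Eq. (12), third line
    return xc * Zc, yc * Zc, Zc        # Xc = xc*Zc and Yc = yc*Zc follow from Eq. (11)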

By combining Eqs. (6), (7), (9), (10), and (13), we derive the mappings from the phase to the 3D coordinates:

$$X_c=\frac{1}{\sum_{n=0}^{N}a_n\phi_c^n}+c_X,\quad Y_c=\frac{1}{\sum_{n=0}^{N}b_n\phi_c^n}+c_Y,\quad Z_c=\frac{1}{\sum_{n=0}^{N}c_n\phi_c^n}+c_Z\tag{14}$$
where $\{a_n,c_X;\,b_n,c_Y;\,c_n,c_Z\}$ are the mapping coefficients. This is the P3DM developed from the back-projection SV model for FPP. The 3D reconstruction is related only to the phase: all dimensional coordinates (X, Y, and Z) can be mapped from a phase directly and independently. Therefore, P3DM achieves highly efficient 3D reconstruction.

In addition, the polynomial degree N is determined by the lens distortion and the distorted epipolar line. Since we adopt the lens distortion model with three radial terms and two decentering terms, the mapping in Eq. (10) is a 7th-degree polynomial. Supposing the distorted epipolar line in Eq. (7) is sufficiently approximated by a 5th-degree polynomial curve, the degree N of P3DM is 7 × 5 = 35. A polynomial of high degree may be susceptible to disturbance when the value of the independent variable is large, and it increases the computational complexity and storage space. Accordingly, in practical applications, the degree of the polynomial can be adjusted by trading off accuracy against efficiency.

4. P3DM method

Based on the derivation of P3DM, we propose the P3DM method, comprising a flexible two-step calibration and a highly efficient 3D reconstruction. The overall flow chart is shown in Fig. 3.

Fig. 3. Overall flow chart of the P3DM method.

4.1. P3DM calibration

The goal of the P3DM calibration is to determine the mapping coefficients with which the FPP system reconstructs the 3D surface profiles of objects from the modulated phase. In fact, each camera image point $\mathbf{m}_c$ can be calibrated with an independent mapping coefficient set $\{\mathbf{m}_c\,|\,a_n,c_X;\,b_n,c_Y;\,c_n,c_Z\}$, so that each dimension of each measured point can be reconstructed independently. We designed a two-step calibration strategy, ray reprojection calibration followed by sampling-mapping calibration, to determine the mapping coefficients.

Ray reprojection calibration

The ray reprojection calibration determines the system parameters of the back-projection SV model, involving the parameter matrices $\mathbf{K}_c$ and $\mathbf{K}_p$, the lens distortion vectors $\mathbf{k}_c$ and $\mathbf{k}_p$, and the structural parameters $\mathbf{R}_s$ and $\mathbf{t}_s$. For concision and clarity, we define the symbols $(\mathbf{K}_{c/p},\mathbf{k}_{c/p})\equiv\Theta_{c/p}$ and $(\mathbf{R}_{c/s},\mathbf{t}_{c/s})\equiv\Phi_{c/s}$. Hence the system parameters are $\Theta_c$, $\Theta_p$, and $\Phi_s$.

Traditional camera calibration minimizes the image reprojection error, i.e., the image distance between a projected image point and a measured one [30,38]. It suits the forward-projection camera model, which projects an object point to an image point. Because the SV model used to derive P3DM is based on the back-projection camera model, the traditional calibration strategy cannot be applied here. Instead, we consider ray reprojection, which back-projects an image point into a spatial ray through the projection center. Ideally, the back-projected spatial ray passes through the object point related to the image point. Therefore, ray reprojection calibration can be performed by minimizing the ray reprojection error, i.e., the spatial distance from an object point to the back-projected spatial ray.

A planar target carrying M benchmarks with known world coordinates $\mathbf{X}_w$ is placed arbitrarily in N different orientations. In each orientation, one image of the planar target under uniform illumination is employed to locate the image positions $\hat{\mathbf{m}}_c$ of the benchmarks. The other images, under orthogonal fringe projection, are utilized to compute the crossed absolute phase maps, from which the homologous image points $\hat{\mathbf{m}}_p$ on the DMD image plane are acquired. The ray reprojection calibration minimizing the sum of the ray reprojection errors can be represented as

$$\arg\min_{\tau}\sum_{i=1}^{N}\sum_{j=1}^{M}\left[d_{\mathrm{ray}}\!\left(\mathbf{X}_w^{j};\hat{\mathbf{m}}_c^{ij},\Theta_c,\Phi_c^{i}\right)+d_{\mathrm{ray}}\!\left(\mathbf{X}_w^{j};\hat{\mathbf{m}}_p^{ij},\Theta_p,\Phi_s,\Phi_c^{i}\right)\right]\tag{15}$$
where $\tau=\{\Theta_c,\Theta_p,\Phi_s,\Phi_c^i\}$ is the calibrated system parameter set and $d_{\mathrm{ray}}(\cdot)$ denotes the ray reprojection error. In general, the epipolar constraint is taken into account in the calibration of a binocular system. In this case, the calibration procedure is modified as
$$\arg\min_{\tau}\sum_{i=1}^{N}\sum_{j=1}^{M}\left[d_{\mathrm{ray}}\!\left(\mathbf{X}_w^{j};\hat{\mathbf{m}}_c^{ij},\Theta_c,\Phi_c^{i}\right)+d_{\mathrm{ray}}\!\left(\mathbf{X}_w^{j};\hat{\mathbf{m}}_p^{ij},\Theta_p,\Phi_s,\Phi_c^{i}\right)+d_{\mathrm{epi}}\!\left(\hat{\mathbf{m}}_c^{ij},\hat{\mathbf{m}}_p^{ij},\Theta_c,\Theta_p,\Phi_s\right)\right]\tag{16}$$
where $d_{\mathrm{epi}}(\cdot)$ denotes the distance from a homologous point to its corresponding epipolar line. An appropriate optimization algorithm, such as Levenberg–Marquardt, can then be employed to work out the system parameters.
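A minimal Python sketch (assuming NumPy) of the ray reprojection error $d_{\mathrm{ray}}$ minimized in Eqs. (15) and (16); computing the ray origin and direction from the current estimates of the system parameters is omitted here, so both are plain inputs:

import numpy as np

def d_ray(X, origin, direction):
    """Spatial distance from benchmark X to the back-projected ray."""
    d = direction / np.linalg.norm(direction)    # unit direction of the ray
    v = np.asarray(X) - np.asarray(origin)
    return np.linalg.norm(v - np.dot(v, d) * d)  # remove the along-ray component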

Sampling-mapping calibration

The sampling-mapping calibration determines the mapping coefficients $\{a_n,c_X;\,b_n,c_Y;\,c_n,c_Z\}$ of Eq. (14) using the calibrated system parameters. This procedure requires sufficient points with known 3D coordinates $\mathbf{X}_c$ and absolute phases $\phi_c$ for the coefficient fit. In the P3DM method, these data are obtained by means of the calibrated system parameters, without the aid of a translation stage or gage blocks.

The flexible sampling-mapping calibration is summarized as follows (see the sketch after this list):

  • Step 1, back-project an image point $\mathbf{m}_c$ into a spatial line $l_c$ using the calibrated system parameters $\Theta_c$;
  • Step 2, sample a series of points $\mathbf{X}_c^i$, $i=1,2,\ldots$, along the spatial line $l_c$ within the measurement volume, project these sampling points onto the DMD image plane to obtain the corresponding image points $\mathbf{m}_p^i$ using the system parameters $\Theta_p$ and $\Phi_s$, and work out the absolute phases $\phi_c^i$ from these image points to build the phase-3D pairs $(X_c^i,Y_c^i,Z_c^i;\phi_c^i)$;
  • Step 3, fit the coefficients to these phase-3D pairs to obtain the mapping coefficients $\{a_n,c_X;\,b_n,c_Y;\,c_n,c_Z\}$;
  • Step 4, repeat the first three steps for every image point, and finally establish a P3DM coefficient look-up table (LUT), denoted as LUT$(\{\mathbf{m}_c\,|\,a_n,c_X;\,b_n,c_Y;\,c_n,c_Z\})$.
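The following Python sketch (assuming NumPy) illustrates Steps 1 to 3 for a single camera pixel and the Z-coordinate. For brevity it ignores the projector lens distortion and the skew factor, and it assumes a linear phase model $\phi=2\pi u_p/\mathrm{period}$ with zero offset together with the zero-constant-term fit adopted later in Eq. (17); none of these simplifications belong to the method itself:

import numpy as np

def fit_pixel_Z(xc, yc, Rs, ts, Kp, period, z_samples, deg=5):
    """Steps 1-3 for one camera pixel: sample, project, and fit the c_n for Z."""
    phis, inv_Z = [], []
    for Z in z_samples:                        # Step 2: points along the camera ray
        Xc = np.array([xc * Z, yc * Z, Z])     # Step 1 implicitly: the ray is (xc*Z, yc*Z, Z)
        Xp = Rs @ Xc + ts                      # transform into the projector frame
        xp = Xp[0] / Xp[2]                     # normalized projector abscissa (distortion ignored)
        up = Kp[0, 0] * xp + Kp[0, 2]          # DMD image abscissa (skew ignored)
        phis.append(2 * np.pi * up / period)   # assumed linear phase model
        inv_Z.append(1.0 / Z)
    return np.polyfit(phis, inv_Z, deg)        # Step 3: coefficients c_n, highest degree first

Step 4 simply repeats this fit for every camera pixel and stores the coefficients in the LUT.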

4.2. P3DM reconstruction

In a P3DM-based FPP system, 3D reconstruction can be performed very efficiently. After the P3DM calibration, the highly efficient P3DM reconstruction comprises the following three steps (a sketch of the per-pixel evaluation follows the list):

  • Step 1, project a group of single-direction fringe patterns onto the surface of the measured objects, and capture the fringe images to retrieve an absolute phase map $\phi_{\mathrm{obj}}$;
  • Step 2, retrieve the mapping coefficients $\{\mathbf{m}_c\,|\,a_n,c_X;\,b_n,c_Y;\,c_n,c_Z\}$ corresponding to each image point $\mathbf{m}_c$ from the LUT;
  • Step 3, substitute the phase and the mapping coefficients into Eq. (14) to reconstruct the corresponding X-, Y-, and Z-coordinates.
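A minimal Python sketch (assuming NumPy, with the coefficients stored highest degree first as returned by np.polyfit) of the per-pixel evaluation in Step 3; because every pixel and every coordinate is independent, the loop parallelizes trivially, which is what the GPU computation in Section 5 exploits:

import numpy as np

def reconstruct_Z(phase_map, coeff_Z, c_Z=0.0):
    """Evaluate Eq. (14) for the Z-coordinate at every camera pixel."""
    H, W = phase_map.shape
    Z = np.empty((H, W))
    for i in range(H):
        for j in range(W):                    # each pixel is mapped independently
            Z[i, j] = 1.0 / np.polyval(coeff_Z[i, j], phase_map[i, j]) + c_Z
    return Z                                  # X and Y use their own coefficient maps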

5. Experiments and analysis

In this section, experiments analyzing the validity and performance of the proposed P3DM method are presented. We set up an FPP system consisting of a camera (DH MER-130-30UM, 1280×1024) with a 16 mm TV lens (PANTAX) and a projector (Dell M110, 1280×800). The experiments included the P3DM calibration and the P3DM reconstruction.

During the P3DM calibration, the ray reprojection calibration was implemented first. A planar target with 99 circular benchmarks was taken as the standard calibration reference. To identify homologous benchmarks between the camera and DMD image planes, crossed phase maps retrieved from orthogonal fringe patterns were utilized. In our experiment, the planar target was placed at 7 positions within a measuring volume of 400 mm × 400 mm × 300 mm. Figure 4 shows the target at one of the calibrated positions: the crossed phase maps in Figs. 4(a) and 4(b), and the located benchmarks on the camera and DMD image planes in Fig. 4(c). We chose the strategy of Eq. (16) to calibrate the binocular system by minimizing the sum of the ray reprojection errors and the epipolar distance errors.

Fig. 4. Ray reprojection calibration: (a,b) the crossed phase maps; (c) the homologous benchmarks on the camera and DMD image planes.

The final calibrated parameters are listed in Table 1. Using these parameters, the ray reprojection at this position is visualized in Fig. 5(a): each pair of rays back-projected from homologous image points passes very close to the corresponding benchmark on the target. To analyze the calibration precision, we reconstructed the 3D coordinates of these benchmarks using the calibrated system parameters and compared the result with the known dimensions to obtain the SV reconstruction error. The error distributions of the reconstructed 3D coordinates over the image coordinates at this position are shown in Fig. 5(b). The errors are randomly distributed because the systematic error introduced by the lens distortion is largely eliminated.

Table 1. System parameters of the ray reprojection calibration.

Fig. 5. Ray reprojection at one of the calibrated positions: (a) visual diagram; (b) error distributions of the reconstructed 3D coordinates over the image coordinates.

Subsequently, the sampling-mapping calibration was implemented by means of the calibrated system parameters. In the experiment, we back-projected a ray from each image point and sampled 101 points on each back-projected ray. Using these sampled data, the mapping coefficients could be calculated through an optimization procedure; however, we found the optimization result sensitive to the initial values and prone to local convergence. We therefore considered the polynomial form on the right side of Eq. (14). In our experiment, the constant terms were constrained to be zero, i.e., $c_X=c_Y=c_Z=0$, in which case Eq. (14) reduces to the polynomial forms

$$\frac{1}{X_c}=\sum_{n=0}^{N}a_n\phi_c^n,\quad \frac{1}{Y_c}=\sum_{n=0}^{N}b_n\phi_c^n,\quad \frac{1}{Z_c}=\sum_{n=0}^{N}c_n\phi_c^n\tag{17}$$
The polynomial fit can then be implemented directly, without initial values. We used a 4th-degree polynomial for the X- and Y-coordinates and a 5th-degree polynomial for the Z-coordinate. The mapping coefficient set for the image point (240, 112) is listed in Table 2, and the corresponding fit-error curves are shown in Fig. 6. Table 3 lists the maximum (MAX) and root-mean-square (RMS) fit errors for different camera image points. The fit errors were less than 0.1 μm, indicating that the polynomial degrees selected in our experiment are sufficiently accurate for field application.
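For example, the following Python sketch (assuming NumPy, with synthetic data standing in for the sampled phase-3D pairs of one ray) reproduces the kind of MAX/RMS fit-error evaluation reported in Table 3 for the Z-coordinate:

import numpy as np

phi = np.linspace(10.0, 60.0, 101)                  # 101 sampled absolute phases (rad)
Z_true = (800.0 - 2.0 * phi) / (1.0 + 0.01 * phi)   # synthetic depths along one ray (mm)
c = np.polyfit(phi, 1.0 / Z_true, 5)                # 5th-degree fit of Eq. (17), c_Z = 0
err = 1.0 / np.polyval(c, phi) - Z_true             # residuals in mm
print("MAX: %.2e mm  RMS: %.2e mm" % (np.abs(err).max(), np.sqrt((err**2).mean())))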

Table 2. Mapping coefficients for the image point (240, 112).

Fig. 6. Fit-error curves for the image point (240, 112) in the sampling-mapping calibration.

Table 3. MAX and RMS fit errors for different camera image points (mm).

After the P3DM calibration is finished, the P3DM reconstruction can be performed by the FPP system. We used a standard sphere to test the precision of the P3DM reconstruction, as shown in Fig. 7(a). The standard sphere was reconstructed by the P3DM method and by the SV method; Figs. 7(b) and 7(c) show the respective 3D reconstructions. The MAX and RMS values of the point distances between the two 3D models in each dimension are listed in Table 4. The point distances are quite small in all dimensions; therefore, the P3DM method has the same precision level as the SV method. Even so, it should be noted that the polynomial operations in the P3DM method may smooth the reconstructed geometry to a small degree.

Fig. 7. 3D reconstruction of a standard sphere: (a) the image; (b,c) the 3D models from the P3DM method and the SV method, respectively.

Table 4. MAX and RMS point distances between the two 3D reconstructions.

Although the precisions of P3DM and SV are identical, their efficiencies differ significantly. In P3DM, all dimensional coordinates are mapped from the phase directly and independently, so the 3D reconstruction efficiency of P3DM is higher than that of SV. To demonstrate this, we compared the two methods in terms of the time cost of 3D reconstruction in three different scenes. The 3D reconstructions are shown in Fig. 8, and the corresponding data are listed in Table 5. The time cost of the SV reconstruction is much larger than that of the P3DM reconstruction, and the gap widens as the number of measured points increases.

Fig. 8. 3D reconstruction of three scenes: the second row shows the P3DM results; the third row shows the SV results.

Table 5. Data related to the efficiency of 3D reconstruction.

In summary, our experiments indicate that the P3DM method attains the same precision as the SV method but with significantly higher 3D reconstruction efficiency. By making use of the highly efficient P3DM, FPP can implement high-speed 3D imaging. In the following experiment, such a high-speed FPP system was set up, consisting of a BASLER camera (ACA640-120GM, 640×480) with a 12 mm TV lens (PANTAX) and a TI DLP projector (LightCrafter 3000, 608×684), as shown in Fig. 9. The time cost of 3D reconstruction was reduced to 3 ms with the aid of GPU computation. Consequently, high-speed 3D imaging and rendering of moving objects were performed by this FPP system. Figure 10 shows the 3D imaging of a moving plaster model rotated by a turning disc (Visualization 1).

Fig. 9. The high-speed FPP system.

Fig. 10. 3D imaging of a moving object.

6. Conclusion

In this paper, we have proposed a P3DM method based on the back-projection SV model for FPP, in which all dimensional coordinates (X, Y, and Z) can be mapped from the phase directly and independently. Correspondingly, a flexible two-step calibration strategy, including ray reprojection calibration and sampling-mapping calibration, was proposed to determine the mapping coefficients of P3DM. Compared with SV, P3DM has the same precision and significantly higher 3D reconstruction efficiency. This verifies that P3DM is suitable for flexible and highly efficient 3D reconstruction in practical applications.

Funding

National Natural Science Foundation of China (NSFC) (61377017); Scientific and Technological Project of the Shenzhen Government (JCYJ20140828163633999).

References and Links

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

2. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

3. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]   [PubMed]  

4. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]   [PubMed]  

5. G. Sansoni, L. Biancardi, U. Minoni, and F. Docchio, “A novel, adaptive system for 3-D optical profilometry using a liquid crystal light projector,” IEEE Trans. Instrum. Meas. 43(4), 558–566 (1994). [CrossRef]  

6. W. S. Zhou and X. Y. Su, “A direct mapping algorithm for phase-measuring profilometry,” J. Mod. Opt. 41(1), 89–94 (1994). [CrossRef]  

7. A. Asundi and Z. Wensen, “Unified calibration technique and its applications in optical triangular profilometry,” Appl. Opt. 38(16), 3556–3561 (1999). [CrossRef]   [PubMed]  

8. R. Sitnik, M. Kujawinska, and J. Woznicki, “Digital fringe projection system for large-volume 360-deg shape measurement,” Opt. Eng. 41(2), 443–449 (2002). [CrossRef]  

9. H. Liu, W.-H. Su, K. Reichard, and S. Yin, “Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement,” Opt. Commun. 216(1–3), 65–80 (2003). [CrossRef]  

10. X. Peng, Z. Yang, and H. Niu, “Multi-resolution reconstruction of 3-D image with modified temporal unwrapping algorithm,” Opt. Commun. 224(1–3), 35–44 (2003). [CrossRef]  

11. Z. Zhang, D. Zhang, and X. Peng, “Performance analysis of a 3D full-field sensor based on fringe projection,” Opt. Lasers Eng. 42(3), 341–353 (2004). [CrossRef]  

12. B. A. Rajoub, D. R. Burton, and M. J. Lalor, “A new phase-to-height model for measuring object shape using collimated projections of structured light,” J. Opt. A, Pure Appl. Opt. 7(6), S368–S375 (2005). [CrossRef]  

13. P. Jia, J. Kofman, and C. English, “Comparison of linear and nonlinear calibration methods for phase-measuring profilometry,” Opt. Eng. 46(4), 043601 (2007). [CrossRef]  

14. H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett. 32(16), 2438–2440 (2007). [CrossRef]   [PubMed]  

15. A. Maurel, P. Cobelli, V. Pagneux, and P. Petitjeans, “Experimental and theoretical inspection of the phase-to-height relation in Fourier transform profilometry,” Appl. Opt. 48(2), 380–392 (2009). [CrossRef]   [PubMed]  

16. L. Huang, P. S. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49(9), 1539–1548 (2010). [CrossRef]   [PubMed]  

17. Z. Zhang, H. Ma, S. Zhang, T. Guo, C. E. Towers, and D. P. Towers, “Simple calibration of a phase-based 3D imaging system based on uneven fringe projection,” Opt. Lett. 36(5), 627–629 (2011). [CrossRef]   [PubMed]  

18. Y. Xiao, Y. Cao, and Y. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012). [CrossRef]   [PubMed]  

19. I. Léandry, C. Brèque, and V. Valle, “Calibration of a structured-light projection system: Development to large dimension objects,” Opt. Lasers Eng. 50(3), 373–379 (2012). [CrossRef]  

20. Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013). [CrossRef]   [PubMed]  

21. Z. Huang, J. Xi, Y. Yu, Q. Guo, and L. Song, “Improved geometrical model of fringe projection profilometry,” Opt. Express 22(26), 32220–32232 (2014). [CrossRef]   [PubMed]  

22. N. Karpinsky, M. Hoke, V. Chen, and S. Zhang, “High-resolution, real-time three-dimensional shape measurement on graphics processing unit,” Opt. Eng. 53(2), 024105 (2014). [CrossRef]  

23. J. Vargas, J. A. Quiroga, and M. J. Terron-Lopez, “Flexible calibration procedure for fringe projection profilometry,” Opt. Eng. 46(2), 023601 (2007). [CrossRef]  

24. A. Li, X. Peng, Y. Yin, X. Liu, Q. Zhao, K. Körner, and W. Osten, “Fringe projection based quantitative 3D microscopy,” Optik (Stuttg.) 124(21), 5052–5056 (2013). [CrossRef]  

25. J. Huang and Q. Wu, “A new reconstruction method based on fringe projection of three-dimensional measuring system,” Opt. Lasers Eng. 52, 115–122 (2014). [CrossRef]  

26. V. Kirschner, W. Schreiber, R. M. Kowarschik, and G. Notni, “Self-calibration shape-measuring system based on fringe projection,” Proc. SPIE 3102, 5–13 (1997). [CrossRef]  

27. C. Brenner, J. Boehm, and J. Guehring, “Photogrammetric calibration and accuracy evaluation of a cross-pattern stripe projector,” Proc. SPIE 3641, 164–172 (1998). [CrossRef]  

28. T. Bothe, W. Osten, A. Gesierich, and W. P. O. Jueptner, “Compact 3D camera,” Proc. SPIE 4778, 48–59 (2002). [CrossRef]  

29. J. Li, L. G. Hassebrook, and C. Guan, “Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity,” J. Opt. Soc. Am. A 20(1), 106–115 (2003). [CrossRef]   [PubMed]  

30. R. Legarda-Sáenz, T. Bothe, and W. P. Jüptner, “Accurate procedure for the calibration of a structured light system,” Opt. Eng. 43(2), 464–471 (2004). [CrossRef]  

31. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

32. S. Zhang, D. Royer, and S.-T. Yau, “GPU-assisted high-resolution, real-time 3-D shape measurement,” Opt. Express 14(20), 9120–9129 (2006). [CrossRef]   [PubMed]  

33. P. Kuhmstedt, C. Munckelt, M. Heinze, C. Brauer-Burchardt, and G. Notni, “3D shape measurement with phase correlation based fringe projection,” Proc. SPIE 6616, 66160B (2007). [CrossRef]  

34. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008). [CrossRef]  

35. X. Chen, J. Xi, Y. Jin, and J. Sun, “Accurate calibration for a camera-projector measurement system based on structured light projection,” Opt. Lasers Eng. 47(3–4), 310–319 (2009). [CrossRef]  

36. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express 18(5), 5229–5244 (2010). [CrossRef]   [PubMed]  

37. X. Peng, X. Liu, Y. Yin, and A. Li, “Optical measurement network for large-scale and shell-like objects,” Opt. Lett. 36(2), 157–159 (2011). [CrossRef]   [PubMed]  

38. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37(4), 542–544 (2012). [CrossRef]   [PubMed]  

39. X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Opt. Lett. 37(15), 3126–3128 (2012). [CrossRef]   [PubMed]  

40. R. Chen, J. Xu, H. Chen, J. Su, Z. Zhang, and K. Chen, “Accurate calibration method for camera and projector in fringe patterns measurement system,” Appl. Opt. 55(16), 4293–4300 (2016). [CrossRef]   [PubMed]  

Supplementary Material (1)

Visualization 1 (MOV, 264 KB): a video of 3D imaging of a moving object.


