
Phase-shifting profilometry for the robust 3-D shape measurement of moving objects

Open Access

Abstract

Arbitrary two-dimensional (2-D) motion introduces coordinate errors and phase errors into the three-dimensional (3-D) shape measurement of objects in phase-shifting profilometry (PSP). This paper presents a new robust 3-D reconstruction method for arbitrary 2-D moving objects by introducing an adaptive reference phase map and fence-image-based motion estimation. First, a composite fence image is used to track object motion. Second, to obtain the transformation matrices and remove the coordinate errors among object images, the angle extraction technique and the 1-D hybrid phase correlation method (1-D HPCM) are integrated to automatically estimate the sub-pixel motion of objects. Third, the phase errors are compensated to obtain the rough absolute phase map of objects by combining the transformation matrices with the reference phase map. Finally, the absolute phase map is refined with the adaptive reference phase map to reconstruct the 3-D surfaces of moving objects. The proposed computational framework can accurately and automatically realize 3-D shape measurement of arbitrary objects with 2-D movement. Experimental results verify the effectiveness of our computational framework.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Digital fringe projection (DFP) has been employed for three-dimensional (3-D) shape measurement and intensively researched in various fields, including industrial product monitoring, reverse engineering and biological applications [1–4]. Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP) are the usual DFP techniques. FTP requires only a single-shot fringe image, which gives it an obvious speed advantage in 3-D reconstruction. Later, a modified Fourier transform profilometry (MFTP) [5] was proposed to remove the zero component by adding a π-shifted fringe pattern to FTP. However, owing to the limited number of patterns, FTP and MFTP suffer from the influences of surface reflectivity and ambient light. By contrast, PSP offers high accuracy for static objects by sequentially projecting N (N ≥ 3) fringe patterns. Meanwhile, to fully realize the accuracy advantage of PSP, temporal phase unwrapping is generally used to calculate the absolute phase by unfolding the wrapped phase [6], which requires additional projection fringes, such as Gray-code phase unwrapping [7, 8], multi-frequency phase unwrapping [9, 10], and phase-coding phase unwrapping [11]. However, for measuring moving objects, PSP induces motion errors among patterns [12, 13], and traditional temporal phase unwrapping is usually invalid. Therefore, high-precision 3-D shape measurement of moving objects is challenging.

Many techniques extended from PSP or FTP have recently been proposed to improve the accuracy of 3-D reconstruction of moving objects. Researchers have combined PSP with FTP to analyze dynamic situations such as stabilized flapping-flight processes [14] and scenes of isolated moving objects [15]. These approaches can retrieve the absolute 3-D reconstruction of a dynamic process. However, such specialized situations and complex hybrid techniques may not suit common applications. Tao et al. [16] introduced an additional camera and embedded a triangular wave into normal phase-shifting fringes with stereo phase unwrapping, which can retrieve the surface profiles of dynamic objects under geometry constraints. Li et al. [17] utilized multi-view constraints to match corresponding points from the wrapped phase map, which can measure arbitrarily shaped dynamic objects without projecting additional images. However, additional cameras increase the cost of the measurement system and introduce calibration errors.

With the typical DFP measurement system (i.e., one projector and one camera), many approaches have been developed to reduce or compensate the motion-induced errors. An iterative error-reduction method [18] was proposed to recover random phase shifts using the least-squares algorithm. Feng et al. [19] developed an image-segmentation-assisted approach to compensate motion-induced errors, including motion ripples, motion-induced phase-unwrapping errors, and motion outliers. However, these methods address a constant phase error and fail to measure moving objects undergoing rotation. Wang et al. [20] proposed Hilbert-transform error compensation to remove the nonuniform phase error of moving objects, which inevitably leads to outliers at the discontinuous edges of objects due to spectrum leakage. Meanwhile, some scholars have introduced spatial matching approaches to roughly compensate the motion-induced phase errors and obtain the relative height distribution of objects with a single reference plane. For instance, Lu et al. [21] introduced an extended PSP that acquires a phase map by tracking the moving object and building transformation matrices, which realizes 3-D shape measurement of rigid objects with 2-D movement. However, the whole operation is manual, and only three small circular markers are set on the edge of an object, which are easily disturbed by the projected sinusoidal fringes. Guo et al. [22] proposed an FFT-based matching approach to estimate object displacements and combined FTP with the transformation matrices to reconstruct a 3-D model; this requires more than ten captured images and is valid only for 2-D translation. Recently, Lu et al. [23] used the automatic scale-invariant feature transform (SIFT) to extract the transformation matrix from three pairs of feature blocks in the blue component of the captured image. However, using different color components leads to a crosstalk problem [24, 25], which affects the matching efficiency and accuracy. These approaches [21–23] can reconstruct surface profiles of 2-D moving objects by combining spatial matching with phase fringes, but they still yield unreliable motion estimation and only acquire a rough relative phase map of objects, which may fail to reconstruct the precise and complete 3-D shape of moving objects. Moreover, the FFT-based matching approach [22] and the SIFT technique [13, 23] may fail on flat or smooth surfaces.

In this work, we propose a new robust 3-D shape measurement method for arbitrary 2-D moving objects by introducing an adaptive reference phase map and fence-image-based motion estimation. We utilize a non-periodic marker consisting of one horizontal fence image and one vertical fence image to track the positions of the moving object in the different steps of PSP. With the assistance of the fence image, the rotation angles and translation vectors of the moving object can be rapidly extracted by angle calculation and translation estimation, respectively. The transformation matrices then serve to remove the coordinate errors. According to the reference phase map, the motion-induced phase errors are compensated to obtain the rough absolute phase map. The refined 3-D surfaces of moving objects are eventually reconstructed with the adaptive reference phase map. Our proposed motion estimation method, comprising the angle extraction technique and the 1-D hybrid phase correlation method (1-D HPCM), can detect sub-pixel motion based on the non-periodic fence image. Our computational framework achieves absolute 3-D shape measurement of 2-D moving objects by integrating PSP and absolute phase unwrapping without new hardware. The results demonstrate the validity of our framework, and we obtain high-quality 3-D reconstruction of arbitrary objects with 2-D movement.

Section 2 introduces the errors of PSP when measuring a moving object, analyzes the calculation of phase errors based on the adaptive reference phase map, and proposes our motion estimation technique. Section 3 describes the whole procedure of our proposed computational framework. Section 4 presents experimental results that illustrate the validity of the framework. Section 5 discusses the merits and limitations of our framework. Section 6 draws the conclusion.

2. Principle

2.1. The errors of PSP in moving-object reconstruction

Fig. 1 shows a schematic diagram of a typical DFP system based on PSP. The designed fringe pattern is projected onto the surface of a 3-D object, and a camera captures the reflected fringe. The surface profile of the object can then be reconstructed from the spatial geometry constraints.

Fig. 1 A schematic diagram of a typical DFP system with one camera.

For N-step PSP in static measurements, the deformed fringe patterns captured by the camera can be modeled as follows [26]:

$$I_n(x,y) = a(x,y) + b(x,y)\cos\left[\phi(x,y) + 2\pi(n-1)/N\right], \tag{1}$$

where n = {1, 2, …, N} (N ≥ 3), (x, y) is the pixel coordinate of the image, a(x, y) is the average intensity, and b(x, y) is the intensity modulation.

The wrapped phase map of an object can be determined by

$$\phi(x,y) = -\tan^{-1}\left\{\frac{\sum_{n=1}^{N} I_n(x,y)\sin\left[2\pi(n-1)/N\right]}{\sum_{n=1}^{N} I_n(x,y)\cos\left[2\pi(n-1)/N\right]}\right\}. \tag{2}$$
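
For concreteness, the following is a minimal NumPy sketch of Eqs. (1)–(2); the function name and array layout are our own, and the four-quadrant arctangent is used so the result is wrapped to (−π, π]:

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N phase-shifted fringe images (Eqs. (1)-(2)).

    `images` holds N camera frames I_n taken with shifts 2*pi*(n-1)/N.
    A minimal sketch of standard N-step PSP, not the authors' code.
    """
    I = np.asarray(images, dtype=np.float64)           # shape (N, H, W)
    N = I.shape[0]
    delta = 2.0 * np.pi * np.arange(N) / N             # shifts 2*pi*(n-1)/N
    num = np.tensordot(np.sin(delta), I, axes=(0, 0))  # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(delta), I, axes=(0, 0))  # sum_n I_n cos(delta_n)
    return -np.arctan2(num, den)  # minus sign per Eq. (2); wrapped phase map
```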

PSP was first extended in [21] to measure an object in 2-D movement. In the n-th step, the captured fringe image of the moving object is described as

$$I_n(p_n,q_n) = A_n(p_n,q_n) + B_n(p_n,q_n)\cos\left[\phi(p_n,q_n) + 2\pi(n-1)/N\right], \tag{3}$$

where (p_n, q_n) are the pixel coordinates of the image in the n-th step. With image geometric transformation, the captured images of the extended PSP can be rewritten as follows:

$$\begin{cases} I_1(x,y) = A_1(p_1,q_1) + B_1(p_1,q_1)\cos\left[\phi(x,y) + \Delta\varphi(p_1,q_1)\right], \\ I_2(x,y) = A_2(p_2,q_2) + B_2(p_2,q_2)\cos\left[\phi(x,y) + \Delta\varphi(p_2,q_2) + 2\pi/N\right], \\ \quad\vdots \\ I_N(x,y) = A_N(p_N,q_N) + B_N(p_N,q_N)\cos\left[\phi(x,y) + \Delta\varphi(p_N,q_N) + 2\pi(N-1)/N\right], \end{cases} \tag{4}$$

where A_n(p_n, q_n) and B_n(p_n, q_n) are the average intensity and the intensity modulation, respectively, in the captured image of the n-th step. In addition, Δφ(p_n, q_n) are the phase errors among patterns induced by the 2-D motion. In the first captured image, p_1 = x, q_1 = y, and Δφ(p_1, q_1) = 0. Eq. (5) defines the geometric transformation between Eq. (3) and Eq. (4):

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta_n & -\sin\theta_n & T_{xn} \\ \sin\theta_n & \cos\theta_n & T_{yn} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_n \\ q_n \\ 1 \end{bmatrix} \tag{5}$$

In Eq. (4) and Eq. (5), I_n(x, y) are known values; θ_n, T_xn and T_yn can be acquired by estimating the movement; and A_n(p_n, q_n), B_n(p_n, q_n), Δφ(p_n, q_n) and ϕ(x, y) are all unknown. Hence, to calculate ϕ(x, y), A_n(p_n, q_n) and B_n(p_n, q_n) are simplified as

$$\begin{cases} A_n(p_n,q_n) = a(x,y), \\ B_n(p_n,q_n) = b(x,y). \end{cases} \tag{6}$$

Based on the above analysis, two problems must be solved to reconstruct the 3-D shape of a moving object: 1) the phase errors Δφ(p_n, q_n) should be precisely determined by relating the 2-D motion to the typical DFP measurement system; 2) θ_n, T_xn and T_yn must be robustly estimated to build Eq. (5).
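
Once θ_n, T_xn and T_yn are estimated, the coordinate correction of Eq. (5) is a plain homogeneous rigid transform. A minimal sketch, with our own naming, might read:

```python
import numpy as np

def align_coordinates(pn, qn, theta_n, Txn, Tyn):
    """Map n-th step pixel coordinates (p_n, q_n) back to first-step
    coordinates (x, y) through the rotation-plus-translation model of
    Eq. (5). A sketch; pn and qn are arrays of equal shape."""
    M = np.array([[np.cos(theta_n), -np.sin(theta_n), Txn],
                  [np.sin(theta_n),  np.cos(theta_n), Tyn],
                  [0.0,              0.0,             1.0]])
    xy1 = M @ np.stack([np.ravel(pn), np.ravel(qn), np.ones(np.size(pn))])
    return xy1[0].reshape(np.shape(pn)), xy1[1].reshape(np.shape(qn))
```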

Fig. 2 DFP system with one reference phase map. (a) A reference plane is placed in front of the object; (b) A reference plane is placed behind the object.

2.2. Phase errors calculation based on the adaptive reference phase map

In order to relate the 2-D motion of objects to their phase shift in a typical DFP system, we introduce two absolute phase maps of reference planes, which are easy to acquire before the measurement through traditional PSP and absolute phase unwrapping.

In Fig. 2(a), O1 and O2 indicate the surface points of the original object and the moving object, respectively; the two points correspond to the same position on the object at different moments. Because the projected fringes arrive at the reference plane and the object surface in turn, O1 and H1 have the same phase value, and the phase values of O2 and H2 are likewise equal. Hence, the phase error Δφ of the moving object equals the phase difference between H1 and H2.

Likewise, in Fig. 2(b), when the reference plane is behind the moving object, the phase values of O1 and O2 are the same as those of H5 and H6, respectively. Therefore, the motion-induced phase error Δφ equals the phase difference between H5 and H6.

Combining the reference phase map P1 or P2 with the displacement |O1O2| measured in section 2.3, the phase difference Δφ_P1 or Δφ_P2 is calculated as a rough estimate of the phase error Δφ. After roughly compensating the phase errors, the wrapped phase map of the object is obtained. According to the geometric constraints of the measurement system described in Fig. 2(a), the phase values of O1 and H1 are equal in the captured image, but the coordinate of H1 is greater than that of O1 in the y direction. In other words, the phase map of the reference plane is ahead of the phase map of the object surface in the captured fringes [27, 28], as shown in Fig. 3(a).

The wrapped phase map of a 2-D moving object can be unwrapped by extracting the integer fringe order k(x, y). We have

$$\Phi(x,y) = \phi(x,y) + 2\pi \times k(x,y), \tag{7}$$

where Φ(x, y) is the absolute phase value of the moving object and ϕ(x, y) is the wrapped phase value of the object.

Therefore, k(x, y) can be calculated by

$$k(x,y) = \mathrm{ceil}\left[\frac{\Phi_{P1}(x,y) - \phi(x,y)}{2\pi}\right], \tag{8}$$

where ceil[·] is the operator that rounds up to the nearest integer, and Φ_P1(x, y) denotes the absolute phase value of the reference phase map P1.

The above method is named the reference phase error compensation method (RPECM).
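
A compact sketch of the RPECM unwrapping step of Eqs. (7)–(8) follows; the function name is ours, and the reference map Φ_P1 is assumed to have been acquired beforehand:

```python
import numpy as np

def unwrap_with_reference(phi, Phi_ref):
    """Pixel-to-pixel absolute phase unwrapping (Eqs. (7)-(8)).

    phi: wrapped phase of the object; Phi_ref: absolute phase of the
    reference plane P1, which leads the object phase as in Fig. 3(a)."""
    k = np.ceil((Phi_ref - phi) / (2.0 * np.pi))  # fringe order, Eq. (8)
    return phi + 2.0 * np.pi * k                  # absolute phase, Eq. (7)
```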

Fig. 3 The phase relationship between reference planes and object. (a) The pixel-to-pixel absolute phase unwrapping method; (b) The adaptive reference phase map P3.

Because the reference phase map P1 or P2 lies away from the surface points of the moving object, the phase difference Δφ_P1 or Δφ_P2 is only a rough estimate of the phase error Δφ in the RPECM, which makes the absolute phase map Φ of the moving object unreliable.

In order to refine the motion-induced phase error, an adaptive reference phase map P3 replaces the phase maps P1 and P2, as shown in Fig. 3(b). We define

$$\Phi_{P3} = \omega\,\Phi_{P1} + (1-\omega)\,\Phi_{P2}, \tag{9}$$

where 0 < ω < 1 and ω is determined by

$$\omega = \arg\min_{\omega}\left[\sum\left(\Phi - \Phi_{P3}\right)^2\right]. \tag{10}$$

Therefore, the adaptive reference phase map P3 can be obtained by finding ω in Eq. (10), and the cubic interpolation method is used to optimize the phase map. Then, we obtain the refined absolute phase map Φ_o in two steps: 1) compensating the phase errors with the aid of the adaptive reference phase map P3; 2) unwrapping the wrapped phase map with pixel-to-pixel phase unwrapping. This method is named the adaptive reference phase error compensation method (ARPECM).
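
The paper does not state how the minimization of Eq. (10) is carried out; the sketch below assumes a simple grid search over ω, which is adequate for a single scalar parameter:

```python
import numpy as np

def adaptive_reference(Phi_P1, Phi_P2, Phi_rough, steps=101):
    """Blend the two reference phase maps into P3 (Eqs. (9)-(10)).

    Phi_rough is the rough absolute phase from the RPECM. The grid
    search over omega is our own choice of optimizer."""
    omegas = np.linspace(0.0, 1.0, steps)[1:-1]       # keep 0 < omega < 1
    errs = [np.nansum((Phi_rough - (w * Phi_P1 + (1.0 - w) * Phi_P2)) ** 2)
            for w in omegas]
    w = omegas[int(np.argmin(errs))]
    return w * Phi_P1 + (1.0 - w) * Phi_P2, w
```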

2.3. The fence-based motion estimation technique

Before calculating Δφ(p_n, q_n), we must first obtain θ_n, T_xn and T_yn by estimating the object movement.

Fig. 4 The principle of extracting the rotation angle and the translation vector.

Selecting a marker is important for detecting the 2-D motion of a moving object. A horizontal fence image and a vertical fence image are assembled as a robust target image. Compared with other simple markers, the non-periodic fence image has the following advantages: 1) it suppresses the influence of the projected light; 2) the rotational, horizontal and vertical motion estimates are independent of one another; 3) a small fence image can be used to extract many estimates of rotation and translation by converting the 2-D image-processing task into a 1-D signal problem.

The principle of extracting the rotation angle and translation vector with a vertical fence image is shown in Fig. 4. The 2-D motion estimation of a moving object is reduced to calculating the rotation angle and translation vector of the marker. The edges of the moving fence image are extracted by an edge-detection technique (such as Canny edge detection). Then, the rotation angles θ(i) can be obtained by line detection based on the Hough transform, and their mean value θ̄ is taken as the final rotation angle. The moving image is rotated back by θ̄ to obtain the translating image. The translating image differs from the original image only by a translation, which is described by the mean value T̄ of the translation vectors T(j) estimated with the proposed 1-D HPCM.
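
As one possible realization of the angle extraction, the sketch below uses OpenCV's Canny detector and Hough line transform; the threshold values are illustrative assumptions, not the paper's settings:

```python
import cv2
import numpy as np

def fence_rotation_angle(fence_img):
    """Mean stripe angle of a fence image (Fig. 4): Canny edges followed
    by the Hough line transform, averaging the detected line angles.
    Expects an 8-bit grayscale image; thresholds are illustrative."""
    edges = cv2.Canny(fence_img, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 1800, threshold=100)
    if lines is None:
        raise RuntimeError("no fence edges detected")
    angles = lines[:, 0, 1]          # angle of each line's normal (rad)
    return float(np.mean(angles))    # mean value as the final estimate
```

The relative rotation angle θ_n between two steps is then the difference between the mean angles of the corresponding fence images.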

2.4. 1-D hybrid phase correlation method (1-D HPCM) for translation detection

The phase correlation method (PCM) calculates the translation vector between two related images based on the shift property of the Fourier transform. Suppose a 1-D image signal g1(y) is translated by Ty along the y direction of the image to a new signal g2(y). The relation between the two 1-D signals can be denoted by

$$g_2(y) = g_1(y - T_y). \tag{11}$$

Referring to the shift property of the discrete Fourier transform (DFT), Eq. (11) can be written in the frequency domain as

$$G_2(\nu) = G_1(\nu)\exp\left[-j\nu T_y\right]. \tag{12}$$

Hence the normalized cross-power spectrum of the two 1-D input signals is expressed by

$$Q(\nu) = \frac{G_1(\nu)G_2(\nu)^*}{\left|G_1(\nu)G_2(\nu)^*\right|} = \exp\left[j\nu T_y\right], \tag{13}$$

where * indicates the complex conjugate, and G1(ν) and G2(ν) are the Fourier transforms of g1(y) and g2(y), respectively. Therefore, the inverse Fourier transform (IFT) of Q(ν) can be calculated as

$$q(y) = \delta(y - T_y). \tag{14}$$

In Eq. (14), Ty can be extracted by locating the peak of the function, but the unit pulse appears only when Ty is an integer in PCM. Note that if Ty is a non-integer shift, the peak power of q(y) is spread among neighboring pixels, which degrades the translation detection [29].
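
A textbook sketch of 1-D PCM for integer shifts follows; the conjugate ordering is chosen so that, under NumPy's FFT convention, the correlation peak lands at +Ty when g2(y) = g1(y − Ty), which is equivalent to Eq. (13) up to conjugation:

```python
import numpy as np

def pcm_integer_shift(g1, g2):
    """Integer shift between two 1-D signals by phase correlation
    (Eqs. (11)-(14)): the inverse FFT of the normalized cross-power
    spectrum peaks at the translation."""
    G1, G2 = np.fft.fft(g1), np.fft.fft(g2)
    Q = np.conj(G1) * G2
    Q /= np.abs(Q) + 1e-12               # normalization; epsilon avoids 0/0
    q = np.fft.ifft(Q).real              # approximates delta(y - Ty)
    Ty = int(np.argmax(q))
    M = len(g1)
    return Ty if Ty <= M // 2 else Ty - M  # map the peak to a signed shift
```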

Hoge [30] proposed the modified phase correlation method (MPCM) to precisely identify the sub-pixel translation between two images by subspace detection. For 1-D images of length M, the shift value Ty between g1(y) and g2(y) in the spatial domain corresponds to a phase slope L in the frequency domain, where L = 2πTy/M. For the unwrapped phase vector of Q denoted by υ, the set of normal equations can be expressed as

$$W\begin{bmatrix} \mu \\ c \end{bmatrix} = \mathrm{unwrap}\{\upsilon\}, \tag{15}$$

where each row of W equals [w, 1] with w = {0, 1, 2, …, s−1}, and s equals the length of υ. Additionally, μ and c represent the slope and intercept of the fitted line, respectively.

Considering that the actual system is nonlinear, least-squares fitting (LSF) is used to determine the precise parameters as follows:

$$\begin{bmatrix} \mu \\ c \end{bmatrix} = (W^{\mathrm{T}}W)^{-1}W^{\mathrm{T}}\,\mathrm{unwrap}\{\upsilon\}. \tag{16}$$

Hence, when g1(y) moves by Ty to g2(y), Ty is calculated at the sub-pixel level by

$$T_y = \mu M / 2\pi. \tag{17}$$

However, the 1-D wrapped phase needs to be unfolded, which inevitably introduces accumulated error; consequently, the MPCM is applicable only to the detection of small displacements [31].
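
A sketch of the MPCM fit of Eqs. (15)–(17) follows; restricting the fit to the half-spectrum below the Nyquist frequency is our own simplification of the subspace selection in [30]:

```python
import numpy as np

def mpcm_subpixel_shift(g1, g2):
    """Sub-pixel shift by the modified PCM [30]: unwrap the phase of the
    normalized cross-power spectrum and least-squares fit a line to it
    (Eqs. (15)-(17)). Reliable for small shifts only."""
    M = len(g1)
    Q = np.fft.fft(g1) * np.conj(np.fft.fft(g2))  # phase slope 2*pi*Ty/M
    Q /= np.abs(Q) + 1e-12
    s = M // 2                           # keep frequencies below Nyquist
    v = np.unwrap(np.angle(Q[:s]))       # unwrapped phase vector upsilon
    W = np.column_stack([np.arange(s), np.ones(s)])  # rows [w, 1], Eq. (15)
    mu, c = np.linalg.lstsq(W, v, rcond=None)[0]     # LSF, Eq. (16)
    return mu * M / (2.0 * np.pi)        # Ty = mu*M/(2*pi), Eq. (17)
```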

This paper proposes the 1-D HPCM to rapidly detect large displacements between two images in subspace.

Fig. 5 illustrates the workflow of our proposed 1-D HPCM. The 2-D reference image and the 2-D matching image are converted by multiple-row random sampling into a 1-D reference signal and a 1-D matching signal, respectively. First, 1-D PCM is applied to acquire the integer shift Ty0 between the two signals. To obtain the sub-pixel translation, we translate the matching signal in the opposite direction; the inverse displacement is determined by

$$T_{y1} = 10 \times \mathrm{floor}(T_{y0}/10), \tag{18}$$

where floor(·) rounds down to the nearest integer.

Then, the remaining sub-pixel shift between the 1-D reference signal and the new 1-D matching signal is calculated by the MPCM. By applying 1-D unwrapping and LSF to the normalized cross-power spectrum, a second translation estimate Ty2 is obtained with Eq. (17). Finally, the whole sub-pixel shift between the reference signal and the matching signal is given by

$$T = T_{y1} + T_{y2}. \tag{19}$$
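
Putting the pieces together, a sketch of the 1-D HPCM (reusing the two functions sketched above) might read:

```python
import numpy as np

def hpcm_shift(ref_sig, mov_sig):
    """1-D HPCM sketch: coarse PCM estimate, back-shift by a multiple of
    ten pixels (Eq. (18)), MPCM refinement of the small residual, and
    the sum of both parts (Eq. (19))."""
    Ty0 = pcm_integer_shift(ref_sig, mov_sig)     # coarse integer estimate
    Ty1 = 10 * int(np.floor(Ty0 / 10))            # Eq. (18)
    mov_back = np.roll(mov_sig, -Ty1)             # translate back by Ty1
    Ty2 = mpcm_subpixel_shift(ref_sig, mov_back)  # sub-pixel residual
    return Ty1 + Ty2                              # Eq. (19)
```

In practice, the circular wrap-around of np.roll would be handled by padding or cropping the signals.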

Fig. 5 Workflow of our proposed 1-D HPCM.

3. Procedures

The whole computational framework is shown in Fig. 6 (three-step PSP is adopted in the framework), and the process mainly includes the following steps:

Step 1: Segmentation of the objects and markers in the captured images. With two fixed spatial windows, the fringe images are cropped to automatically obtain two sets of fence images; the stripes from the top window are perpendicular to those from the bottom window. Meanwhile, the image regions containing the moving objects, which lie at different orientations and positions in the three fringe images, can be obtained.

Step 2: Extraction of the transformation matrices and alignment of the object images. The motion of the marker is consistent with that of the object, which means that the rotation angles and translation vectors of the fence image can be applied to retrieve the whole phase map of the moving object.

We first binarize the cropped fence images; the Hough transform then calculates the absolute edge angles, and the mean value of the fringe angles is adopted. For three-step PSP, using the fence image of the first step as the reference, the relative rotation angles of the other steps are easily calculated. We next rotate the fence images of the other steps back by these rotation angles so that they share the spatial direction of the first fence image. Finally, 1-D HPCM is applied to detect the sub-pixel shifts in the row and column directions between the first step and the other steps.

According to the rotation angles and translation vectors, the object images are aligned to remove the coordinate errors.

Step 3: Phase error compensation based on RPECM. Before the measurement experiment, the wrapped phase maps of the two reference planes are obtained by traditional PSP for the static scene and absolutely unwrapped by a temporal unwrapping method (such as Gray-code phase unwrapping). Combining the reference phase map with the RPECM, we obtain the rough wrapped phase map of the moving object. The absolute phase map of the moving object is then calculated by pixel-to-pixel phase unwrapping [27] without additional projection patterns during the motion measurement.

Fig. 6 The overall computational framework. The framework measures moving objects based on sub-pixel motion estimation and three-step PSP. The marker consists of a horizontal fence and a vertical fence, which move with the object. The phase map of the reference plane is calculated and absolutely unwrapped by Gray-code phase unwrapping [7]. Two rectangular windows in the projected fringe image, marked by red rectangles in the fringe images, are used to automatically extract the marking fence images. The top window and the bottom window are applied to crop the fence images of different directions.

Step 4: Refined phase recovery based on ARPECM. The adaptive reference phase map is built from the two reference phase maps, and the refined phase error is estimated by

$$\Delta\hat{\varphi}(p_n,q_n) = \Phi_{P3}(p_n,q_n) - \Phi_{P3}(x,y), \tag{20}$$

where Φ_P3(⋅) is the phase value of the adaptive reference phase map.

Considering that three-step PSP is utilized in our framework, Δφ̂(p2, q2) and Δφ̂(p3, q3) (with Δφ̂(p1, q1) = 0) are applied to compensate the phase values of the second and third steps, respectively, thereby correcting the wrapped phase map. Pixel-to-pixel phase unwrapping then retrieves the absolute phase map of the moving object.
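
To illustrate how the compensation of Eq. (20) enters the phase retrieval, the sketch below solves N-step PSP with arbitrary known phase shifts δ_n = 2π(n−1)/N + Δφ̂_n by linear least squares. For brevity it treats the motion-induced term as constant per frame, whereas Eq. (20) yields a per-pixel map; this solver is our own illustration and is not printed in the paper:

```python
import numpy as np

def wrapped_phase_arbitrary_shifts(images, deltas):
    """Wrapped phase for known but non-uniform phase shifts, solved by
    least squares on I_n = a + b*cos(phi + delta_n), rewritten as
    I_n = a + c*cos(delta_n) - s*sin(delta_n) with c = b*cos(phi) and
    s = b*sin(phi). A sketch assuming per-frame (scalar) shifts."""
    I = np.asarray(images, dtype=np.float64)      # shape (N, H, W)
    d = np.asarray(deltas, dtype=np.float64)      # (N,) known shifts
    A = np.column_stack([np.ones_like(d), np.cos(d), -np.sin(d)])
    coef, *_ = np.linalg.lstsq(A, I.reshape(len(d), -1), rcond=None)
    a, c, s = coef                                # per-pixel a, c, s
    return np.arctan2(s, c).reshape(I.shape[1:])  # phi = atan2(s, c)
```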

4. Experiments

A typical DFP measurement system is built to verify the performance of our proposed computational framework. The measurement system comprises a Digital Light Processing (DLP) projector with a resolution of 912 × 1140 (LightCrafter 4500, TI) and a camera with a resolution of 2048 × 2448 (Grasshopper3, Point Grey). The projector is placed under the camera; the optical axis of the camera remains horizontal, and the optical axis of the projector is oblique. The DLP is synchronized with the camera by a pulse signal.

We have carried out experiments in different motion scenes to demonstrate the validity of our computational framework compared with traditional three-step PSP and FTP. Moreover, the reconstructed results corrected by the RPECM and the ARPECM are compared. To successfully unfold the wrapped phase maps of moving objects, the pixel-to-pixel phase unwrapping method [27] is also adopted for traditional three-step PSP and FTP in the experiments. Meanwhile, reference models are measured with the objects kept static to assess the measurement accuracy of the four methods.

In experiment 1, a plaster mask is placed on a 2-D plane in front of the measurement system.

Fig. 7 Reconstructed results of the moving mask with four methods in 2-D translation. (a) Reconstructed 3-D shape using traditional three-step PSP. (b) Reconstructed 3-D shape using FTP. (c) Reconstructed 3-D shape using RPECM. (d) Reconstructed 3-D shape using ARPECM. (e-h) The corresponding enlarged detail of the region in Figs. 7(a)-7(d).

Fig. 8 The height curves comparison of Figs. 7(a)-7(d), where x = 80 mm. (a) The height curves of results using traditional three-step PSP and FTP. (b) The height curves of results corrected by RPECM and ARPECM. (c) Absolute error comparison of four methods.

The first motion type is 2-D translation. Following the principle of the four methods, the fringe images are sequentially captured by the camera as the mask moves to different positions. Then, the 3-D shape of the moving mask is calculated by the four methods: three-step PSP, FTP, and our RPECM and ARPECM. The displacements of the second and third steps, obtained by the proposed motion estimation, are 14.198 pixels and 33.242 pixels, respectively. The measurement results of the four methods are described in Fig. 7. Figs. 7(a)-7(b) present the reconstructed results with three-step PSP and FTP in 2-D translation, and the performance of our RPECM and ARPECM is shown in Figs. 7(c)-7(d). Furthermore, for a detailed comparison, Figs. 7(e)-7(h) show the corresponding enlarged details of the regions in Figs. 7(a)-7(d). It is obvious that the surfaces and edges in Figs. 7(a)-7(b) have significant height errors, whereas the results in Figs. 7(c)-7(d) are smooth and unambiguous. Moreover, compared with the model of Fig. 7(d), the result of Fig. 7(c) contains small periodic errors, which are clearly visible in Fig. 7(g).

The height curves of the cross-section are shown in Fig. 8, where x = 80 mm. The surface profile measured with traditional three-step PSP is ambiguous, and FTP improves the surface condition of the model in Fig. 8(a). Fig. 8(b) shows the height curves of the results corrected by our RPECM and ARPECM. Fig. 8(c) compares the four methods; the mean absolute error of the ARPECM is 0.253 mm, which is the smallest. The proposed method has advantages in accuracy and stability, with the ARPECM retrieving the more accurate height curve and displaying enhanced accuracy in 2-D translation.

Fig. 9 Reconstructed results of the moving mask with four methods in 2-D hybrid motion. (a) Reconstructed 3-D shape using traditional three-step PSP. (b) Reconstructed 3-D shape using FTP. (c) Reconstructed 3-D shape using RPECM. (d) Reconstructed 3-D shape using ARPECM. (e-h) The corresponding enlarged detail of the region in Figs. 9(a)-9(d).

Fig. 10 The height curves comparison of Figs. 9(a)-9(d), where x = 80 mm. (a) The height curves of results using traditional three-step PSP and FTP. (b) The height curves of results corrected by RPECM and ARPECM. (c) Absolute error comparison of four methods.

The second motion type of the mask is 2-D hybrid motion. This experiment assesses the accuracy of the proposed method under 2-D hybrid motion, including translation and rotation, with both operations applied to the mask on a plane. With the proposed motion estimation, the rotation angles of the second and third steps are calculated to be 0.028 rad and 0.046 rad, respectively. The displacements of the second and third steps are 21.789 pixels and 33.015 pixels, respectively. For the hybrid motion, the reconstructed mask models of the four methods are shown in Fig. 9, and the height curves are compared in Fig. 10.

The measurement results of PSP, FTP, RPECM and ARPECM are presented in Figs. 9(a)-9(d), respectively. Figs. 9(e)-9(h) show the corresponding enlarged details of the regions in Figs. 9(a)-9(d). The 3-D shapes of the mask are identifiable in Figs. 9(c)-9(d), and the ARPECM gains an advantage in accuracy by utilizing the adaptive reference phase map.

The height curves of the cross-section with the four methods are shown in Fig. 10, where x = 80 mm. Traditional three-step PSP fails to reconstruct the 3-D shape of the mask, and the result of FTP exhibits obvious height errors, as described in Fig. 10(a). However, our RPECM and ARPECM both recover an intact model of the object, and the ARPECM reduces the small periodic height errors in Fig. 10(b). Fig. 10(c) compares the absolute errors of the four methods. The absolute error fluctuation of the ARPECM is minimal, with an average of 0.203 mm. Therefore, our method remains robust to 2-D hybrid motion.

Fig. 11 Reconstructed results of three moving geometries with four methods in 2-D translation. (a) Reconstructed 3-D surface profile using traditional three-step PSP. (b) Reconstructed 3-D surface profile using FTP. (c) Reconstructed 3-D surface profile using the RPECM. (d) Reconstructed 3-D surface profile using the ARPECM.

In experiment 2, we choose a geometry group composed of a triangular prism, a hemisphere and a cuboid. The geometry group contains sharply varying edges, which increase the difficulty of error elimination. Meanwhile, few features exist on the surface of the geometry group, so feature-based spatial matching methods [13, 22, 23] may struggle to extract valid features. First, the geometry group simply translates to different positions, and the marker moves with the objects. For the translation scene, the reconstruction results are described in Figs. 11(a)-11(d). The surface reconstructed with traditional three-step PSP is shown in Fig. 11(a), and the measurement result obtained with FTP is shown in Fig. 11(b). Traditional PSP and FTP both fail to reconstruct smooth and flat surfaces of the geometry group. In contrast, the proposed computational framework detects a translation of 7.129 pixels from the first step to the second step and a translation of 12.619 pixels from the first step to the third step. The surfaces of the geometries reconstructed with the proposed methods are shown in Figs. 11(c)-11(d). Although a small amount of height error remains in the surfaces, the ARPECM has removed most outliers and periodic errors.

Fig. 12 Reconstructed results of three moving geometries with four methods in 2-D hybrid motion. (a) Reconstructed 3-D surface profile using traditional three-step PSP. (b) Reconstructed 3-D surface profile using FTP. (c) Reconstructed 3-D surface profile using the RPECM. (d) Reconstructed 3-D surface profile using the ARPECM.

Second, hybrid motion involves rotation and translation, and the reconstructed models of the geometry group are displayed in Fig. 12. Fig. 12(a) displays the reconstructed surfaces using traditional PSP, and the surface profile using FTP is shown in Fig. 12(b); the results in Figs. 12(a)-12(b) contain severe height errors. The proposed method obtains the rotation angle and then detects the displacements between two arbitrary steps. From the first step to the second step, the rotation angle is 0.006 rad and the translation displacement is 10.666 pixels; from the first step to the third step, the rotation angle is 0.033 rad and the translation displacement is 28.824 pixels. The shapes of the objects reconstructed with the proposed two methods are presented in Figs. 12(c)-12(d), respectively. It is obvious that our ARPECM still accurately retrieves the surface profile of the geometry group in 2-D hybrid motion.

In order to further evaluate the measurement accuracy of the four methods, the models acquired by measuring the static objects are taken as references. The MSEs (mean square errors) of Figs. 7(a)-7(d), Figs. 9(a)-9(d), Figs. 11(a)-11(d) and Figs. 12(a)-12(d) are shown in Table 1. Among the four cases, the MSEs of the ARPECM are the smallest: 0.084, 0.078, 0.032 and 0.137 mm², respectively. Thus, the ARPECM realizes robust 3-D shape measurement of arbitrary 2-D moving objects with accurate results.

Table 1. The MSEs of 3-D shape measurements with four methods

5. Discussion

For the 3-D shape measurement of 2-D moving objects based on phase-shifting profilometry, our method offers the following advantages over other reconstruction techniques.

  • Accurate and automatic motion estimation. Unlike traditional motion estimation methods, the proposed fence-image-based motion estimation has advantages in detection stability, efficiency and range. Using only a composite fence image, our technique can detect θ_n, T_xn and T_yn, which are independent of one another. A small fence image can be used to extract many estimates by converting the 2-D image-processing task into a 1-D signal problem, which improves robustness and efficiency. The proposed 1-D HPCM increases the detection range and realizes sub-pixel motion estimation.
  • Refined phase error compensation and absolute phase unwrapping for 2-D moving objects in PSP. By combining the two reference phase maps with the rough phase map of the objects, the adaptive reference phase map is generated to refine the phase error compensation. The pixel-to-pixel phase unwrapping method is applied to 2-D moving objects without additional motion constraints.
  • A universal computational framework for the 3-D shape measurement of arbitrary 2-D moving objects. Our proposed computational framework mainly consists of four parts: 2-D motion tracking, object image alignment, rough phase error compensation and refined absolute phase map recovery. Unlike other methods, it is also reliable for measuring surfaces with sparse features.

However, since two fixed windows are utilized to automatically crop the fence images of different directions, the proposed framework encounters challenges when objects undergo large-scale movement or surface deformation, in which case the extraction of rotation angles and translation vectors may fail.

6. Conclusion

This paper presents a novel computational framework for the robust 3-D shape measurement of arbitrary 2-D moving objects. By combining a composite fence image with the 1-D HPCM, accurate transformation matrices of the 2-D motion are calculated to remove the coordinate errors among object images. The ARPECM improves the accuracy of the phase error compensation by relying on an adaptive reference phase map rather than an actual reference phase map. In this computational framework, first, the composite fence image marks the moving object, and the fringe patterns of PSP are projected onto the object and the marker. Second, the marker is cropped by the fixed spatial windows, and the object images are aligned by extracting the rotation angles and translation vectors. Third, the motion-induced phase errors are compensated to obtain the rough phase map based on the reference phase map and the transformation matrices. Finally, with the adaptive reference phase map, the absolute phase map of the moving objects is refined to reconstruct accurate 3-D surfaces. Experimental results demonstrate that our computational framework achieves robust 3-D reconstruction of arbitrary 2-D moving objects. In future work, toward measuring arbitrary 3-D moving objects, we will focus on introducing a 3-D spatial alignment approach based on a composite fence image and on enlarging the depth range of the pixel-to-pixel phase unwrapping.

Funding

National Natural Science Foundation of China (51605464); Research on the Major Scientific Instrument of National Natural Science Foundation of China (61727809).

References

1. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, "Phase shifting algorithms for fringe projection profilometry: a review," Opt. Lasers Eng. 109, 23–59 (2018).

2. Q. Zhou, X. Qiao, K. Ni, X. Li, and X. Wang, "Depth detection in interactive projection system based on one-shot black-and-white stripe pattern," Opt. Express 25, 5341–5351 (2017).

3. F. Bruno, G. Bianco, M. Muzzupappa, S. Barone, and A. Razionale, "Experimentation of structured light and stereo vision for underwater 3D reconstruction," ISPRS J. Photogramm. Remote Sens. 66, 508–518 (2011).

4. S. Matthias, M. Kästner, and E. Reithmeier, "Evaluation of system models for an endoscopic fringe projection system," Measurement 73, 239–246 (2015).

5. J. Li, X. Su, and L. Guo, "Improved Fourier transform profilometry for the automatic measurement of three-dimensional object shapes," Opt. Eng. 29, 1439–1444 (1990).

6. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, "Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review," Opt. Lasers Eng. 85, 84–103 (2016).

7. G. Sansoni, S. Corini, S. Lazzari, R. Rodella, and F. Docchio, "Three-dimensional imaging based on Gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications," Appl. Opt. 36, 4463–4472 (1997).

8. J. Salvi, J. Pagès, and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recognit. 37, 827–849 (2004).

9. H. Zhao, W. Chen, and Y. Tan, "Phase-unwrapping algorithm for the measurement of three-dimensional object shapes," Appl. Opt. 33, 4497–4500 (1994).

10. H. Li, Y. Hu, T. Tao, S. Feng, M. Zhang, Y. Zhang, and C. Zuo, "Optimal wavelength selection strategy in temporal phase unwrapping with projection distance minimization," Appl. Opt. 57, 2352–2360 (2018).

11. Y. Xing, C. Quan, and C. Tay, "A modified phase-coding method for absolute phase retrieval," Opt. Lasers Eng. 87, 97–102 (2016).

12. Z. Liu, P. C. Zibley, and S. Zhang, "Motion-induced error compensation for phase shifting profilometry," Opt. Express 26, 12632–12637 (2018).

13. L. Lu, Y. Yin, Z. Su, X. Ren, Y. Luan, and J. Xi, "General model for phase shifting profilometry with an object in motion," Appl. Opt. 57, 10364–10369 (2018).

14. B. Li and S. Zhang, "Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process," Opt. Express 25, 27270–27282 (2017).

15. B. Li, Z. Liu, and S. Zhang, "Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry," Opt. Express 24, 23289–23303 (2016).

16. T. Tao, Q. Chen, J. Da, S. Feng, Y. Hu, and C. Zuo, "Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system," Opt. Express 24, 20253–20269 (2016).

17. Z. Li, K. Zhong, Y. F. Li, X. Zhou, and Y. Shi, "Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects," Opt. Lett. 38, 1389–1391 (2013).

18. Z. Wang and B. Han, "Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms," Opt. Lett. 29, 1671–1673 (2004).

19. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, "Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry," Opt. Lasers Eng. 103, 127–138 (2018).

20. Y. Wang, Z. Liu, C. Jiang, and S. Zhang, "Motion induced phase error reduction using a Hilbert transform," Opt. Express 26, 34224–34235 (2018).

21. L. Lu, J. Xi, Y. Yu, and Q. Guo, "New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry," Opt. Express 21, 30610–30622 (2013).

22. Q. Guo, Y. Ruan, J. Xi, L. Song, X. Zhu, Y. Yu, and J. Tong, "3D shape measurement of moving object with FFT-based spatial matching," Opt. Laser Technol. 100, 325–331 (2018).

23. L. Lu, Y. Ding, Y. Luan, Y. Yin, Q. Liu, and J. Xi, "Automated approach for the surface profile measurement of moving objects based on PSP," Opt. Express 25, 32120–32131 (2017).

24. J. L. Flores, J. A. Ferrari, G. G. Torales, R. Legarda-Saenz, and A. Silva, "Color-fringe pattern profilometry using a generalized phase-shifting algorithm," Appl. Opt. 54, 8827–8834 (2015).

25. J. Pan, P. S. Huang, and F.-P. Chiang, "Color phase-shifting technique for three-dimensional shape measurement," Opt. Eng. 45, 13602 (2006).

26. Y. Hu, J. Xi, J. F. Chicharo, W. Cheng, and Z. Yang, "Inverse function analysis method for fringe pattern profilometry," IEEE Trans. Instrum. Meas. 58, 3305–3314 (2009).

27. Y. An, J.-S. Hyun, and S. Zhang, "Pixel-wise absolute phase unwrapping using geometric constraints of structured light system," Opt. Express 24, 18445–18459 (2016).

28. Y. Xing and C. Quan, "Reference-plane-based fast pixel-by-pixel absolute phase retrieval for height measurement," Appl. Opt. 57, 4901–4908 (2018).

29. H. Wang, J. Zhao, J. Zhao, J. Song, Z. Pan, and X. Jiang, "A new rapid-precision position measurement method for a linear motor mover based on a 1-D EPCA," IEEE Trans. Ind. Electron. 65, 7485–7494 (2018).

30. W. S. Hoge, "A subspace identification extension to the phase correlation method [MRI application]," IEEE Trans. Med. Imaging 22, 277–280 (2003).

31. J. Zhao, J. Zhao, H. Wang, J. Song, and F. Dong, "Precision position measurement of linear motors mover based on temporal image correlation," IEEE Trans. Instrum. Meas. pp. 1–10 (2018).
