
New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry

Open Access

Abstract

When phase shifting profilometry (PSP) is employed for 3-D shape measurement, the object must be kept static during the projection and acquisition of the multiple fringe patterns. Errors will occur if the object moves and the projection and capture of the fringe patterns are not fast enough. In this paper, a new approach is proposed to tackle this problem, consisting of two steps. Firstly, the rotation matrix and translation vector describing the movement of the object are estimated using a set of marks placed on the surface of the object. Then the expressions of the fringe patterns under the influence of 2-D object movement are derived and employed to determine the correct phase map, leading to accurate measurement of the profile. Simulations and experiments are presented to verify the effectiveness of the proposed algorithm.

© 2013 Optical Society of America

1. Introduction

In recent years, fringe pattern profilometry (FPP) has attracted intensive research interest as a technique for non-destructive and high-accuracy three-dimensional (3-D) shape measurement [1–5]. Among the various approaches to implementing FPP, phase shifting profilometry (PSP) is one of the most widely used because of its high accuracy and its robustness to ambient light and reflectivity variations. With PSP, multiple (at least three) fringe patterns with a certain phase shift from each other are utilized to probe the object, and the 3-D information is retrieved by processing the reflected fringe patterns acquired by a camera. A fundamental requirement of PSP is that the object must be kept static during the projection and acquisition of the multiple fringe patterns. If the object moves, errors will be introduced into the measurement result. However, in many applications this requirement is difficult to meet. The problem can be remedied by increasing the speed of digital projection and capture, which, however, usually leads to a significant increase in hardware cost. Therefore, it is highly desirable to develop a technique for measuring moving objects with a low-cost digital projector and camera.

In order to reduce the errors caused by object movement, Su et al. [6] and Zhang and Su [7] used Fourier transform profilometry (FTP) to measure moving objects. However, as only one fringe pattern is used to obtain the phase map, the accuracy suffers from the influence of ambient light and the reflectivity variations of the object. Zhang and Yau [8] proposed a modified two-plus-one phase shifting algorithm to address the problem, where two sinusoidal fringe patterns with a 90° phase shift and a uniform flat image are utilized to calculate the phase map. As only the two sinusoidal fringe patterns carry the information of the object profile, the measurement error due to motion is smaller than with traditional multiple-step PSP. However, errors still occur when the object moves during the projection of the two sinusoidal fringe patterns. Hu and He [9] proposed an improved π phase shifting Fourier transform profilometry algorithm to measure an object moving at a constant speed. In their algorithm, only one fringe pattern is projected onto the object. The fringe pattern comprises two regions with a π phase shift between them, and two line-scan cameras are used to capture the deformed fringe pattern in the two regions respectively. In order to find the corresponding points between the two regions, the object must move at a constant velocity in a direction perpendicular to the line-scan direction. Finally, the traditional π phase shifting FTP algorithm is used to reconstruct the object. The system requires two line-scan cameras and hence is costly to implement.

In this paper, a novel approach to reduce the measurement error due to the movement of the object is proposed. The proposed algorithm is based on an analysis of the phase maps of the fringe patterns acquired from the surface of an object subject to two-dimensional (2-D) movement. As a 2-D movement of an object can be modeled by a rotation matrix and a translation vector, the relationship between the phase maps of the fringe patterns can also be described by the same matrix and vector. This relationship is then employed to eliminate the influence of the object movement, thereby achieving accurate 3-D shape measurement.

This paper is organized as follows. Section 2 presents the principle of PSP. In Section 3, the relationship among the phase maps of PSP when the object is subject to a 2-D movement is described. Based on this relationship, a new formulation of the 3-D shape measurement is derived which is immune to the influence of 2-D movement. In Section 4, simulations and experimental results are given to verify the effectiveness of the proposed algorithm. Section 5 concludes this paper.

2. Principle of PSP

A typical structure of the FPP measurement system using PSP is shown in Fig. 1; it consists of a camera, a projector and a reference plane. A set of sinusoidal fringe patterns is projected onto the reference plane and captured by the camera. After removing the reference plane, the same set of fringe patterns is projected onto the object surface and also acquired by the camera [10]. The fringe patterns are phase-modulated by the height distribution of the object, and the height information is contained in the phase difference between the object and the reference plane.

Fig. 1. The structure of the PSP system.

Considering the use of N-step PSP, the sinusoidal fringe patterns acquired from the reference plane and object can be expressed respectively as follows:

$$s_n(x,y)=a+b\cos\left(\phi(x,y)+\frac{2\pi(n-1)}{N}\right)\tag{1}$$
and
$$d_n(x,y)=a+b\cos\left(\phi(x,y)+\Phi(x,y)+\frac{2\pi(n-1)}{N}\right)\tag{2}$$
where n = 1, 2, ..., N; sn(x,y) is the n-th fringe pattern on the reference plane; dn(x,y) is the n-th fringe pattern on the object; a is the ambient light intensity; b is the amplitude of the intensity of the sinusoidal fringe patterns; ϕ(x,y) is the phase value on the reference plane; and Φ(x,y) is the phase difference between the reference plane and the object, which is caused by the height of the object.

The phase maps of the reference plane and the object can be calculated by

$$\phi_r(x,y)=\phi(x,y)=\arctan\frac{\sum_{n=1}^{N}s_n(x,y)\sin\left(2\pi(n-1)/N\right)}{\sum_{n=1}^{N}s_n(x,y)\cos\left(2\pi(n-1)/N\right)}\tag{3}$$
and
$$\phi_o(x,y)=\phi(x,y)+\Phi(x,y)=\arctan\frac{\sum_{n=1}^{N}d_n(x,y)\sin\left(2\pi(n-1)/N\right)}{\sum_{n=1}^{N}d_n(x,y)\cos\left(2\pi(n-1)/N\right)}\tag{4}$$
where ϕr(x,y) is the phase value on the reference plane; ϕo(x,y) is the phase value on the object surface. The function arctan() is defined as the four-quadrant inverse tangent.
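As an illustration, the following is a minimal numpy sketch of Eqs. (3) and (4); the function name and the convention that the N patterns are passed as a list of images are our own assumptions.

```python
import numpy as np

def wrapped_phase(patterns):
    """Wrapped phase map from N fringe patterns with phase step 2*pi/N,
    following Eqs. (3)/(4): a four-quadrant arctangent of weighted sums."""
    N = len(patterns)
    deltas = 2 * np.pi * np.arange(N) / N              # 2*pi*(n-1)/N
    num = sum(p * np.sin(d) for p, d in zip(patterns, deltas))
    den = sum(p * np.cos(d) for p, d in zip(patterns, deltas))
    return np.arctan2(num, den)                        # wrapped into (-pi, pi]
```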

In Eqs. (3) and (4), the phase values calculated by arctan() are wrapped into the range −π to π and hence are discontinuous. In order to calculate the height of the object, a phase unwrapping algorithm is applied to remove the discontinuities of the wrapped phase values. Assuming that Φr(x,y) and Φo(x,y) are the unwrapped phases of ϕr(x,y) and ϕo(x,y) respectively, the phase difference between the reference plane and the object can be calculated by

$$\Phi(x,y)=\Phi_o(x,y)-\Phi_r(x,y)\tag{5}$$

The height of the object can then be calculated by

$$h(x,y)=\frac{l_0\,\Phi(x,y)}{\Phi(x,y)-2\pi f_0 d_0}\tag{6}$$
where h(x,y) is the object height; l0 is the distance between the camera and the reference plane; f0 is the spatial frequency of the fringe patterns; d0 is the distance between the camera and projector.
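A minimal sketch of Eqs. (5) and (6), assuming a simple row-wise unwrapper suffices (real systems often need a more robust 2-D unwrapping algorithm) and that l0, f0 and d0 are known from the system setup:

```python
import numpy as np

def height_map(phi_ref, phi_obj, l0, f0, d0):
    """Height distribution from the wrapped phase maps of the reference
    plane and the object, via Eqs. (5) and (6)."""
    Phi = np.unwrap(phi_obj, axis=1) - np.unwrap(phi_ref, axis=1)  # Eq. (5)
    return l0 * Phi / (Phi - 2 * np.pi * f0 * d0)                  # Eq. (6)
```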

It should be pointed out that x and y in Eq. (6) are in pixels. In order to obtain the relationship between pixels in the image and real-world coordinates, the system needs to be calibrated using a calibration board [11]. The calibration board is flat and marked with a set of points whose positions are precisely known a priori. By placing the board in different positions, a set of images can be obtained by the camera, which then yields the relationship between the image coordinates and the real world.
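A hedged sketch of this calibration step using OpenCV's implementation of Zhang's method [11]; the 9 × 6 board size and the image file names are assumptions for illustration.

```python
import cv2
import numpy as np

# Object points of an assumed 9x6 planar target (z = 0), in board units.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["board_%02d.png" % i for i in range(1, 11)]:  # hypothetical files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K (intrinsics) and dist relate pixel coordinates to real-world coordinates.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```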

The effectiveness of the conventional PSP algorithm presented above depends on the validity of Eqs. (3) and (4). In order for Eqs. (3) and (4) to hold, the phase values of sn(x,y) and dn(x,y) must be equally spaced by 2π/N. This requires not only accurate creation and projection of the fringe patterns, but also keeping the object static during the projection and capture of the multiple fringe patterns for PSP. Obviously, when the object moves, Eqs. (3) and (4) will be violated, and errors will occur in the measurement.

3. Derivation of the proposed algorithm

In order to calculate the phase map of the moving object, the first task is to describe the movement of the object. The surface shape of an object is still described by the height distribution h(x,y), and it is subject to a 2-D movement on the xy plane. Due to the movement, a point (x,y) on the object surface will be moved to the point (u,v) following the relationship below

$$\begin{bmatrix}x\\y\end{bmatrix}=R\begin{bmatrix}u\\v\end{bmatrix}+T,\qquad \begin{bmatrix}u\\v\end{bmatrix}=\bar{R}\begin{bmatrix}x\\y\end{bmatrix}+\bar{T}.\tag{7}$$
where $R$, $\bar{R}$, $T$ and $\bar{T}$ are the rotation matrices and translation vectors describing the relationship between (x,y) and (u,v), and they are given by

$$R=\begin{bmatrix}r_{11}&r_{12}\\r_{21}&r_{22}\end{bmatrix},\qquad T=\begin{bmatrix}t_1\\t_2\end{bmatrix},\tag{8}$$
$$\bar{R}=\begin{bmatrix}\bar{r}_{11}&\bar{r}_{12}\\\bar{r}_{21}&\bar{r}_{22}\end{bmatrix},\qquad \bar{T}=\begin{bmatrix}\bar{t}_1\\\bar{t}_2\end{bmatrix}.\tag{9}$$

The relationship between (R,T) and (R¯,T¯) can be expressed as

$$\bar{R}=R^{-1},\qquad \bar{T}=-R^{-1}T.\tag{10}$$
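Eq. (10) follows from solving Eq. (7) for (u,v). A minimal numpy sketch of this inversion (the function name is ours):

```python
import numpy as np

def invert_rigid_2d(R, T):
    """Eq. (10): given (x,y) = R(u,v) + T, return (R_bar, T_bar) such that
    (u,v) = R_bar(x,y) + T_bar."""
    R_bar = np.linalg.inv(R)   # for a pure rotation, R.T would also do
    return R_bar, -R_bar @ T
```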

As the shape of the object surface does not change, the height distribution of the object surface after movement becomes

$$\tilde{h}_{xy}(u,v)=h_{xy}(x,y)\tag{11}$$
where the subscript xy denotes the coordinate system in which the functions are defined. From Eq. (7), we have
$$\tilde{h}_{xy}(u,v)=h_{xy}(x,y)=h_{xy}(f(u,v),g(u,v))\tag{12}$$
where

$$f(u,v)=r_{11}u+r_{12}v+t_1,\qquad g(u,v)=r_{21}u+r_{22}v+t_2.\tag{13}$$

Without loss of generality, (u,v) can be replaced by (x,y), yielding the following:

$$\tilde{h}_{xy}(x,y)=h_{xy}(f(x,y),g(x,y))\tag{14}$$

As mentioned above, the movement of the object during the measurement changes the phase maps of the reflected fringe patterns, which results in unequally spaced phase shifts among the fringe patterns. This is the fundamental cause of the motion-induced errors. To address this problem, the relationship between the movement of the object and the phase maps is analyzed below.

For the N-step PSP, the fringe patterns on the reference plane and object without movement are described in Eqs. (1) and (2), respectively. After the movement of the object, the fringe patterns of the object become the following:

$$\tilde{d}_{xy}^{\,n}(x,y)=a+b\cos\left(\phi(x,y)+\tilde{\Phi}(x,y)+\frac{2\pi(n-1)}{N}\right)\tag{15}$$
where $\tilde{\Phi}(x,y)$ is the phase difference at point (x,y) after the movement. Eq. (6) establishes a one-to-one correspondence between the height distribution and the phase difference, and hence a relationship similar to Eq. (14) also holds for the phase differences before and after the movement:

$$\tilde{\Phi}(x,y)=\Phi(f(x,y),g(x,y))\tag{16}$$

Substituting Eq. (16) into Eq. (15) yields the following:

$$\tilde{d}_{xy}^{\,n}(x,y)=a+b\cos\left(\phi(x,y)+\Phi(f(x,y),g(x,y))+\frac{2\pi(n-1)}{N}\right)\tag{17}$$

Note that Eq. (17) is defined in the xy coordinate system. Now let us consider Eq. (17) in a new coordinate system ξη, which is related to the xy system as follows:

$$\begin{bmatrix}x\\y\end{bmatrix}=\bar{R}\begin{bmatrix}\xi\\\eta\end{bmatrix}+\bar{T}\tag{18}$$

In the ξη coordinate system, Eq. (17) becomes

$$\tilde{d}_{\xi\eta}^{\,n}(\xi,\eta)=\tilde{d}_{xy}^{\,n}(\bar{f}(\xi,\eta),\bar{g}(\xi,\eta))=a+b\cos\left(\phi(\bar{f}(\xi,\eta),\bar{g}(\xi,\eta))+\Phi(\xi,\eta)+\frac{2\pi(n-1)}{N}\right)\tag{19}$$
where, from Eq. (9) we have

$$\bar{f}(\xi,\eta)=\bar{r}_{11}\xi+\bar{r}_{12}\eta+\bar{t}_1,\qquad \bar{g}(\xi,\eta)=\bar{r}_{21}\xi+\bar{r}_{22}\eta+\bar{t}_2.\tag{20}$$

Equation (19) is the expression of the fringe patterns in the ξη coordinate system. Obviously, when (R¯,T¯) are available, d˜ξηn(ξ,η) can be obtained. As Eq. (19) is valid for arbitrary two-dimensional movement, it can be rewritten in a general form as follows:

$$\tilde{d}^{\,n}(x,y)=\tilde{d}_{xy}^{\,n}(\bar{f}(x,y),\bar{g}(x,y))=a+b\cos\left(\phi(\bar{f}(x,y),\bar{g}(x,y))+\Phi(x,y)+\frac{2\pi(n-1)}{N}\right)\tag{21}$$
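In practice, the resampling in Eq. (21) amounts to warping the captured pattern by the affine map of Eq. (20). A minimal sketch, assuming x indexes columns, y indexes rows, and bilinear interpolation is acceptable:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def motion_correct(image, R_bar, T_bar):
    """Eq. (21): resample a captured pattern at (f_bar(x,y), g_bar(x,y))."""
    rows, cols = image.shape
    x, y = np.meshgrid(np.arange(cols), np.arange(rows))
    f = R_bar[0, 0] * x + R_bar[0, 1] * y + T_bar[0]   # Eq. (20)
    g = R_bar[1, 0] * x + R_bar[1, 1] * y + T_bar[1]
    # map_coordinates takes sample locations in (row, col) = (g, f) order
    return map_coordinates(image, [g, f], order=1, mode='nearest')
```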

The above can be extended to N-step PSP. Due to the movement of the object, the fringe patterns on the object can be obtained as follows:

$$\begin{cases}\tilde{d}^{\,1}(x,y)=a+b\cos\left(\phi(x,y)+\Phi(x,y)\right)\\[2pt]\tilde{d}^{\,2}(x,y)=a+b\cos\left(\phi(\bar{f}_2(x,y),\bar{g}_2(x,y))+\Phi(x,y)+2\pi/N\right)\\[2pt]\quad\vdots\\[2pt]\tilde{d}^{\,N}(x,y)=a+b\cos\left(\phi(\bar{f}_N(x,y),\bar{g}_N(x,y))+\Phi(x,y)+2\pi(N-1)/N\right)\end{cases}\tag{22}$$
where $\tilde{d}^{\,1}(x,y)$ is the first fringe pattern, captured before any movement, and $\tilde{d}^{\,n}(x,y)$, n = 2, 3, ..., N, can be obtained from the captured fringe patterns $\tilde{d}_{xy}^{\,n}(x,y)$ by Eq. (21). In other words, the left-hand side of Eq. (22) is available. The phase map of the reference plane ϕ(x,y) can be calculated by Eq. (3), and $(\bar{f}_n(x,y),\bar{g}_n(x,y))$, n = 2, 3, ..., N, are known for given rotation matrices and translation vectors. Solving Eq. (22), the wrapped Φ(x,y) is obtained as follows:
$$\Phi(x,y)=\arctan\frac{D_A-D_B}{D_C-D_D}\tag{23}$$
where
$$D_A=\sum_{n=1}^{N}\tilde{d}^{\,n}(x,y)\cos\frac{2\pi(n-1)}{N}\cdot\sum_{n=1}^{N}\cos\left(\phi(\bar{f}_n(x,y),\bar{g}_n(x,y))+\frac{2\pi(n-1)}{N}\right)\sin\frac{2\pi(n-1)}{N},\tag{24}$$
$$D_B=\sum_{n=1}^{N}\tilde{d}^{\,n}(x,y)\sin\frac{2\pi(n-1)}{N}\cdot\sum_{n=1}^{N}\cos\left(\phi(\bar{f}_n(x,y),\bar{g}_n(x,y))+\frac{2\pi(n-1)}{N}\right)\cos\frac{2\pi(n-1)}{N},\tag{25}$$
$$D_C=\sum_{n=1}^{N}\tilde{d}^{\,n}(x,y)\cos\frac{2\pi(n-1)}{N}\cdot\sum_{n=1}^{N}\sin\left(\phi(\bar{f}_n(x,y),\bar{g}_n(x,y))+\frac{2\pi(n-1)}{N}\right)\sin\frac{2\pi(n-1)}{N},\tag{26}$$
$$D_D=\sum_{n=1}^{N}\tilde{d}^{\,n}(x,y)\sin\frac{2\pi(n-1)}{N}\cdot\sum_{n=1}^{N}\sin\left(\phi(\bar{f}_n(x,y),\bar{g}_n(x,y))+\frac{2\pi(n-1)}{N}\right)\cos\frac{2\pi(n-1)}{N}.\tag{27}$$
In Eqs. (24)–(27), when n = 1,

$$\bar{f}_1(x,y)=x,\qquad \bar{g}_1(x,y)=y.\tag{28}$$
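A minimal sketch of Eqs. (23)–(28), consistent with the product form of the sums as reconstructed above; the caller supplies the motion-corrected patterns of Eq. (21) and the reference phase resampled at each $(\bar{f}_n,\bar{g}_n)$:

```python
import numpy as np

def corrected_phase(d_tilde, phi_warped):
    """Wrapped Phi(x,y) of Eq. (23). d_tilde[n] is the motion-corrected
    pattern of Eq. (21); phi_warped[n] is the reference phase resampled at
    (f_bar_n, g_bar_n), with phi_warped[0] = phi(x,y) itself (Eq. (28))."""
    N = len(d_tilde)
    deltas = 2 * np.pi * np.arange(N) / N
    # the individual sums appearing in Eqs. (24)-(27)
    Sc = sum(d * np.cos(k) for d, k in zip(d_tilde, deltas))
    Ss = sum(d * np.sin(k) for d, k in zip(d_tilde, deltas))
    Ac = sum(np.cos(p + k) * np.cos(k) for p, k in zip(phi_warped, deltas))
    As = sum(np.cos(p + k) * np.sin(k) for p, k in zip(phi_warped, deltas))
    Bc = sum(np.sin(p + k) * np.cos(k) for p, k in zip(phi_warped, deltas))
    Bs = sum(np.sin(p + k) * np.sin(k) for p, k in zip(phi_warped, deltas))
    DA, DB = Sc * As, Ss * Ac            # Eqs. (24), (25)
    DC, DD = Sc * Bs, Ss * Bc            # Eqs. (26), (27)
    return np.arctan2(DA - DB, DC - DD)  # Eq. (23)
```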

As the rotation matrices and translation vectors are assumed known in the above, the remaining question is how to determine them. Various approaches have been proposed to solve this problem [12–14]. As only two-dimensional movement is considered and the object does not deform, the singular value decomposition (SVD) method [13] is chosen in this paper and is described as follows.

Assume that there are two sets of corresponding points and their coordinates are

$$P_j=\begin{bmatrix}x_j\\y_j\end{bmatrix},\qquad Q_j=\begin{bmatrix}u_j\\v_j\end{bmatrix},\qquad j=1,2,\ldots,J.\tag{29}$$
where Pj are the points on the object before movement and Qj are the corresponding points on the object after movement; J is the number of the corresponding points. With Eq. (7), Pj and Qj are related by the following:
$$Q_j=\bar{R}P_j+\bar{T}+V_j\tag{30}$$
where Vj is a noise vector. In order to determine the rotation matrix R¯ and translation vector T¯, we define the following:

$$\bar{P}=\frac{1}{J}\sum_{j=1}^{J}P_j,\qquad P_{cj}=P_j-\bar{P},\tag{31}$$
$$\bar{Q}=\frac{1}{J}\sum_{j=1}^{J}Q_j,\qquad Q_{cj}=Q_j-\bar{Q}.\tag{32}$$

Then R¯ and T¯ are obtained by minimizing the square error below:

$$\Sigma^2=\sum_{j=1}^{J}\left\|Q_j-\hat{\bar{R}}P_j-\hat{\bar{T}}\right\|^2=\sum_{j=1}^{J}\left\|Q_{cj}-\hat{\bar{R}}P_{cj}\right\|^2=\sum_{j=1}^{J}\left(Q_{cj}^{T}Q_{cj}+P_{cj}^{T}P_{cj}-2Q_{cj}^{T}\hat{\bar{R}}P_{cj}\right)\tag{33}$$
Equation (33) is minimized when its last term is maximized, which is equivalent to maximizing the trace of $\hat{\bar{R}}H$, where H is a correlation matrix defined by

$$H=\sum_{j=1}^{J}P_{cj}Q_{cj}^{T}\tag{34}$$

If the singular value decomposition of H is $H=U\Lambda V^{T}$, the optimal rotation matrix $\hat{\bar{R}}$ that maximizes the trace is the following:

$$\hat{\bar{R}}=VU^{T}\tag{35}$$
and the translation vector is determined as

$$\hat{\bar{T}}=\bar{Q}-\hat{\bar{R}}\bar{P}\tag{36}$$
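A minimal sketch of Eqs. (29)–(36); the reflection guard is the standard safeguard from [13], not spelled out in the text above:

```python
import numpy as np

def estimate_rigid_2d(P, Q):
    """Least-squares (R_bar, T_bar) from corresponding point sets
    P, Q (J x 2 arrays), via the SVD method of [13], Eqs. (29)-(36)."""
    P_mean, Q_mean = P.mean(axis=0), Q.mean(axis=0)   # Eqs. (31), (32)
    Pc, Qc = P - P_mean, Q - Q_mean
    H = Pc.T @ Qc                                     # Eq. (34)
    U, _, Vt = np.linalg.svd(H)
    R_hat = Vt.T @ U.T                                # Eq. (35)
    if np.linalg.det(R_hat) < 0:   # guard against a reflection solution [13]
        Vt[-1, :] *= -1
        R_hat = Vt.T @ U.T
    T_hat = Q_mean - R_hat @ P_mean                   # Eq. (36)
    return R_hat, T_hat
```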

Based on the above, the proposed technique can be implemented by the steps below (a minimal end-to-end sketch follows the list):

  • Step 1: Based on the N-step PSP, N fringe patterns are projected onto the object surface, and d˜xyn(x,y)n=1,2,3,...,N are captured;
  • Step 2: Determine the rotation matrix and translation vector for the movement of object when each of the fringe patterns is captured;
  • Step 3: With the rotation matrix and translation vector, Eq. (21) is used to calculate d˜n(x,y);
  • Step 4: Determine the phase difference Φ(x,y) with Eq. (23);
  • Step 5: Work out the 3-D shape of object by Eq. (6).
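A sketch of a driver for Steps 1–5, assuming the helper functions sketched earlier (wrapped_phase, estimate_rigid_2d, motion_correct, corrected_phase) and that the mark centers marks[n] (J × 2 arrays of corresponding points) have already been extracted from each pattern:

```python
import numpy as np

def measure_moving_object(captured, s_ref, marks, l0, f0, d0):
    """captured[n]: object patterns under motion; s_ref[n]: reference-plane
    patterns. Row-wise unwrapping is a simplifying assumption."""
    phi_ref = np.unwrap(wrapped_phase(s_ref), axis=1)               # Eq. (3)
    d_tilde, phi_w = [captured[0]], [phi_ref]                # n = 1, Eq. (28)
    for n in range(1, len(captured)):
        R_bar, T_bar = estimate_rigid_2d(marks[0], marks[n])        # Step 2
        d_tilde.append(motion_correct(captured[n], R_bar, T_bar))   # Step 3
        phi_w.append(motion_correct(phi_ref, R_bar, T_bar))
    Phi = np.unwrap(corrected_phase(d_tilde, phi_w), axis=1)        # Step 4
    return l0 * Phi / (Phi - 2 * np.pi * f0 * d0)          # Step 5, Eq. (6)
```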

4. Simulations and experiments

4.1 Simulations

Simulations are carried out to verify the performance of the proposed algorithm. In the simulations, the three-step PSP algorithm is used and the object is a hemisphere. When the object is static, the reconstructed results are shown in Fig. 2. Next we carry out simulations in which the object moves at the second and third steps of PSP. Note that in the simulations the rotation matrices and translation vectors are known a priori, and hence the proposed technique is applied directly to reconstruct the object.

Fig. 2. Hemisphere simulation. (a) Fringe pattern of the hemisphere for the first step of PSP; (b) reconstructed result (mesh display); (c) front view of Fig. 2(b); (d) cross section along the dashed line in Fig. 2(c), where x = 300.

In the first simulation the hemisphere moves toward the bottom right, as indicated by the arrow in Fig. 3(b); the movement distance is 10 mm at the second step and 15 mm at the third step. Figures 3(a) and 3(b) are the results of applying the traditional PSP directly. Figure 3(c) is the cross section of the object along the dashed line in Fig. 3(b). There are obvious errors in the result.

Fig. 3. Reconstructed results of the traditional PSP when the object has oblique movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 3(a); (c) cross section along the dashed line in Fig. 3(b), where x = 300.

When the proposed algorithm is applied, the object is reconstructed successfully, as shown in Figs. 4(a) and 4(b). Figure 4(c) is the cross section of the object along the dashed line in Fig. 4(b). With the proposed algorithm the reconstructed surface is smooth.

Fig. 4. Reconstructed results of the proposed algorithm when the object has oblique movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 4(a); (c) cross section along the dashed line in Fig. 4(b), where x = 300.

Now we consider the case where the object is rotated clockwise around its top-left corner in the x-y plane, as shown by the arrow in Fig. 5(b). The rotation angle is 0.213 rad at the second step and 0.311 rad at the third step of PSP. Figures 5(a) and 5(b) are the reconstructed results when the traditional PSP is used. Figure 5(c) shows the cross section along the dashed line in Fig. 5(b). Clearly, the movement causes distortion in the traditional PSP.

Fig. 5. Reconstructed results of the traditional PSP when the object has rotational movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 5(a); (c) cross section along the dashed line in Fig. 5(b), where x = 300.

Figures 6(a) and 6(b) are the reconstructed results obtained by the proposed algorithm when the object is subject to the above rotation. The smooth surface indicates that the object is well reconstructed.

Fig. 6. Reconstructed results of the proposed algorithm when the object has rotational movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 6(a); (c) cross section along the dashed line in Fig. 6(b), where x = 300.

4.2 Experiments

In the experiments, a plastic mask is used as the object. The size of the mask is approximately 250 mm × 250 mm. In order to calculate the rotation matrix and translation vector, we placed three marks on the mask (shown in Fig. 7) to indicate the corresponding points across the multiple fringe patterns. To achieve high accuracy in determining the rotation matrix and translation vector, the positions of the corresponding points must be accurately extracted. To this end, the marks are circular with a diameter of 15 mm, and the centers of these circles are employed as the corresponding points in Eq. (29). When the multiple fringe patterns are acquired, we first extract a set of points on the edge of each circle using the approach presented in [15], whose accuracy is sub-pixel, i.e., <0.5 mm. Then least-squares curve fitting is employed to extract the centers of the circles, with an accuracy that should also be <0.5 mm. Such accuracy enables an accurate estimation of the rotation matrix and translation vector using the algorithm described in Eqs. (29)–(36).
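One common least-squares circle fit that could serve for this center-extraction step is the algebraic (Kasa) fit sketched below; the subpixel edge points are assumed to come from the method of [15], and the function name is ours.

```python
import numpy as np

def fit_circle_center(xs, ys):
    """Algebraic least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F); center = (-D/2, -E/2)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return -D / 2.0, -E / 2.0
```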

Fig. 7. Object with marks.

In the first experiment, the conventional three-step PSP algorithm is used. When the object is static, the reconstructed results are shown in Fig. 8. Figure 8(a) shows the captured fringe pattern for the first step of PSP. Figures 8(b) and 8(c) are the reconstructed results of the mask. Figure 8(d) is the cross section along the dashed line in Fig. 8(c), where x = 115. The results show that a good reconstruction is obtained.

Fig. 8. Reconstructed results of the traditional PSP when the object is static. (a) Fringe pattern for the first step of PSP; (b) reconstructed result (mesh display); (c) front view of Fig. 8(b); (d) cross section along the dashed line in Fig. 8(c), where x = 115.

In the second experiment, the object is moved in the direction of the arrow shown in Fig. 9(b) at the second and third steps of PSP. The movement distance is 11 mm for the second step and 14 mm for the third step. Figures 9(a) and 9(b) show the measurement results of the traditional PSP algorithm: Fig. 9(a) is the mesh display of the reconstructed result and Fig. 9(b) is its front view. Figure 9(c) shows the cross section along the dashed line in Fig. 9(b), where x = 115. The errors in the reconstructed result are obvious and significant.

Fig. 9. Reconstructed results of the traditional PSP when the object has oblique movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 9(a); (c) cross section along the dashed line in Fig. 9(b), where x = 115.

The proposed algorithm is then applied to the case where the object is moved by the same amount as above. The reconstructed results in Fig. 10 show that a significant improvement is achieved.

Fig. 10. Reconstructed results of the proposed algorithm when the object has oblique movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 10(a); (c) cross section along the dashed line in Fig. 10(b), where x = 115.

In the final experiment, the object is rotated clockwise around its top-left corner (as shown in Fig. 11(b)) at the second and third steps of PSP. The rotation angle is 0.0387 rad at the second step and 0.0446 rad at the third step. The results with the traditional three-step PSP are shown in Figs. 11(a)–11(c), which are significantly distorted in contrast to the original mask.

Fig. 11. Reconstructed results of the traditional PSP when the object has rotational movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 11(a); (c) cross section along the dashed line in Fig. 11(b), where x = 120.

The results with the proposed approach are shown in Figs. 12(a) and 12(b). The surface of the mask is well reconstructed and the cross section of the dash line in Fig. 12(b) is also smooth. The results are much better than those in Fig. 11.

Fig. 12. Reconstructed results of the proposed algorithm when the object has rotational movement. (a) Reconstructed result (mesh display); (b) front view of Fig. 12(a); (c) cross section along the dashed line in Fig. 12(b), where x = 120.

In order to evaluate the improvement of the proposed technique over the traditional PSP, we also calculated the RMS (root mean square) measurement error for the experimental results on the mask presented above. As the true shape of the mask is not known, the measurement result in Fig. 8 (i.e., with the mask kept static) is used as the reference. The RMS errors with respect to Fig. 8 for the cases considered above are listed in Table 1. Without the proposed algorithm, the RMS error is 57.27 mm for the oblique movement and 68.37 mm for the rotation. When the proposed technique is employed, the RMS errors become 0.081 mm and 0.076 mm respectively, indicating a significant reduction in the RMS error and thus a significant improvement in measurement accuracy.

Table 1. The RMS measurement error of the mask

Movement            Traditional PSP    Proposed algorithm
Oblique movement    57.27 mm           0.081 mm
Rotation            68.37 mm           0.076 mm
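The RMS figure itself is simply the root mean square of the height deviation from the static reference, e.g.:

```python
import numpy as np

def rms_error(h_test, h_ref):
    """RMS deviation between a reconstruction and the static reference."""
    return float(np.sqrt(np.mean((h_test - h_ref) ** 2)))
```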

5. Conclusion

In this paper, a new approach is presented to achieve accurate 3-D profile measurement of a moving object using PSP-based FPP. The proposed algorithm inherits the robustness of PSP and enables the accurate measurement of moving objects at low cost. The proposed technique consists of two steps. Firstly, the rotation matrix and the translation vector describing the two-dimensional movement of the object are estimated from the multiple fringe patterns. Then, the expressions of the fringe patterns acquired from the object subject to a 2-D movement are derived. Based on these expressions the phase maps of the fringe patterns of the moving object can be obtained, which are used to yield an accurate 3-D shape of the object. The performance of the proposed algorithm has been verified by simulations and experiments.

References and links

1. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010).

2. Y. Ding, J. Xi, Y. Yu, and J. Chicharo, “Recovering the absolute phase maps of two fringe patterns with selected frequencies,” Opt. Lett. 36(13), 2518–2520 (2011).

3. S. Zhang, D. Van Der Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18(9), 9684–9689 (2010).

4. Y. Hu, J. Xi, J. Chicharo, and Z. Yang, “Blind color isolation for color-channel-based fringe pattern profilometry using digital projection,” J. Opt. Soc. Am. A 24(8), 2372–2382 (2007).

5. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Gamma model and its analysis for phase measuring profilometry,” J. Opt. Soc. Am. A 27(3), 553–562 (2010).

6. X. Su, W. Chen, Q. Zhang, and Y. Chao, “Dynamic 3-D shape measurement method based on FTP,” Opt. Lasers Eng. 36(1), 49–64 (2001).

7. Q. Zhang and X. Su, “High-speed optical measurement for the drumhead vibration,” Opt. Express 13(8), 3110–3116 (2005).

8. S. Zhang and S.-T. Yau, “High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm,” Opt. Eng. 46(11), 113603 (2007).

9. E. Hu and Y. He, “Surface profile measurement of moving objects by using an improved π phase-shifting Fourier transform profilometry,” Opt. Lasers Eng. 47(1), 57–61 (2009).

10. Y. Hu, J. Xi, J. F. Chicharo, W. Cheng, and Z. Yang, “Inverse function analysis method for fringe pattern profilometry,” IEEE Trans. Instrum. Meas. 58(9), 3305–3314 (2009).

11. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

12. P. Meer, D. Mintz, A. Rosenfeld, and D. Y. Kim, “Robust regression methods for computer vision: a review,” Int. J. Comput. Vis. 6(1), 59–70 (1991).

13. K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-squares fitting of two 3-D point sets,” IEEE Trans. Pattern Anal. Mach. Intell. 9(5), 698–700 (1987).

14. B. K. P. Horn, “Closed-form solution of absolute orientation using unit quaternions,” J. Opt. Soc. Am. A 4(4), 629–642 (1987).

15. A. Trujillo-Pino, K. Krissian, M. Alemán-Flores, and D. Santana-Cedrés, “Accurate subpixel edge location based on partial area effect,” Image Vis. Comput. 31(1), 72–90 (2013).
