In this work, we propose a novel technique to retrieve the 3D shape of dynamic objects by the simultaneous projection of a fringe pattern and a homogeneous white-light pattern, both coded in a single RGB image. The first is used to retrieve the phase map by an iterative least-squares method; the second is used to match object pixels in consecutive images acquired at various object positions. The proposed method successfully accomplishes the simultaneous projection of two different patterns: one extracts the object's information while the other retrieves the phase map. Experimental results demonstrate the feasibility of the proposed scheme.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
2D and 3D machine vision systems may be found in several industrial processes for object positioning and location [1–3]. Current machine vision tasks include, for example, sorting parts, verifying hole positions and dimensions, and checking the overall shape and fit according to design requirements, just to name a few.
With technological advancements in patterned-light projection and digital imaging, 3D optical metrology based on digital fringe projection and phase-shifting methods has improved considerably in the last decade. One of the major research challenges has been to improve the ability to work on a wide range of surfaces with variations in surface features, specifically in the bidirectional reflectance distribution function (BRDF). Improvements are also reported in automated profile reconstruction and high-speed 3D shape measurements [2,4]. One of the recent major challenges has been to measure moving objects [5–7].
A number of optical techniques for three-dimensional (3D) surface measurement of moving objects have been developed. Normally, these techniques can be categorized as either multiple-shot or single-shot (like Fourier transform profilometry, FTP). For moving objects, single-shot techniques are preferred to acquire the 3D surface. However, this methodology does not provide sufficient measurement accuracy, mainly because FTP is typically very sensitive to noise and surface texture variation.
On the other hand, for multiple-shot methods it is generally accepted that at least three phase-shifted fringe patterns are required to retrieve the phase with phase-shifting algorithms. Although ultra-fast algorithms have been developed, in the case of a moving object they require additional information to estimate the lateral displacement (2D movement) of the object. Several approaches have been proposed along these lines [12,13]. K. Peng and associates suggest the use of orthogonal two-frequency fringe patterns: the phase-shifting direction is along the direction of motion for the high-frequency pattern, while the low-frequency fringes are perpendicular to the direction of motion; this arrangement may also be reversed. However, the main idea remains the same: the first pattern is used for phase retrieval, and the second is incorporated to estimate the lateral displacement of the moving object with a pixel-matching method. Some of these pixel-matching approaches require the modulation distribution or the phase distribution of each frame, obtained from the high-frequency patterns with a Fourier filter. This procedure can be computationally expensive on CPU systems compared with our proposal, which only requires a sequence of pictures extracted from the blue channel of each RGB frame. Therefore, the proposed pixel-matching procedure offers the advantages of simplicity and low computational cost, with potential for real applications. Other proposals to estimate the lateral displacement exist, for example utilizing markers for pixel matching or other strategies [1,15].
One must first acquire a set of fringe patterns for phase retrieval from fringe projection. They are described mathematically as

I_k(x, y) = A(x, y) + B(x, y) cos[φ(x, y) + δ_k],   k = 1, 2, …, N,   (1)

where A(x, y) is the background intensity, B(x, y) is the fringe modulation, φ(x, y) is the phase to be retrieved, and δ_k is the phase shift of the k-th pattern.
In general, to calculate the phase at a pixel location (x, y), the Fourier series coefficients of the intensity signal are evaluated first, and the phase is then obtained from them through Eqs. (2) to (4).
Expression (4) assumes that the phase steps are known. In [12,14] the authors assume that the phase shifts are evenly spaced in the interval [0, 2π], that is, δ_k = 2π(k − 1)/N.
In dynamic 3D profiling, however, this requirement is often difficult to meet exactly because the object speed is not known or not necessarily constant. Additionally, the algorithms used to determine the phase steps from the experimentally obtained intensity patterns incorporate some uncertainty, resulting in error (see, e.g., [16,17]). In practical cases, the actual (measured) phase steps will not coincide with their nominal values and will not be evenly spaced. Something quite similar happens in the procedures proposed in [1,13,15] to retrieve the phase: they are based on a five-step process referred to as Stoilov's algorithm. This is a tunable algorithm, i.e., it does not require knowledge of the phase steps [18–20]. Its main disadvantage is its sensitivity to phase-shift error. To eliminate this inconvenience, Yang Li et al. proposed a three-dimensional on-line measurement method based on a five unequal-step phase-shifting algorithm.
In addition to the phase-shift error, another noise source in 3D measurement using fringe projection arises from non-sinusoidal waveform patterns. Equation (4) indicates that (p + 2)-step phase-shifting techniques have to be used in order to retrieve the phase accurately from fringe patterns with harmonics up to the p-th order. However, in the case of random phase shifts and high harmonics, that algorithm cannot be used and the number of steps must be increased to (2p + 1).
In this work, we propose a new technique to retrieve the 3D shape of a dynamic object by projecting a color image onto its surface while the object is transported in straight-line motion with a uniform (linear) velocity. The RGB image consists of a sinusoidal fringe pattern encoded in the red channel and a homogeneous background (255 gray level) encoded in the blue channel. The latter is used to track the object, and the former is used to estimate the depth of the object by employing the advanced iterative algorithm (AIA).
This paper is organized as follows. In Section 2, we describe the overall procedure to extract the 3D shape of a dynamic object recorded in an RGB video. In Sections 3 and 4, we show the principal results obtained from numerical simulations and experiments. Finally, Section 5 presents the main conclusions.
2. Description of the method
The online 3D measurement system, shown in Fig. 1, is based on sinusoidal fringe projection. It consists of a digital light projector (DLP), which is used to project an RGB software-generated fringe pattern, and a high-resolution charge-coupled device (CCD) camera, which captures a video in which the fringe pattern is modulated by the object's surface. The object is positioned on a platform that moves along the positive x-axis with an approximately constant velocity. Moreover, the platform may be replaced by a conveyor belt, so the proposed method can be used in industrial applications as well. The procedure for phase retrieval consists of two steps. In the first, a sinusoidal fringe pattern without an assigned phase-shifting value and a white image (255 gray level) are encoded into the R and B channels of a color image; the DLP projector then projects this color image onto the surface of the object, and the CCD camera acquires a sequence of images.
In the second step, we retrieve two images (R and B channels) from each of the acquired frames. The first one is used to estimate the depth of the object tested; the second one is used to track object position as it moves along the x-direction. A detailed description of the second step is developed in the rest of this Section.
2.1 Phase retrieval
A static sinusoidal grating pattern is projected onto the test object, which moves along the positive x-direction with a more or less constant velocity. Thus, the CCD camera acquires a signal modulated by the height of the object above its platform. The intensity of the R channel of the N color images acquired as a video sequence may be expressed as in Eq. (5).
Segmented patterns are extracted from the video as the test object moves within a duty cycle of the sinusoidal pattern, around the center of the CCD camera's field of view. Assuming that the background intensity and the fringe contrast are spatially uniform across the image, and that the object occupies the same position in all sets of fringe patterns, we can rewrite Eq. (6) in the form of Eq. (1) [1,13,15]. To accomplish this, accurate control of the conveyor-belt motion is needed: typically, a calibration between the acquisition time and the velocity of the linear travel stage is required to obtain a uniform phase shift. If the sequence of frames is not well controlled because of motion non-uniformity, the deformed patterns captured by the CCD at a constant frame rate will not be equally spaced. Tunable PS algorithms, which were developed to work with arbitrary but equally spaced phase steps, then cannot be applied properly, and their results may fail.
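To make this coupling explicit, the nominal phase step can be related to the transport parameters. This is an illustrative relation only; the symbols v, Δt, and P are ours and are not defined in the text. With platform speed v, camera frame interval Δt, and projected fringe period P on the reference plane, the phase step accumulated by the k-th frame is approximately

```latex
\delta_k \;\approx\; \frac{2\pi\, v\, k\, \Delta t}{P}, \qquad k = 0, 1, \dots, N-1,
```

so any fluctuation of v or Δt translates directly into non-uniform phase steps, which is precisely the situation an arbitrary-step iterative algorithm must handle.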
2.2 AIA algorithm
With the procedure outlined above, we obtain a set of phase-shifted fringe patterns with the object located at the center. However, the phase shifts are unknown and non-uniformly spaced. The AIA scheme requires initial shift values to start the iterative steps; for this purpose, an initial estimate of the phase shifts is determined with the method of Guo and Zhang.
The iterative method proposed by Wang and Han consists of transforming the non-linear problem of Eq. (1) into a linear iterative one by alternating two procedures: first, the phase map is approximated (spatial part), and then the phase shifts are calculated (temporal part).
Phase estimation. Considering that the background intensity and the fringe contrast are independent of time, they are treated as purely spatial functions. We can then obtain an expression for the sum of squared errors as follows
Here a, b, and c are the unknown variables to be estimated in the least-squares sense; they are solved for at each pixel (x, y). The phase is estimated as φ = arctan(−c/b), where b and c are calculated by solving the linear least-squares system of equations generated from Eq. (8).
Phase-shift estimation. We know that the background intensity and the fringe contrast of each frame are only functions of time; therefore, they have no spatial dependence. Thus, we obtain the following expression
Here we employ the latest estimate of the phase φ, and the sums run over all the pixels of each frame. The frame-wise coefficients a′, b′, and c′ are the unknown variables. As in the previous step, we build a system of linear equations whose numerical solution allows us to estimate the phase shift as δ_k = arctan(−c′/b′). A more detailed description may be found in the paper by Wang and Han.
The iterative scheme described in Eqs. (8) and (9) generates a sequence of approximations that converges to a solution. The stopping condition is ||δ^(j) − δ^(j−1)|| < ε, where ||·|| denotes the Euclidean norm, and ε and j correspond to the threshold (acceptable error) and the iteration index, respectively. The above procedure estimates both the phase map and the wrapped phase shifts (i.e., the phase modulo 2π). Hence, we use an unwrapping procedure in order to obtain a continuous phase map. In the following section, we show the main results obtained with the method described above.
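The alternating least-squares scheme above can be sketched compactly. The following Python/NumPy sketch illustrates the structure of the iteration (it is not the authors' Matlab code; the function and variable names are ours). The spatial step fits a + b cos δ_k + c sin δ_k at every pixel with the shifts fixed; the temporal step fits the symmetric model per frame with the phase fixed.

```python
import numpy as np

def aia(frames, deltas0, tol=1e-10, max_iter=500):
    """Advanced Iterative Algorithm (AIA), Wang-and-Han style sketch.

    frames : (N, H, W) stack of phase-shifted fringe patterns
    deltas0: (N,) initial guess of the phase shifts, in radians
    Returns the wrapped phase map (H, W) and the estimated shifts (N,).
    """
    I = frames.reshape(frames.shape[0], -1).astype(float)   # (N, P)
    deltas = np.asarray(deltas0, dtype=float).copy()
    phi = np.zeros(I.shape[1])
    for _ in range(max_iter):
        # --- spatial step: per-pixel least squares, shifts held fixed ---
        A = np.column_stack([np.ones_like(deltas),
                             np.cos(deltas), np.sin(deltas)])  # (N, 3)
        coef, *_ = np.linalg.lstsq(A, I, rcond=None)           # (3, P)
        phi = np.arctan2(-coef[2], coef[1])                    # phase map
        # --- temporal step: per-frame least squares, phase held fixed ---
        B = np.column_stack([np.ones_like(phi),
                             np.cos(phi), np.sin(phi)])        # (P, 3)
        coefT, *_ = np.linalg.lstsq(B, I.T, rcond=None)        # (3, N)
        new_deltas = np.arctan2(-coefT[2], coefT[1])
        new_deltas -= new_deltas[0]     # pin the first shift (piston ambiguity)
        if np.linalg.norm(new_deltas - deltas) < tol:
            deltas = new_deltas
            break
        deltas = new_deltas
    return phi.reshape(frames.shape[1:]), deltas
```

Note that pinning δ_1 = 0 resolves the global piston ambiguity between the phase map and the shifts; the recovered phase is wrapped and still requires the unwrapping step described above.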
3. Numerical simulations

In this Section, we are interested in determining the performance of the AIA algorithm for N samples (N being the number of phase-shifting steps) taken in the interval (0, 2π), in the presence of harmonics, which could be introduced by the non-linear response of the projector-camera system. From the experimental data (see Sec. 4), we determined a sampling of 11 shifted patterns in the interval (0, 2π). We also found that the phase-shifting steps are uniformly distributed with a standard deviation of 0.15 rad. In the first simulation, in agreement with Hoang et al. [23], the acquired intensity pattern may be described by Eq. (10). We only employ the first five harmonics, whose experimentally determined amplitudes are 0.5, 0.05, 0.03, 0.02, and 0.01. The simulated phase corresponds to the peaks function of Matlab, normalized in radians; see Fig. 2(a). Using Eq. (10) we generate N sinusoidal fringe patterns of 418 × 418 pixels, one of which is presented in Fig. 2(b). For a particular case, N = 7, the phase retrieved with the AIA algorithm is exhibited in Fig. 2(c), and the phase error is depicted in Fig. 2(d).
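The simulated patterns can be generated as follows. This is a sketch of one plausible form of the harmonic model (the exact normalization of Eq. (10) is not recoverable here; the bias value and the grid size are our assumptions), using the five harmonic amplitudes quoted above and phase steps scattered around a uniform grid with σ = 0.15 rad:

```python
import numpy as np

AMPS = [0.5, 0.05, 0.03, 0.02, 0.01]   # harmonic amplitudes from the text

def fringe_with_harmonics(phase, delta, amps=AMPS, bias=0.6):
    """Non-sinusoidal fringe: a bias plus the first len(amps) harmonics of
    the shifted phase (an assumed form of the model in Eq. (10))."""
    I = np.full_like(phase, bias, dtype=float)
    for p, a in enumerate(amps, start=1):
        I += a * np.cos(p * (phase + delta))
    return I

# N shifted patterns, steps drawn around a uniform grid (sigma = 0.15 rad)
rng = np.random.default_rng(0)
N = 11
deltas = np.arange(N) * 2 * np.pi / N + rng.normal(0.0, 0.15, N)
x = np.tile(np.arange(64), (64, 1))    # column index of a 64 x 64 grid
phase = 2 * np.pi * x / 64             # stand-in for the peaks-based phase map
stack = np.stack([fringe_with_harmonics(phase, d) for d in deltas])
```

Feeding such a stack to the iterative algorithm reproduces the error-versus-N behavior discussed below.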
Figure 3 shows the RMS phase error obtained by increasing the number of phase-shifting steps (samples) between 0 and 2π. The error in the phase estimation is measured by the classical root-mean-square value, RMS = [(1/P) Σ_(x,y) (φ̂(x, y) − φ(x, y))²]^(1/2), where φ and φ̂ denote the true phase and the numerical approximation retrieved by AIA, respectively, and P is the number of pixels.
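The figure of merit is straightforward to compute; a small helper (our naming; the piston-removal step is our assumption, since a constant offset between the two maps is not observable) might look like:

```python
import numpy as np

def rms_phase_error(phi_true, phi_est):
    """Classical RMS difference between a true and an estimated phase map,
    after removing the piston (mean offset) term."""
    diff = phi_est - phi_true
    diff = diff - diff.mean()          # the piston term is not observable
    return float(np.sqrt(np.mean(diff ** 2)))
```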
In Fig. 3, one can observe that the phase error decreases significantly as we increase the number of samples up to 7; afterwards, it decreases slowly. This result is consistent with the tendency observed for the classic phase-shift algorithms described by Eqs. (2) to (4), where the minimum number of samples needed to retrieve the phase accurately is given by the harmonic content as (p + 2). This result contradicts the requirement established elsewhere, where the authors state that (2p + 1) phase steps are required to recover the phase accurately using iterative algorithms like AIA.
In a second simulation, we tested the performance of the AIA algorithm under phase-shift error and in the presence of harmonics, as a function of the number of phase-shift steps, N, and the number of acquired fringe patterns, M. For this purpose, we generated different sets of fringe patterns using Eq. (10) and the phase shown in Fig. 2(a) as the object under test. For each set (5-AIA, 6-AIA, …, N-AIA), we generated M sinusoidal fringe patterns like the one shown in Fig. 2(b). For each set we introduced a phase-shift error with zero mean and σ = 0.15 rad. Figure 4 presents the RMS phase error as a function of the N samples and the number of acquired patterns for the AIA scheme.
Upon examining Figs. 3 and 4, we observe that the RMS phase error decreases as the number of phase steps increases (5 samples, 7 samples, …, N samples), as expected. Also, in Fig. 4 one can observe, for N ≥ (p + 2), that for 7 samples the RMS phase error tends to a local minimum at M = 7, 15, and 22. Similarly, for 11 samples the RMS phase error tends to a local minimum at M = 11 and 22; i.e., the RMS phase error tends to a local minimum when M ≅ N, 2N, 3N, …. However, this trend is not observed for 9 samples. In particular, for a sampling of five, the error decreases slowly. We remark that as the number of frames increases, independently of the sampling criterion, the RMS error tends to a constant residual value.
Finally, in these simulations we found that the phase error decreases as we increase the number of frames. This is consistent with the theory of least-squares employed in the AIA scheme.
4. Experimental results
In this section, we describe a validation experiment employing a commercial DLP projector (ViewSonic model PJD7820) with 1920 × 1280 pixels. A color video was acquired with an 8-bit single-CCD camera (Thorlabs model DCU224C), whose resolution is 1280 × 1024 pixels. See Fig. 1 for a graphical presentation of the experimental setup. In the experiment, the travel platform moves at a velocity of 0.7 cm/s and the camera acquires video at a frame rate of 5 fps. The test object is a hemisphere with a diameter of approximately 10 cm.
For the experiment, we generated one color image: in the R channel we encode a grating with a sinusoidal intensity profile and a pitch of 32 pixels, and in the B channel a uniform background with a gray level of 255. The G channel is set to zero. By coding only the R and B channels we eliminate the cross-talk problem.
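Building such a projection image is a one-liner per channel. A minimal NumPy sketch (our naming; the image size is arbitrary, while the 32-pixel pitch and the channel assignment follow the text):

```python
import numpy as np

def make_projection_image(height, width, pitch=32):
    """Projected RGB image: a vertical sinusoidal grating in the R channel,
    zeros in G, and a uniform 255 background in the B channel."""
    x = np.arange(width)
    # 8-bit sinusoid with a spatial period of `pitch` pixels along x.
    fringe = 127.5 * (1.0 + np.cos(2.0 * np.pi * x / pitch))
    rgb = np.zeros((height, width, 3), dtype=np.uint8)
    rgb[..., 0] = np.round(fringe).astype(np.uint8)   # R: fringe pattern
    rgb[..., 2] = 255                                 # B: homogeneous white
    return rgb
```

On the camera side, splitting each acquired frame back into its R and B channels recovers the fringe pattern and the object photograph, respectively.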
From the recorded video, we extract N frames. Each one is split into two sequences of images: the first includes a series of fringe patterns, and the second contains photographs of the object under test. The set of photographs is used to estimate the lateral displacement of the object using a pixel-matching method. Figure 5(a) shows one frame extracted from the acquired video. Figure 5(b) shows the segmented region, where the R and B channels correspond to a sinusoidal fringe pattern modulated by the object under test (Visualization 1) and a photograph of the object (Visualization 2); see Figs. 5(c) and 5(d), respectively.
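The specific pixel-matching method used here is described in the cited reference. As an illustration of one simple possibility (not the authors' method; names and the approach are ours), a circular cross-correlation of column profiles suffices to estimate an integer x-displacement between two blue-channel frames:

```python
import numpy as np

def lateral_shift(prev, curr):
    """Estimate the integer x-displacement of the object between two
    blue-channel frames via 1D cross-correlation of their column profiles."""
    # Collapse rows: the object moves along x, so column sums carry the shift.
    p = prev.sum(axis=0).astype(float)
    c = curr.sum(axis=0).astype(float)
    p -= p.mean()
    c -= c.mean()
    # Circular cross-correlation via FFT; the argmax locates the shift.
    corr = np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(p))).real
    shift = int(np.argmax(corr))
    if shift > len(p) // 2:            # map to a signed displacement
        shift -= len(p)
    return shift
```

Accumulating these per-frame shifts aligns the object to the same position in every segmented pattern before phase retrieval.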
In Fig. 6, we exhibit a set of six segmented fringe patterns extracted from the acquired video; Visualization 3 includes all the frames. We can observe that the object is located at approximately the same position in all frames, while the fringe patterns are non-uniformly shifted sideways (phase shifted).
With the set of extracted images, we can estimate the wrapped surface of the object with the AIA technique described in Section 2.2. As noted there, we require an initial estimate of the phase-shift value between consecutive frames; we employ the algorithm proposed by Guo and Zhang, which determines an initial approximation of the phase shift. Finally, the unwrapped phase is calculated using the method developed by Ghiglia and Romero [25]. In order to evaluate and compare our proposed technique, we also retrieve the same object profile using the PS algorithm for five arbitrary phase steps, described by Eq. (19) in the work of Ayubi et al. [26]; the phase steps were estimated with the algorithms described in [16].
In order to verify the presence of harmonics in the acquired patterns, we recovered the phase using the 3-step algorithm with a static object. Figure 7 displays our main results comparing AIA with the five arbitrary phase-step PS algorithm; we also include the phase retrieved with the 3-step technique. Figure 7(a) displays the unwrapped phase, recovered as an intensity map, obtained by projecting three sinusoidal fringe patterns over the static object and using the classical 3-step PS algorithm. Figure 7(b) depicts the unwrapped phase recovered over the dynamic object by the projection of a colored image, using the algorithm described by Eq. (11) with phase steps δk = [0, 1.73, 2.76, 3.81, 5.61] rad, determined by the Guo and Zhang algorithm. Figures 7(c) and 7(d) show the phase retrieved by 5-sample AIA and by 16-sample AIA with an average phase step of 0.392 rad, respectively. Figure 8 shows cross sections of the phase maps presented in Figs. 7(a) to 7(d). We observe that the phase map retrieved by the three-phase-step algorithm (red line) contains a ripple-like structure that might arise from the projector non-linearity. In Fig. 8 one can also clearly see the difference between the five-sample algorithm described by Eq. (11) (blue line) and the N-sample AIA: green line for N = 5 and black line for N = 16. This result is consistent with the simulations; i.e., we observe that as we increase the number of phase steps, the accuracy of the retrieved phase increases.
In our experiment, we acquired a video and extracted 16 frames, as shown in Fig. 6. The general scheme is built in two blocks: tracking-segmentation and phase retrieval. The CPU times for the blocks were 1.16 s and 4.30 s, respectively. We coded the algorithms in Matlab, and the computer employed was an Intel Core i7-7700HQ CPU running at 2.80 GHz with 8 GB RAM. The speed of both blocks can be significantly improved by coding the algorithms in C/C++ and applying high-performance computing techniques such as loop unrolling and parallel instruction sets. Finally, given the fine-grained nature of the AIA algorithm, it can be implemented on parallel architectures such as graphics processors using CUDA. Further work along this line is currently being developed by the authors.
5. Conclusions

We proposed a novel technique to recover the 3D shape of a moving object by the simultaneous projection of a sinusoidal fringe pattern and a homogeneous light pattern, encoded into the R and B channels of an RGB image, respectively.
Furthermore, we have demonstrated that the use of an iterative algorithm with N arbitrarily spaced phase steps, in applications with moving objects, removes the need for precise control of the velocity of the linear travel stage. Thus, the calibration between the acquisition time and the velocity of the travel platform, otherwise required to obtain a uniform phase shift, is unnecessary. In other words, we have shown that the phase can be retrieved without any previous calibration between a given commercial projector/camera system and the travel platform.
We only require a sequence of fringe patterns extracted from a video. The accuracy of the proposed technique could be improved by using PS algorithms that are robust to random phase shifts and harmonics, similar to those proposed by Hoang et al. [23].
Considering the simplicity of its implementation and use, the proposed method is believed to be potentially useful for dynamic measurements and real-time applications.
References and links
1. X. Xu, Y. Cao, C. Chen, and Y. Wan, “On-line phase measuring profilometry based on phase matching,” Opt. Quantum Electron. 48(8), 411 (2016). [CrossRef]
2. K. Harding, “3D profilometry: next requests from the industrial viewpoint,” Proc. SPIE 7855, 785513 (2010). [CrossRef]
3. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]
4. K. Zhong, Z. Li, X. Zhou, Y. Li, Y. Shi, and C. Wang, “Enhanced phase measurement profilometry for industrial 3D inspection automation,” Int. J. Adv. Manuf. Technol. 76(9–12), 1563–1574 (2015). [CrossRef]
5. B. Li, Z. Liu, and S. Zhang, “Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry,” Opt. Express 24(20), 23289–23303 (2016). [CrossRef] [PubMed]
6. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39(23), 6715–6718 (2014). [CrossRef] [PubMed]
7. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013). [CrossRef] [PubMed]
9. X. Su and W. Chen, “Fourier transform profilometry: a review,” Opt. Lasers Eng. 35(5), 263–284 (2001). [CrossRef]
11. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013). [CrossRef] [PubMed]
12. K. Peng, Y. Cao, Y. Wu, and M. Lu, “A new method using orthogonal two-frequency grating in online 3D measurement,” Opt. Laser Technol. 83, 81–88 (2016). [CrossRef]
13. K. Peng, Y. Cao, Y. Wu, C. Chen, and Y. Wan, “A dual-frequency online PMP method with phase-shifting parallel to moving direction of measured object,” Opt. Commun. 383, 491–499 (2017). [CrossRef]
14. C. Chen, Y. P. Cao, L. J. Zhong, and K. Peng, “An on-line phase measuring profilometry for objects moving with straight-line motion,” Opt. Commun. 336, 301–305 (2015). [CrossRef]
15. K. Peng, Y. Cao, Y. Wu, and Y. Xiao, “A new pixel matching method using the modulation of shadow areas in online 3D measurement,” Opt. Lasers Eng. 51(9), 1078–1084 (2013). [CrossRef]
17. C. T. Farrell and M. A. Player, “Phase step measurement and variable step algorithms in phase-shifting interferometry,” Meas. Sci. Technol. 3(10), 953–958 (1992). [CrossRef]
18. R. Juarez-Salazar, C. Robledo-Sanchez, F. Guerrero-Sanchez, and A. Rangel-Huerta, “Generalized phase-shifting algorithm for inhomogeneous phase shift and spatio-temporal fringe visibility variation,” Opt. Express 22(4), 4738–4750 (2014). [CrossRef] [PubMed]
19. J. F. Mosiño, J. C. Gutiérrez-García, T. A. Gutiérrez-García, F. Castillo, M. A. García-González, and V. A. Gutiérrez-García, “Algorithm for phase extraction from a set of interferograms with arbitrary phase shifts,” Opt. Express 19(6), 4908–4923 (2011). [CrossRef] [PubMed]
21. Y. Li, Y. P. Cao, Z. F. Huang, D. L. Chen, and S. P. Shi, “A three dimensional on-line measurement method based on five unequal steps phase shifting,” Opt. Commun. 285(21), 4285–4289 (2012). [CrossRef]
23. T. Hoang, Z. Wang, M. Vo, J. Ma, L. Luu, and B. Pan, “Phase extraction from optical interferograms in presence of intensity nonlinearity and arbitrary phase shifts,” Appl. Phys. Lett. 99(3), 031104 (2011). [CrossRef]
25. D. C. Ghiglia and L. A. Romero, “Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods,” J. Opt. Soc. Am. A 11(1), 107–117 (1994). [CrossRef]
26. G. A. Ayubi, C. D. Perciante, J. L. Flores, J. M. Di Martino, and J. A. Ferrari, “Generation of phase-shifting algorithms with N arbitrarily spaced phase-steps,” Appl. Opt. 53(30), 7168–7176 (2014). [CrossRef] [PubMed]
27. R. E. Guerrero-Moreno and J. Álvarez-Borrego, “Nonlinear composite filter performance,” Opt. Eng. 48(6), 067201 (2009). [CrossRef]