
3-D shape measurement by composite pattern projection and hybrid processing

Open Access

Abstract

This article presents a projection system with a novel composite pattern for one-shot acquisition of 3-D surface shape. The pattern is composed of color encoded stripes and cosinoidal intensity fringes in a parallel arrangement. The stripe edges offer absolute height phases with high accuracy, and the cosinoidal fringes provide abundant relative phases carried in the intensity distribution. A wavelet transform is used to obtain the relative phase distribution of the fringe pattern, and the absolute height phases measured by triangulation are combined to calibrate the phase data during unwrapping, so as to eliminate the initial and noise errors and to reduce the accumulation and approximation errors. Numerical simulations are performed to verify the new unwrapping algorithm, and experiments are carried out to show the validity of the proposed technique for accurate 3-D shape measurement.

©2007 Optical Society of America

1. Introduction

Three-dimensional shape acquisition has diverse applications in engineering, such as online inspection of product quality, face recognition for robotic sensing in mechanical engineering, body surface evaluation for orthotics, and surgery simulation and navigation in biomedical engineering. At present, popular optical techniques of 3D shape measurement include stereo vision, laser scanning, structured light and the moiré method. Depending on the light source and the receiving style, these optical systems can be classified as active or passive. The stereo vision method [1] uses two or more cameras to capture the scene from different viewpoints and determines height by matching image features of the corresponding surface textures. The laser scanning technique [2], which projects a light slice to mark a specified line for triangular measurement, can produce an accurate topography by scanning across the surface line by line, and is therefore better suited to static objects. As another active approach, structured light generates a full pattern to cover the object so that the surface can be captured in a single whole-field recording. The encoding of the structured patterns therefore becomes important, because it determines the local features of the recorded images to be identified in spatial recognition and height evaluation. Salvi et al. [3] classified the pattern encoding modes into three types: time-multiplexing, neighborhood codification and direct codification. Time-multiplexing encoding [4] uses a codeword formed by a sequence of values within a series of patterns to encode a pixel; it can produce high-resolution measurements, but is suitable only for static objects because of its multi-pattern projection. Using a single projection with the pattern encoded by local information of neighboring pixels, on the other hand, neighborhood codification [5] can measure dynamic objects with an entire coding scheme. Direct codification [6] encodes each pixel by its own gray level or color value, producing patterns with high resolution. Some techniques using color fringes in RGB space are related to direct codification in shape measurement. Jeong and Kim [7] used color grating projection to generate three phase-shifting moirés in the RGB color space of one image, measuring 3-D contours at high speed with time-integral fringe capturing. Skydan et al. [8] used three projectors to illuminate the object surface with three color fringe patterns from different viewpoints, to overcome the shadowing caused by a single projection. The technique proposed by Zhang et al. [9] combined three fringes of different frequencies in one color image comprising the red, green and blue channels, to obtain absolute 3D shape phases and object colors simultaneously.

The structured pattern using neighborhood codification offers a one-shot technique for 3-D surface measurement. The spatial resolution of this pattern encoding, however, is normally not very high. For a stripe or a spot encoded by a unique code in this kind of pattern, for instance, the detail features inside it are indistinguishable in the spatial domain. To increase the measurement resolution of shape detection, as presented in this paper, phase modulation needs to be introduced into the structured patterns so that the sparsely encoded patterns carry more information. Techniques such as phase shifting or image transforms can then be used to demodulate the phase automatically [10–12]. In fact, for a carrier pattern consisting of fringes with a distributed intensity, such as that resulting from coherent light interference, the pattern distortion caused by the measured object can be considered as a frequency modulation of the intensity phases. The phase shifting method [10] uses several such intensity patterns to demodulate the whole-field phase distribution with artificial phase changes, so the technique is valuable for static object measurement. Fourier transform [11] and wavelet transform techniques [12], on the other hand, normally use one fringe pattern and map the modulated intensity into the spectral or scale-space domain, so that the phases related to the object depth can be solved by spectral filtering and phase demodulation based on the carrier frequency. Nevertheless, all these methods need an unwrapping step to connect the interrupted phases produced by the arctangent or complex-argument operations. The unwrapping is very sensitive to noise in the image and to the error at the starting point of each unwrapping line (hereafter denoted as the initial error), which can produce significant mistakes when the phase errors accumulate along the columns or rows of the image.

In this paper, a novel pattern is proposed as a composite projection of color encoded stripes and cosinoidal intensity fringes aligned in parallel, to acquire 3-D shapes by a hybrid solution combining triangulation with wavelet processing. The color stripes offer an identifiable pattern, so the height phases on the stripe edges can be obtained and related to the surface height by triangular geometry. The relative phases distributed inside the cosinoidal fringes of the intensity pattern, obtained here by wavelet transform (WT) processing, can therefore be calibrated by those absolute height phases for each fringe. In this way, not only can the approximation errors of the WT processing be compensated, but the accumulated errors of the unwrapping process can also be much reduced over the whole pattern, so that the surface topography is obtained with high accuracy. The method to generate this kind of composite pattern is presented in Section 2, and the hybrid processing for the pattern is given in Section 3, describing the phase measurement and calibration. In Section 4, simulations and experimental tests are presented to prove the validity of the new method for 3-D surface evaluation.

2. Pattern acquisition

2.1 Optical system for pattern projection and triangular measurement

Figure 1 presents a layout of the optical system that projects the composite pattern onto the object surface with one-shot illumination. In this configuration, the line connecting the projector lens center Op and the camera lens center Oc, with a distance D between them, is parallel to the reference plane. The triangle ΔABC is similar to ΔOcOpC, and the side from point B to point A on the reference plane, corresponding to the side OpOc, results in a shift d from xr to xo on the image plane when the object intersects the projected pattern at point C. The shift d can be measured by comparing the pattern image distorted by the object shape with that projected on the reference plane, so as to obtain the object height h by triangulation [13], given by

h = \frac{H \times d \times k}{d \times k + D},

where H is the distance between the camera and the reference plane, and k = fc/(H·s) is the magnification of the optical system, related to the focal length fc of the camera and the intrinsic parameter s representing the effective size of a sensor pixel [1]. In fact, the shift d on the image plane is related to its phase difference by ϕ = 2πd/kϕ, representing the phase change from the reference pattern to the distorted pattern, with the constant kϕ determined by calibration.
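As a concrete illustration, the height recovery above can be sketched in a few lines; all numerical values below are hypothetical, chosen only to show the computation, not parameters of this system.

```python
def height_from_shift(d, H, D, fc, s):
    """Sketch of the triangulation formula h = H*d*k / (d*k + D),
    with k = fc / (H*s) the system magnification (inputs hypothetical)."""
    k = fc / (H * s)
    return (H * d * k) / (d * k + D)

# Example with made-up numbers: a 12-unit shift on the image plane.
h = height_from_shift(d=12.0, H=1500.0, D=400.0, fc=50.0, s=0.25)
```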

Fig. 1. Optical system for structured pattern projection and triangulation.

2.2 Composite pattern of color stripes and intensity fringes

An optical pattern of structured light is composed of color encoded stripes and cosinoidal intensity fringes in a parallel arrangement, with the stripe edges coinciding with the extreme locations of the fringe intensity. The color encoding is embedded in the hue channel and the cosinoidal intensity in the value channel of the HSV color space, respectively. In comparison with other structured light patterns [13–15], this composition includes complementary information carried by the two kinds of optical patterns. The borderlines of the color stripes, which can be searched with edge detectors, provide the absolute height phases determined by triangulation. The relative phases of the intensity distributed inside each fringe, moreover, can be solved by the wavelet processing and then unwrapped based on the correction by the absolute phases on each stripe edge, to obtain the whole map of the surface shape. In comparison with the composite pattern proposed by Fong et al. [15], which combined a group of color stripes and intensity fringes in an orthogonal arrangement, this new composition avoids the difficulty in edge detection. The key points are that the two patterns are aligned in parallel and the stripe edges are located at the brightest positions of the intensity fringes, which makes the stripe edges easy to detect in the composite pattern. Moreover, the phase solution can be much improved by using the wavelet transform without a low-pass filtering step for fringes with non-uniform frequencies.

De Bruijn sequences are used to encode the color stripes [3]. Such a sequence of order m over an alphabet of n symbols is a circular string of length n^m in which each substring of length m appears exactly once. The sequence can be found as a Hamiltonian circuit over a De Bruijn graph whose vertices are the words of length m. Since we need to find the borderlines of the color stripes, adjacent stripes should be encoded with different colors for recognition. To fulfill this condition, the algorithm is improved by deleting the graph vertices whose words have identical adjacent symbols, and searching for a Hamiltonian circuit over the new graph to produce a sequence of length n(n−1)^(m−1), in which neighboring stripes never share the same symbol, so every borderline is visible. Five colors (yellow, green, cyan, blue and magenta, represented by the symbol numbers 1, 2, 3, 4 and 5, respectively) are used to produce the stripe pattern encoded by an order-2 De Bruijn sequence of length 20, as shown in Fig. 2(a). Without any repetition, each edge is encoded by the color symbols of its neighboring stripes; that is, if the color numbers are cl and cr for the left and right stripes, the edge between them is encoded as (cl, cr).
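A minimal sketch of how such a sequence can be generated for the order-2 case used here (the implementation is illustrative, not the authors' code): with adjacent repetitions forbidden, every ordered pair of distinct symbols must appear exactly once, which is equivalent to an Eulerian circuit on the complete digraph over the n symbols.

```python
def debruijn_no_repeat(n):
    """Cyclic sequence of length n*(n-1) over symbols 1..n in which every
    ordered pair of distinct symbols occurs exactly once (order-2 case):
    an Eulerian circuit of the complete digraph, via Hierholzer's algorithm."""
    # Unused outgoing arcs per vertex, stored so pop() yields the smallest.
    out = {a: [b for b in range(n, 0, -1) if b != a] for a in range(1, n + 1)}
    stack, circuit = [1], []
    while stack:
        v = stack[-1]
        if out[v]:
            stack.append(out[v].pop())   # follow an unused arc
        else:
            circuit.append(stack.pop())  # retreat, recording the circuit
    circuit.reverse()
    return circuit[:-1]                  # drop the repeated start symbol

stripes = debruijn_no_repeat(5)          # the 20-stripe sequence of Fig. 2(a)
```

For n = 5 this returns [1, 2, 1, 3, 1, 4, 1, 5, 2, 3, 2, 4, 2, 5, 3, 4, 3, 5, 4, 5], whose 19 interior edges match the reference edge codes listed in Section 3.2.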

A group of brightness fringes, as presented in Fig. 2(b), is superimposed on the color encoded stripes, as shown in Fig. 2(c). That is, the brightness along the x-axis follows a cosinoidal distribution in the value channel, given by

V(x) = \frac{3}{8}\cos(2\pi x f_v) + \frac{5}{8},

where V(x) ∈ [0,1] and fv is the frequency of the cosinoidal fringes. With the spatial frequency of the value channel identical to that of the hue channel, i.e., fv = 20, this intensity distribution ensures that the edges of the color stripes fall at the extrema of the cosinoidal fringes when the two patterns are aligned in parallel, making the stripe edges easily recognizable and the intensity distribution detectable in the processing.
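The composition of the hue-coded stripes with this value channel can be sketched as follows; the hue assignments for the five colors and the image dimensions are assumptions for illustration.

```python
import numpy as np
import colorsys

def composite_pattern(width=1000, rows=400, f_v=20):
    """Sketch: De Bruijn-coded stripes in the hue channel plus the cosinoidal
    value channel V(x) = (3/8)cos(2*pi*x*f_v) + 5/8, aligned in parallel so
    that stripe edges fall on intensity maxima. Hue values are assumed."""
    hues = {1: 1/6, 2: 1/3, 3: 1/2, 4: 2/3, 5: 5/6}    # yellow..magenta (assumed)
    seq = debruijn_no_repeat(5)                        # 20 symbols, see Section 2.2
    x = np.arange(width) / width                       # normalized coordinate
    stripe = np.minimum((x * len(seq)).astype(int), len(seq) - 1)
    V = 0.375 * np.cos(2 * np.pi * x * f_v) + 0.625    # cosinoidal value channel
    row = np.array([colorsys.hsv_to_rgb(hues[seq[i]], 1.0, v)
                    for i, v in zip(stripe, V)])       # saturation fixed at 1
    return np.tile(row[None, :, :], (rows, 1, 1))      # replicate rows into an image
```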

Fig. 2. The structured light pattern consists of color encoded stripes (a) and cosinoidal intensity fringes (b), to form the composite pattern (c).

3. Image processing

After the composite pattern described above is projected onto a scene and captured by a digital camera, image processing is performed to obtain the surface shape from the recorded pattern. Firstly, color space conversion is made to extract color stripes and intensity fringes from the image. Secondly, the edges of the color stripes are searched to obtain their absolute phases. Thirdly, the wavelet transform is carried out for the intensity fringes to solve the relative phase data. Fourthly, the phase unwrapping is performed to calibrate the phase map based on the absolute height phases. These four steps are described in detail as follows.

3.1 Extraction of color stripes and intensity fringes

The pattern combining color stripes and intensity fringes is composed in the HSV color space. After projecting it onto an object surface, we convert the captured image from the RGB color space back to the HSV color space to separate the two components for their respective processing, using the algorithm (MATLAB v7.0)

H_t = \begin{cases} \dfrac{G-B}{\max(R,G,B)-\min(R,G,B)}, & R = \max(R,G,B) \\ 2 + \dfrac{B-R}{\max(R,G,B)-\min(R,G,B)}, & G = \max(R,G,B) \\ 4 + \dfrac{R-G}{\max(R,G,B)-\min(R,G,B)}, & B = \max(R,G,B) \end{cases}

H = \begin{cases} H_t/6, & H_t > 0 \\ H_t/6 + 1, & H_t < 0 \end{cases}

S = \frac{\max(R,G,B)-\min(R,G,B)}{\max(R,G,B)}

V = \max(R,G,B)

where R, G, B ∈ [0,1] and H, S, V ∈ [0,1]; max(R,G,B) and min(R,G,B) denote the maximum and minimum values among the R, G and B channels, respectively; and Ht is a temporary variable in the algorithm. A captured image of the reference plane is presented in Fig. 3(a). The color stripes extracted in the hue channel and the intensity fringes extracted in the value channel are presented in Figs. 3(b) and 3(c), respectively. The results essentially recover the designed patterns. As shown in Fig. 3(c), however, the extracted fringes show some brightness changes caused by the intensity imbalance among different colors in the projector and camera, which makes the fringe intensity deviate to some degree from the exact cosinoidal distribution. The wavelet transform described in Section 3.3, acting as a bank of filters with different passbands, is therefore used to reduce this influence automatically [16].
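A direct NumPy transcription of the conversion above might read as follows; it is a sketch equivalent in effect to MATLAB's rgb2hsv for in-gamut pixels.

```python
import numpy as np

def rgb_to_hsv(img):
    """Implements the conversion equations above for an (..., 3) RGB array
    with values in [0, 1]; returns H, S, V stacked on the last axis."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    d = np.where(mx > mn, mx - mn, 1.0)          # guard against division by zero
    Ht = np.select([mx == R, mx == G],
                   [(G - B) / d, 2 + (B - R) / d],
                   default=4 + (R - G) / d)
    H = Ht / 6
    H = np.where(H < 0, H + 1, H)                # wrap negative hues into [0, 1]
    S = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1.0), 0.0)
    return np.stack([H, S, mx], axis=-1)         # V = max(R, G, B)
```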

Fig. 3. (a) A captured image from the reference plane. (b) The extracted pattern in the hue channel. (c) The extracted pattern in the value channel.

3.2 Edge searching and phase solution in color stripes

The absolute phases along the edge lines are solved in two main steps:

(1) Identifying stripe edges:

The pattern produced in the section above can be searched by automatic algorithms such as the Canny edge detector [1] to find the edges easily. False boundaries, however, may still appear in places with low light intensity, due to noise and color aberration, as shown in Fig. 4(a). In this case, some post-processing is carried out to improve the results, as presented in Fig. 4(b). Firstly, false edges with low intensity are recognized and deleted through statistics of the neighboring intensities. Because the real edges lie at the maximum-intensity positions of the cosinoidal fringes, their neighboring pixels must retain relatively high brightness. A statistical check of neighboring pixel intensities is therefore performed against a threshold, eliminating an edge pixel when over 1/3 of its neighboring pixels fall below the limit. Secondly, the short segments caused by noise are eliminated, on the grounds that real edges between color stripes are long whereas noise-induced ones are short; the number of pixels in each segment is counted to check its length, and edges with low pixel counts are removed as false. For the interrupted edges, morphological algorithms are then used to bridge the disconnections. A minimal sketch of these two cleaning rules follows.
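The sketch below implements the two rules with SciPy; the threshold and the minimum segment length are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def clean_edges(edges, value, thresh=0.4, min_len=30):
    """Post-process a boolean Canny edge map: (1) drop edge pixels whose 3x3
    neighborhood is mostly dim in the value channel, (2) drop connected
    segments shorter than min_len pixels (thresh, min_len are assumed)."""
    dim = (value < thresh).astype(float)
    frac_dim = ndimage.uniform_filter(dim, size=3)     # mean of dim pixels in 3x3
    edges = edges & (frac_dim <= 1 / 3)                # rule (1)
    labels, n = ndimage.label(edges, structure=np.ones((3, 3)))
    sizes = ndimage.sum(edges, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_len))
    return keep                                        # rule (2)
```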

Fig. 4. (a) The stripe edges searched by the Canny edge detector. (b) The edge lines processed by the statistical and morphological algorithms.

(2) Determining the absolute phase:

To obtain the phase change on the edges of the color stripes due to the height change of the object surface, boundary matching is carried out between the captured images of the reference plane and of the object surface. The stripe color encoding with the De Bruijn sequence, as described in Section 2.2, provides a well defined pattern in which the color edges are recognized by their adjacent stripe colors. When the pattern is projected onto the reference plane, there are 19 edges in the image, with codes ECreference_j = (cl_reference_j, cr_reference_j), 1 ≤ j ≤ 19, forming the sequence ECreference_1 = (1,2), ECreference_2 = (2,1), ECreference_3 = (1,3), ECreference_4 = (3,1), ECreference_5 = (1,4), ECreference_6 = (4,1), ECreference_7 = (1,5), ECreference_8 = (5,2), ECreference_9 = (2,3), ECreference_10 = (3,2), ECreference_11 = (2,4), ECreference_12 = (4,2), ECreference_13 = (2,5), ECreference_14 = (5,3), ECreference_15 = (3,4), ECreference_16 = (4,3), ECreference_17 = (3,5), ECreference_18 = (5,4), ECreference_19 = (4,5). When an edge in the image of the object surface has a color edge code ECobject = (cl_object, cr_object) that matches the code ECreference_j of the jth edge in the reference image, i.e., cl_object = cl_reference_j and cr_object = cr_reference_j, the absolute phase of that edge is ϕ = 2jπ. For example, suppose an edge in the captured image of the object surface has a yellow stripe on its left side and a magenta stripe on its right side. Its edge code is ECobject = (1,5), equal to the 7th edge code of the reference sequence, so the absolute phases of the pixels on that edge are ϕ = 14π.
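In code, this matching step reduces to looking up the edge code in the reference sequence; a sketch built on the stripe sequence from Section 2.2:

```python
import math

def absolute_edge_phase(cl, cr, stripes):
    """Sketch: build the 19 reference edge codes from the 20-stripe sequence,
    find the matching edge index j (1-based), and return phi = 2*j*pi."""
    ref_codes = list(zip(stripes[:-1], stripes[1:]))    # (cl, cr) per edge
    j = ref_codes.index((cl, cr)) + 1
    return 2 * j * math.pi

phi = absolute_edge_phase(1, 5, debruijn_no_repeat(5))  # yellow|magenta edge -> 14*pi
```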

3.3 Wavelet transform processing for intensity fringes

The wavelet analysis is performed on the cosinoidal fringes to obtain the phase values of the intensity pattern. In general, the intensity distribution of such fringes can be expressed as

I(x) = I_0(x) + I_1(x)\cos(\phi(x)),

where I0(x) is the illumination background, I1(x) is the fringe contrast, and ϕ(x) is the phase function

\phi(x) = 2\pi f(x)\,x + \phi_m(x),

where ϕm(x) is the phase related to the surface height, and f(x) is the frequency of the fringe pattern. Even though the designed pattern has a constant spatial frequency fv, its projection onto the reference plane produces a frequency change due to the inclined illumination, and the intersection of the measured object with the projected pattern also produces frequency changes that are non-uniformly distributed in the captured images. In this case, the wavelet transform is well suited to solving the phases of intensity fringes with varying frequency. With its multi-resolution capability of representing spatial information with local frequency, the wavelet analysis maps the intensities of the fringe pattern into a WT coefficient distribution in a space-spectrum domain, so that the phase changes involved in the fringe frequency variations can be recovered from the related WT coefficient maps. This avoids the filter-window choice of Fourier transform processing, which may introduce significant errors when the carrier fringes lack a uniform frequency and strong frequency localization is caused by shape changes [16].

A continuous wavelet transform of the intensity distribution I(x) is defined as an integral of the signal against translated and dilated copies of the complex conjugate of a mother wavelet ψ(x), given by

WT_I(a,b) = \frac{1}{\sqrt{a}} \int I(x)\, \psi^{*}\!\left(\frac{x-b}{a}\right) dx,

where a > 0 is the scale parameter, inversely related to the spatial frequency, and b is the shift parameter representing the translation along the x-axis. For the processing of fringe patterns, the Morlet wavelet is normally used as the mother wavelet, ψ(x) = exp(−x²/2) × exp(iω₀x); the wavelet transform coefficients of the intensity I(x) can thus be written as [17]

WT(a,b) = \sqrt{2\pi}\left(1 + a^4 \phi''^{\,2}(b)\right)^{-1/4} \exp\!\left(\frac{i}{2}\arctan\!\left(a^2 \phi''(b)\right)\right) \times \exp\!\left(-\frac{a^2}{2}\left(\phi'(b) - \frac{\omega_0}{a}\right)^{2} \frac{1}{1 - i a^2 \phi''(b)}\right) I_1(b)\, \exp(i\phi(b)).

To make the fringe phase ϕ(b) solvable from the WT coefficients, the phase term arctan(a²ϕ″(b)) is neglected as a small quantity related to the second-order derivative. The phase of the WT coefficient WT(a,b) is then directly the fringe intensity phase ϕ(b) when ω₀/a = ϕ′(b) is satisfied, a condition met where the amplitude of the WT coefficient reaches its maximum. Therefore, by searching the peak values in the map of WT coefficient amplitudes, we can read the corresponding phases from the phase map of the WT coefficients and thus solve the fringe phase. Of course, this ridge search for the maximum WT amplitude carries approximation errors, since the second-order derivative of the phase is disregarded; these can be compensated by the absolute phases on the stripe edges, as described in the next section.

The searching range for the maximum amplitude of the WT coefficients can be estimated following the method in [17]. In our case, with a non-uniform frequency f(x) distributed over the fringe pattern, the range is [ω₀/(10πf_max/3), ω₀/(10πf_min/3)], where f_max and f_min are the maximum and minimum of f(x), which can be found in the reference image. At the positions of maximum amplitude within this range, the phases of the wavelet coefficients are read from the WT phase map, wrapped in [−π, π] or [−π/2, π/2]. An unwrapping procedure is therefore needed to make the interrupted phases continuous, as presented in the following section.
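A numerical sketch of this ridge extraction, computing the Morlet CWT through the FFT and reading the phase at the scale of maximum amplitude; the convention ω₀ = 2π and the overall normalization are assumptions, not the authors' implementation.

```python
import numpy as np

def wt_ridge_phase(I, scales, w0=2 * np.pi):
    """For each position b, pick the scale a maximizing |WT(a,b)| and return
    the wrapped fringe phase arg WT(a,b) along that ridge."""
    N = len(I)
    w = 2 * np.pi * np.fft.fftfreq(N)            # angular frequency per sample
    I_hat = np.fft.fft(I - I.mean())             # remove the background I0
    phases = np.zeros(N)
    best = np.full(N, -np.inf)
    for a in scales:
        # Morlet wavelet in the Fourier domain: ~ exp(-(a*w - w0)^2 / 2)
        psi_hat = np.exp(-0.5 * (a * w - w0) ** 2)
        W = np.sqrt(a) * np.fft.ifft(I_hat * psi_hat)
        amp = np.abs(W)
        ridge = amp > best                       # new amplitude maxima
        phases[ridge] = np.angle(W[ridge])
        best = np.maximum(best, amp)
    return phases                                # wrapped in [-pi, pi]
```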

3.4 Unwrapping based on absolute phase

For most phase maps, the unwrapping results are sensitive to the initial error of the phases at the starting point and to image noise spreading along the unwrapping path, and these errors may accumulate into significant mistakes along the columns or rows of the unwrapped patterns. Our image matching and optical triangulation offer the absolute phases of the surface height on the color stripe edges, which are then used as correction data to reduce the errors in unwrapping the intensity phases.

It is assumed that there is no abrupt depth variation between any two adjacent edges whose phases are known from triangulation. Using a general unwrapping procedure, the unwrapped phases at every point between two adjacent known phases in a row are obtained as ϕunwrap_i, where the index i ∈ [1, len] runs from i = 1 at the position of the left known phase to i = len at the position of the right known phase. The differences between the unwrapped phase and the absolute phase at those two points are given by

\phi_1^{\,diff} = \phi_1 - \phi_1^{\,unwrap}, \qquad \phi_{len}^{\,diff} = \phi_{len} - \phi_{len}^{\,unwrap}.

From them, a linear interpolation is produced as the correction phase, given by

\phi_i^{\,linear} = \frac{\phi_{len}^{\,diff} - \phi_1^{\,diff}}{len - 1} \times (i-1) + \phi_1^{\,diff}.

Therefore, a corrected phase at the ith point between the two adjacent stripe edges can be obtained as

\phi_i = \phi_i^{\,unwrap} + \phi_i^{\,linear}.

Because the calibration data on the color stripe edges result from the independent triangulation measurement, this strategy of using the known phases at the two edge points to correct the phases of the points in between not only limits the error accumulation along the unwrapping path caused by the initial error or by noise, but also reduces the approximation errors of the wavelet transform incurred by ignoring the second-order phase derivative. Subtracting the phases of the reference image from those of the object image, as inversely expressed by Eq. (5), yields the whole-field surface-height phases ϕm(x) for static and dynamic 3-D shape measurement. A minimal sketch of the correction for one image row follows.
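A sketch of this edge-anchored correction for a single row; the array names and the use of NumPy's conventional unwrap are illustrative.

```python
import numpy as np

def correct_row(wrapped, edge_idx, edge_phi):
    """Conventionally unwrap one row of wrapped phases, then between each
    pair of adjacent stripe edges (indices edge_idx, absolute phases
    edge_phi) add the linear interpolation of the end-point differences."""
    phi = np.unwrap(wrapped)
    for (i1, i2), (p1, p2) in zip(zip(edge_idx, edge_idx[1:]),
                                  zip(edge_phi, edge_phi[1:])):
        d1 = p1 - phi[i1]                         # difference at the left edge
        d2 = p2 - phi[i2]                         # difference at the right edge
        t = np.arange(i2 - i1 + 1) / (i2 - i1)
        phi[i1:i2 + 1] += (1 - t) * d1 + t * d2   # linear correction phase
    return phi
```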

4. Results

Numerical simulations are performed to demonstrate the wavelet-analysis technique for intensity fringe processing with the help of known-phase calibration in the unwrapping procedure. Two examples of surface shape measurement, moreover, are presented to show the advantages of the proposed method using the composite pattern projection and the related image processing.

4.1 Simulation for the phase correction in unwrapping

To verify the unwrapping procedure calibrated by known phases, which corrects the phase errors of the WT processing, we give an example with the continuous intensity

I(x) = 1 + \cos\left(32\pi x + 5\sin(2\pi x)\right).

This is a fringe pattern with non-uniform spatial frequency, as shown in Fig. 5(a), including a jump of the phase slope in the middle region to test the influence of rapid phase change on the WT processing. In fact, this kind of signal is difficult to process by Fourier transform because the carrier frequency is not constant and strong frequency localization exists in the pattern. Digitizing the intensity with 2048 samples, the WT process transforms the signal into a scale-space domain with the scale range [3/80, 3/16] divided into 2000 intervals. The wavelet transform coefficients of the signal are presented in Fig. 5(b), showing their amplitudes, and in Fig. 5(c), showing their phase distribution, including the interruption points at phases of ±π. By searching the maximum values in the amplitude map of the WT coefficients (the black curve in the brightest region of Fig. 5(b)), the fringe phases are solved in the corresponding WT phase map [the corresponding black curve in Fig. 5(c)] and then unwrapped to connect the phase interruptions. For comparison, a traditional unwrapping procedure (MATLAB v7.0) is used to directly connect those interrupted phases, as presented in Fig. 5(d), showing large differences between the designed values and the calculated phase curve, with a standard deviation of 0.32628. In the region around the two phase peaks with high second-order derivatives, Fig. 5(d) demonstrates the influence of the approximation errors on the WT phase; near the two ends of the plot and in the middle area, where the first derivative of the phase changes abruptly, the results present significant errors resulting from the non-smoothness of the phase variation. By using our new method to calibrate the points at phases of 2kπ with the known phases specified in the intensity distribution, and to correct the phase data at the points between them with the above algorithm, the unwrapped results show very good agreement with the designed phases, as presented in Fig. 5(e), with a standard deviation of 0.07648, showing that the phase errors of the WT processing have been much reduced.
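Under the sketches given in Section 3, the simulated signal and its processing would read:

```python
import numpy as np

x = np.linspace(0, 1, 2048, endpoint=False)        # 2048 samples
I = 1 + np.cos(32 * np.pi * x + 5 * np.sin(2 * np.pi * x))
scales = np.linspace(3 / 80, 3 / 16, 2000)         # scale range, 2000 intervals
wrapped = wt_ridge_phase(I, scales)                # ridge phase, Section 3.3 sketch
traditional = np.unwrap(wrapped)                   # direct unwrapping, cf. Fig. 5(d)
```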

Fig. 5. A simulation of WT processing and unwrapping for a signal with ϕm(x) = 5 sin(2πx) (a). The WT magnitude map (b) and the phase map (c) are used to track the magnitude maxima and the fringe phases. The results from the traditional WT processing (d) and from the proposed processing (e) are compared with the designed values.

4.2 Two examples of measurement experiment

A DLP projector (HP xb31) and a digital camera (Canon EOS 350D) are used in the tests to build up the 3D acquisition system illustrated in Fig. 1. The composite pattern of color encoded stripes with cosinoidal intensity fringes, as shown in Fig. 2(c), is first projected onto a reference plane to record the carrier pattern with its initial spatial frequencies and to calibrate the optical system. After an object is moved into the optical path to distort the carrier pattern, the modulated fringes are captured by the camera and then processed with the described algorithms on a PC (CPU: Pentium 4, 2.8 GHz). As the pattern is projected onto the reference plane under inclined illumination, the maximum and minimum carrier frequencies are f_max = 21.83 and f_min = 17.72, respectively, so the scale searching range of the wavelet analysis is [0.0274, 0.1693], divided into 200 intervals to search for the maximum amplitudes of the WT coefficients.

A piece of twisted paper is used as the measured object to test the performance of the acquisition system. Figure 6(a) shows the projection of the composite pattern on the curved surface, from which the height phases carried by the fringes are solved by the wavelet processing. Using traditional unwrapping (MATLAB v7.0) to directly connect the interrupted phase points, Fig. 6(b) presents a phase map of reasonable smoothness, owing to the smooth paper surface with regular boundaries: the phase errors at the starting points of unwrapping are small, and there is little noise along the unwrapping path in each row. The maximum and minimum phases obtained by the traditional process are 9.91 and 0.42, respectively. The proposed method, however, yields a larger maximum phase of 10.36 and a smaller minimum phase of 0.08, as indicated in Fig. 6(c), in which the calibration has been performed with the absolute phases of every stripe edge during phase unwrapping. The phase map is thus improved by eliminating the accumulation errors and the approximation errors, showing the 3-D shape of the curved surface with high accuracy.

Fig. 6. (a) A pattern projected on a piece of curved paper. (b) 3D surface shape from the traditional wavelet transform and unwrapping process. (c) 3D shape from the proposed method, eliminating the approximation errors of the WT process.

Furthermore, the shape of a female model is measured with the surface illuminated by the color-fringe pattern, as shown in Fig. 7(a). For comparison, after the relative phases carried by the intensity fringes are solved by the wavelet processing, the phase unwrapping is first carried out with the traditional unwrapping algorithm in MATLAB. Figure 7(b) presents the resulting phase map with many interrupted segments, caused by the initial error at the starting point of the unwrapping process and the accumulating errors from the noise distributed in the phase map. By using the new process to calibrate the whole phase map with the absolute phases measured along the color stripe edges, on the other hand, the phases are corrected at every initial point to unwrap each fringe of the pattern, and the noise-induced errors do not accumulate over the whole pattern, as presented in Fig. 7(c), which shows a height phase map with smooth contours reflecting the real shape of the 3-D object surface.

Fig. 7. (a) A pattern projected on a female model. (b) 3D surface shape from the traditional wavelet transform and unwrapping process. (c) 3D shape topography from the proposed method, calibrating the phases in unwrapping.

5. Conclusion

The paper presents a novel optical technique of 3D shape acquisition using a projection pattern composed of color encoded stripes and cosinoidal intensity fringes in HSV space. Wavelet transformation is used to extract the relative phases from the cosinoidal intensity fringes, and a matching process is carried out to search the stripe edges of the color pattern for the absolute phases. A new phase unwrapping procedure is thus realized by calibrating the phase distribution with the absolute phases, so as to eliminate the initial errors of the unwrapping process and to reduce the accumulation errors in whole-pattern unwrapping and the approximation errors in the WT processing. The numerical simulation and the shape measurements of 3-D object surfaces have proved the validity of the acquisition technique: continuous height phase maps are obtained from the complementary information provided by the color encoded stripes and the intensity fringes of the composite pattern, without the flaws of traditional unwrapping. The proposed pattern requires no temporal coherence within the one-shot projection, so the technique is capable of capturing dynamic scenes with high resolution. This offers wide applications in surface shape detection, such as the chest movement of the human body, which is in progress in our research on lung volume estimation. Further improvements need to consider the influence of shadowing and of reflectivity variations of the object surface, and to develop fast wavelet transform algorithms to reduce the computational load.

Acknowledgment

The support of the National Basic Research Program of China (No. 2007CB935602) is greatly appreciated.

References and links

1. E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision (Prentice Hall, 1998).

2. R. Furukawa and H. Kawasaki, "Interactive shape acquisition using marker attached laser projector," in Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling (2003), pp. 491–498.

3. J. Salvi, J. Pages, and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recogn. 37, 827–849 (2004).

4. D. Caspi, N. Kiryati, and J. Shamir, "Range imaging with adaptive color structured light," IEEE Trans. Pattern Anal. Mach. Intell. 20, 470–480 (1998).

5. F. Tsalakanidou, F. Forster, S. Malassiotis, and M. G. Strintzis, "Real-time acquisition of depth and color images using structured light and its application to 3D face recognition," Real-Time Imaging 11, 358–369 (2005).

6. Z. J. Geng, "Rainbow 3-dimensional camera: new concept of high-speed 3-dimensional vision systems," Opt. Eng. 35, 376–383 (1996).

7. M. S. Jeong and S. W. Kim, "Color grating projection moiré with time-integral fringe capturing for high-speed 3-D imaging," Opt. Eng. 41, 1912–1917 (2002).

8. O. A. Skydan, M. J. Lalor, and D. R. Burton, "Technique for phase measurement and surface reconstruction by use of colored structured light," Appl. Opt. 41, 6104–6117 (2002).

9. Z. H. Zhang, C. E. Towers, and D. P. Towers, "Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency selection," Opt. Express 14, 6444–6455 (2006).

10. S. Zhang and S.-T. Yau, "High-resolution, real-time 3D absolute coordinate measurement based on a phase shifting method," Opt. Express 14, 2644–2649 (2006).

11. C. Karaalioglu and Y. Skarlatos, "Fourier transform method for measurement of thin film thickness by speckle interferometry," Opt. Eng. 42, 1694–1698 (2003).

12. H. J. Li, H. J. Chen, J. Zhang, C. Y. Xiong, and J. Fang, "Statistical searching of deformation phases on wavelet transform maps of fringe patterns," Opt. Laser Technol. 39, 275–281 (2006).

13. C. Guan, L. G. Hassebrook, and D. L. Lau, "Composite structured light pattern for three-dimensional video," Opt. Express 11, 406–417 (2003).

14. A. K. C. Wong, P. Niu, and X. He, "Fast acquisition of dense depth data by a new structured light scheme," Comput. Vis. Image Underst. 98, 398–422 (2005).

15. P. Fong and F. Buron, "Sensing deforming and moving objects with commercial off the shelf hardware," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2005), Vol. 3, pp. 20–26.

16. J. Fang, C. Y. Xiong, and Z. L. Yang, "Digital transform processing of carrier fringe patterns from speckle-shearing interferometry," J. Mod. Opt. 48, 507–520 (2001).

17. H. J. Li and H. J. Chen, "Phase solution of modulated fringe carrier using wavelet transform," Acta Sci. Nat. Univ. Pek. 43, 317–320 (2007).
