
A correspondence finding method based on space conversion in 3D shape measurement using fringe projection

Open Access

Abstract

Phase correlation is an effective technique for 3D shape measurement, but it has a weakness in the step of finding corresponding points. This work analyses the complexity of phase maps and the problems it causes in real applications, and proposes a correspondence finding method based on space conversion. Through space conversion, the two sets of phase maps from the two cameras are integrated into a unique phase space, so that the search for corresponding points between the two images can be carried out in the single image coordinate system of the projector. As a supplement, two algorithms are given for the CC and VR methods. Experimental results show that the proposed algorithms are successful and effective.

© 2015 Optical Society of America

1. Introduction

In recent years, optical 3D shape measurement has become one of the most active research areas. As an important contactless surface measurement technology, Phase-shifting Projected Fringe Profilometry (PSPFP) is widely used in surface inspection, quality control, and reverse engineering. Compared with other optical 3D measurement methods, such as passive stereo vision and laser scanning, PSPFP has the advantage of rapidly acquiring dense point clouds (see Table 1 for definitions).

Calculation of 3D surface points is typically based on triangulation, as known from photogrammetry. Finding homologous points in stereo images from phase information is a crucial task for fringe projection systems. Current methods for establishing correspondences fall into three main categories: CP, CC, and VR [1]. To simplify the stereo matching of corresponding points, epipolar geometry can be introduced into each of them, yielding three extended methods: CPE, CCE, and VRE. Among these methods, CC and VR are the most flexible, because both can be carried out without knowing the cameras' extrinsic parameters in advance. Furthermore, once the matching result is obtained, it can be used to calibrate the extrinsic parameters and then to calculate the 3D point cloud. In addition, a potential advantage of VR is that a desired 3D point density can easily be obtained by defining the resolution of the virtual phase raster. CC and VR have these advantages because both apply a technique called phase correlation.

The idea of phase correlation was proposed by Reich to measure the 3D shape of complex objects by combining photogrammetry and fringe projection [2]. Kühmstedt extended this idea and described its procedure in detail [3]. Theoretically, if all the phase maps from the two cameras are well shaped, phase correlation works very well for finding homologous points. In real applications, however, affected by the complexity of the measured surface and the directions of the sensors (cameras and projector), the phase maps from the two cameras inevitably exhibit complex features such as abruption and discontinuity.

The complexity of phase maps has been ignored by most researchers. Bräuer-Burchardt appears to be aware of this problem and suggests that a preliminary rough measurement should be carried out to obtain information about the shape of the measured object [1]. Such an additional measurement, however, consumes extra computing time.

To solve the problem of finding homologous points between two sets of complex phase maps, we propose a new correspondence finding method based on space conversion. By converting the phase maps from the two image spaces of the two cameras to the unique phase space of the projector, homologous points between the two camera coordinate systems can be easily located and exactly calculated from their phase values. The algorithm requires no additional information about the shape of the measured object to guide the search, and it can be applied to both the CC and VR methods.

2. Phase correlation

The system setup with two cameras and one projector is commonly used in CC and VR; its sensor arrangement is shown in Fig. 1. In phase correlation, projection (by the projector) and observation (by cameras C1 and C2) of a fringe image series consisting of two orthogonal sequences produce two phase maps, Фx and Фy, in each of the two camera coordinate systems. For each pixel p(x,y) in a camera image, there is a pair of corresponding phase values (φx, φy), which can be read from the phase maps Фx and Фy. Conversely, for any given phase values (φx, φy), we can find two points p(x1,y1) and q(x2,y2), one in each camera image. Thus p and q are a pair of correlative points.

Fig. 1 Sensor arrangement of a shape measurement system with a projector and two cameras. (Adapted from Applied Measurement Systems, p. 218 (2012) [4].)

There are some differences between CC and VR. In CC, phase correlation starts from the image of camera C1, as shown in Fig. 2. For any pixel point p1(x1,y1) with phase values (φ1,x, φ1,y), four adjacent pixels whose phase values surround (φ1,x, φ1,y) can be found in the image of camera C2. Bilinear interpolation is then used to determine the sub-pixel position p2(x2,y2) whose phase values (φ2,x, φ2,y) satisfy (φ2,x, φ2,y) = (φ1,x, φ1,y).
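To make this refinement step concrete, here is a minimal Python sketch. Note that it replaces the bilinear interpolation described above with an equivalent local linear (Newton-style) solve, valid under the assumption that the phase maps vary approximately linearly across the 2 × 2 neighbourhood; the function name subpixel_match and the NaN convention for missing phase values are ours, not the paper's.

```python
import numpy as np

def subpixel_match(phi_x, phi_y, x0, y0, target):
    """Refine integer pixel (x0, y0) in camera C2 to the sub-pixel position
    whose phases equal target = (phi_1x, phi_1y), assuming the phase maps
    phi_x, phi_y (H x W arrays, NaN where no phase exists) are locally linear.
    Returns (x2, y2) as floats, or None when the neighbourhood is unusable."""
    H, W = phi_x.shape
    if x0 + 1 >= W or y0 + 1 >= H:
        return None
    # Finite-difference Jacobian of (phi_x, phi_y) with respect to (x, y).
    J = np.array([
        [phi_x[y0, x0 + 1] - phi_x[y0, x0], phi_x[y0 + 1, x0] - phi_x[y0, x0]],
        [phi_y[y0, x0 + 1] - phi_y[y0, x0], phi_y[y0 + 1, x0] - phi_y[y0, x0]],
    ])
    r = np.array([target[0] - phi_x[y0, x0], target[1] - phi_y[y0, x0]])
    if not np.isfinite(J).all() or abs(np.linalg.det(J)) < 1e-12:
        return None  # abruption or a degenerate neighbourhood
    dx, dy = np.linalg.solve(J, r)
    return (x0 + dx, y0 + dy)
```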

Fig. 2 Correlation of phase maps between two cameras (CC method).

In VR, phase correlation starts from a virtual phase point, as shown in Fig. 3. For any virtual phase point p with phase values (φx, φy), four adjacent pixels whose phase values surround (φx, φy) can be found in each of the two camera coordinate systems. Two bilinear interpolations are then carried out to determine the sub-pixel positions p1(x1,y1), with phase values (φ1,x, φ1,y) = (φx, φy), and p2(x2,y2), with phase values (φ2,x, φ2,y) = (φx, φy). Compared with CC, VR consumes more computation time, because its search and interpolation effort is twice that of CC. However, VR is more flexible, since its virtual phase points can be defined freely: 3D points of any desired density can be obtained by setting an appropriate virtual phase raster.

Fig. 3 Correlation of phase maps between two cameras (VR method).

3. Problems caused by the complexity of phase maps

A closer look reveals that phase correlation is somewhat ambiguous in the step of finding corresponding points in a camera's image coordinate system for a given pair of phase values. To find the four adjacent pixels surrounding the phase values, an effective search strategy must be developed to accommodate a variety of situations. The original descriptions of phase correlation, however, ignore the complexity of phase maps and illustrate the technique only with simple ones.

Regarding the complexity of phase maps, let us take the four phase maps from the two cameras in a real measurement as an example. The measured object is a mask that looks like a crocodile's head. As shown in Fig. 4, (a) and (b) are the horizontal and vertical phase maps from camera C1, while (c) and (d) are those from camera C2. In each phase map, the phase value component of each pixel is rendered as a gray level, so the larger the phase value, the brighter the pixel. Pixels with no phase value are shown in black.

Fig. 4 Four phase maps from two cameras in a real measurement. (a) Horizontal phase map from camera C1. (b) Vertical phase map from camera C1. (c) Horizontal phase map from camera C2. (d) Vertical phase map from camera C2.

Figure 4 reflects some interesting phenomena that often appear in real applications. Firstly, the difference in shape between the phase maps from different cameras can be large, as can be observed by comparing Figs. 4(a) and 4(b) with Figs. 4(c) and 4(d). The reason is obvious: stereo vision requires different perspectives between the cameras, and this perspective difference gives rise to the shape difference between the phase maps. Secondly, abruption is a common phenomenon, appearing as holes and separated small regions in each phase map. Several factors are responsible: the measured surface may be complex, consisting of multiple objects or of a single object with deep holes in its surface; and, concerning the reflection of the projected fringe images, over-saturated and under-saturated pixels in the captured images produce large systematic or nonlinear errors and are usually removed from the phase maps. Thirdly, discontinuity is a less obvious phenomenon in phase maps: a drastic change between a pixel and its surrounding pixels, visible in Figs. 4(c) and 4(d) at the edges of the eyes of the mask and their adjacent areas. It typically occurs where part of an object is occluded by another object or by another part of itself. Lastly, outliers are hard to observe because they are scattered throughout the phase maps. They are errors introduced in the phase unwrapping stage, arising from miscalculation of the orders of the phase function.

Given this complexity of phase maps, searching for corresponding points between two image coordinate systems by phase values is a challenging task, and few researchers have addressed this problem or proposed solutions for fringe projection systems.

As known from stereo vision, searching for correlative points between two image coordinate systems is usually time-consuming, and factors such as abruption and discontinuity can even cause the search to fail.

Firstly, a good search algorithm should locate the starting pixel as quickly as possible, since this determines the scope of the further search. The perspective difference, however, makes it difficult to locate an initial pixel to start from.

Secondly, most search algorithms assume that phase information fills the whole image space. In that case, the further search can easily proceed by comparing adjacent pixels to guide it to the exact place. Abruption, however, complicates the situation, because holes and gaps cut the search paths; when this happens, choosing another initial location and beginning a new search is unavoidable.

Thirdly, discontinuity makes the situation even worse. As mentioned above, discontinuity in phase maps is caused by occlusion: areas that appear in the phase maps of one camera may be hidden in the phase maps of the other; in other words, they cannot be seen from the second camera. This means that some points in one set of phase maps have no corresponding points in the other set. To guarantee the robustness of a search algorithm, an enormous amount of time must be spent on these points; in the worst case, every pixel of a phase map must be traversed repeatedly to ensure that nothing has been missed. This situation is clearly unacceptable.

4. Principle analyses and algorithm description

To solve the problems caused by the complexity of phase maps, the two sets of phase maps from the two cameras can be regarded as a whole rather than as two separate entities. This view helps in finding corresponding points, because the phase information contained in both sets is consubstantial: it shares the same definition as the phase values of the images projected by the projector. Following this idea, this section analyses the optical and mathematical principles of space conversion and describes our algorithm in detail. For the convenience of further discussion, some concepts are clarified first.

4.1 Image space and phase space

Phase correlation involves two kinds of coordinate systems: image coordinate systems and the phase coordinate system. There are three image coordinate systems, one for each sensor: the two cameras' (C1 and C2) and the projector's. In contrast, as known from fringe projection, the phase coordinate system is unique; it is defined by the phase encoding of the projector.

To simplify the description, we use "image space" (IS) to denote an image coordinate system. As known from photography and computer graphics, an image is usually considered a set of points (pixels) in an image space, where each pixel is a point with a gray or color value. In particular, in this work we call a pixel (x,y) with phase values (φx,φy) in an image coordinate system a "pixel point", denoted ((φx,φy)|(x,y)); the set of all such pixel points in an image is a "pixel point set" in the image space. The two pixel point sets from cameras C1 and C2 are named I1 and I2, respectively.

Similarly, a "phase space" (PS) can be regarded as a continuous 2D phase coordinate system, whose axes are the horizontal and vertical phase values defined by the phase encoding of the projector. If a point in the phase space carries pixel information originating from a particular sensor, we call it a "phase point" and denote it ((x,y)|(φx,φy)). Accordingly, a set of phase points is a "phase point set". The two phase point sets from cameras C1 and C2 are named S1 and S2, respectively.

4.2 Space conversions

We name the coordinate transformation of a point set between an image space and the phase space a "space conversion" (SC). There are two kinds of space conversion in fringe projection systems: from an image space to the phase space, called "image-phase space conversion", and from the phase space to an image space, called "phase-image space conversion". Mathematically, converting a point from one kind of space to the other is very simple: just swap its phase values and pixel values. It is nevertheless important to understand the meanings of these conversions in optical and computer-graphical terms.

On one hand, an image-phase space conversion can be explained optically. Reversibility of light is a basic principle of optics. Suppose the phase maps are "projected" by a camera back onto the measured objects immediately after they are captured; on the surface of the objects there then appear light points corresponding to the pixel point set. The projector, at its measuring position, "captures" the light points reflected from the surface. The image plane of the projector is assumed to be analog, like a film, so that all of these light points can be captured.

Now, two point sets S1 and S2 can be "captured" when we "project" the two pixel point sets I1 and I2 from cameras C1 and C2, respectively. The process of converting the two point sets from their image spaces to the unique phase space is shown in Fig. 5, in which p1((φ1,x,φ1,y)|(x1,y1)) and p2((φ2,x,φ2,y)|(x2,y2)) are pixel points in I1 and I2, respectively. By swapping their phase values and pixel values, they are converted to phase points in the phase space of the projector, denoted p1((x1,y1)|(φ1,x,φ1,y)) and p2((x2,y2)|(φ2,x,φ2,y)).
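Because a space conversion is nothing more than swapping which value pair acts as coordinates, it can be expressed in a few lines. The record types below are illustrative sketches of our own, not structures defined in the paper:

```python
from collections import namedtuple

# A pixel point ((phi_x, phi_y) | (x, y)) and a phase point ((x, y) | (phi_x, phi_y))
# carry identical data; only the pair acting as coordinates differs.
PixelPoint = namedtuple("PixelPoint", ["phase", "pixel"])  # coordinates: pixel
PhasePoint = namedtuple("PhasePoint", ["pixel", "phase"])  # coordinates: phase

def image_to_phase(p):
    """Image-phase space conversion: the pixel values become the payload."""
    return PhasePoint(pixel=p.pixel, phase=p.phase)

def phase_to_image(p):
    """Phase-image space conversion: the phase values become the payload."""
    return PixelPoint(phase=p.phase, pixel=p.pixel)
```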

Fig. 5 Image-phase space conversion.

On the other hand, a phase-image space conversion originates from the fringe encoding of the projector, so it can be explained in computer-graphical terms. As known from fringe projection, given a pixel p(xP,yP) in the image space of the projector, its phase values (φx, φy) can be calculated by Eq. (1),

\[
\begin{cases}
\phi_x = \dfrac{\Phi_h}{W}\, x_P \\[4pt]
\phi_y = \dfrac{\Phi_v}{H}\, y_P
\end{cases}
\tag{1}
\]
where W and H are the width and height of the encoded image in the fringe projection system, and Φh and Φv are the maximum encoding phases in the horizontal and vertical directions. Conversely, given a pair of phase values (φx, φy), the unique pixel p(xP,yP) in the image space of the projector can be located through Eq. (1). Note that the two components xP and yP of a pixel point may take sub-pixel values here.
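Eq. (1) and its inverse translate directly into code; a minimal sketch with hypothetical function names:

```python
def phase_from_pixel(xP, yP, W, H, Phi_h, Phi_v):
    """Eq. (1): phase values encoded at projector pixel (xP, yP)."""
    return (Phi_h / W) * xP, (Phi_v / H) * yP

def pixel_from_phase(phi_x, phi_y, W, H, Phi_h, Phi_v):
    """Inverse of Eq. (1): the (sub-pixel) projector position that
    carries the given phase values."""
    return (W / Phi_h) * phi_x, (H / Phi_v) * phi_y
```

Since Eq. (1) is linear and invertible, pixel_from_phase returns real-valued coordinates, which is precisely the sub-pixel extension mentioned above.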

As shown in Fig. 6, the two phase point sets S1 and S2 of Fig. 5 are merged into a single phase coordinate system, because they lie in the same coordinate system: the unique phase space of the projector. By applying Eq. (1) to the phase values of all the phase points in S1 and S2, they are eventually converted to pixel points in the image space of the projector. To distinguish these from the pixel points in the cameras' image spaces, we denote the two pixel point sets derived from S1 and S2 by IP1 and IP2, respectively.

Fig. 6 Phase-image space conversion.

With the help of image-phase and phase-image space conversion, all the pixel points from the different image spaces of the cameras can be converted to pixel points in the image space of the projector. As seen in Fig. 6, p1((x1,y1)|(φ1,x,φ1,y)) and p2((x2,y2)|(φ2,x,φ2,y)) are phase points in S1 and S2, respectively. By applying Eq. (1), they are converted to pixel points in IP1 and IP2, denoted ((φ1,x,φ1,y)|(x1P,y1P)) and ((φ2,x,φ2,y)|(x2P,y2P)); or, to record their origins, ((x1,y1),(φ1,x,φ1,y)|(x1P,y1P)) and ((x2,y2),(φ2,x,φ2,y)|(x2P,y2P)).

From now on, finding corresponding points between IP1 and IP2 can be discussed within a single image coordinate system: the image space of the projector.

4.3 Principle of finding initial corresponding points

First, let us examine the problem of how to find the initial corresponding points between I1 and I2.

As known from fringe projection, two pixel points from I1 and I2 form a pair of corresponding points only when their phase values are identical. However, digital images are discrete, so exact equality is almost impossible to achieve directly. The unique phase space provides a new way to solve this problem: as shown in Fig. 6, two mutually nearest phase points from S1 (or IP1) and S2 (or IP2) can be taken as a pair of initial corresponding points, from which the further search can start.

To be specific, let p1 be a point in S1 (or IP1) with phase values v(p1), and let p be any point in S2 (or IP2) with phase values v(p). Define the phase distance δ between p1 and p as

\[
\delta = \left\lVert v(p) - v(p_1) \right\rVert = \sqrt{\left(\phi_{2,x} - \phi_{1,x}\right)^2 + \left(\phi_{2,y} - \phi_{1,y}\right)^2}.
\tag{2}
\]

Theoretically, for a given phase point p1 in S1 (or IP1), applying Eq. (2) to all the phase points in S2 (or IP2) eventually yields the point p2 with the minimum phase distance [5].
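In code, Eq. (2) and the exhaustive search read as follows (a sketch with names of our own; Section 4.3 replaces the brute-force scan with a cell-indexed search):

```python
import math

def phase_distance(p, q):
    """Eq. (2): Euclidean distance between two phase-value pairs."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def nearest_by_phase(p1_phase, s2_phases):
    """Theoretical form: scan every phase point of S2 and return the
    one with minimum phase distance from p1."""
    return min(s2_phases, key=lambda q: phase_distance(p1_phase, q))
```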

However, as known from computer programming, a well-designed data structure is necessary for a good search algorithm. The image space of the projector is more suitable for storing a point set than its phase space, because the former is naturally divided into regular cells. Every point in IP1 or IP2 can be assigned to a cell by applying the rounding operator [.] to its pixel values xP and yP. Accordingly, a special data structure is defined to hold a pixel point set in the image space of the projector, as shown in Fig. 7. The image space is represented as a 2D array with the same width and height as the projected images. Every cell keeps a pointer to a particular link table, which connects all the points belonging to that cell. Each point in a link table records its phase values in the phase space and its pixel values in its original image space. Cells into which no point falls have their pointer set to NULL.
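The structure of Fig. 7 maps naturally onto a 2D array whose cells hold Python lists in place of link tables; a minimal sketch, with the caveat that Python's round() (which rounds halves to even) stands in for the rounding operator [.]:

```python
def build_projector_grid(points, W, H):
    """Bin the pixel points of IP2 into the W x H cells of the projector's
    image space (Fig. 7). `points` yields records
    ((x, y), (phi_x, phi_y), (xP, yP)); empty cells stay None,
    playing the role of the NULL pointer."""
    grid = [[None] * W for _ in range(H)]
    for (x, y), (phi_x, phi_y), (xP, yP) in points:
        ix, iy = round(xP), round(yP)       # the rounding operator [.]
        if 0 <= ix < W and 0 <= iy < H:
            if grid[iy][ix] is None:
                grid[iy][ix] = []           # the cell's link table
            grid[iy][ix].append(((x, y), (phi_x, phi_y)))
    return grid
```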

Fig. 7 Data structure for storing a pixel point set of the projector.

On the basis of the newly defined data structure, the process of finding initial corresponding points in the CC method can be described as follows: (1) store IP2 in the new data structure; (2) read a pixel point p1((φx1, φy1)|(x1,y1)) from I1; (3) calculate its pixel coordinates (x1P,y1P) in the image space of the projector through Eq. (1); (4) locate a cell (A in Fig. 7) in the 2D array by applying the rounding operator [.] to both pixel coordinate components (x1P,y1P); (5) find the point (p2 in Fig. 7) with the minimum phase distance among all the points contained in cell A and its eight adjacent cells. Then p2 is the initial corresponding point for p1. A sketch of this procedure is given below.
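A sketch of steps (2)-(5), reusing the grid from the previous sketch (the function name is ours):

```python
import math

def find_initial_correspondence(p1_phase, grid, W, H, Phi_h, Phi_v):
    """For one point p1 of I1, return the IP2 record with minimum phase
    distance among cell A and its 8 neighbours, or None if all are empty."""
    # Step (3): map p1's phases into the projector's image space (Eq. (1)).
    x1P = (W / Phi_h) * p1_phase[0]
    y1P = (H / Phi_v) * p1_phase[1]
    ax, ay = round(x1P), round(y1P)         # step (4): cell A
    best, best_d = None, math.inf
    for iy in range(ay - 1, ay + 2):        # step (5): A and its 8 neighbours
        for ix in range(ax - 1, ax + 2):
            if 0 <= ix < W and 0 <= iy < H and grid[iy][ix]:
                for pix, phase in grid[iy][ix]:
                    d = math.hypot(phase[0] - p1_phase[0],
                                   phase[1] - p1_phase[1])
                    if d < best_d:
                        best, best_d = (pix, phase), d
    return best
```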

In fact, the above description is not complete once the complexity of the phase maps is taken into account. To introduce our algorithm smoothly, some extreme cases are discussed in the following section.

4.4 Principle of finding accurate corresponding points

Now let us focus on how to find the accurate corresponding points once the initial corresponding points have been found. Again taking the CC method as an example, the process is shown in Fig. 8.

Fig. 8 Search process of finding single corresponding points (CC method).

Assume the pixel point p2((x2,y2),(φ2,x,φ2,y)|(x2P,y2P)) in IP2 is the initial corresponding point for p1((φ1,x, φ1,y)|(x1,y1)) in I1. Using the original pixel coordinates (x2,y2) recorded in p2, we can immediately locate the pixel point p2 in I2. Guided by the phase values (φ1,x, φ1,y) of p1, the four adjacent pixel points in I2 can be selected from the nine points (p2 and the eight points around it) by comparing their phase values with (φ1,x, φ1,y). Bilinear interpolation is then applied to the four adjacent pixel points to calculate the exact point p2'.

Next, let’s discuss some extreme cases.

  • (1) Suppose there is abruption around p2 in I2; that is, some points next to p2 have no phase values (or one component of their phase values is missing). In this case, four adjacent points sometimes cannot be found, and the process of finding the accurate corresponding point must be terminated.
  • (2) Suppose there is discontinuity around p2 in I2; more specifically, there are drastic changes in phase value between p1 and some of the points adjacent to p2. In this case, a threshold on the phase distance (denoted "Np") should be set to prevent such points from being selected among the four adjacent points. In practice, it is sufficient to set Np to twice the maximum of Φh/W and Φv/H, where Φh, W, Φv, and H have the same meanings as in Eq. (1); see the sketch after this list.
  • (3) Suppose p1 cannot be seen from camera C2. When the initial corresponding point for p1 is sought as in Section 4.3, there are two possible outcomes: either no eligible point, or a point p2 with a large phase distance from p1, since the procedure searches only the points in the nine cells around A. In either case the result is the same: no accurate corresponding point for p1 will be found.
  • (4) Suppose p2 is an outlier in I2. As discussed in Section 3, the phase values of p2 then deviate abnormally from those of its adjacent pixel points in I2. By case (2), the search for four adjacent points around p2 will fail, because Np rejects all the other pixel points around p2 in I2. Since p2 is then a wrong initial corresponding point for p1, further attempts at finding an initial corresponding point other than p2 should be made. In the worst case, all the pixel points around cell A in IP2 are explored without finding the exact corresponding point for p1, and the process is terminated.
  • (5) Suppose p1 is an outlier in I1. Clearly, the above process cannot reject outliers in I1. Cases (4) and (2) provide a clue: calculate the phase distances from p1 to each of its adjacent pixel points; if all of them exceed Np, p1 is probably an outlier in I1. Note that this test removes only isolated outliers from I1; it fails when several outliers hide in a contiguous area.
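The threshold of case (2) and the corresponding neighbour filter can be sketched as follows (function names are ours):

```python
import math

def discontinuity_threshold(Phi_h, W, Phi_v, H):
    """Np from case (2): twice the larger per-pixel phase increment."""
    return 2.0 * max(Phi_h / W, Phi_v / H)

def filter_neighbours(p1_phase, neighbours, Np):
    """Keep only the neighbours of p2 whose phase distance from p1 is
    within Np, rejecting points that lie across a discontinuity."""
    return [(pix, phase) for pix, phase in neighbours
            if math.hypot(phase[0] - p1_phase[0],
                          phase[1] - p1_phase[1]) <= Np]
```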

4.5 Algorithm Description

It is time to present our algorithm step by step. Assume I1 and I2 have been stored in two 2D arrays. We begin with the CC method; a condensed sketch of the whole loop follows the steps.

  • Step 1, create the storage for IP2. Create a 2D array as in Fig. 7, matching the height and width of the projected fringe images, and set the pointers of all its cells to NULL.
  • Step 2, space conversion. Read all the pixel points of I2 in sequence. For each pixel point ((φx, φy)|(x,y)), calculate its pixel coordinates (xP,yP) in the image space of the projector through Eq. (1).
  • Step 3, cell location. Locate the cell ([xP],[yP]) in the image space of the projector by applying the rounding operator [.] to each of the pixel coordinate components xP and yP.
  • Step 4, store the pixel point in IP2. Create a new point record holding ((φx, φy)|(x,y)) and link it to the cell ([xP],[yP]).
  • Step 5, locate the initial corresponding cell of IP2 for a point of I1, as shown in Fig. 8. Read a point p1((φx1, φy1)|(x1,y1)) from I1, calculate its pixel coordinates (x1P, y1P) in the image space of the projector through Eq. (1), and locate the cell (([x1P],[y1P]), denoted A) in IP2 by applying the rounding operator [.].
  • Step 6, create a link table of the pixel points of I2 around A. Create an empty link table LT1, read all the pixel points of I2 linked into the nine cells of IP2 around A, and calculate each point's phase distance from p1 by Eq. (2). Sort these points by phase distance and store them in LT1 in ascending order.
  • Step 7, take the first pixel point of LT1 as the initial corresponding point for p1, denoted p2((φ2,x, φ2,y)|(x2,y2)). If LT1 is empty, there is no corresponding point in I2 for p1; return to Step 5.
  • Step 8, locate the four adjacent points in I2 around p2. Take all the non-null pixel points around p2 (including p2), calculate their phase distances from p1 by Eq. (2), and compare them with Np. Discard the points whose phase distance from p1 exceeds Np. If at least four points remain, select the four adjacent points among them under the prerequisite that the phase values of p1 fall within the area covered by these points.
  • Step 9, if Step 8 succeeds, apply bilinear interpolation to the four selected points to obtain the point p2'((φx1, φy1)|(x2',y2')); then (x2',y2') is the accurate corresponding point in I2 for p1, and we return to Step 5. If Step 8 fails, remove p2 from LT1 and return to Step 7 for another attempt at finding a corresponding point for p1.
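For orientation, here is a condensed sketch of the outer loop, composed from the helper sketches above (build_projector_grid, find_initial_correspondence, discontinuity_threshold, subpixel_match). It simplifies Steps 7-9 by taking only the nearest candidate instead of walking the sorted list LT1, and assumes NumPy phase maps with NaN marking pixels without phase values:

```python
import math

def cc_correspondences(I1, phi2_x, phi2_y, W, H, Phi_h, Phi_v):
    """Steps 5-9 for every point of I1. I1 yields ((x1, y1), (phi_x, phi_y));
    phi2_x, phi2_y are camera C2's phase maps. Yields ((x1, y1), (x2', y2'))."""
    # Steps 1-4: convert C2's pixel points and bin them into the grid.
    IP2 = (((x, y), (phi2_x[y, x], phi2_y[y, x]),
            ((W / Phi_h) * phi2_x[y, x], (H / Phi_v) * phi2_y[y, x]))
           for y in range(phi2_x.shape[0])
           for x in range(phi2_x.shape[1])
           if not math.isnan(phi2_x[y, x]) and not math.isnan(phi2_y[y, x]))
    grid = build_projector_grid(IP2, W, H)
    Np = discontinuity_threshold(Phi_h, W, Phi_v, H)
    for (x1, y1), phase1 in I1:
        hit = find_initial_correspondence(phase1, grid, W, H, Phi_h, Phi_v)
        if hit is None:
            continue                 # case (3): p1 is not visible from C2
        (x2, y2), phase2 = hit
        if math.hypot(phase2[0] - phase1[0],
                      phase2[1] - phase1[1]) > Np:
            continue                 # nearest candidate already too far
        refined = subpixel_match(phi2_x, phi2_y, x2, y2, phase1)
        if refined is not None:      # Step 9 succeeded
            yield (x1, y1), refined
```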

Now let us turn to the VR method. Before introducing it, the virtual raster must be clarified.

A virtual raster is similar to the phase encoding of the projector. It can be regarded as a virtual 2D digital image whose horizontal and vertical resolutions (denoted Nh and Nv, respectively) can be chosen arbitrarily. For every pixel p(x,y) of this 2D array there is a pair of phase values (φx, φy), which can be calculated by Eq. (1) with Nh and Nv in place of W and H, respectively.
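Generating the virtual raster's phase values is then a direct application of Eq. (1) with the substitution just described (a sketch; the function name is ours):

```python
import numpy as np

def virtual_raster_phases(Nh, Nv, Phi_h, Phi_v):
    """Phase values of an Nh x Nv virtual raster. Substituting Nh, Nv for
    W, H in Eq. (1) keeps the full encoded range [0, Phi_h] x [0, Phi_v]
    while the point density is chosen freely."""
    phi_x = (Phi_h / Nh) * np.arange(Nh)   # phase of each virtual column
    phi_y = (Phi_v / Nv) * np.arange(Nv)   # phase of each virtual row
    return [(px, py) for py in phi_y for px in phi_x]
```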

Hence, the VR method can be briefly described as follows (see Fig. 9):

Fig. 9 Finding corresponding points (VR method).

  • Step 1, create and initialize IP1 and IP2 from I1 and I2, respectively.
  • Step 2, define a virtual raster VR1.
  • Step 3, as described above, take a virtual pixel point p((φx, φy)|(x,y)) from VR1 and locate the corresponding cell in IP1 and in IP2.
  • Step 4, find the initial corresponding points: p1((x1,y1),(φ1,x,φ1,y)|(x1P,y1P)) with minimum phase distance in IP1, and p2((x2,y2),(φ2,x,φ2,y)|(x2P,y2P)) in IP2.
  • Step 5, locate p1 in I1 from its original pixel coordinates (x1,y1) and find the four adjacent pixels in I1; locate p2 in I2 from its original pixel coordinates (x2,y2) and find the four adjacent pixels in I2.
  • Step 6, apply bilinear interpolation to the four selected points in I1 and to the four selected points in I2, obtaining p1'(x1',y1') and p2'(x2',y2') for the virtual pixel point p. Then (p1', p2') is a pair of corresponding points. A sketch of this loop is given below.
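The VR loop reuses the helpers from the CC sketches, performing the lookup and refinement once per camera; a condensed sketch under the same assumptions as the CC loop (and, like it, a simplification of the full step list):

```python
def vr_correspondences(vr_phases, grid1, grid2, maps1, maps2,
                       W, H, Phi_h, Phi_v):
    """Steps 3-6 for every virtual phase point. grid1/grid2 come from
    build_projector_grid on IP1/IP2; maps1/maps2 are the (phi_x, phi_y)
    phase-map pairs of cameras C1 and C2. Yields ((x1', y1'), (x2', y2'))."""
    for phase in vr_phases:                                   # Step 3
        hit1 = find_initial_correspondence(phase, grid1, W, H, Phi_h, Phi_v)
        hit2 = find_initial_correspondence(phase, grid2, W, H, Phi_h, Phi_v)
        if hit1 is None or hit2 is None:                      # Step 4 failed
            continue
        (x1, y1), _ = hit1
        (x2, y2), _ = hit2
        p1 = subpixel_match(maps1[0], maps1[1], x1, y1, phase)  # Steps 5-6, C1
        p2 = subpixel_match(maps2[0], maps2[1], x2, y2, phase)  # Steps 5-6, C2
        if p1 is not None and p2 is not None:
            yield p1, p2        # a pair of corresponding points for p
```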

5. Experiments and analyses

Considering the extreme cases handled by the proposed algorithm, the sensor arrangement of Fig. 10 is used in our experiment. The baseline between the two cameras is large enough to generate a big perspective difference in shape between the phase maps of the two cameras. The length, width, and height of the measured object are about 20 cm, 30 cm, and 15 cm, respectively. Cameras C1 and C2 are of the same type, with a resolution of 1280 × 1024; the resolution of the projector is 1024 × 768. To acquire phase maps with lower errors, SETPU with PAM (n = 10) is adopted in the phase measuring stage [6]. The four phase maps from the two cameras are shown in Fig. 4, in which (a) and (b) are the two components of the pixel point set I1 in the image space of camera C1 and (c) and (d) are those of I2 in camera C2.

Fig. 10 Sensor arrangements in real measurement.

First, to observe the pixel point set IP1 in the image space of the projector intuitively, space conversion (image-phase followed by phase-image conversion) is applied to I1, and all cells that are not empty are rendered white; the result is shown in Fig. 11(a). The same operation applied to I2 gives Fig. 11(b).

Fig. 11 Pixel point sets in the image space of the projector from two cameras. (a) IP1 from camera C1. (b) IP2 from camera C2.

As seen from Fig. 11, more black pixels appear on the surface of the mask than in Fig. 4, meaning that no pixel points of I1 or I2 fall into the corresponding cells; this is caused by the rounding operator [.] used in the phase-image space conversion (Fig. 6). In addition, on close inspection of these images, some isolated pixel points outside the edge of the measured object can clearly be observed: they are outliers hidden in I1 or I2.

Next, two experiments (denoted Exp1 and Exp2) are performed with the proposed CC method. In Exp1, I1 is fixed and corresponding points are sought in I2; in Exp2, I2 is fixed and corresponding points are sought in I1. Each experiment is divided into three stages by the algorithm steps: A, the initial condition (before the algorithm runs); B, the initial corresponding point found successfully (Step 5); C, the accurate corresponding point found successfully (Step 9). The number of points remaining in the fixed point set at each stage is given in Table 2. The corresponding points and the remaining points of the fixed point sets in Exp1 and Exp2 are converted into images and shown in Fig. 12 and Fig. 13. Figure 14 gives the 3D point cloud calculated from the corresponding points of Exp1 by triangulation.

Table 2. Experimental data in CC method

Fig. 12 Corresponding points and rest points of I1 under the image coordinate system of camera C1 in Exp1. (a) Corresponding points. (b) Rest points.

Fig. 13 Corresponding points and rest points of I2 under the image coordinate system of camera C2 in Exp2. (a) Corresponding points. (b) Rest points.

Fig. 14 3D point cloud in Exp1. (a) Elevation view. (b) Right elevation.

Some conclusions can be drawn from Table 2 and Figs. 12-14. (1) To acquire more corresponding points, the pixel point set with the larger number of valid points should be fixed and correlatives sought in the other set: as seen from Table 2, I1 has more valid points than I2, and hence Exp1 yields more corresponding points than Exp2. (2) Owing to the complexity of the phase maps, the number of corresponding points is far smaller than the number of valid points in the fixed set: nearly 100,000 points are rejected in Exp1, and up to 170,000 in Exp2. (3) The proposed CC method successfully copes with the complex conditions appearing in the phase maps: the corresponding points in both Exp1 and Exp2 cover almost all of the area seen by both cameras C1 and C2, as Fig. 12 and Fig. 13 show. (4) A small number of outliers still remain in the result of the CC method, as can clearly be seen in Fig. 14(b).

Lastly, four virtual rasters with different resolutions (Nh × Nv) are used in our experiments: 1024 × 768, 512 × 384, 256 × 192, and 128 × 96. Accordingly, four experiments are performed with the proposed VR method, denoted Exp3, Exp4, Exp5, and Exp6, respectively. For each experiment, the number of points remaining in the virtual raster at stages B (Step 4) and C (Step 6) is recorded in Table 3. The corresponding points and the 3D point cloud of Exp3 are given in Fig. 15 and Fig. 16.

Table 3. Experimental statistics in VR method

Fig. 15 Corresponding points under the image coordinate system of the projector in Exp3.

Fig. 16 3D point cloud in VR method. (a) Elevation view. (b) Right elevation.

Interestingly, the success rates (C/B) of the VR method (Table 3) are higher than those of the CC method (Table 2): nearly 89% versus 76%. Although the resolution of the virtual raster in Exp3 (1024 × 768) is only 60% of the camera image resolution used in the CC method (1280 × 1024), the number of corresponding points in Exp3 is higher than that in Exp2. Compared with Fig. 11, the corresponding points in Fig. 15 cover almost all of the area shared by Fig. 11(a) and Fig. 11(b), which means the proposed VR method is applicable to complex phase maps. Figure 16(b) contains fewer outliers than Fig. 14(b), which means the VR method is better than the CC method at rejecting outliers hidden in the original phase maps.

6. Conclusion

The complexity of phase maps in fringe projection systems has been neglected by almost all researchers, and phase correlation consequently shows an obvious defect in real applications. The proposed correspondence finding method based on space conversion effectively avoids the problems caused by the complexity of phase maps. Experimental results show that the proposed CC and VR methods are successful and effective. In particular, the VR method has advantages over the CC method in the flexibility of defining the virtual raster resolution, in the success rate of accurate corresponding points relative to initial corresponding points, and in the suppression of outliers.

Acknowledgments

This project is supported by the Scientific Research Fund of the Sichuan Provincial Education Department (14ZA0098) and the Doctoral Innovation Fund of Southwest University of Science and Technology (15zx7113).

References and links

1. C. Bräuer-Burchardt, M. Möller, C. Munkelt, P. Kühmstedt, and G. Notni, “Comparison and evaluation of correspondence finding methods in 3D measurement systems using fringe projection,” Proc. SPIE 7830, 783019 (2010). [CrossRef]  

2. C. Reich, R. Ritter, and J. Thesing, “3-D shape measurement of complex objects by combining photogrammetry and fringe projection,” Opt. Eng. 39(1), 224–231 (2000). [CrossRef]  

3. P. Kühmstedt, C. Munckelt, M. Heinze, C. Bräuer-Burchardt, and G. Notni, “3D shape measurement with phase correlation based fringe projection,” Proc. SPIE 6616, 66160B (2007). [CrossRef]  

4. C. Bräuer-Burchardt, M. Möller, C. Munckelt, M. Heinze, P. Kühmstedt, and G. Notni, “Determining exact point correspondences in 3D measurement systems using fringe projection: concepts, algorithms and accuracy determination,” in Applied Measurement Systems, Z. Haq, ed. (InTech, 2012), pp. 211–228, ISBN 978-953-51-0103-1, http://www.intechopen.com/books/applied-measurementsystems/determining-exact-point-correspondences-in-3d-measurement-systems-using-fringe-projectionconcepts-a. [CrossRef]

5. H. Zhao and J. Li, “Stereo image matching based on phase unwrapped,” Proc. SPIE 5253, 394–397 (2003). [CrossRef]  

6. L. Yong, H. Dingfa, and J. Yong, “Flexible error-reduction method for shape measurement by temporal phase unwrapping: phase averaging method,” Appl. Opt. 51(21), 4945–4953 (2012). [CrossRef]   [PubMed]  
