Optica Publishing Group

Absolute phase retrieval based on fringe amplitude encoding without any additional auxiliary pattern

Open Access

Abstract

An absolute phase retrieval method based on fringe amplitude encoding is proposed. Unlike conventional intensity coding methods, which rely on time division multiplexing with multiple additional auxiliary patterns, the proposed fringe order encoding strategy uses overlapping codewords based on space division multiplexing. It directly encodes a different fringe amplitude for each period of the sinusoidal phase-shifting patterns and quantifies the amplitudes into four levels as codewords, generating space-division-multiplexed composite phase-shifting patterns, so the absolute phase can be retrieved without any additional auxiliary patterns. To improve the anti-interference capability of the method, a codeword extraction method based on image morphological processing is proposed to segment the grayscale. Consequently, both the phase-shifting sinusoidal deformed patterns and the single-frame space-division-multiplexed four-gray-level codewords for fringe order recognition can be extracted from the captured composite deformed patterns. A half-period single-connected-domain correction method is then proposed to correct the codewords. Moreover, to suppress jump errors, phase zero points are constructed to separate the positive and negative ranges of the phase, making the phase unwrapping process segmented. Experimental results demonstrate the feasibility and effectiveness of the proposed method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) [1–3] enables contactless reconstruction of 3D shape with high accuracy and fast measurement. It has been extensively used in industrial quality inspection, cultural relics protection, dentistry diagnosis and treatment, and so on [4]. Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP) are the two major FPP techniques, both of which reconstruct the 3D shape by projecting sinusoidal fringe patterns [5–7]. The full-period equal-step phase-shifting algorithm and arbitrary equal-step phase-shifting algorithms, such as the Carré algorithm [8] and the Stoilov algorithm [9], are the main algorithms utilized by PSP. Since the phase is typically calculated with the arctangent operation, it is wrapped within [-π, π). Consequently, phase unwrapping is essential to solve the phase ambiguity problem and obtain the continuous absolute phase.

The phase unwrapping methods proposed so far can be broadly divided into spatial phase unwrapping (SPU) [10–13] and temporal phase unwrapping (TPU) [14–16]. SPU relies on the phase continuity of neighboring pixels along the unwrapping path, and the choice of unwrapping path determines the reliability of the 3D reconstruction results. It requires that the phase jump on the path not exceed 2π and that the fringe order be sequential [17–20]. Due to the inevitable propagation of phase unwrapping errors, SPU is not suitable for measuring complex isolated objects. In contrast, TPU locates the fringe order by projecting additional encoded patterns [21], and each wrapped phase period receives a separate integer order, avoiding the propagation of phase unwrapping errors. By projecting N frames of sinusoidal fringes and M frames of additional patterns, TPU achieves high phase quality and measurement robustness while preserving the sinusoidal nature of the fringes. Among established TPU methods, the Gray-code encoding strategy [22–24], phase-code encoding strategy [25,26], three-frequency TPU strategy [27], and two-frequency heterodyne TPU strategy [28] are the most widely used, and each requires additional patterns to solve the fringe order. The number of additional patterns for the three-frequency, two-frequency heterodyne, and phase-code methods corresponds to three, two, and one additional sets of phase-shifted patterns, respectively, and the codeword blurring of these three methods increases with the number of fringe periods in the projected patterns. The Gray-code method encodes two grey levels as intensity-domain codewords, so the quantization error of the codewords is small and the robustness is high; however, as the fringe frequency increases, the projection efficiency decreases greatly. The periodic marking of high-density fringes remains a challenge for N + M coding patterns [29,30].
Unfortunately, the number of additional patterns must increase significantly to expand the range of fringe orders identified by codewords and to reduce the probability of codeword errors. To date, many absolute phase retrieval methods [31,32] have been investigated to reduce the number of additional encoding patterns in the expectation of improving measuring efficiency. An et al. [33] obtained six codewords from only one coding pattern by rotating the calculation sequences of the phase-shifting patterns within each fringe cycle, and then achieved a wide range of absolute phase measurements by employing geometric constraints. Noting that the calculation of phase step offsets in the Carré algorithm has point-to-point independence, Zhang et al. [34] employed the phase-shift step offsets in the phase period as the basis for determining the stage period, and proposed an absolute phase retrieval method based on the equal-step algorithm that obtains both the wrapped phase and the stage from four projected frames of phase-shifting fringes. To significantly reduce the number of projected frames in the light intensity domain, grey-level coding strategies [35–37] use multiple grey levels as codewords to achieve robust large-scale 3D measurement with only a few frames. Porras-Aguilar proposed a grey-level coding method [36] based on a space-filling curve design to overcome typical defocus errors, which greatly improves the robustness of grey-level coding methods. In contrast, the method proposed here differs from existing grey-level coding strategies by encoding the grey levels directly into the fringe amplitude intensities, eliminating the need for additional auxiliary patterns.

In this paper, an absolute phase retrieval method based on fringe amplitude encoding without any additional patterns is proposed for the full-period equal-step phase-shifting algorithm. Different from traditional TPU based on time division multiplexing, the coding method directly modulates the four-level grey code used to extract the fringe order onto the amplitude of the phase-shifting sinusoidal fringe patterns, forming composite phase-shifting fringe patterns for space division multiplexing. It directly encodes different fringe amplitudes for different periods and quantifies the fringe amplitudes into four levels as the codeword encoding strategy. The normalized modulation can then be constructed from the captured composite deformed patterns, and the fringe order information can be demodulated directly from it. Since the encoded parameter is the amplitude of the phase-shifting sinusoidal fringe pattern, unlike traditional intensity coding methods, the proposed method has better noise immunity in shaded regions and does not need any additional auxiliary pattern. Furthermore, a broadly applicable phase segmentation unwrapping method is employed to correct the jump error.

2. Principle

2.1 Phase-shifting profilometry

In general, phase-shifting profilometry obtains the changes of the object surface height by comparing the phase changes of the sinusoidal fringes between the projection and observation directions, typically using a full-period equal-step algorithm to solve for the phase information modulated by the object surface height. For the N-step PSP algorithm, the deformed fringes captured by the camera can be expressed as:

$${I_n}(x,y) = r(x,y)\left[ {a + b\cos \left( {\varphi (x,y) + \frac{{2\pi n}}{N}} \right)} \right]$$
where n represents the phase-shift index, n = 1, 2, …, N, $(x,y)$ is the pixel coordinate in the camera space, $r(x,y)$ is the surface reflectance distribution, $ar(x,y)$ represents the intensity of background light, $br(x,y)$ represents the modulation, and $\varphi (x,y)$ is the phase caused by the height of the object surface. It can be extracted by the following equation:
$$\varphi (x,y) = - \arctan \left[ {\frac{{\sum\limits_{n = 1}^{N} {{I_n}(x,y)\sin \left( {\frac{{2\pi n}}{N}} \right)} }}{{\sum\limits_{n = 1}^{N} {{I_n}(x,y)\cos \left( {\frac{{2\pi n}}{N}} \right)} }}} \right]$$

Because the arctangent operation in Eq. (2) only yields principal values, the phase is wrapped within [-π, π). Consequently, a phase-unwrapping algorithm is required to obtain the absolute phase $\phi (x,y)$. The relationship between $\varphi (x,y)$ and $\phi (x,y)$ can be expressed as:

$$\phi (x,y) = \varphi (x,y) + 2\pi k(x,y)$$
where $k(x,y)$ represents the distribution of the fringe order. Conventionally, $k(x,y)$ is determined by projecting some additional encoding patterns in various absolute phase unwrapping algorithms. In this paper, an absolute phase retrieval method based on fringe amplitude encoding is proposed to improve the measuring efficiency, which can simultaneously obtain both $\varphi (x,y)$ and $k(x,y)$ from the captured composite deformed patterns without any additional auxiliary patterns.
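As a concrete illustration, Eqs. (2) and (3) can be sketched in a few lines of NumPy (a minimal sketch; the function names are ours, not from the paper):

```python
import numpy as np

def wrapped_phase(patterns):
    """N-step wrapped phase of Eq. (2); `patterns` holds the N frames I_n."""
    I = np.asarray(patterns, dtype=float)
    N = I.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    return -np.arctan2(num, den)        # wrapped within [-pi, pi)

def absolute_phase(phi, k):
    """Eq. (3): unwrap with the per-pixel fringe order k(x,y)."""
    return phi + 2 * np.pi * k
```

Using `arctan2` rather than `arctan` of the quotient recovers the full [-π, π) range while matching the sign convention of Eq. (2).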

2.2 Proposed encoding principle

In N-step PSP, higher measuring accuracy and better error immunity are obtained when N is sufficiently large. To locate the fringe order, the amplitudes of the N phase-shifting sinusoidal fringe patterns are encoded into four-level codewords in the intensity domain. For the sake of measuring efficiency, N = 4 is selected in this paper to illustrate the basic principle of the proposed method. The corresponding four projected composite fringe patterns $I_n^p$ can be assigned as:

$$I_n^p({x^p},{y^p}) = a + b({x^p})\cos \left( {2\pi {f^p}{x^p} + \frac{{2\pi n}}{N}} \right)$$
where ${f^p}$ represents the frequency of the sinusoidal fringe pattern, a is the background light intensity, and $b({x^p})$ is the amplitude distribution along the phase-shifting direction, which is encoded into four levels as shown in Fig. 1(d). Figure 1(a) illustrates the normal sinusoidal fringe amplitude, and Fig. 1(b)-(c) show the traditional sinusoidal fringe projection pattern and its cutaway view, respectively. Figure 1(e)-(f) illustrate the composite projection pattern of the proposed method and its cutaway view. As shown in Fig. 1, the encoded parameter of the proposed method is the amplitude of the phase-shifting sinusoidal fringe pattern; compared with the traditional projected fringe pattern, the fringe amplitude of the proposed method varies from period to period.
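A sketch of how such composite patterns of Eq. (4) might be generated; the period length, background level, and the amplitude values assigned to the four code levels are our assumptions, not values from the paper:

```python
import numpy as np

def composite_patterns(width, height, T, levels, N=4, a=127.0,
                       amps=(30.0, 55.0, 80.0, 105.0)):
    """Composite phase-shifting patterns of Eq. (4): the amplitude b(x)
    is stepped per fringe period according to the 4-level code `levels`
    (one code element per period, as in Fig. 1(d))."""
    x = np.arange(width)
    b = np.array([amps[levels[i] - 1] for i in x // T])   # b(x) per column
    return [np.tile(a + b * np.cos(2 * np.pi * x / T + 2 * np.pi * n / N),
                    (height, 1))
            for n in range(1, N + 1)]
```

Each returned frame shares the same sinusoidal carrier but a period-wise stepped amplitude, so the amplitude code survives in all N frames.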

Fig. 1. (a)-(c) The projected fringe pattern of the traditional method. (d)-(f) The projected fringe pattern of the proposed method.

The encoding strategy of the codewords is the critical factor in identifying the fringe order. Different from conventional intensity coding methods, which are based on time division multiplexing with multiple additional auxiliary patterns, the proposed encoding strategy is codeword overlapping interaction based on space division multiplexing. The current codeword relies on the next adjacent codewords to determine its location in the fringe order, as long as any two adjacent amplitude levels in the above four-level amplitude encoding are different. The amplitude levels are encoded one by one according to the code-element LUT shown in Fig. 2(a), and every four adjacent code elements constitute a codeword subsequence. It can be seen in the codeword LUT that the first codeword subsequence {4,3,2,1} is appointed as the first fringe order, the second codeword subsequence {3,2,1,3} as the second fringe order, and the 25th codeword subsequence {4,3,4,3} as the 25th fringe order. In this way, since no repeated codewords exist in the LUT, all the fringe orders can be appointed as shown in Fig. 2(b). Based on this encoding strategy, the number of fringe orders that can be appointed is obtained by a simple permutation as follows:

$${n_k} = C_L^1 \times {(C_{L - 1}^1)^{sq - 1}}$$
where nk represents the maximum number of fringe orders specified by the LUT, L represents the number of amplitude levels, and sq is the length of the subsequence. In this paper, L and sq are both assigned to 4, so the value of nk is 108. Typically, the resolution of the projector is $1140 \times 912$ pixels, a single fringe period of around 20 pixels is sufficient to achieve high experimental accuracy, and the number of fringe orders needed to cover the full projection pattern is usually around 60, which implies that nk is sufficient for large-scale 3D measurements.
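The count in Eq. (5) can be checked directly by enumerating every admissible subsequence (a sketch; the enumeration is ours):

```python
from itertools import product

def valid_codewords(L=4, sq=4):
    """All length-sq codewords over L amplitude levels whose adjacent
    levels differ -- exactly the L * (L - 1)**(sq - 1) sequences of Eq. (5)."""
    return [w for w in product(range(1, L + 1), repeat=sq)
            if all(w[i] != w[i + 1] for i in range(sq - 1))]
```

For L = sq = 4 this enumeration yields the 108 subsequences quoted above.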

Fig. 2. The coding principle. (a) Codeword LUT. (b) Fringe order. (c) The encoded amplitude intensity.

To improve measuring efficiency, all codewords are modulated onto the required amplitude of the phase-shifting sinusoidal fringe patterns to form the corresponding composite phase-shifting fringe patterns. When the number of fringe orders k in the projection is specified as 32 and 108 respectively, the most common numbers of projection frames required by different conventional TPU methods are listed in Table 1, where the grey-level coding method uses 8 grey levels. Theoretically, with M frames of additional auxiliary patterns, Gray-code can locate $2^M$ fringe orders, and grey-level coding with p grey levels can locate $p^M$ fringe orders. The number of additional auxiliary patterns for the phase-coding, three-frequency, and two-frequency heterodyne methods is determined by the patterns needed for solving the phase. Table 1 shows that the proposed method requires zero additional auxiliary patterns and that its total number of projection patterns is significantly reduced compared to the other five conventional TPU methods. One of the composite phase-shifting patterns with 24 fringe orders is shown in Fig. 2(c).

Table 1. The projection patterns of different methods

Note that even though the last fringe near the edge of the valid measuring region has no corresponding codeword sequence, its order can be resolved from the previous four consecutive codewords. There are usually sharp height variations at object edges, which can make the proposed method less robust for codewords near these regions. During the decoding of fringe orders close to the edge, the current fringe order is obtained by adding or subtracting one from the adjacent fringe order on the side away from the edge. In addition, correct decoding requires that at least five consecutive codewords within the measurement range be correctly decoded, implying that the length of the codeword sequence looked up in the LUT is greater than the length of the encoded codeword subsequence. If five consecutive codewords can find the corresponding order in the lookup table, the decoding priority of these codewords is higher than that of the neighboring codewords; even if the neighboring codewords are wrong, they can be corrected according to the continuity of the codewords.
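The lookup step above might be sketched as follows, assuming the LUT is a dict mapping each four-element subsequence to its fringe order; the trailing periods without a full forward window are continued from the previous order, as described:

```python
def decode_orders(elements, lut, sq=4):
    """Assign a fringe order to each period from its sq-element forward
    window in the LUT; trailing edge periods fall back on order continuity."""
    orders = [lut[tuple(elements[i:i + sq])]
              for i in range(len(elements) - sq + 1)]
    while len(orders) < len(elements):
        orders.append(orders[-1] + 1)   # edge period: neighbor's order + 1
    return orders
```

A robustness check against five consecutive windows, as the paper requires, would wrap this lookup; the sketch shows only the core windowed decoding.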

The four frames of the composite phase-shifting fringe patterns are then projected respectively onto the object and the camera is used to capture the corresponding deformed patterns modulated by the object, which can be expressed as:

$$I_n^c(x,y) = r(x,y)\left[ {a + b(x)\cos \left( {\varphi (x,y) + \frac{{2\pi n}}{N}} \right)} \right]$$

The average intensity $ar(x,y)$ and the modulation distribution $b(x)r(x,y)$ can be extracted as follows:

$$ar(x,y) = \frac{1}{N}\sum\limits_{n = 1}^{N} {I_n^c(x,y)}$$
$$b(x)r(x,y) = \frac{2}{N}\sqrt {{{\left[ {\sum\limits_{n = 1}^{N} {I_n^c({x,y} )\sin \left( {\frac{{2\pi n}}{N}} \right)} } \right]}^2} + {{\left[ {\sum\limits_{n = 1}^{N} {I_n^c({x,y} )\cos \left( {\frac{{2\pi n}}{N}} \right)} } \right]}^2}}$$

To mitigate the effect of surface reflectivity on decoding, the normalized modulation is obtained by dividing the modulation by the average background intensity:

$$MC(x,y) = \frac{{b(x)r(x,y)}}{{ar(x,y)}} = \frac{{b(x)}}{a}$$

Since the reflectivity factor is removed from the modulation intensity according to the above equation and a is a constant, the $MC(x,y)$ can reflect the feature of the amplitude code $b(x )$. Additionally, the traditional phase-shifting sinusoidal deformed patterns ${I_n}(x,y)$ just as shown in Eq. (1) can be obtained from $I_n^c(x,y)$ by the following equation:

$${I_n}(x,y) = ar(x,y) + \frac{{b[{I_n^c(x,y) - ar(x,y)} ]}}{{a\,MC(x,y)}}$$

Consequently, the phase-shifting sinusoidal deformed patterns and the four gray-level codewords containing order information can be extracted simultaneously from the captured composite deformed patterns, and the phase $\varphi (x,y)$ caused by the height of the object surface can then be obtained with Eq. (2). Taking the four-step phase shift as an example, Fig. 3 shows the pattern extraction procedure. The four captured composite deformed patterns are shown in Fig. 3(a). The normalized modulation, which reflects the four-level codeword feature, is shown in Fig. 3(b). The extracted traditional sinusoidal deformed patterns are shown in Fig. 3(c), and the phase $\varphi (x,y)$ caused by the height of the object surface is shown in Fig. 3(d). As shown in Fig. 4(a)-(b), the captured composite deformed patterns are transformed by Eq. (10) to obtain the equal-amplitude In.
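Equations (7)–(10) amount to one pixel-wise routine, sketched below in NumPy (the design constants a and b of the projected pattern are assumed known to the decoder):

```python
import numpy as np

def extract_from_composite(Ic, a, b):
    """From N composite frames I_n^c recover ar (Eq. 7), b(x)r (Eq. 8),
    MC = b(x)/a (Eq. 9), and the equal-amplitude frames I_n (Eq. 10)."""
    Ic = np.asarray(Ic, dtype=float)
    N = Ic.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    ar = Ic.sum(axis=0) / N                                  # Eq. (7)
    s = (Ic * np.sin(2 * np.pi * n / N)).sum(axis=0)
    c = (Ic * np.cos(2 * np.pi * n / N)).sum(axis=0)
    br = (2.0 / N) * np.sqrt(s ** 2 + c ** 2)                # Eq. (8)
    MC = br / ar                                             # Eq. (9)
    In = ar + b * (Ic - ar) / (a * MC)                       # Eq. (10)
    return ar, br, MC, In
```

Because MC cancels the period-wise amplitude code, the recovered frames In carry a uniform amplitude b, which is what makes the standard N-step solver of Eq. (2) applicable afterwards.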

Fig. 3. Pattern extraction procedure. (a) Captured composite deformed patterns. (b) Normalized modulation distribution. (c) Traditional phase-shifting sinusoidal deformed patterns. (d) Wrapped phase.

Fig. 4. (a) The 880th column cutaway view of $I_1^c$ and $I_4^c$. (b) The 880th column cutaway view of I1 and I4.

In the decoding process, an intensity threshold segmentation method based on the grayscale histogram of $MC(x,y)$ yields an initial range for the four codewords. Shadow regions can merge the regions of different fringe orders into one connected region, so they must be removed to obtain the effective reconstruction region. In addition, to cope with objects with abrupt height changes, the surface contours of the objects are extracted in advance using the Sobel edge detection algorithm, and the extracted contours are then processed by morphological opening and closing operations to obtain the edge binary mask em(x,y). The em(x,y) has the value 1 at edge pixels and 0 at non-edge pixels. Therefore, the valid region $vr(x,y)$ can be generated by the following equation:

$$vr(x,y) = \left\{ \begin{array}{l} 1,\textrm{ }if\textrm{ }ar(x,y) > {{mean(ar(x,y))} / 2}\textrm{ }and\textrm{ }em(x,y) = 0\\ 0,\textrm{ }else \end{array} \right.$$
where $mean(ar(x,y))$ represents the average value of $ar(x,y)$. Here, the region where the background intensity is greater than half of the average background intensity and which is not part of an abrupt object edge is used as the valid region for the subsequent gray intensity threshold division. The grayscale histogram of $MC(x,y)$ in the valid region is shown in Fig. 5(a). From the grayscale levels and their corresponding pixel counts, a grayscale statistical curve is generated. The curve is smoothed to suppress the effect of its jaggedness on the threshold finding, and the local minimum values of the grayscale curve are then used as the grayscale thresholds to generate rough codewords, with statistical constraints on the number of low gray-level pixels, as illustrated in Fig. 5(b). It can be seen in Fig. 5(b) that three thresholds are obtained from the grayscale curve: th1, th2, and th3.
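A sketch of the valid-region mask of Eq. (11) and the threshold search described above; the histogram bin count and smoothing width are our assumptions:

```python
import numpy as np

def valid_region(ar, em):
    """Eq. (11): background above half its mean and not an edge pixel."""
    return ((ar > ar.mean() / 2) & (em == 0)).astype(int)

def histogram_thresholds(mc, vr, bins=256, smooth=7):
    """Local minima of the smoothed grayscale curve of MC in the valid
    region, used as the segmentation thresholds th1 < th2 < th3."""
    hist, edges = np.histogram(mc[vr > 0], bins=bins)
    curve = np.convolve(hist, np.ones(smooth) / smooth, mode='same')
    centers = 0.5 * (edges[:-1] + edges[1:])
    return [centers[i] for i in range(1, bins - 1)
            if curve[i] < curve[i - 1] and curve[i] <= curve[i + 1]]
```

On well-separated four-level data this returns exactly three thresholds, one in each valley between adjacent amplitude clusters.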

Fig. 5. The principle of the thresholding segmentation. (a) Grayscale histogram of modulation. (b) Grayscale curve of (a) after smoothing.

Due to the uneven reflectivity of the object surface and shadow occlusion, methods that rely on intensity encoding are prone to order confusion in the decoding process. In addition, some codewords at low-quality pixels caused by sharp surface changes are unreliable. To enhance the robustness of amplitude-intensity coding, this paper proposes a codeword overlap correction method based on image morphological processing techniques.

2.3 Principle of correcting the codeword

To improve the decoding efficiency and reduce the difficulty of morphological processing, this paper uses the threshold th2 to segment $MC(x,y)$ into the binary codeword template $t{c_1}(x,y)$, which separates codewords 3 and 4 from codewords 1 and 2, and then uses the thresholds th1 and th3 to generate the binary codeword template $t{c_2}(x,y)$, which marks codewords 2 and 3. The binary codeword templates are generated by the following equations:

$$t{c_1}(x,y) = \left\{ \begin{array}{l} 1,\textrm{ }if\textrm{ }MC(x,y) > \textrm{th2}\\ 0,\textrm{ }else \end{array} \right.$$
$$t{c_2}(x,y) = \left\{ \begin{array}{l} 1,\textrm{ }if\textrm{ th1} < MC(x,y) < \textrm{th3}\\ 0,\textrm{ }else \end{array} \right.$$

Then, a morphological closing operation is applied to the codeword templates before the correcting operation, using structural elements to smooth the contours of the image and connect isolated areas in $t{c_1}(x,y)$ and $t{c_2}(x,y)$. Taking the codeword template $t{c_1}(x,y)$ as an example, the detailed steps of the correcting method are as follows:

Step 1. Generate masks and label their single-connected domain. The wrapped phase can be segmented into single-connected regions according to the phase value ranges. Then, masks $bw(x,y)$ and $mw(x,y)$ are generated by the following equations:

$$bw(x,y) = \left\{ \begin{array}{l} 1,\textrm{ }if\textrm{ }\varphi (x,y) < 0\\ 0,\textrm{ }else \end{array} \right.$$
$$mw(x,y) = \left\{ \begin{array}{l} 1,\textrm{ }if\textrm{ }{{ - \pi } / 2} < \varphi (x,y) < {\pi / 2}\\ 0,\textrm{ }else \end{array} \right.$$

Since the masking operation is pixel-wise, the coordinate index $(x,y)$ will be omitted in the masks for simplicity. As shown in Fig. 6, the binary mask $bw$ covers the negative region of the wrapped phase, and mw covers the central phase region. Then, three half-period masks $y{m_n}$, n = 1, 2, 3, are obtained by using the mask vr to process bw and mw, which can be interpreted by the following equation:

$$\left\{ \begin{array}{l} y{m_1} = vr \times bw\\ y{m_2} = vr \times (1 - bw)\\ y{m_3} = vr \times mw \end{array} \right.$$

Fig. 6. Period segmentation. (a) Segment wrapped phase. (b) Mask bw. (c) Mask mw.

Accordingly, the connected components can be extracted from the binary images ymn, which give different labels to the disconnected areas of different periods.
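The masks of Eqs. (14)–(16) reduce to simple boolean operations, sketched below; the connected-component labeling itself can then be performed with a standard routine such as `scipy.ndimage.label` (our suggestion, not named in the paper):

```python
import numpy as np

def half_period_masks(phi, vr):
    """Masks of Eqs. (14)-(16): bw flags negative wrapped phase, mw the
    central half period (-pi/2, pi/2); ym1..ym3 restrict both to the
    valid region vr."""
    bw = (phi < 0).astype(int)                                  # Eq. (14)
    mw = ((phi > -np.pi / 2) & (phi < np.pi / 2)).astype(int)   # Eq. (15)
    ym1 = vr * bw
    ym2 = vr * (1 - bw)
    ym3 = vr * mw                                               # Eq. (16)
    return bw, mw, (ym1, ym2, ym3)
```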

Step 2. Match the same label region of the mask to the codeword template. The labeled ymn are multiplied separately with $t{c_1}(x,y)$. If the area of each label in the codeword template is greater than half the area of the connected domain with the same label in the labeled $y{m_n}$, the single-connected region of the codeword is considered to match the connected domain of the labeled ymn. As shown in Fig. 7, the matching results of ymn are pmn.

Fig. 7. The process of the correcting operation. (a) Mask ym1, ym2, and ym3. (b) Results of labelling the single connected domain of (a). (c) Results of processing (b) with the mask $tc$. (d) The results of matching (c) to the connected domain of (a) based on the same label.

Step 3. Verify the correctness to ensure that only one codeword exists per period. The matching results pm1 and pm2 are independent of each other within a period and each accounts for only half a period of the phase domain, which may lead to half-period codeword errors. To address this problem, the matching result pm3 is constructed as a reference value to correct the code templates processed by bw. Process ym3 label region by label region: if pm1 and pm2 have different values at a label region's location, set the values of the single-connected domains at that location in both to the value of pm3 there.

Eventually, the two single-connected codeword templates pm1 and pm2, each occupying half a period of the phase domain, are added to obtain the corrected codeword template $t{c_1}(x,y)$. Moreover, the closing and hole-filling operations on the codeword templates remove the shadow areas between the different connected domains in each order period, resulting in a corrected codeword template containing codewords 3 and 4. The same operation is performed on the codeword template $t{c_2}(x,y)$ to obtain codewords 2 and 3. The codeword distribution $dc(x,y)$ can then be extracted according to the following equation:

$$dc(x,y) = \left\{ \begin{array}{l} 1,\textrm{ }if\textrm{ }t{c_1}(x,y) = 0\textrm{ }and\textrm{ }t{c_2}(x,y) = 0\\ 2,\textrm{ }if\textrm{ }t{c_1}(x,y) = 0\textrm{ }and\textrm{ }t{c_2}(x,y) = 1\\ 3,\textrm{ }if\textrm{ }t{c_1}(x,y) = 1\textrm{ }and\textrm{ }t{c_2}(x,y) = 1\\ 4,\textrm{ }if\textrm{ }t{c_1}(x,y) = 1\textrm{ }and\textrm{ }t{c_2}(x,y) = 0 \end{array} \right.$$

The half-period single-connected domain correction method unifies the codewords within each one-period connected domain and solves the codeword confusion caused by uneven reflectivity. After obtaining the distribution map of the four codewords, the fringe order is progressively developed according to the position of the current codeword in the LUT generated at encoding time. As shown in Fig. 8(a)-(c), the final fringe order is successfully resolved by the proposed method. In particular, to guarantee that each connected domain lies in one period, the order with the most pixels in the current connected domain is used to fill the entire single-connected domain.
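Eq. (17) combines the two corrected templates into the codeword map; a direct sketch:

```python
import numpy as np

def codeword_map(tc1, tc2):
    """Four-level codeword distribution dc(x,y) of Eq. (17)."""
    tc1 = np.asarray(tc1)
    tc2 = np.asarray(tc2)
    dc = np.empty(tc1.shape, dtype=int)
    dc[(tc1 == 0) & (tc2 == 0)] = 1
    dc[(tc1 == 0) & (tc2 == 1)] = 2
    dc[(tc1 == 1) & (tc2 == 1)] = 3
    dc[(tc1 == 1) & (tc2 == 0)] = 4
    return dc
```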

Fig. 8. The proposed decoding method. (a) The codeword template dc(x, y). (b) Fringe order. (c) The 720th column cutaway view of (a) and (b).

In actual measurements, even if the fringe order is corrected by the phase half-period template, shadow areas or noise in the wrapped phase map can cause the period edges of the fringe order and the 2π jump edges of the wrapped phase to be misaligned, inevitably resulting in jump errors.

2.4 Principle of jump error correction

There are two main types of jump error [21] in absolute phase retrieval that cannot be easily eliminated by median filtering: random errors generated by the wrapped phase itself near the 2π jump position, and mismatch errors between the order edge and the phase period edge. In this paper, the second type is well suppressed by constructing the fringe orders based on filling the half-period single-connected domains of the segmented phase, so attention must be paid to suppressing the first type. A method of correcting the jump error based on locating phase zero points is proposed, which holds for other phase unwrapping methods as well.

A phase period is divided into two halves by the constructed phase zero point. Within one fringe order, phase values in the left half that are greater than zero have 2π subtracted, and phase values in the right half that are less than zero have 2π added. Then, the fringe order is directly added to both halves of the wrapped phase. As can be seen in Fig. 9, there is an edge phase jump error due to the arctangent calculation in the right half of the fringe order; this column jump error produces values less than zero, which can be eliminated by adding 2π. Determining the position of the zero point is crucial for correcting phase jump errors. However, locating the phase zero point only from the sign change of $bw$ can easily cause the 2π jump boundary of the phase period to be misclassified as the zero point. Therefore, the zero point is taken where the $bw$ jump position coincides with a value of 1 on $mw$, as shown in Fig. 9(a). Furthermore, phase noise can cause multiple zero points within a fringe order; in this case, all the pixels between the first and last zero points are treated as one zero point.
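The correction can be sketched per pixel as follows, where `half` marks whether a pixel lies left (−1) or right (+1) of its period's constructed zero point (our encoding of the rule, assuming the phase rises from −π to π across a period):

```python
import numpy as np

def correct_jump_errors(phi, k, half):
    """Shift wrapped-phase outliers on the wrong side of the zero point
    by 2*pi, then add the fringe order as in Eq. (3)."""
    phi = np.asarray(phi, dtype=float).copy()
    phi[(half < 0) & (phi > 0)] -= 2 * np.pi   # left half should be negative
    phi[(half > 0) & (phi < 0)] += 2 * np.pi   # right half should be positive
    return phi + 2 * np.pi * np.asarray(k)
```

Because each half-period is corrected before the order is added, a noisy pixel wrapped to the wrong side of the 2π boundary no longer produces a full-period spike in the unwrapped result.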

Fig. 9. The principle of jump error correction. (a) Positioning the zero position. (b) The 720th column sectional views of unwrapped phase after correcting jump error.

Finally, a phase-to-height mapping algorithm [38,39] can be used to obtain 3D reconstruction from the absolute phase.

3. Experiments

To demonstrate the validity of the proposed method, a measurement system is developed using a DLP LightCrafter 4500 projector with a resolution of $1140 \times 912$ pixels, a projection rate of 120 fps, and the ability to project sinusoidal fringes at 8-bit depth, together with a GEV-B1610M CCD camera with a resolution of $1624 \times 1236$ pixels, as shown in Fig. 10. The projected fringe patterns have a fringe period of 24 pixels and the phase change direction is vertical. The experiments verify that the proposed method can efficiently achieve absolute phase retrieval in 3D reconstruction without additional patterns, while ensuring high reconstruction accuracy.

Fig. 10. Measurement system.

3.1 Measurement on continuous objects

Firstly, two isolated continuous objects as shown in Fig. 11 are measured with four frames of the traditional PSP method, one frame of the traditional FTP method, and four frames of the proposed method for comparison. In this experiment, both the PSP method and the FTP method project sinusoidal fringes with a uniform intensity amplitude comparable to that of the fringes projected by the proposed method.

Fig. 11. (a) Wrapped phase obtained by PSP. (b) Wrapped phase obtained by FTP. (c) Wrapped phase obtained by proposed method. (d) Reconstruction result by PSP. (e) Reconstruction result by FTP. (f) Reconstruction result by the proposed method.

Figure 11(a)-(c) show the wrapped phase extracted by PSP, FTP, and the proposed method, respectively, where the fundamental frequency of FTP is extracted by rectangular window filtering. Figure 11(d)-(f) show the corresponding reconstruction results. Since only the accuracy of the phase calculation is of concern, the fringe orders for PSP and FTP are all extracted by the proposed method. The root-mean-square errors (RMSEs) of the reconstruction results of the proposed method and FTP relative to PSP are calculated: the RMSE of the proposed method is 0.0875 mm and that of FTP is 0.4038 mm. Figure 12(a) shows the profiles of row 577 of the results reconstructed by the different methods, and a partial enlargement is shown in Fig. 12(b). It can be clearly seen in Fig. 12(b) that the reconstruction result of the proposed method in the dorsal region of the turtle model is closer to that of PSP than the FTP result is. The experimental results demonstrate that the accuracy of the proposed method is close to that of PSP and much higher than that of FTP. For absolute phase retrieval, the proposed method is superior to projecting sinusoidal fringe patterns plus M frames of encoded patterns, achieving absolute measurements without additional auxiliary patterns while maintaining accuracy.

Fig. 12. (a) The profiles of row 577. (b) Partial enlargement of (a).

3.2 Measurement on isolated complex objects

Furthermore, several isolated complex objects with abrupt edges, shown in Fig. 13, are chosen to verify the validity of the proposed method. Figure 13(a) displays the four captured composite deformed patterns. Then, the modulation in Fig. 13(b) and the phase-shifting sinusoidal deformed patterns are simultaneously obtained from the composite deformed patterns by pixel-wise calculation. The wrapped phase in Fig. 13(c) is solved from the phase-shifting sinusoidal deformed patterns. As illustrated in Figs. 13(d)-(e), the fringe order can be assigned from the position of the codewords in the LUT, which verifies the validity of the method. In addition, areas of codeword confusion are well corrected. Ultimately, the method proposed in this paper successfully extracts the codewords from the composite deformed patterns by modulation-based decoding.
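The pixel-wise extraction of the modulation follows the standard N-step relations: the temporal mean of the composite patterns gives a·r(x, y), the quadrature sums give b(x)·r(x, y), and their ratio is the normalized modulation M_C = b(x)/a. A minimal NumPy sketch (the function name and the small division guard are our own):

```python
import numpy as np

def modulation_map(patterns):
    """Demodulate the normalized modulation M_C = b(x)/a from N captured
    phase-shifting composite patterns I_n^c (n = 1..N)."""
    stack = np.asarray(patterns, dtype=float)   # shape (N, H, W)
    N = stack.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    ar = stack.mean(axis=0)                              # a * r(x, y)
    s = (stack * np.sin(2 * np.pi * n / N)).sum(axis=0)
    c = (stack * np.cos(2 * np.pi * n / N)).sum(axis=0)
    br = (2.0 / N) * np.sqrt(s ** 2 + c ** 2)            # b(x) * r(x, y)
    return br / np.maximum(ar, 1e-12)                    # M_C = b(x) / a
```

Because the surface reflectivity r(x, y) cancels in the ratio, the recovered M_C carries only the encoded amplitude levels, which is what makes the codewords separable by thresholding.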

Fig. 13. Measurement of complex isolated objects. (a) Composite deformed patterns. (b) Modulation. (c) Wrapped phase. (d) Codewords template. (e) Fringe order.

Figure 14 shows the 3D shapes of the objects reconstructed by the different methods. As shown in Figs. 14(a)-(c), the depth map reconstructed with the conventional Gray-code encoding strategy clearly reveals many jump errors, which can be corrected by both Wang's Gray-code method [24] and the proposed method. Taking the reconstructed depth of Wang's Gray-code method as the quasi-true value, the RMSE of the conventional Gray-code encoding strategy without jump-error correction is 1.7500 mm, while that of the proposed method is 0.0633 mm. Compared with the conventional Gray-code encoding strategy, the proposed method suppresses the jump errors efficiently, so the reconstruction result is smoother and closer to the quasi-true value. The low modulation inevitably introduces more random measurement noise, which may reduce the accuracy of the proposed method compared with Wang's Gray-code method. However, the number of projected patterns in the proposed method is significantly reduced compared with the N + 6 frames required by Wang's method, resulting in a significant increase in absolute measurement efficiency with only a modest compromise in accuracy. In this experiment, with 4 projected phase-shifting frames, the projection efficiency of the proposed method is improved by a factor of 1.5 compared with Wang's method.
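The frame-budget arithmetic behind the stated efficiency gain can be checked directly; this small sketch (the function name is our own) assumes only that Wang's method needs N + 6 frames while the proposed method needs the N phase-shifting frames alone:

```python
def projection_counts(n_phase_steps):
    """Compare frame budgets: the proposed method projects only the N
    composite phase-shifting frames, while Wang's Gray-code method needs
    N + 6 frames; the relative saving is the efficiency improvement."""
    proposed = n_phase_steps
    wang = n_phase_steps + 6
    gain = (wang - proposed) / proposed  # relative improvement factor
    return proposed, wang, gain

print(projection_counts(4))  # (4, 10, 1.5)
```

For N = 4 this reproduces the factor-of-1.5 improvement quoted above (4 frames instead of 10).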

Fig. 14. Reconstructed depth maps of the complex isolated objects. (a) Wang's Gray-code method. (b) Conventional Gray-code encoding strategy. (c) Proposed method.

Finally, the robustness of the proposed method is verified by measuring a standard stair, and Fig. 15 shows the corresponding reconstruction process. Figures 15(a)-(f) show one captured pattern, the wrapped phase, the modulation, the template codewords, the fringe order, and the reconstructed result, respectively. Near the edge of a shaded area, only a small part of a codeword may be present, but it can be corrected based on the positions of the adjacent consecutive codewords in the LUT: the fringe order is recovered efficiently by simply adding or subtracting 1 from the order solved from the adjacent consecutive codewords. After this simple order correction, the shape of objects with discontinuous surfaces can be successfully reconstructed. The height of each step of the standard stair is 29 mm, and the reconstruction result of the proposed method gives an average step height of approximately 29.1714 mm. It can be concluded that the proposed encoding strategy based on space division multiplexing is feasible for the measurement of standard stepped objects.
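The add-or-subtract-1 correction for fragmentary codewords at shadow edges can be sketched as follows; this is a hypothetical one-dimensional illustration over a sequence of per-period orders (the function name, the `valid` flag, and the assumption that periods are consecutive are our own):

```python
def correct_partial_codeword_order(orders, valid):
    """Where only a fragment of a codeword survives (valid == False), infer
    its fringe order from the nearest reliably decoded neighbor: one more
    than a complete codeword to its left, one less than one to its right."""
    corrected = list(orders)
    for i, ok in enumerate(valid):
        if ok:
            continue
        left = next((j for j in range(i - 1, -1, -1) if valid[j]), None)
        right = next((j for j in range(i + 1, len(valid)) if valid[j]), None)
        if left is not None:
            corrected[i] = orders[left] + 1   # one period right of the neighbor
        elif right is not None:
            corrected[i] = orders[right] - 1  # one period left of the neighbor
    return corrected

print(correct_partial_codeword_order([3, 0, 5], [True, False, True]))  # [3, 4, 5]
```

The actual correction operates on the 2D codeword template via the LUT, but the underlying rule is this ±1 propagation from adjacent consecutive codewords.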

Fig. 15. Stair reconstruction. (a) One of the captured composite deformed patterns. (b) Wrapped phase. (c) Modulation. (d) Codewords template. (e) Fringe order. (f) Reconstructed result.

4. Conclusion

An absolute phase retrieval method based on fringe amplitude encoding without any additional auxiliary pattern is proposed. By exploiting amplitude modulation, the four-level code used to extract the fringe order is modulated onto the amplitude of the phase-shifting sinusoidal fringe patterns to form composite phase-shifting fringe patterns. To reduce the decoding difficulty, an encoding strategy based on space division multiplexing is adopted, and a lookup table is built to decode the codewords and achieve absolute phase retrieval. Consequently, the four-level codewords can be demodulated from the modulation of the captured composite deformed patterns. Furthermore, morphological processing is employed to segment the modulation, and the segmentation result is matched and filled by means of the half-period single-connected phase domain to generate a codeword template whose period edges roughly coincide with those of the wrapped phase. In practice, the proposed method decodes the codewords accurately based on the LUT to obtain the fringe order, which is combined with a segmented phase correction to suppress jump errors. Compared with conventional TPU methods, the proposed method enables efficient 3D measurement by simultaneously obtaining the wrapped phase and the fringe order from the composite deformed patterns, without projecting any additional auxiliary patterns. The feasibility and effectiveness of the proposed method have been verified by successfully measuring multiple isolated continuous objects and multiple isolated complex objects with abrupt edges.

Funding

National Natural Science Foundation of China (No. 62375188).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data will be made available on request.

References

1. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: A review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]  

2. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

3. S. Van der Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016). [CrossRef]  

4. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

5. C. Zuo, S. Feng, L. Huang, et al., “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

6. X. Su and W. Chen, “Fourier transform profilometry: A review,” Opt. Lasers Eng. 35(5), 263–284 (2001). [CrossRef]  

7. Y. Wu, Y. Cao, Z. Huang, et al., “Improved composite Fourier transform profilometry,” Opt. Laser Technol. 44(7), 2037–2042 (2012). [CrossRef]  

8. K. Qian, F. Shu, and X. Wu, “Determination of the best phase step of Carré algorithm in phase shifting interferometry,” Meas. Sci. Technol. 11(8), 1220–1223 (2000). [CrossRef]  

9. G. Stoilov and T. Dragostinov, “Phase-stepping interferometry: five-frame algorithm with an arbitrary step,” Opt. Lasers Eng. 28(1), 61–69 (1997). [CrossRef]  

10. D. J. Bone, “Fourier fringe analysis: the two-dimensional phase unwrapping problem,” Appl. Opt. 30(25), 3627–3632 (1991). [CrossRef]  

11. S. Li, X. Wang, X. Su, et al., “Two-dimensional wavelet transform for reliability-guided phase unwrapping in optical fringe pattern analysis,” Appl. Opt. 51(12), 2026–2034 (2012). [CrossRef]  

12. K. Chen, J. T. Xi, and Y. G. Yu, “Quality-guided spatial phase unwrapping algorithm for fast three-dimensional measurement,” Opt. Commun. 294, 139–147 (2013). [CrossRef]  

13. H. An, Y. Cao, H. Wu, et al., “Spatial-temporal phase unwrapping algorithm for fringe projection profilometry,” Opt. Express 29(13), 20657–20672 (2021). [CrossRef]  

14. J. M. Huntley and H. Saldner, “Temporal phase-unwrapping algorithm for automated interferogram analysis,” Appl. Opt. 32(17), 3047–3052 (1993). [CrossRef]  

15. Y. Wan, Y. Cao, X. Liu, et al., “High-frequency color-encoded fringe-projection profilometry based on geometry constraint for large depth range,” Opt. Express 28(9), 13043–13058 (2020). [CrossRef]  

16. C. Zuo, J. Qian, S. Feng, et al., “Deep learning in optical metrology: a review,” Light: Sci. Appl. 11(1), 39 (2022). [CrossRef]  

17. D. Ghiglia and L. A. Romero, “Minimum Lp-norm two-dimensional phase unwrapping,” J. Opt. Soc. Am. A 13(10), 1999–2013 (1996). [CrossRef]  

18. S. Lian, H. Yang, and H. Kudo, “Simple phase unwrapping method with continuous convex minimization,” Opt. Express 30(18), 33395–33411 (2022). [CrossRef]  

19. S. Zhang, X. Li, and S. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46(1), 50–57 (2007). [CrossRef]  

20. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42(3), 245–261 (2004). [CrossRef]  

21. C. Zuo, L. Huang, M. Zhang, et al., “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

22. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  

23. Z. Wu, W. Guo, Y. Li, et al., “High-speed and high-efficiency three-dimensional shape measurement based on Gray-coded light,” Photonics Res. 8(6), 819–829 (2020). [CrossRef]  

24. L. Wang, Y. Cao, and H. An, “Gray-code fringe order jump error self-correction based on shifted phase encoding for phase measuring profilometry,” Opt. Commun. 524, 128763 (2022). [CrossRef]  

25. Y. Wang and S. Zhang, “Novel phase-coding method for absolute phase retrieval,” Opt. Lett. 37(11), 2067–2069 (2012). [CrossRef]  

26. H. An, Y. Cao, Y. Zhang, et al., “Phase-Shifting Temporal Phase Unwrapping Algorithm for High-Speed Fringe Projection Profilometry,” IEEE Trans. Instrum. Meas. 72, 1–9 (2023). [CrossRef]  

27. H. Li, Y. Cao, Y. Wan, et al., “An improved temporal phase unwrapping based on super-grayscale multi-frequency grating projection,” Opt. Lasers Eng. 153, 106990 (2022). [CrossRef]  

28. H. Zhang, Y. Cao, H. Li, et al., “Real-time computer-generated frequency-carrier Moire profilometry with three-frequency heterodyne temporal phase unwrapping,” Opt. Lasers Eng. 161(1), 109201 (2023). [CrossRef]  

29. Y. Zheng, Y. Jin, M. Duan, et al., “Joint coding strategy of the phase domain and intensity domain for absolute phase retrieval,” IEEE Trans. Instrum. Meas. 70, 1–11 (2021). [CrossRef]  

30. H. Wu, Y. Cao, Y. Dai, et al., “Ultra-fast 3D imaging by a big codewords space division multiplexing binary coding,” Opt. Lett. 48(11), 2793–2796 (2023). [CrossRef]  

31. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107(1), 28–37 (2018). [CrossRef]  

32. Y. Hu, M. Duan, Y. Jin, et al., “Shading-based absolute phase unwrapping,” Opt. Lett. 46(8), 1955–1958 (2021). [CrossRef]  

33. H. An, Y. Cao, N. Yang, et al., “Absolute phase retrieval using one coding pattern for the dynamic 3-D measurement,” Opt. Lasers Eng. 159, 107213 (2022). [CrossRef]  

34. Y. Zhang, N. Fan, Y. Wu, et al., “Four-pattern, phase-step non-sensitive phase shifting method based on Carré algorithm,” Measurement 171(6), 108762 (2021). [CrossRef]  

35. R. Porras-Aguilar, K. Falaggis, and R. Ramos-Garcia, “Optimum projection pattern generation for grey-level coded structured light illumination systems,” Opt. Lasers Eng. 91, 242–256 (2017). [CrossRef]  

36. R. Porras-Aguilar and K. Falaggis, “Absolute phase recovery in structured light illumination systems: sinusoidal vs. intensity discrete patterns,” Opt. Lasers Eng. 84, 111–119 (2016). [CrossRef]  

37. R. Porras-Aguilar, K. Falaggis, and R. Ramos-Garcia, “Error correcting coding-theory for structured light illumination systems,” Opt. Lasers Eng. 93, 146–155 (2017). [CrossRef]  

38. Q. Ma, Y. Cao, C. Chen, et al., “Intrinsic feature revelation of phase-to-height mapping in phase measuring profilometry,” Opt. Laser Technol. 108, 46–52 (2018). [CrossRef]  

39. Y. Xiao, Y. Cao, and Y. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012). [CrossRef]  



Tables (1)

Table 1. The projection patterns of different methods

Equations (17)

(1) $I_n(x,y) = r(x,y)\left[a + b\cos\left(\varphi(x,y) + 2\pi n/N\right)\right]$

(2) $\varphi(x,y) = \arctan\left[\dfrac{\sum_{n=1}^{N} I_n(x,y)\sin(2\pi n/N)}{\sum_{n=1}^{N} I_n(x,y)\cos(2\pi n/N)}\right]$

(3) $\phi(x,y) = \varphi(x,y) + 2\pi k(x,y)$

(4) $I_n^p(x^p,y^p) = a + b(x^p)\cos\left(2\pi f^p x^p + 2\pi n/N\right)$

(5) $n_k = C_L^1 \times \left(C_L^1 - 1\right)^{q-1}$

(6) $I_n^c(x,y) = r(x,y)\left[a + b(x)\cos\left(\varphi(x,y) + 2\pi n/N\right)\right]$

(7) $a\,r(x,y) = \left[\sum_{n=1}^{N} I_n^c(x,y)\right] / N$

(8) $b(x)\,r(x,y) = \dfrac{2}{N}\sqrt{\left[\sum_{n=1}^{N} I_n^c(x,y)\sin(2\pi n/N)\right]^2 + \left[\sum_{n=1}^{N} I_n^c(x,y)\cos(2\pi n/N)\right]^2}$

(9) $M_C(x,y) = \dfrac{b(x)\,r(x,y)}{a\,r(x,y)} = \dfrac{b(x)}{a}$

(10) $I_n(x,y) = a\,r(x,y) + \dfrac{b\left[I_n^c(x,y) - a\,r(x,y)\right]}{a\,M_C(x,y)}$

(11) $vr(x,y) = \begin{cases} 1, & \text{if } a\,r(x,y) > \operatorname{mean}\left(a\,r(x,y)\right)/2 \text{ and } em(x,y) = 0 \\ 0, & \text{else} \end{cases}$

(12) $tc_1(x,y) = \begin{cases} 1, & \text{if } M_C(x,y) > th_2 \\ 0, & \text{else} \end{cases}$

(13) $tc_2(x,y) = \begin{cases} 1, & \text{if } th_1 < M_C(x,y) < th_3 \\ 0, & \text{else} \end{cases}$

(14) $bw(x,y) = \begin{cases} 1, & \text{if } \varphi(x,y) < 0 \\ 0, & \text{else} \end{cases}$

(15) $mw(x,y) = \begin{cases} 1, & \text{if } -\pi/2 < \varphi(x,y) < \pi/2 \\ 0, & \text{else} \end{cases}$

(16) $\begin{cases} ym_1 = vr \times bw \\ ym_2 = vr \times (1 - bw) \\ ym_3 = vr \times mw \end{cases}$

(17) $dc(x,y) = \begin{cases} 1, & \text{if } tc_1(x,y) = 0 \text{ and } tc_2(x,y) = 0 \\ 2, & \text{if } tc_1(x,y) = 0 \text{ and } tc_2(x,y) = 1 \\ 3, & \text{if } tc_1(x,y) = 1 \text{ and } tc_2(x,y) = 1 \\ 4, & \text{if } tc_1(x,y) = 1 \text{ and } tc_2(x,y) = 0 \end{cases}$
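The threshold masks tc1 and tc2 and the four-level codeword template dc combine into a few lines of array code; a minimal NumPy sketch (the function name is our own, and the threshold values th1 < th2 < th3 are assumed to come from the histogram-based segmentation of the modulation):

```python
import numpy as np

def decode_codewords(mc, th1, th2, th3):
    """Map the normalized modulation M_C to four-level codewords dc via the
    two threshold masks tc1 (M_C > th2) and tc2 (th1 < M_C < th3)."""
    tc1 = mc > th2
    tc2 = (mc > th1) & (mc < th3)
    dc = np.empty(mc.shape, dtype=int)
    dc[~tc1 & ~tc2] = 1   # below th1
    dc[~tc1 &  tc2] = 2   # between th1 and th2
    dc[ tc1 &  tc2] = 3   # between th2 and th3
    dc[ tc1 & ~tc2] = 4   # above th3
    return dc

mc = np.array([0.1, 0.3, 0.5, 0.7])
print(decode_codewords(mc, 0.2, 0.4, 0.6))  # [1 2 3 4]
```

With ordered thresholds, the two binary masks partition the modulation axis into four disjoint bands, one per amplitude level, so each pixel receives exactly one codeword.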