
Phase correction strategy based on structured light fringe projection profilometry


Abstract

Fringe projection profilometry based on structured light has been widely used in 3-D vision due to its advantages of simple structure, good robustness, and high speed. The principle of this technique is to project multiple orders of stripes onto the object while the camera captures the deformed stripe maps. Phase unwrapping and depth-map calculation are important steps, but in practice phase ambiguity is prone to occur at the edges of the object. In this paper, an adaptive phase segmentation and correction (APSC) method applied after phase unwrapping is proposed. To effectively distinguish the stable and unstable areas of the phase, a boundary identification method is proposed to obtain the structural mask of the phase, and a phase compensation method is proposed to improve the phase accuracy. Finally, we obtain the 3-D reconstruction result based on the corrected phase. Experimental results verify the feasibility and effectiveness of this method.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) is an effective non-contact measurement method that is widely used in 3-D applications such as medical imaging, industrial inspection, and virtual reality. The primary computational task of FPP is to unwrap the phase. There are two classes of phase unwrapping methods: spatial phase unwrapping (SPU) and temporal phase unwrapping (TPU). TPU is more advantageous when measuring isolated objects or discontinuous complex surfaces. Fourier transform profilometry [1,2] and wavelet transform profilometry [3-7] are representative phase extraction algorithms based on single-frame stripes, while phase-shift profilometry [8-12] is the representative algorithm based on multi-frame stripes. In practice, TPU is affected by several errors that degrade the reconstruction accuracy: one is the random error caused by the optical imaging mechanism and environmental noise, and the other is the phase ambiguity caused by the arctangent calculation and the fringe order.

For phase noise, Rathjen [13] and Servin et al. [14] identified Gaussian noise as the leading cause of phase noise and summarized the phase error model. To reduce phase noise and improve the accuracy of 3-D reconstruction, many improved phase unwrapping methods have been developed over the past decades. The first class reduces data errors by enhancing the coding strategy before the phase is extracted. The second class corrects the errors in the phase that has already been obtained.

Scholars have proposed many methods to improve the coding strategy. Zuo et al. [15] showed that the phase error is inversely proportional to the modulation strength of the captured stripes and to the signal-to-noise ratio. Zhang et al. proposed a Complementary Gray Code (CGC) method [16] to reduce the phase jump error; it solves the decoding error at the object boundary by projecting an additional complementary Gray code pattern, at the cost of measurement efficiency. On this basis, the Cyclic Complementary Gray Code (CCGC) method [17] and the Shifted Gray Code (SGC) method [18] were proposed; with the same number of projection patterns, they further extend the range of unambiguous phase measurement and improve the calculation speed, but the measurement range remains limited. Cong et al. [19] embedded a set of sparse markers in the pattern to facilitate phase unwrapping and obtain the absolute phase without any additional pattern. Although the method is friendly to isolated surfaces, the embedded markers must be carefully designed and processed. Zhang et al. [20] introduced a reference-guided phase unwrapping algorithm that uses the first unwrapped phase map to complete the unwrapping of all other phase maps; this method requires a simple test scenario and tiny height variations on the object's surface. An et al. [21] combined SPU and TPU to retrieve the absolute phase. Their algorithm divides the wrapped phase into several regions and determines the fringe order of each region based on reliable points. It provides better measurement robustness for complex surfaces but requires further processing of the stripe pattern to obtain valid pixels. Yang et al. [22] proposed a speckle-assisted four-step phase-shifting 3-D measurement method, which uses a four-step phase-shifting stripe pattern and speckle to eliminate phase ambiguity; it can effectively solve the 3-D reconstruction problem of steep target objects.

In addition to improving the coding strategy, several methods have been proposed to correct errors in the acquired phase. Guo et al. [23] proposed a least-squares method to eliminate the effect on the phase calculation of stripe harmonics caused by various unfavorable factors. Chen et al. [24] identified invalid points by comparing least-squares fits of the unwrapped phase at different stripe frequencies; however, some weak points can affect the least-squares fitting errors and the experimental results. Deng et al. [25] proposed an edge-preserving correction strategy, which preserves the object's edges during the correction process and corrects errors by eight-neighborhood filtering; the algorithm effectively reduces the phase noise. An et al. [26] introduced an absolute phase retrieval framework using geometric constraints, which can obtain the phase from a single shot. However, the method cannot handle surface depth variations beyond 2π in the phase domain, which significantly limits the measurement depth range, and the FTP approach it relies on usually requires high-frequency patterns. Song et al. [27] proposed a 3-D global phase filtering (3D-GPF) method, which employs six-step phase-shifting images to obtain the phase and effectively protects the structural information. Zhang [28,29] and Song et al. [30] removed noise points by phase monotonic unwrapping; this can eliminate some invalid points and suppress streak noise, but it does not apply to discontinuous test scenarios and does not correct the jump error. Feng et al. [31] identified and removed outliers using the phase relationship of neighboring pixels and Gaussian filtering. Zheng et al. [32] designed an adaptive median filter to attenuate the neighborhood contamination problem; however, the framework requires multiple iterations of different median filters, so real-time processing is limited.

To address the problem of phase error, an adaptive phase segmentation and correction (APSC) method is proposed in this paper. Firstly, a segmentation method is proposed to obtain a structural mask version of the phase, which distinguishes stable regions from unstable regions. Then, the stable region is optimized by adaptive filtering, and the phase of the unstable region is compensated using the stable data. The corrected phase is used as reference data, and the structural mask version of the phase is updated. Finally, the complete corrected phase is obtained, and the 3-D shape of the measured object is computed according to the coordinate-system transformation relation. The rest of the paper is organized as follows. Section 2 explains the causes of phase errors and the limitations of the conventional phase computation method. Section 3 describes the proposed phase unwrapping algorithm in detail. Section 4 validates the algorithm by two experiments: accuracy analysis and 3-D reconstruction. Finally, Section 5 summarizes the paper, including the advantages and limitations of the proposed algorithm.

2. Methods

2.1 Gray code-assisted phase shift technique

Structured-light phase shifting is a computer vision technique used for 3-D reconstruction. It projects specific structured light onto the surface of the measured object, records the phase difference after the light is reflected from the surface, and finally calculates the 3-D shape of the object. Specifically, phase shifting has two main steps: data acquisition and calculation. First, specific structured light, such as stripes or grids, is projected onto the object, and a sequence of deformed images is acquired by the camera. The 3-D shape of the object surface is then calculated from the phase difference. Phase shifting has a wide range of applications in 3-D reconstruction, face recognition, medical imaging, and other fields, and with the continuous development of the technology its accuracy and speed keep improving. The four-step phase-shifting sinusoidal stripes used in this paper can be expressed as:

$$\left\{ \begin{array}{l} {I_1}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )} )\\ {I_2}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )+ {\pi / 2}} )\\ {I_3}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )+ \pi } )\\ {I_4}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )+ {{3\pi } / 2}} )\end{array} \right.$$
$$\varphi ({x,y} )= \arctan \left( {\frac{{{I_4}({x,y} )- {I_2}({x,y} )}}{{{I_1}({x,y} )- {I_3}({x,y} )}}} \right)$$
where $A({x,y} )$ is the background intensity, $B({x,y} )$ is the modulation amplitude, and $\varphi ({x,y} )$ is the truncated (wrapped) phase, which is calculated by the inverse tangent function. Owing to the limitation of the inverse tangent function, the phase is wrapped within $({ - \pi ,\pi } )$, and the continuous phase must be recovered with the help of other means. In 3-D measurement, considering both the speed and the accuracy of the measurement, we use Gray code as an auxiliary pattern. The fringe order information is obtained through Gray code decoding and combined with the phase-shifting images to recover the continuous phase. The Gray code decoding process can be expressed as:
$$k({x,y} )= f\left[ {\sum\limits_{i = 1}^N {{G_i}({x,y} )\ast {2^{N - i}}} } \right]$$

In the formula, $N$ Gray code patterns can label ${2^N}$ fringe periods, ${G_i}({x,y} )$ is the $i$-th binarized Gray code pattern from which the decoded decimal codeword is obtained, and $f({\cdot} )$ denotes the mapping from the codeword to the fringe order (phase degree) $k({x,y} )$. In this paper, we use a 7-step Gray code. From the truncated phase $\varphi ({x,y} )$ and the phase degree $k({x,y} )$, we obtain the absolute phase $\phi ({x,y} )$. These data describe how the luminance of each pixel varies in the spatial and frequency domains and therefore carry information about the pixel's texture, edges, and so on.
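As a concrete illustration of the two steps above, the following Python/NumPy sketch computes the wrapped phase from the four phase-shifting images and a fringe order map from the binarized Gray code images. It is not the authors' code: the decoding function f(·) is assumed here to be the standard Gray-to-binary conversion, and the absolute phase is assumed to follow the usual relation of wrapped phase plus 2π times the fringe order.

import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    # Four-step phase shifting, Eq. (2); result lies in (-pi, pi]
    return np.arctan2(I4 - I2, I1 - I3)

def fringe_order(G):
    # G: list of N binarized Gray code images (0/1), most significant bit first.
    # Eq. (3): build the Gray codeword, then apply f(.) -- assumed here to be
    # the standard Gray-to-binary conversion (prefix XOR).
    N = len(G)
    code = np.zeros_like(G[0], dtype=np.int64)
    for i, Gi in enumerate(G):
        code += Gi.astype(np.int64) << (N - 1 - i)
    k, shift = code.copy(), 1
    while shift < N:
        k ^= k >> shift
        shift <<= 1
    return k

def absolute_phase(phi, k):
    # Assumed unwrapping relation: absolute phase = wrapped phase + 2*pi*fringe order
    return phi + 2.0 * np.pi * k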

2.2 Phase noise analysis

In actual measurements, phase unwrapping is susceptible to external factors. Firstly, when the camera captures images, noise is generated by the sensors, which degrades image quality; sensor noise includes dark current noise, readout noise, gain noise, thermal noise, fixed-pattern noise, etc. Secondly, environmental factors such as insufficient light and uneven illumination during image acquisition can interfere with image quality. Thirdly, the image transmission process may be corrupted by signal attenuation, electromagnetic interference, and so on. Fourthly, some nonlinear errors can be introduced by the optical system. Fifthly, due to the limitations of the inverse tangent function, phase unwrapping errors are prone to occur in pixel areas adjacent to the boundary. Beyond these causes, there are also imperfections in the algorithm, limited computing power, and so on.

Based on the magnitude of the error at each pixel after phase unwrapping, these errors can be divided into jump errors and random errors: errors with large values are called jump errors, and errors with small values are called random errors. In this paper, we use a 7-step Gray code assisted 4-step phase-shifting coding strategy. Figure 1(a) shows the 7-step Gray code images, Fig. 1(b) the 4-step phase-shifting images, Fig. 1(c) the absolute phase map, and Fig. 1(d) the phase data of row 370. A careful analysis reveals that the red frame in Fig. 1(c) contains jump errors, as shown in Fig. 1(e) and (f), while the green frame contains random errors, as shown in Fig. 1(g). In this paper, we mainly analyze these two types of errors.


Fig. 1. Phase noise distribution. (a) Gray code images; (b) phase-shifting images; (c) phase map; (d) phase on line 370; (e) (f) phase of the red frame; (g) phase of the middle green frame.


The first is the random error. Ideally, the phase increases linearly. In practice, because of the optical imaging system, the phase increases in a wavy manner and neighboring phase values change inconsistently. When the point cloud is computed from such a phase, these fluctuations increase the roughness of the point cloud.

The other problem is the jump error, mainly caused by inconsistency between the periods of the Gray code and the phase-shifting code. The correct decoding result is shown in Fig. 2(a): the blue dashed line indicates the Gray code level, the green dashed line indicates the wrapped phase, and each period of the truncated phase corresponds to one Gray code level. The absolute phase, shown as the black solid line, is finally obtained and increases linearly. In practice, however, the pixels at the edge of a period are unstable, and even a minimal disturbance can change the decoded value of a pixel, which may cause an error of one full period. When the misaligned pixel's Gray code period is shifted forward, concave noise is generated, as shown in Fig. 2(b); convex noise is generated when the Gray code period is shifted backward, as shown in Fig. 2(c). These two types of noise make the point cloud visibly noisy or hollow. To reduce the effect of phase noise on the reconstruction accuracy of the point cloud, we propose a phase correction strategy.
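The effect can be reproduced numerically. The short sketch below is an illustration under the assumptions that the absolute phase equals the wrapped phase plus 2π times the fringe order and that the fringe period is a hypothetical 16 pixels; shifting the decoded period of a single boundary pixel down or up produces exactly the 2π concave or convex spikes of Fig. 2(b) and (c).

import numpy as np

T = 16                                    # hypothetical fringe period in pixels
x = np.arange(64)
phi_true = 2 * np.pi * x / T              # ideal, linearly increasing phase
phi_wrap = np.angle(np.exp(1j * phi_true))                      # wrapped into (-pi, pi]
k = np.round((phi_true - phi_wrap) / (2 * np.pi)).astype(int)   # ideal fringe order

k_fwd = k.copy(); k_fwd[15] -= 1          # boundary pixel decoded one order too low
k_bwd = k.copy(); k_bwd[16] += 1          # boundary pixel decoded one order too high

ideal   = phi_wrap + 2 * np.pi * k        # Fig. 2(a): linear ramp
concave = phi_wrap + 2 * np.pi * k_fwd    # Fig. 2(b): a 2*pi dip at the boundary
convex  = phi_wrap + 2 * np.pi * k_bwd    # Fig. 2(c): a 2*pi spike at the boundary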


Fig. 2. Schematic diagram of phase. (a) Ideal phase period; (b) concave noise; (c) convex noise.


3. Generalized phase unwrapping method

Based on the previous analysis, the random error exists globally, while most of the jump errors occur at boundaries where the object's depth changes. According to the characteristics of these errors, we propose an adaptive phase segmentation and correction (APSC) method. Phase segmentation is performed first. We obtain the complete image of the target object from the Gray code images and apply threshold filtering and morphological processing to the resulting grayscale image. An improved Sobel operator is proposed to obtain the initial edge information. Phase unwrapping is then performed, and the corresponding gradients are calculated from the stripe images; a second edge detection is carried out based on the distribution of the phase gradient. The final segmentation result is obtained by fusing the edges detected from the grayscale map and the phase map. This yields a structural mask version of the phase that divides the complete phase map into stable and unstable regions. Then, to characterize the error distribution of the phase, adaptive window filtering is applied to the stable region, and the processed data are used as the reference for subsequent calculations. After that, a boundary phase correction method is proposed: the city-block distance and the pixel priority determine the correction order in the unstable region, and error compensation is performed for each target pixel according to the error distribution. The pixel priority and the phase mask version are then updated, and the corrected phase joins the stable area as referable data. Finally, the corresponding 3-D result is calculated through the phase-height relationship.

3.1 Phase segmentation

3.1.1 Initial segmentation

Phase segmentation is divided into two steps: grayscale image segmentation first, followed by phase error segmentation. By splicing the Gray code images, we obtain a complete, stripe-free grayscale image of the target object, as shown in Fig. 3. To reduce the impact of overexposure, the sixth- and seventh-level Gray codes are used for image stitching, and the two stitching results are fused at the fringe junctions to obtain a complete grayscale image. To enhance the image while preserving rich features, a morphological opening operation (erosion followed by dilation) is applied.


Fig. 3. Schematic of a synthetic grayscale image.


The Sobel operator suppresses Gaussian noise well and is effective for images with gradual grayscale changes and considerable noise, so we choose it for edge detection of the enhanced image. Our primary purpose is to calculate the average gradient estimate of the center pixel. Specifically, the gradient vector (the ratio of the forward difference of the pixel gray levels to the city-block distance) is summed over four directions (vertical, horizontal, 45° diagonal, and 135° diagonal) in the Cartesian grid. To extend the boundary range, the traditional 3 × 3 directional convolution template is enlarged to a 5 × 5 scale.

The Sobel operator uses the city-block distance as the pixel distance, so the distance between diagonally neighboring pixels is 2, as shown in Fig. 4.


Fig. 4. 5 × 5 Cartesian grid and city-block distances. (a) Neighborhood pixel positions; (b) city distance.


${Z_{13}}$ denotes the center pixel, ${Z_{7,8,9,12,14,17,18,19}}$ denote the neighboring pixels, and ${Z_{1,3,5,11,15,21,23,25}}$ denote the outer pixels lying in the same directions as the neighboring pixels. The average gradient estimate of the center pixel is calculated mainly with reference to the neighboring pixels and the outer pixels. The specific calculation process is as follows.

(1) Calculate the average gradient of the neighboring pixels. The direction vectors are $({{Z_7},{Z_{19}}} )$, $({{Z_8},{Z_{18}}} )$, $({{Z_9},{Z_{17}}} )$ and $({{Z_{14}},{Z_{12}}} )$. The unit vectors of the corresponding difference directions are $({ - 1,1} )$, $({0,1} )$, $({1,1} )$ and $({1,0} )$, and the corresponding inverse-distance weights are ${1 / 4}$, ${1 / 2}$, ${1 / 4}$ and ${1 / 2}$. The average gradient computed from the four directions is:

$$\begin{array}{ll} {{G_1} = }&{({{Z_7} - {Z_{19}}} )/4 \ast [{ - 1,1} ]+ ({{Z_8} - {Z_{18}}} )/2 \ast [{0,1} ]+ }\\ {}&{({{Z_9} - {Z_{17}}} )/4 \ast [{1,1} ]+ ({{Z_{14}} - {Z_{12}}} )/2 \ast [{1,0} ]} \end{array}$$

(2) Calculate the average gradient of the outer pixels. The direction vectors are $({{Z_1},{Z_{25}}} )$, $({{Z_3},{Z_{23}}} )$, $({{Z_5},{Z_{21}}} )$ and $({{Z_{15}},{Z_{11}}} )$. The unit vectors of the corresponding difference directions are the same as in the previous step, and the corresponding inverse-distance weights are ${1 / 8}$, ${1 / 4}$, ${1 / 8}$ and ${1 / 4}$. The average gradient computed from the four directions is:

$$\begin{array}{ll} {{G_2} = }&{({{Z_1} - {Z_{25}}} )/8 \ast [{ - 1,1} ]+ ({{Z_3} - {Z_{23}}} )/4 \ast [{0,1} ]+ }\\ {}&{({{Z_5} - {Z_{21}}} )/8 \ast [{1,1} ]+ ({{Z_{15}} - {Z_{11}}} )/4 \ast [{1,0} ]} \end{array}$$

(3) Calculate the average gradient estimate of the center pixel. Summing the components obtained in the two steps above gives the average gradient estimate of the center pixel. Clearing the denominators and decomposing the result into the X and Y directions yields two convolution templates of scale 5 × 5: directionX denotes the convolution template in the horizontal X direction, and directionY denotes the convolution template in the vertical Y direction:

$$directionX = \left[ {\begin{array}{lllll} { - 3}&{ - 2}&0&2&3\\ { - 4}&{ - 6}&0&6&4\\ { - 6}&{ - 12}&0&{12}&6\\ { - 4}&{ - 6}&0&6&4\\ { - 3}&{ - 2}&0&2&3 \end{array}} \right]$$
$$directionY = \left[ {\begin{array}{ccccc} 3&4&6&4&3\\ 2&6&{12}&6&2\\ 0&0&0&0&0\\ { - 2}&{ - 6}&{ - 12}&{ - 6}&{ - 2}\\ { - 3}&{ - 4}&{ - 6}&{ - 4}&{ - 3} \end{array}} \right]$$

(4) The grayscale image is convolved with the operator templates in both directions to obtain the gradient values ${g_x}({x,y} )$ and ${g_y}({x,y} )$. To avoid data overflow, the average gradient estimate of the center pixel is attenuated by the factor $\sigma$, which is set to 10. The resulting gradient components determine the edge direction of the center pixel. This calculation is performed for all pixels to determine the gradient value and edge direction of every pixel in the image.

$$g({x,y} )= {{[{|{{g_x}({x,y} )} |+ |{{g_y}({x,y} )} |} ]} / \sigma }$$

(5) Following the method proposed by Zheng et al. [32], within a 3 × 3 window the mean value taken after removing the maximum and minimum is used as the final threshold for the center pixel.

$$T = {{\left[ {\sum\limits_{i = 1}^9 {{g_i}({x,y} )- } \min ({{g_i}({x,y} )} )- \max ({{g_i}({x,y} )} )} \right]} / 7}$$
where $\sum\limits_{i = 1}^9 {{g_i}({x,y} )}$ represents the sum of the values ${g_i}({x,y} )$ of the pixels in the 3 × 3 neighborhood of the pixel. To obtain the threshold, the maximum and minimum values are subtracted and the average of the remaining values within the neighborhood is taken. Pixels whose value is greater than their threshold T are set to 255 and marked as the unstable region; pixels whose value is smaller than the threshold are set to 0 and marked as the stable region. By iterating over all pixels, the global stable and unstable regions are determined.
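The five steps above can be condensed into a short sketch. The fragment below is a minimal Python/NumPy illustration of the improved 5 × 5 Sobel gradient of Eq. (8) and the local threshold of Eq. (9), using SciPy neighborhood filters for the 3 × 3 statistics; it is not the authors' code, and the boundary handling is our own choice.

import numpy as np
from scipy.ndimage import convolve, minimum_filter, maximum_filter, uniform_filter

# 5x5 templates from Eqs. (6)-(7)
DIRECTION_X = np.array([[-3, -2, 0, 2, 3],
                        [-4, -6, 0, 6, 4],
                        [-6, -12, 0, 12, 6],
                        [-4, -6, 0, 6, 4],
                        [-3, -2, 0, 2, 3]], dtype=float)
DIRECTION_Y = np.array([[ 3,  4,  6,  4,  3],
                        [ 2,  6, 12,  6,  2],
                        [ 0,  0,  0,  0,  0],
                        [-2, -6, -12, -6, -2],
                        [-3, -4, -6, -4, -3]], dtype=float)

def initial_segmentation(gray, sigma=10.0):
    # Improved 5x5 Sobel gradient (Eq. 8) plus local adaptive threshold (Eq. 9).
    # Returns a binary mask: 1 (255 in the paper) = unstable, 0 = stable.
    gray = gray.astype(float)
    gx = convolve(gray, DIRECTION_X, mode='nearest')
    gy = convolve(gray, DIRECTION_Y, mode='nearest')
    g = (np.abs(gx) + np.abs(gy)) / sigma                     # Eq. (8)
    # Eq. (9): 3x3 mean after removing the neighborhood minimum and maximum
    T = (uniform_filter(g, size=3) * 9
         - minimum_filter(g, size=3) - maximum_filter(g, size=3)) / 7.0
    return (g > T).astype(np.uint8)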

3.1.2 Second segmentation

The phase can be seen as 2-D data carrying depth information. To make the segmentation result more accurate, we perform a second, phase-based segmentation. The phase map and the gradients between neighboring phase values are obtained from the stripe images, and these data serve as the main basis for the second segmentation. For each pixel, the error is analyzed along six directions at the template scale, and the weighting coefficients are accumulated to obtain the gradient weighting matrix of that pixel. As shown in Fig. 5, the pentagram marks the target pixel: Fig. 5(a) shows the six directions, Fig. 5(b) the horizontal scale template, Fig. 5(c) the 45° scale template, Fig. 5(d) the 135° scale template, Fig. 5(e) the vertical scale template, Fig. 5(f) the 225° scale template, and Fig. 5(g) the 315° scale template. Combining these correlation coefficients gives the 5 × 5 weighting coefficients shown in Fig. 5(h); multiplying them by the corresponding phase gradients yields the error evaluation index for this pixel.


Fig. 5. Correlation coefficient template for second-phase segmentation. (a) Neighborhood error weighted direction; (b) horizontal scale template; (c) 45° direction scale template; (d) 135° direction scale template; (e) vertical scale template; (f) 225° direction scale template; (g) 315° direction scale template; (h) gradient weighting factor.


The segmentation calculations on the grayscale image and on the phase are two independent processes that can be performed sequentially or in parallel. A structural mask version of the phase is obtained by fusing the two segmentation results; it divides the phase map into stable and unstable regions and serves as the basis for the structural judgments of the subsequent phase correction. The results are shown in Fig. 6: Fig. 6(a) shows grayscale segmentation only, Fig. 6(b) phase segmentation only, and Fig. 6(c) the segmentation result obtained by fusing the grayscale and phase maps. Detecting some details twice deliberately coarsens the segmentation contour of the phase and prevents edge information from being mixed into the stable-region data.
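A minimal sketch of this fusion step is given below. The paper does not spell out the fusion operator, so the example assumes a pixel-wise OR of the two binary masks followed by a slight dilation to coarsen the contour; both choices are assumptions, not the authors' implementation.

import numpy as np
from scipy.ndimage import binary_dilation

def fuse_masks(gray_mask, phase_mask, coarsen_iter=1):
    # Assumed fusion: a pixel is unstable if either segmentation flags it,
    # and the contour is dilated slightly so that edge pixels cannot leak
    # into the stable-region data (Fig. 6(c)).
    fused = (gray_mask > 0) | (phase_mask > 0)
    fused = binary_dilation(fused, iterations=coarsen_iter)
    return fused.astype(np.uint8)      # 1 = unstable, 0 = stable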


Fig. 6. Phase segmentation results. (a) Grayscale segmentation; (b) phase segmentation; (c) segmentation result of fusing grayscale and phase map.


3.2 Phase correction

After phase segmentation, we have the structural mask version of the phase. The following steps correct the phase in the stable and unstable regions respectively. The first step concerns the stable region: we screen the matching data and analyze the distribution of phase residuals in the region, and the results are then used as reference data for phase correction in the unstable regions. Phase correction of the unstable boundary area mainly refers to the stable area, and finding suitable reference points is one of the key steps. The specific process is as follows.

General filtering processes a target point with reference to its surrounding neighborhood pixels. To preserve the linearly increasing trend of the phase with as little loss of original data as possible, we perform mean filtering along the row direction of the image; mean filtering is effective in suppressing Gaussian noise. The filter data and window are selected with reference to the texture information near the pixel. Since the phase error of neighboring pixels has already been taken into account in the phase segmentation, we only need to determine the filter window size from the number of consecutive valid pixels in the horizontal direction.

$$size({x,y} )= \left\{ {\begin{array}{ccc} {r = 1}&{W = 1 \times 3}&{,3 \le n < 5}\\ {r = 2}&{W = 1 \times 5}&{,5 \le n < 7}\\ {r = 3}&{W = 1 \times 7}&{,7 \le n < 9}\\ {r = 4}&{W = 1 \times 9}&{,9 \le n} \end{array}} \right.$$
$$phase({x,y} )= \frac{1}{n}\sum\limits_{i \in W} {{p_i}}$$
where $W$ denotes a one-dimensional window of radius r centered at the pixel $({x,y} )$, the input data are the phases of the stable zone, n is the number of consecutive valid pixels within the template window W, ${p_i}$ denotes the phase of a valid pixel, and the corrected $phase({x,y} )$ of the pixel $({x,y} )$ is obtained by averaging. This process applies only to runs of at least 3 consecutive valid pixels; runs of fewer than 3 valid pixels are treated as isolated points, their mask values are updated to mark them as unstable, and they are subsequently corrected together with the unstable region. After the stable region has been filtered along the horizontal row direction, the phase gradients of adjacent pixels of the filtered data are calculated, and the most probable gradient is used as the amplitude reference for the subsequent boundary correction.
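The following Python sketch illustrates one reading of Eqs. (10)-(11): each row of the stable region is scanned for runs of consecutive stable pixels, the window radius is chosen from the run length, and runs shorter than three pixels are handed over to the unstable-region correction. It is an illustration of the description above, not the authors' implementation.

import numpy as np

def filter_stable_rows(phase, mask):
    # Adaptive row-direction mean filtering of the stable region, Eqs. (10)-(11).
    # mask: 0 = stable, 1 = unstable, following the convention of Eq. (12).
    phase = phase.astype(float).copy()
    mask = mask.copy()
    H, W = phase.shape
    for y in range(H):
        x = 0
        while x < W:
            if mask[y, x] != 0:
                x += 1
                continue
            # length n of the run of consecutive stable pixels starting at x
            x_end = x
            while x_end < W and mask[y, x_end] == 0:
                x_end += 1
            n = x_end - x
            if n < 3:
                mask[y, x:x_end] = 1          # isolated points: defer to boundary correction
            else:
                r = min((n - 1) // 2, 4)      # Eq. (10): window 1x(2r+1), capped at 1x9
                seg = phase[y, x:x_end].copy()
                for i in range(n):
                    lo, hi = max(0, i - r), min(n, i + r + 1)
                    phase[y, x + i] = seg[lo:hi].mean()   # Eq. (11)
            x = x_end
    return phase, mask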

Phase ambiguity, hardware, algorithms, the environment, and other factors easily cause period misalignment in the phase, which makes the obtained point cloud noisy or hollow. To reduce this effect, after the phase filtering of the stable region is completed, it is essential to compensate the error of the unstable phase while losing as little phase data as possible, thereby improving the reconstruction accuracy. The pixels to be repaired are highly correlated with the distribution of known pixels in the neighborhood and with the texture structure information. The distribution of available pixels in the area can be described in two dimensions: the number of known pixels and the distance between the known pixels and the point to be repaired. From an information-theoretic point of view, the information of the pixel to be corrected is provided by the known pixels in its neighborhood, and the closer a neighboring point is to the point to be repaired, the more information it can provide.

The main phase correction process for the unstable region is to match suitable pixels. Firstly, the pixel information of the unstable region is taken as input, and the pixel with the highest priority in the region is selected. Then, a window of scale 5 × 5 is placed horizontally at that point, and the best matching pixels are searched for as the window moves. The phase error distribution of the best matching pixels is used as the reference. Finally, the phase value and the corresponding structural mask version are updated.

(1) Calculate the priority of the phase to be corrected. As shown in Fig. 7, Ω denotes the stable region and $\varPsi$ denotes the unstable region. $\varsigma$ denotes the part of the unstable region closest to the stable region; pixels in this part have higher priority and are repaired first during error correction. The priority $P({x,y} )$ of a pixel is determined by the mask value $C({x,y} )$ generated by boundary recognition, the phase value, and the horizontal distance $d({x,y} )$ between the point and the nearest pixel of the stable zone.

$$C({x,y} )= \left\{ {\begin{array}{cc} 0&{({x,y} )\in \Omega }\\ 1&{({x,y} )\in \psi } \end{array}} \right.$$
$$I = \Omega + \psi$$
$$P({x,y} )= C({x,y} )\ast {{phase(x,y)} / d}({x,y} )$$
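A sketch of Eq. (14) in Python is shown below. As stated above, d(x, y) is taken as the horizontal distance from the unstable pixel to the nearest stable pixel in the same row; the loop-based form is for clarity only and is not the authors' code.

import numpy as np

def correction_priority(phase, mask):
    # Priority of unstable pixels, Eqs. (12)-(14): P = C * phase / d, where d is
    # the horizontal distance to the nearest stable pixel in the same row.
    H, W = phase.shape
    P = np.zeros_like(phase, dtype=float)
    for y in range(H):
        stable_x = np.flatnonzero(mask[y] == 0)
        if stable_x.size == 0:
            continue
        for x in np.flatnonzero(mask[y] == 1):
            d = np.abs(stable_x - x).min()      # horizontal distance to the stable zone
            P[y, x] = phase[y, x] / d           # C = 1 for unstable pixels
    return P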

(2) Error correction refers to the distribution of phase errors and to the positional relationship between the matching points and the point to be repaired. After the correction priorities of the unstable regions have been obtained, suitable stable-region data must be found for the phase correction. To obtain effective results, enough stable-zone data must be available as a reference: phase correction is performed only when stable-area pixels occupy more than 60% of the 9 × 9 window and there are more than 5 stable pixels in the horizontal direction.

(3) Calculate the error distribution of adjacent pixels of the phase. The phase degree is obtained from the Gray code images, as shown in Fig. 8(a), and the truncated phase is obtained from the phase-shifting images, as shown in Fig. 8(b); the final absolute phase is obtained by combining these two types of data. Errors in the phase degree and the truncated phase propagate into the absolute phase, so the error distributions of both must be referenced when correcting pixels in the unstable regions.

$$E_k^c(x,y) = mean\left[ {\sum\limits_{i = 0}^{i = 8} {\sum\limits_{j ={-} 4}^{j = 4} {{E_k}({x + i,y + j} )\ast ({1 - C({x + i,y + j} )} )} } } \right]$$
$$E_\varphi ^c(x,y) = mean\left[ {\sum\limits_{i = 0}^{i = 8} {\sum\limits_{j ={-} 4}^{j = 4} {{E_\varphi }({x + i,y + j} )\ast ({1 - C({x + i,y + j} )} )} } } \right]$$
where ${E_k}$ indicates the error of the phase degree $k(x,y)$, as shown in Fig. 8(c), and ${E_\varphi }$ indicates the error of the truncated phase $\varphi (x,y)$, as shown in Fig. 8(d). The average errors of the phase degree and the truncated phase are calculated from the stable pixels selected within the window. $E_k^c(x,y)$ indicates the phase degree reference error of the point to be corrected, and $E_\varphi ^c(x,y)$ indicates the truncated phase reference error of the point to be corrected.
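One way to read Eqs. (15)-(16) is as masked means over the stable pixels of the 9 × 9 window spanning rows x to x+8 and columns y-4 to y+4; the sketch below follows that reading, with precomputed error maps E_k and E_phi as inputs. The normalization and boundary handling are our assumptions.

import numpy as np

def reference_errors(E_k, E_phi, C, x, y):
    # Eqs. (15)-(16): masked means of the error maps over the stable pixels
    # (C = 0) of the window rows x..x+8, columns y-4..y+4. Assumes the window
    # lies inside the image; boundary handling is omitted.
    win_k   = E_k[x:x + 9, y - 4:y + 5]
    win_phi = E_phi[x:x + 9, y - 4:y + 5]
    stable  = (C[x:x + 9, y - 4:y + 5] == 0)
    if not stable.any():
        return None, None              # no reference data in this window
    return win_k[stable].mean(), win_phi[stable].mean()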


Fig. 7. Priority calculation schematic. (a) Position distribution before calculation; (b) position change after calculation.



Fig. 8. Error distribution diagram. (a) Phase level, (b) truncated phase, (c) error distribution of phase level, (d) error distribution of truncated phase.


(4) Pixels in the unstable areas are divided into two categories for error correction. Firstly, pixels with a priority greater than 60% are corrected. A new phase degree and a new truncated phase are obtained from $E_k^c(x,y)$ and $E_\varphi ^c(x,y)$, and the higher-priority pixels are corrected based on them. After the error correction is completed, we update the phase of these pixels and change their mask values from 1 to 0.

$${k_{new}}(x,y) = k(x,y) + E_k^c(x,y)$$
$${\varphi _{new}}(x,y) = \varphi (x,y) + E_\varphi ^c(x,y)$$
$$\phi _1^c(x,y) = {k_{new}}(x,y) + {\varphi _{new}}(x,y)$$
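In code, correcting a high-priority pixel is a direct application of Eqs. (17)-(19). The sketch below keeps the paper's convention of summing the corrected phase degree and truncated phase to form the corrected absolute phase; it is an illustration, not the authors' implementation.

def correct_high_priority(k, phi, E_k_c, E_phi_c, x, y):
    # Eqs. (17)-(19): compensate the phase degree and the truncated phase of a
    # high-priority unstable pixel with the reference errors of Eqs. (15)-(16).
    # After this correction, the caller updates the phase map and flips the
    # mask value of (x, y) from 1 to 0 so the pixel can serve as reference data.
    k_new = k[x, y] + E_k_c            # Eq. (17)
    phi_new = phi[x, y] + E_phi_c      # Eq. (18)
    return k_new + phi_new             # Eq. (19): corrected absolute phase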

(5) The remaining, lower-priority pixels are corrected based on the phase of the globally stable region. Firstly, the mean phase error of the stable zone is calculated; then the stable-zone pixel nearest to the point to be corrected in the horizontal direction is found, and the correction combines this distance with the mean phase error.

$$E_{phase}^m = mean\left[ {\sum\limits_{i = 0}^{i = 8} {\sum\limits_{j ={-} 4}^{j = 4} {{E_{phase}}({x + i,y + j} )\ast ({1 - C({x + i,y + j} )} )} } } \right]$$
$$\phi _2^c(x,y) = \phi (x,y + d) + E_{phase}^m \ast d$$
where $E_{phase}^m$ indicates the mean phase error of the stable zone and d indicates the distance between the reference point and the point to be corrected. This process is repeated until all unstable points have been corrected.
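For the lower-priority pixels, Eq. (21) can be sketched as follows; using the signed horizontal offset to the nearest stable pixel for indexing is one possible interpretation of "y + d" and is labeled as such in the code.

import numpy as np

def correct_low_priority(phase, mask, E_phase_m, x, y):
    # Eq. (21): take the phase of the nearest stable pixel in the same row and
    # offset it by the mean stable-zone phase error times the distance d.
    # The signed offset used for indexing is our interpretation of "y + d".
    stable_cols = np.flatnonzero(mask[x] == 0)
    nearest = stable_cols[np.abs(stable_cols - y).argmin()]
    d = abs(nearest - y)
    return phase[x, nearest] + E_phase_m * d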

3.3 Flow charts of the proposed method

Based on the technical analysis and methodology presented above, the adaptive phase segmentation and correction (APSC) method can be summarized as follows:

Step 1, phase segmentation. The stripe maps are spliced to obtain the complete grayscale image, and the improved Sobel operator is used for the initial segmentation. The phase is calculated by unwrapping the stripe maps, and the phase gradient is used as the input for the phase segmentation, convolved at the same scale as the grayscale segmentation. The two results are fused to obtain the structural mask version of the phase, which is used as the judgment condition for the stable and unstable regions.

Step 2, phase correction of the stable region. Adaptive-window filtering is applied to the stable-region phase, and the gradient of the filtered phase is then calculated.

Step 3, phase correction of the unstable region. The pixel priorities in the region are calculated with reference to the stable phase and the gradient distribution. The phase is then compensated in priority order, and the corresponding structural mask values are updated.

To better illustrate the method proposed in this paper, the workflow is shown in Fig. 9. Through phase segmentation and error compensation, we can get the corrected phase. Then, according to the coordinate system transformation and the phase-depth relationship, we can obtain the 3-D result of the object. The effectiveness and feasibility of the method can be evaluated by analyzing the experiments.
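Putting the three steps together, the workflow of Fig. 9 can be sketched as a short driver routine. The fragment below is purely illustrative: it reuses the helper functions sketched in Sections 2 and 3, and synthesize_grayscale, phase_segmentation, and correct_unstable_region are hypothetical placeholders for the grayscale stitching of Fig. 3, the second segmentation of Section 3.1.2, and the priority-ordered compensation of Section 3.2.

def apsc_pipeline(gray_code_images, phase_shift_images):
    # End-to-end sketch of the APSC workflow (Fig. 9); calibration and the
    # phase-to-height mapping for the final 3-D result are omitted.
    phi = wrapped_phase(*phase_shift_images)               # Section 2.1, Eq. (2)
    k = fringe_order(gray_code_images)                     # Section 2.1, Eq. (3)
    phase = absolute_phase(phi, k)

    gray = synthesize_grayscale(gray_code_images)          # hypothetical helper (Fig. 3)
    mask = fuse_masks(initial_segmentation(gray),
                      phase_segmentation(phase))           # hypothetical helper (Sec. 3.1.2)
    phase, mask = filter_stable_rows(phase, mask)          # stable-region filtering, Step 2
    phase = correct_unstable_region(phase, mask, k, phi)   # hypothetical helper, Step 3
    return phase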


Fig. 9. Flow chart of the proposed method.


4. Experiments

4.1 System introduction

To verify the effectiveness and feasibility of the proposed algorithm, we built a 3-D reconstruction system, as shown in Fig. 10. The system consists of an 8-bit grayscale camera with a resolution of 1024 × 1280 and a single-axis MEMS projection reflection module. Single-axis MEMS projection is a new technique with the advantages of focused projection, small size, low cost, and low power consumption; it projects stripe patterns with a resolution of 1024 along the horizontal axis. Since it is a focused projection system, the measurement range of the proposed approach is 350∼800 mm, limited only by the depth of field of the camera. The finest pattern used in this paper is the four-step phase-shifted stripe pattern with 64 cycles, each cycle being 16 pixels wide. So that the geometric constraints do not restrict the measurement range, their depth range is set to 300∼900 mm, which is wider than the designed measurement range.


Fig. 10. The composition of the proposed system.


4.2 Phase analysis

To further validate the proposed method, we conducted measurements on a plaster statue and on irregular objects. The main analysis compares the phase error of the proposed method with that of the traditional method, where the phase error is defined as the absolute value of the phase difference between adjacent pixels. Firstly, a phase error analysis is performed on the plaster statue, as shown in Fig. 11. Figure 11(a) shows a partial stripe image, with the same phase row as in Fig. 1 selected for analysis; the green box marks the phase jump area, and the orange box marks an area with rich detail. Figure 11(b) shows the corrected phase map. Figure 12(a) shows the phase error in the 370th row, and Fig. 12(b)-(c) show the phase errors of the green and orange boxes. The results show that the traditional method, plotted as the blue line, exhibits significant phase jitter at boundaries; in the detail-rich areas there is less boundary noise, but the performance is still not stable enough. The proposed method, plotted as the red line, effectively suppresses noise at distinct boundaries, and in detail-rich areas its phase error distribution is more stable. Compared with the traditional method, it reduces the phase error and the impact of noise on phase accuracy, improving the robustness of the measurement.
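For reference, the phase error metric used here, the absolute phase difference between horizontally adjacent pixels of a row, can be computed with a one-line NumPy sketch:

import numpy as np

def row_phase_error(phase_row):
    # Section 4.2 metric: absolute phase difference between adjacent pixels of one row
    return np.abs(np.diff(phase_row))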


Fig. 11. The stripe pattern and phase distribution of plaster statue. (a) Partial fringe pattern, (b) the phase distribution.



Fig. 12. The phase error. (a) The phase error in the 370th row, (b) the green box’s phase error, (c) the orange box’s phase error.


To verify the proposed method in complex scenes with multiple objects, we tested two irregular objects. The test objects were placed in an environment surrounded by a black screen, and the phase error of the traditional method was compared with that of the proposed method. Figure 13(a) shows a partial fringe pattern, and Fig. 13(b) shows the image synthesized from the fringes. Figure 13(c)-(d) show the error distributions of the truncated phase and the phase degree. Figure 14(a) shows the phase error in the 600th row, while Fig. 14(b)-(c) show the phase errors of the green and orange boxes. The green box lies in the gap between the two objects, where the phase error difference is significant; this is because the objects are relatively close together and the camera simultaneously captures the background and part of the objects' side faces. These two experiments show that the proposed method can effectively reduce the phase error and improve phase stability and robustness.


Fig. 13. The irregular objects’ stripe pattern and phase distribution. (a) Partial fringe pattern, (b) the image of fringe synthesis, (c) the error distribution of truncated phase, (d) the error distribution of phase degree.



Fig. 14. The phase error. (a) The phase error in the 600th row, (b) the green box’s phase error, (c) the orange box’s phase error.


4.3 Precision analysis

To analyze the 3-D measurement accuracy of the proposed method, two standard components, a flat plate and a dumbbell ball, were measured. The flat plate has a flatness of 0.05 mm or better; the diameter of the standard dumbbell balls is 38.1 ± 0.01 mm, and the distance between the ball centers is 201.09 ± 0.01 mm. As shown in Fig. 15, Fig. 15(a) and (d) show the individual fringe maps, Fig. 15(b) and (e) show the target images synthesized from the fringe maps, and Fig. 15(c) and (f) show the meshed point cloud results. For the standard dumbbell ball, the result is segmented into two independent, complete spheres; sphere fitting gives the 3-D coordinates of the two centers, and the distance between them is compared with the true value of 201.09 mm to obtain the error. For the standard plane, we mainly compare the root mean square error of the fitted plane. To obtain more reliable results, the plane is divided into nine sub-regions, each fitted separately, and the nine root-mean-square errors are averaged as the measurement result at that position. To verify the stability of the results, we measured at 11 positions within the range of 300∼900 mm. The results are shown in Table 1 and Table 2: the fitting errors of both standard components are significantly reduced after phase correction. The measurement errors at more distant positions are larger, which is reasonable for a structured light 3-D measurement system, because with the camera and projection module fixed, the farther away the target object is, the more easily the measurement is disturbed by noise. We chose the middle position at 600 mm to analyze the local point cloud measurement results. As shown in Fig. 16, Fig. 16(a) and (b) show the fitting results of the ball and the plane at this position, and Fig. 16(c) and (d) show the fitting results of the local point cloud in the central region. Most of the fitting residuals for the sphere surface and the plane lie within 0.09 mm, and the overall error is within 0.04 mm. It is worth noting that the spherical surface shows more spikes than the planar surface, mainly because the light source of the MEMS projection system is a laser, which suffers from unavoidable scattering and is affected by the material and reflectivity of the target object.
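The accuracy figures in Tables 1 and 2 follow from standard least-squares fits. The sketch below shows how such fits can be computed (a generic illustration, not the authors' evaluation code): a linear least-squares sphere fit gives the two ball centers, whose distance is compared with the calibrated 201.09 mm, and a least-squares plane fit gives the RMS of the fitting residuals used for the plate. Each input is assumed to be an N×3 array of point coordinates in millimeters.

import numpy as np

def fit_sphere(points):
    # Linear least-squares sphere fit: solve |p|^2 = 2 c.p + (r^2 - |c|^2)
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def center_distance_error(points_a, points_b, true_dist=201.09):
    # Error of the dumbbell measurement: fitted center distance vs. 201.09 mm
    ca, _ = fit_sphere(points_a)
    cb, _ = fit_sphere(points_b)
    return np.linalg.norm(ca - cb) - true_dist

def plane_rms(points):
    # RMS of the vertical residuals of a least-squares plane z = a*x + b*y + c
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return np.sqrt(np.mean((points[:, 2] - A @ coef) ** 2))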


Fig. 15. Measurement results of standard components. (a) Individual streak diagrams of standard balls; (b) synthesized image of standard balls; (c) reconstruction result of a standard ball; (d) individual streak map of the standard plane; (e) synthesized image of the standard plane; (f) reconstruction results of the standard plane.



Fig. 16. Point cloud fitting results for standardized components. (a) Global roughness of the ball; (b) global roughness of plane surface; (c) local roughness of the ball; (d) local roughness of plane surface.



Table 1. Three-dimensional measurements of standard dumbbell balls at different distances


Table 2. Three-dimensional measurements of standard plane at different distances

4.4 Reconstruction analysis

To compare the reconstruction after phase correction with the 3-D reconstruction computed directly from the unprocessed phase, we measured two plaster portraits with different roughness. Plaster 1 was measured from three different angles: the front of the base was taken as the front of the target object, and the left and right sides were rotated by 45 degrees as the other two test angles. Figure 17(a-d) show the individual fringe images of the plaster statues, Fig. 17(e-h) the target images synthesized from the fringe maps, Fig. 17(i-l) the point cloud reconstructions computed directly without phase correction, and Fig. 17(m-p) the reconstruction results of the proposed method.

From the results, we find that, compared with the traditional method, the reconstruction after phase correction is smoother. Details of the plasters such as hair, eyes, and mouth remain essentially the same as with the traditional method, so no necessary detail information is lost. Most of the results show some point cloud voids, mainly caused by the camera's shooting angle and the uneven surface of the target object; the contours of some areas are more pronounced, such as the shoulders of plaster 1 and the region between the eyes and glasses of plaster 2. A structured light measurement system can only reconstruct the areas captured by the camera in the stripe maps, and some degree of occlusion always occurs at different angles. For smoother surrounding surfaces with small hollows, our method can supplement the point cloud to a certain degree, such as the neck shown in Fig. 17(m) and the nose shown in Fig. 17(o). To retain object detail information, regions with more features and larger hollows are not processed.


Fig. 17. Reconstruction results of the plasters from different angles. (a-d) Partial fringe pattern of the plaster statue; (e-h) composite image of the plaster statue; (i-l) reconstruction results of the traditional method; (m-p) reconstruction results of this method.


In summary, we conducted two experiments to test the feasibility, stability, and effectiveness of the proposed method. In the first experiment, two standard components were measured, and the accuracy of the algorithm was analyzed by comparing the point cloud fitting results with the standard parameters; the experiment was repeated at different distances within the measurement range. In the second experiment, two plaster statues with different roughness were used. Multiple trials were conducted from several angles within the measurement range, and the point cloud was segmented and meshed to reconstruct the plaster statues. The validity of the method was verified by comparing how well the detailed information is reconstructed.

5. Conclusion

In this paper, we propose an adaptive phase segmentation and correction (APSC) method to reduce errors in 3-D measurements. We perform segmentation on both the grayscale image and the phase map and fuse the two results to generate a phase structural mask; detailed parts are detected repeatedly to ensure that no abnormal points remain in the stable phase. To make the phase correction more robust, we propose an adaptive correction method: filtering optimization is applied to the stable region, pixel priority is determined by the phase structural mask and the phase, and the unstable phase is then corrected according to the gradient distribution. To avoid slowing down the calculation, templates of the same scale are used in the filtering and correction processes. Two experiments were conducted to evaluate the performance of our method. Firstly, we measured the standard components and analyzed the accuracy; the results show that our method improves the measurement accuracy compared with traditional methods, with the accuracy improved by 0.25 for the standard dumbbell measurements and by 0.16 for the standard plane measurements. Finally, measurements on plaster figures demonstrate that our method performs well in the 3-D reconstruction of objects with complex surfaces.

Funding

National Natural Science Foundation of China (U21B2035); Special Project for Research and Development in Key Areas of Guangdong Province (2021B0101410001).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. X. Li, Y. Q. Dong, and X. X. Wang, “Fourier transform profilometry using single-pixel detection based on two-dimensional discrete cosine transform,” Opt. Laser Technol. 156, 108570 (2022). [CrossRef]  

2. Y. Z. Liu, Y. J. Fu, Y. H. Zhuan, et al., “High dynamic range realtime 3D measurement based on Fourier transform profilometry,” Opt. Laser Technol. 138, 106833 (2021). [CrossRef]  

3. M.Q. Han and W.J. Chen, “Two-dimensional complex wavelet with directional selectivity used in fringe projection profilometry,” Opt. Lett. 46(15), 3653–3656 (2021). [CrossRef]  

4. C. Jiang, S.H. Jia, J. Dong, et al., “Multi-frequency fringe projection profilometry based on wavelet transform,” Opt. Express 24(11), 1323–1333 (2016). [CrossRef]  

5. S. Burnes, J. Villa, G. Moreno, et al., “Temporal fringe projection profilometry: Modified fringe-frequency range for error reduction,” Opt. Lasers Eng. 149, 106788 (2022). [CrossRef]  

6. M. Zhong, F. Chen, C. Xiao, et al., “3-D surface profilometry based on modulation measurement by applying wavelet transform method,” Opt. Lasers Eng. 88, 243–254 (2017). [CrossRef]  

7. Y.Y. Chen, Y.Y. Chen, W.H. Cheng, et al., “Extraction information of moire fringes based on Gabor wavelet,” Opt. Rev. 29(3), 197–206 (2022). [CrossRef]  

8. Z.J. Wu, W.B. Guo, L.L. Lu, et al., “Generalized phase unwrapping method that avoids jump errors for fringe projection profilometry,” Opt. Express 29(17), 27181–27192 (2021). [CrossRef]  

9. C. Zuo, Q. Chen, G.H. Gu, et al., “High-speed three-dimensional profilometry for multiple objects with complex shapes,” Opt. Express 20(17), 19493–19510 (2012). [CrossRef]  

10. J. H. Wang and Y. X. Yang, “Triple N-step phase shift algorithm for phase error compensation in fringe projection profilometry,” IEEE Trans. Instrum. Meas. 70, 7006509 (2021). [CrossRef]  

11. C. Zuo, L. Huang, M.L. Zhang, et al., “Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

12. J.H. Wang, Y.G. Zhou, and Y.X. Yang, “Three-dimensional measurement method for nonuniform reflective objects,” IEEE Trans. Instrum. Meas. 69(11), 9132–9143 (2020). [CrossRef]  

13. C. Rathjen, “Statistical properties of phase-shift algorithms,” J. Opt. Soc. Am. A 12(9), 1997–2008 (1995). [CrossRef]  

14. M. Servin, J. Estrada, J. Quiroga, et al., “Noise in phase shifting interferometry,” Opt. Express 17(11), 8789–8794 (2009). [CrossRef]  

15. C. Zuo, Q. Chen, G. Gu, et al., “Optimized three-step phase shifting profilometry using the third harmonic injection,” Opt. Appl. 43(2), 393–408 (2013). [CrossRef]  

16. Q. Zhang, X. Su, L. Xiang, et al., “3-D shape measurement based on complementary Gray-code light,” Opt. Laser Eng. 50(4), 574–579 (2012). [CrossRef]  

17. Z. Wu, C. Zuo, W. Guo, et al., “High-speed three-dimensional shape measurement based on cyclic complementary Gray-code light,” Opt. Express 27(2), 1283–1297 (2019). [CrossRef]  

18. Z. Wu, W. Guo, and Q. Zhang, “High-speed three-dimensional shape measurement based on shifting Gray-code light,” Opt. Express 27(16), 22631–22644 (2019). [CrossRef]  

19. P. Cong, Z. Xiong, Y. Zhang, et al., “Accurate Dynamic 3D Sensing With Fourier-Assisted Phase Shifting,” IEEE J. Sel. Top. Signal Process. 9(3), 396–408 (2015). [CrossRef]  

20. B. Zhang, J. Ziegert, F. Farahi, et al., “In situ surface topography of laser powder bed fusion using fringe projection,” Addit. Manuf. 12, 100–107 (2016). [CrossRef]  

21. H. An, Y. Cao, H. Wu, et al., “Spatial-temporal phase unwrapping algorithm for fringe projection profilometry,” Opt. Express 29(13), 20657–20672 (2021). [CrossRef]  

22. D. Yang, D. Qiao, C. Xia, et al., “Adaptive horizontal scaling method for speckle-assisted fringe projection profilometry,” Opt. Express 31(1), 328–343 (2023). [CrossRef]  

23. H.W. Guo, C. Jiang, and S. Xing, “Fringe harmonics elimination in multi-frequency phase-shifting fringe projection profilometry,” Opt. Express 28(3), 2838–2856 (2020). [CrossRef]  

24. F. Chen, X. Su, and L. Xiang, “Analysis and identification of phase error in phase measuring profilometry,” Opt. Express 18(11), 11300 (2010). [CrossRef]  

25. J. Deng, J. Li, H. Feng, et al., “Edge-preserved fringe-order correction strategy for code-based fringe projection profilometry,” Signal Process 182, 107959 (2021). [CrossRef]  

26. Y. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

27. L. M. Song, X. X. Dong, J. T. Xi, et al., “A new phase unwrapping algorithm based on Three Wavelength Phase Shift Profilometry method,” Opt. Laser Technol. 45, 319–329 (2013). [CrossRef]  

28. S. Zhang, “Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm,” Opt. Eng. 48(10), 105601 (2009). [CrossRef]  

29. S. Zhang, “Flexible 3-d shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 334(20), 931–933 (2009). [CrossRef]  

30. L. Song, Y. Chang, and G. Xi, “Application of global phase filtering method in multi frequency measurement,” Opt. Express 22(11), 13641 (2014). [CrossRef]  

31. S. Feng, Q. Chen, F. Zuo, et al., “Automatic identification and removal of outliers for high-speed fringe projection profilometry,” Opt. Eng. 52(1), 013605 (2013). [CrossRef]  

32. D. Zheng, F. Da, and Q. Ke, “Phase-shifting profilometry combined with Gray-code patterns projection: unwrapping error removal by an adaptive median filter,” Opt. Express 25(5), 4700–4713 (2017). [CrossRef]  
