Abstract
An absolute phase retrieval method based on fringe amplitude encoding is proposed. Different from conventional intensity coding methods, which are based on time division multiplexing with multiple additional auxiliary patterns, the proposed fringe order encoding strategy is codeword overlapping interaction based on space division multiplexing. It directly encodes different fringe amplitudes for different periods in the corresponding sinusoidal phase-shifting patterns to generate space division multiplexing composite sinusoidal phase-shifting patterns, and quantifies the fringe amplitudes into four levels as the encoding strategy, so it can retrieve the absolute phase without any additional auxiliary patterns. To improve the anti-interference capability of the proposed method, a codeword extraction method based on image morphological processing is proposed to segment the grayscale codewords. Consequently, both the phase-shifting sinusoidal deformed patterns and the single-frame space division multiplexing four-gray-level codewords for fringe order recognition can be extracted from the captured composite deformed patterns. Then, a half-period single-connected domain correction method is also proposed to correct the codewords. Moreover, in order to suppress the effect of jump errors, phase zero points are constructed to separate the positive and negative ranges of the phase, making the phase unwrapping process segmented. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Fringe projection profilometry (FPP) [1–3] enables contactless reconstruction of 3D shape with the advantages of high accuracy and fast measurement. It has been extensively used in industrial quality inspection, cultural relics protection, dental diagnosis and treatment, and so on [4]. Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP) are two major FPP techniques, both of which reconstruct the 3D shape by projecting sinusoidal fringe patterns [5–7]. The full-period equal-step phase-shifting algorithm and arbitrary equal-step phase-shifting algorithms, such as the Carré algorithm [8] and the Stoilov algorithm [9], are the main algorithms utilized by PSP. Since the phase is typically calculated with the arctangent operation, it is wrapped within [-π, π). Consequently, phase unwrapping is essential to solve the phase ambiguity problem and obtain the continuous absolute phase.
The phase unwrapping methods proposed so far can be broadly divided into spatial phase unwrapping (SPU) [10–13] and temporal phase unwrapping (TPU) [14–16]. SPU relies on the phase continuity of neighboring pixels on the unwrapping path, and the choice of unwrapping path determines the reliability of the 3D reconstruction results; the phase jump on the path cannot be greater than 2π and the fringe order must be sequential [17–20]. Due to the inevitable propagation of phase unwrapping errors, SPU is not suitable for the measurement of complex isolated objects. In contrast, TPU locates the fringe order by projecting additional encoded patterns [21], and each wrapped phase period has a separate integer order, avoiding the propagation of phase unwrapping errors. By projecting N frames of sinusoidal fringes and M frames of additional patterns, TPU achieves high phase quality and measurement robustness by guaranteeing the sinusoidal nature of the fringes. Among established TPU methods, the Gray-code encoding strategy [22–24], the phase-code encoding strategy [25,26], the three-frequency TPU strategy [27] and the two-frequency heterodyne TPU strategy [28] are the most widely used, and each of these techniques requires additional patterns to solve the fringe order. The numbers of additional patterns for the three-frequency, two-frequency heterodyne and phase-code methods are three, two, and one sets of phase-shifted patterns, respectively, and the codeword blurring of these three methods increases with the number of fringe periods in the projected patterns. The Gray-code method encodes two grey levels as intensity-domain codewords, so the quantization error of the codewords is small and the robustness is high, but as the fringe frequency increases, the projection efficiency decreases greatly. The periodic marking of high-density fringes is a challenge for N + M coding patterns [29,30].
Unfortunately, in these coding methods the number of additional patterns must increase significantly to expand the range of fringe orders identified by codewords and to reduce the probability of codeword errors. To date, many absolute phase retrieval methods [31,32] have been investigated to reduce the additional encoding patterns in the expectation of improving measuring efficiency. An et al. [33] obtained six codewords from only one coding pattern by rotating the calculation sequences of the phase-shifting patterns within each fringe cycle, and then achieved a wide range of absolute phase measurement by employing geometric constraints. Noting that the calculation of phase step offsets in the Carré algorithm has point-to-point independence, Zhang et al. [34] employed the phase shift step offsets in the phase period as the basis for determining the stage period, and proposed an absolute phase retrieval method based on the equal-step algorithm that can obtain both the wrapped phase and the stage from the projected four frames of phase-shifting fringes. To significantly reduce the number of projected frames in the light intensity domain, grey-level coding strategies [35–37] use multiple grey levels as codewords to achieve robust large-scale 3D measurement with only a few measurement frames. Porras-Aguilar proposed a new grey-level coding method [36] based on a space-filling curve design to overcome typical defocus errors, which leads to a large improvement in the robustness of grey-level coding methods. The method proposed here differs from existing grey-level coding strategies by encoding the grey levels directly into the fringe amplitude intensities, eliminating the need for additional auxiliary patterns.
In this paper, an absolute phase retrieval method based on fringe amplitude encoding without any additional patterns is proposed for the full-period equal-step phase-shifting algorithm. Different from traditional TPU based on time division multiplexing, the coding method directly modulates the four-level grey code used to extract the fringe order onto the amplitude of the phase-shifting sinusoidal fringe patterns to form composite phase-shifting fringe patterns for space division multiplexing. It directly encodes different fringe amplitudes for different periods and quantifies the fringe amplitudes into four levels as the codeword encoding strategy. The normalized modulation can then be constructed from the captured composite deformed patterns, and the fringe order information can be demodulated directly from the normalized modulation. Since the encoded parameter of the proposed method is the amplitude of the phase-shifting sinusoidal fringe pattern, different from traditional intensity coding methods, the proposed method has better noise immunity for shaded regions and does not need any additional auxiliary pattern. Furthermore, a generally applicable segmented phase unwrapping method is employed to correct the jump error.
2. Principle
2.1 Phase-shifting profilometry
In general, phase-shifting profilometry obtains the height variations of the object surface by comparing the phase of the sinusoidal fringes between the projected and captured directions, typically using a full-period equal-step algorithm to solve for the phase information modulated by the object surface height. For the N-step PSP algorithm, the deformed fringes captured by the camera can be expressed as:

$$I_n(x,y) = r(x,y)\left\{ a + b\cos \left[ \varphi (x,y) + \frac{2\pi n}{N} \right] \right\} \qquad (1)$$
where n represents the phase-shift index, n = 0, 1, 2, …, N-1, $(x,y)$ is the pixel coordinate in the camera space, $r(x,y)$ is the surface reflectance distribution, $ar(x,y)$ represents the intensity of the background light, $br(x,y)$ represents the modulation, and $\varphi (x,y)$ is the phase caused by the height of the object surface. It can be extracted by the following equation:

$$\varphi (x,y) = \arctan \frac{ - \sum_{n = 0}^{N - 1} {I_n}(x,y)\sin ({{2\pi n} / N})}{\sum_{n = 0}^{N - 1} {I_n}(x,y)\cos ({{2\pi n} / N})} \qquad (2)$$

Due to the finite range of the arctangent operation in Eq. (2), the phase is wrapped within [-π, π). Consequently, a phase-unwrapping algorithm is required to obtain the absolute phase $\phi (x,y)$. The relationship between $\varphi (x,y)$ and $\phi (x,y)$ can be expressed as:

$$\phi (x,y) = \varphi (x,y) + 2\pi k(x,y) \qquad (3)$$
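As a concrete illustration, the N-step arctangent calculation can be sketched in a few lines of numpy; the function name `wrapped_phase` and the synthetic values used below are illustrative, not from the paper:

```python
import numpy as np

def wrapped_phase(frames):
    """Wrapped phase of an N-step phase-shifting sequence.

    frames: sequence of N images I_n = r*(a + b*cos(phi + 2*pi*n/N)).
    Returns the phase wrapped to (-pi, pi] via the arctangent operation.
    """
    frames = np.asarray(frames, dtype=float)
    N = frames.shape[0]
    n = np.arange(N).reshape((N,) + (1,) * (frames.ndim - 1))
    num = -np.sum(frames * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(frames * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(num, den)
```

For example, four synthetic frames with background 120, amplitude 80 and a constant phase of 0.7 rad recover that phase to machine precision.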
where $k(x,y)$ represents the distribution of the fringe order. Conventionally, $k(x,y)$ is determined by projecting additional encoding patterns in the various absolute phase unwrapping algorithms. In this paper, an absolute phase retrieval method based on fringe amplitude encoding is proposed to improve the measuring efficiency, which can simultaneously obtain both $\varphi (x,y)$ and $k(x,y)$ from the captured composite deformed patterns without any additional auxiliary patterns.

2.2 Proposed encoding principle
In N-step PSP, higher measuring accuracy and better error immunity are obtained when N is large enough. To locate the fringe order, the amplitudes of the N phase-shifting sinusoidal fringe patterns are encoded into four-level codewords in the intensity domain. For the sake of measuring efficiency, N = 4 is selected in this paper to illustrate the basic principles of the proposed method. The corresponding four projected composite fringe patterns $I_n^p$ can be assigned as:

$$I_n^p({x^p},{y^p}) = a + b({x^p})\cos \left( 2\pi {f^p}{x^p} + \frac{2\pi n}{N} \right) \qquad (4)$$
where ${f^p}$ represents the frequency of the sinusoidal fringe pattern, a is the background light intensity, and $b({x^p})$ is the amplitude distribution along the phase-shifting direction, which is encoded into four levels as shown in Fig. 1(d). Figure 1(a) illustrates the normal sinusoidal fringe amplitude, and Fig. 1(b)-(c) show the traditional sinusoidal fringe projection pattern and its cutaway view, respectively. Figure 1(e)-(f) illustrate the composite projection pattern of the proposed method and its cutaway view. As shown in Fig. 1, the encoded parameter of the proposed method is the amplitude of the phase-shifting sinusoidal fringe pattern, and compared with the traditional projected fringe pattern, the fringe amplitude of the proposed method varies with the period.

The encoding strategy of the codewords is the critical factor in identifying the fringe order. Different from the conventional intensity coding methods, which are based on time division multiplexing with multiple additional auxiliary patterns, the proposed encoding strategy is codeword overlapping interaction based on space division multiplexing. The current codeword relies on the next adjacent codewords to determine its location in the fringe order, provided that any two adjacent amplitude levels in the above four-level amplitude encoding are different. The amplitude levels are encoded one by one according to the code element LUT shown in Fig. 2(a), and every four adjacent code elements constitute a codeword subsequence. It can be seen in the codeword LUT that the first codeword subsequence {4,3,2,1} is appointed as the first fringe order, the second codeword subsequence {3,2,1,3} is appointed as the second fringe order, and the 25th codeword subsequence {4,3,4,3} is appointed as the 25th fringe order. In this way, since no repeated codewords exist in the LUT, all the fringe orders can be appointed as shown in Fig. 2(b).
Based on this encoding strategy, the number of fringe orders that can be appointed is obtained by a simple permutation as follows:

$${n_k} = L{(L - 1)^{{s_q} - 1}} \qquad (5)$$
where $n_k$ represents the maximum number of fringe orders specified by the LUT, L represents the number of amplitude levels and $s_q$ is the length of the subsequence. In this paper, L and $s_q$ are both assigned to 4, so the value of $n_k$ is 108. Typically, the resolution of the projector is $1140 \times 912$ pixels and a single fringe period of around 20 pixels is sufficient to achieve high experimental accuracy, so the number of fringe orders across the full projection pattern is usually around 60, which implies that $n_k$ is satisfactory for large-scale 3D measurements.

To improve measuring efficiency, all codewords are modulated onto the required amplitude of the phase-shifting sinusoidal fringe patterns to form the corresponding composite phase-shifting fringe patterns. When the number of fringe orders k in the projection is specified as 32 and 108 respectively, the most common numbers of projection frames required by different conventional TPU methods are listed in Table 1, where the grey-level coding method uses 8 grey levels. Theoretically, with M frames of additional auxiliary patterns, Gray-code can locate $2^M$ fringe orders, and grey-level coding with p grey levels can locate $p^M$ fringe orders. The number of additional auxiliary patterns for the phase-coding, three-frequency and two-frequency heterodyne methods is determined by the patterns required for solving the phase. It can be seen from Table 1 that the proposed method requires zero additional auxiliary patterns and its total number of projection patterns is significantly reduced compared to the other five conventional TPU methods. One of the composite phase-shifting patterns with 24 fringe orders is shown in Fig. 2(c).
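The permutation count above can be checked directly by enumeration; this sketch simply lists all length-$s_q$ subsequences whose adjacent levels differ and assigns each an order, rather than reproducing the paper's specific code-element sequence from Fig. 2:

```python
from itertools import product

def build_codeword_lut(L=4, sq=4):
    """Enumerate every length-sq subsequence over L amplitude levels whose
    adjacent levels differ, and assign each one a fringe order.
    The number of entries equals L * (L - 1) ** (sq - 1)."""
    subsequences = [s for s in product(range(1, L + 1), repeat=sq)
                    if all(s[i] != s[i + 1] for i in range(sq - 1))]
    return {s: order + 1 for order, s in enumerate(subsequences)}

lut = build_codeword_lut()
print(len(lut))  # 108 distinct fringe orders for L = 4, sq = 4
```

Decoding then reduces to looking up a window of four consecutive extracted codewords in this table.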
Note that even though the last fringe near the edge of the valid measuring region has no corresponding codeword subsequence, its fringe order can be solved from the previous four consecutive codewords. Sharp height variations usually occur at object edges, which can make the proposed method less robust for codewords near these regions. During the decoding of fringe orders close to the edge, the current fringe order is obtained by adding or subtracting one from the adjacent fringe order on the side away from the edge. In addition, correct decoding in this paper requires that at least five consecutive codewords within the measurement range can be decoded correctly, implying that the length of the codeword sequence to be looked up in the LUT is greater than the length of the encoded codeword subsequence. If five consecutive codewords can find the corresponding order in the lookup table, the decoding priority of these codewords is higher than that of the neighboring codewords; even if the neighboring codewords are wrong, they can be corrected according to the continuity of the codewords.
The four frames of the composite phase-shifting fringe patterns are then projected onto the object and the camera captures the corresponding deformed patterns modulated by the object, which can be expressed as:

$$I_n^c(x,y) = r(x,y)\left\{ a + b(x)\cos \left[ \varphi (x,y) + \frac{2\pi n}{N} \right] \right\} \qquad (6)$$
The average intensity $ar(x,y)$ and the modulation distribution $b(x)r(x,y)$ can be extracted as follows:

$$ar(x,y) = \frac{1}{N}\sum_{n = 0}^{N - 1} I_n^c(x,y) \qquad (7)$$

$$b(x)r(x,y) = \frac{2}{N}\sqrt{ \left[ \sum_{n = 0}^{N - 1} I_n^c(x,y)\sin \left( \frac{2\pi n}{N} \right) \right]^2 + \left[ \sum_{n = 0}^{N - 1} I_n^c(x,y)\cos \left( \frac{2\pi n}{N} \right) \right]^2 } \qquad (8)$$
To mitigate the effect of surface reflectivity on decoding, the normalized modulation with suppressed reflectivity is obtained by processing the modulation intensity with the average background intensity, as follows:

$$MC(x,y) = \frac{b(x)r(x,y)}{ar(x,y)} = \frac{b(x)}{a} \qquad (9)$$
Since the reflectivity factor is removed from the modulation intensity according to the above equation and a is a constant, $MC(x,y)$ can reflect the feature of the amplitude code $b(x)$. Additionally, the traditional phase-shifting sinusoidal deformed patterns ${I_n}(x,y)$, just as shown in Eq. (1), can be obtained from $I_n^c(x,y)$ by the following equation:

$$I_n(x,y) = ar(x,y) + \frac{I_n^c(x,y) - ar(x,y)}{MC(x,y)} \qquad (10)$$
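Under the four-step assumption, extracting the average intensity, the modulation, the normalized modulation MC and the equal-amplitude frames described above can be sketched as follows; the exact form of the equal-amplitude normalization is an assumption based on the text:

```python
import numpy as np

def demodulate_composite(frames):
    """Split composite phase-shifting frames into background, modulation,
    normalized modulation and equal-amplitude frames.

    Returns (a*r, b(x)*r, MC, equal-amplitude frames)."""
    frames = np.asarray(frames, dtype=float)
    N = frames.shape[0]
    n = np.arange(N).reshape((N,) + (1,) * (frames.ndim - 1))
    ar = frames.mean(axis=0)                       # average intensity a*r(x,y)
    s = np.sum(frames * np.sin(2 * np.pi * n / N), axis=0)
    c = np.sum(frames * np.cos(2 * np.pi * n / N), axis=0)
    br = (2.0 / N) * np.hypot(s, c)                # modulation b(x)*r(x,y)
    mc = br / ar                                   # normalized modulation b(x)/a
    eq_amp = ar + (frames - ar) / mc               # equal-amplitude frames (assumed form)
    return ar, br, mc, eq_amp
```

On a synthetic pixel with a = 100, b = 70 and reflectance r = 0.8, this recovers MC = 0.7 independently of r, which is exactly the reflectivity-suppression property exploited for decoding.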
Consequently, the phase-shifting sinusoidal deformed patterns and the four gray-level codewords containing order information can be extracted simultaneously from the captured composite deformed patterns, and then the phase $\varphi (x,y)$ caused by the height of the object surface can be obtained with Eq. (2) naturally. Taking the four-step phase shift as an example, Fig. 3 shows the pattern extraction procedure. The four captured composite deformed patterns are shown in Fig. 3(a). The normalized modulation, which reflects the four-level codeword feature, is shown in Fig. 3(b). The extracted traditional sinusoidal deformed patterns are shown in Fig. 3(c), and the phase $\varphi (x,y)$ caused by the height of the object surface is shown in Fig. 3(d). As shown in Fig. 4(a-b), the captured composite deformed patterns are transformed by Eq. (10) to obtain the equal-amplitude ${I_n}$.
In the decoding process, by applying an intensity threshold segmentation method based on the grayscale histogram of $MC(x,y)$, an initial range of the four codewords can be extracted. The shadow region causes regions of different fringe orders to merge into the same connected region, so the shadow region must be removed to obtain the effective reconstruction region. In addition, in order to cope with the measurement of objects with abrupt height changes, the surface contours of the objects can be extracted in advance using the Sobel edge detection algorithm, and the extracted contours are then processed by morphological opening and closing operations to obtain the edge binary mask em(x,y). The em(x,y) has the value of 1 at edge pixels and 0 at non-edge pixels. Therefore, the valid region $vr(x,y)$ can be generated by the following equation:
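A minimal sketch of this valid-region extraction with scipy.ndimage might look as follows; the shadow and edge thresholds, and the use of the modulation map as the edge-detection input, are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def valid_region(mc, shadow_thresh=0.1, edge_thresh=0.5):
    """Valid-region sketch: keep pixels with sufficient modulation (drops
    shadow regions) and exclude a morphologically cleaned Sobel edge mask."""
    shadow_ok = mc > shadow_thresh                  # low modulation => shadow
    gx = ndimage.sobel(mc, axis=1)
    gy = ndimage.sobel(mc, axis=0)
    em = np.hypot(gx, gy) > edge_thresh             # raw edge mask em(x,y)
    em = ndimage.binary_closing(ndimage.binary_opening(em))
    return shadow_ok & ~em                          # vr(x,y)
```

With a synthetic modulation map that is 0.7 on the left half and 0 (shadow) on the right, interior pixels on the left remain valid while the shadow is removed.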
Due to the uneven reflectivity of the object surface and shadow occlusion, methods relying on intensity encoding are prone to order confusion during decoding. In addition, some codewords in low-quality pixels caused by sharp surface changes are unreliable. To enhance the robustness of amplitude-intensity coding, this paper proposes a codeword overlap correction method based on image morphological processing techniques.
2.3 Principle of correcting the codeword
To improve the decoding efficiency and reduce the difficulty of morphological processing, this paper uses the threshold th2 to segment $MC(x,y)$ and generate the binary codeword template $t{c_1}(x,y)$ containing codewords 1 and 2, and then the thresholds th1 and th3 are used to generate the binary codeword template $t{c_2}(x,y)$ containing codewords 2 and 3, wherein the binary codeword templates are generated by the following equations:
Then, a morphological closing operation is employed on the codeword templates before the correction operation, using structural elements to smooth the contours of the image and connect isolated areas in $t{c_1}(x,y)$ and $t{c_2}(x,y)$. Taking the codeword template $t{c_1}(x,y)$ as an example, the detailed steps of the correction method are described as follows:
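The template generation and closing step can be sketched with simple comparisons; since the text names only the thresholds th1–th3, the comparison directions below are assumptions (levels ordered 1 < 2 < 3 < 4 with th1 < th2 < th3 between them):

```python
import numpy as np
from scipy import ndimage

def codeword_templates(mc, th1, th2, th3):
    """Binary codeword templates from the normalized modulation MC.
    tc1 covers the lower two amplitude levels (codewords 1 and 2), tc2 the
    middle two (codewords 2 and 3); both are smoothed by a morphological
    closing to connect isolated areas."""
    tc1 = mc < th2
    tc2 = (mc >= th1) & (mc < th3)
    tc1 = ndimage.binary_closing(tc1, structure=np.ones((3, 3)))
    tc2 = ndimage.binary_closing(tc2, structure=np.ones((3, 3)))
    return tc1, tc2
```

On a map with amplitude levels 0.25/0.45/0.65/0.85 and thresholds 0.35/0.55/0.75, tc1 selects the two lowest-level regions and tc2 the two middle ones.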
Step 1. Generate masks and label their single-connected domains. The wrapped phase can be segmented into single-connected regions according to the phase value ranges. Then, masks $bw(x,y)$ and $mw(x,y)$ are generated by the following equations:

$$bw(x,y) = \begin{cases} 1, & \varphi (x,y) < 0 \\ 0, & \textrm{otherwise} \end{cases}$$

$$mw(x,y) = \begin{cases} 1, & -\pi /2 \le \varphi (x,y) < \pi /2 \\ 0, & \textrm{otherwise} \end{cases}$$
Since the masking operation is pixel-wise, the coordinate index $(x,y)$ will be omitted in the masks for simplicity. As shown in Fig. 6, the binary mask $bw$ is the negative region of the wrapped phase, and $mw$ is the central phase region. Then, three half-period masks $y{m_n}$, n = 1, 2, 3, are obtained by using the mask vr to process bw and mw, which can be interpreted by the following equation:
Accordingly, the connected components can be extracted from the binary images ymn, which give different labels to the disconnected areas of different periods.
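Step 1 can be sketched with scipy.ndimage; the choice of the three half-period masks (negative half, non-negative half, and a central band for mw) is an assumed reading of the bw/mw description above:

```python
import numpy as np
from scipy import ndimage

def half_period_labels(phase, vr):
    """Label the half-period single-connected domains of Step 1.
    bw marks the negative half of the wrapped phase; the central band
    [-pi/2, pi/2) used for mw is an assumption from 'central phase region'.
    Returns the labeled masks ym1, ym2, ym3."""
    bw = (phase < 0) & vr                                   # negative region
    pw = (phase >= 0) & vr                                  # non-negative region
    mw = (phase >= -np.pi / 2) & (phase < np.pi / 2) & vr   # central band
    return [ndimage.label(m)[0] for m in (bw, pw, mw)]
```

For a wrapped ramp spanning two fringe periods, each mask splits into the expected disconnected half-period regions with distinct integer labels.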
Step 2. Match the same label region of the mask to the codeword template. The labeled $y{m_n}$ are multiplied separately with $t{c_1}(x,y)$. If the area of each label in the codeword template is greater than half the area of the connected domain to which the same label belongs in the labeled $y{m_n}$, the single-connected region of the codeword is considered to match the connected domain of the labeled $y{m_n}$. As shown in Fig. 7, the matching results of $y{m_n}$ are $p{m_n}$.
Step 3. Verify the correctness to ensure that only one codeword exists per period. The matching results pm1 and pm2 are independent of each other within a period and each only accounts for half of the phase domain, which may lead to half-period codeword errors. To address this problem, the matching result pm3 is constructed as a reference to correct the code templates processed by bw. Process ym3 label region by label region: if pm1 and pm2 have different values at a label region location, set the values of the single-connected domain at that location in both to the value of pm3 at that location.
Eventually, the two single-connected codeword templates pm1 and pm2, each occupying a half-period phase domain, are added to obtain the corrected codeword template $t{c_1}(x,y)$. Moreover, the closing and hole-filling operations on the codeword templates remove the shadow areas between the different connected domains in each order period, resulting in the corrected codeword template that contains codewords 3 and 4. The same operation is performed on the codeword template $t{c_2}(x,y)$ to obtain codewords 2 and 3. The codeword distribution $dc(x,y)$ can be extracted according to the following equation:
The half-period single-connected domain correction method unifies the codewords in each one-period connected domain and solves the codeword confusion problem caused by uneven reflectivity. After obtaining the distribution map of the four codewords, the fringe order is progressively resolved according to the position of the current codeword in the LUT generated at encoding time. As shown in Fig. 8(a)-(c), the final fringe order is successfully resolved by the proposed method. In particular, to guarantee that the same connected domain lies in one period, the order with the most pixels in the current connected domain is used to fill the entire single-connected domain.
In actual measurements, even if the fringe order is corrected by the phase half-period template, shadow areas or noise in the wrapped phase map cause the period edges of the fringe order and the 2π jump edges of the wrapped phase not to be strictly aligned with each other, inevitably resulting in jump errors.
2.4 Principle of jump error correction
There are two main types of jump error [21] in absolute phase retrieval that cannot be easily eliminated by median filtering: a random error generated by the wrapped phase itself near the 2π jump position, and a mismatch error between the order edge and the phase period edge. In this paper, the second type of error is well suppressed by constructing the fringe orders based on filling the half-period single-connected domains of the segmented phase. It is therefore only necessary to suppress the first type. A method of correcting the jump error based on locating phase zeros is proposed, which is valid for other phase unwrapping methods as well.
A phase period is divided into two halves using the constructed phase zero. In one fringe order, the phase values in the left (right) half that are greater (less) than zero have 2π subtracted (added). Then, the fringe order offset is directly added to the left and right halves of the wrapped phase. As can be seen in Fig. 9, there is an edge phase jump error due to the arctangent calculation in the right half of the fringe order; this jump error produces values less than zero, which can be eliminated by adding 2π. Determining the position of the zero point is crucial for correcting phase jump errors. However, locating the phase zero point based only on the sign jump of $bw$ can easily cause the 2π jump boundary of the phase period to be misclassified as the zero point. Therefore, the zero point is located where the $bw$ jump position has a value of 1 on $mw$, as shown in Fig. 9(a). Furthermore, the presence of phase noise can cause multiple zero points within a fringe order. In this case, all the pixels between the first and last zero points are treated as one zero point.
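A one-dimensional sketch of this zero-point correction, assuming the zero within each order is the wrapped phase's first negative-to-nonnegative crossing (a simplification of the bw/mw construction), could read:

```python
import numpy as np

def segmented_unwrap_row(phase, order):
    """Correct 2*pi jump errors along one row, then add the order offset.
    In each fringe order the phase zero splits the period: left-half pixels
    with phase > 0 get 2*pi subtracted, right-half pixels with phase < 0
    get 2*pi added."""
    phase = np.asarray(phase, dtype=float)
    order = np.asarray(order)
    out = phase + 2 * np.pi * order
    for k in np.unique(order):
        idx = np.flatnonzero(order == k)
        seg = phase[idx]
        crossings = np.flatnonzero((seg[:-1] < 0) & (seg[1:] >= 0))
        if crossings.size == 0:
            continue
        z = crossings[0] + 1                         # first right-half pixel
        out[idx[:z]] -= 2 * np.pi * (seg[:z] > 0)    # left half, phase > 0
        out[idx[z:]] += 2 * np.pi * (seg[z:] < 0)    # right half, phase < 0
    return out
```

For a single-order ramp with one corrupted pixel at each period edge (off by ±2π), the corrected row matches the noise-free ramp.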
Finally, a phase-to-height mapping algorithm [38,39] can be used to obtain 3D reconstruction from the absolute phase.
3. Experiments
To demonstrate the validity of the proposed method, a measurement system is developed using a DLP LightCrafter 4500 projector with a resolution of $1140 \times 912$ pixels, a projection rate of 120 fps and the ability to project sinusoidal fringes with 8-bit depth, and a GEV-B1610M CCD camera with a resolution of $1624 \times 1236$ pixels, as shown in Fig. 10. The projected fringe patterns have a fringe period of 24 pixels and the phase change direction is vertical. The experiments verify that the proposed method can efficiently achieve absolute phase retrieval in 3D reconstruction without the need for additional patterns, while ensuring high reconstruction accuracy.
3.1 Measurement on continuous objects
Firstly, two isolated continuous objects, as shown in Fig. 11, are measured with four frames of the traditional PSP method, one frame of the traditional FTP method and four frames of the proposed method for comparison. In this experiment, both the PSP method and the FTP method project sinusoidal fringes with a uniform intensity amplitude equal to that of the fringes projected by the proposed method.
Figure 11(a)-(c) show the wrapped phases extracted by PSP, FTP, and the proposed method respectively, where the fundamental frequency of FTP is extracted by rectangular window filtering. Figure 11(d)-(f) are the corresponding reconstruction results of Fig. 11(a)-(c). Since only the accuracy of the phase calculation is of concern, the fringe orders of PSP and FTP are all extracted by the proposed method. The root-mean-square errors (RMSEs) of the reconstruction results of the proposed method and FTP relative to PSP are calculated: the RMSE of the proposed method is 0.0875 mm and that of FTP is 0.4038 mm. Figure 12(a) shows the profiles of row 577 of the results reconstructed by the different methods, and a partial enlargement is shown in Fig. 12(b). It can be clearly seen in Fig. 12(b) that the reconstruction result of the proposed method in the dorsal region of the turtle model is closer to that of PSP than the FTP result is. The experimental results demonstrate that the accuracy of the proposed method is close to that of PSP and much higher than that of FTP. For achieving absolute phase retrieval, the proposed method is superior to methods that project sinusoidal fringe patterns plus M frames of encoded patterns, and can achieve absolute measurement without additional auxiliary patterns while ensuring a certain accuracy.
3.2 Measurement on isolated complex objects
Furthermore, several isolated complex objects with abrupt edges, as shown in Fig. 13, are chosen to verify the validity of the proposed method. Figure 13(a) displays the four captured composite deformed patterns. Then, the modulation in Fig. 13(b) and the phase-shifting sinusoidal deformed patterns can be simultaneously obtained from the composite deformed patterns by pixel-by-pixel calculation. The wrapped phase in Fig. 13(c) is solved from the phase-shifting sinusoidal deformed patterns. As illustrated in Fig. 13(d-e), the fringe order can be appointed by the position of the codewords in the LUT, which verifies the validity of the method. In addition, areas of codeword confusion can be well corrected. Ultimately, the method proposed in this paper successfully extracts codewords based on modulation decoding from the composite deformed patterns.
Figure 14 shows the 3D shapes of the objects reconstructed by different methods. As shown in Fig. 14(a-c), the reconstructed depth maps of the conventional Gray-code encoding strategy clearly reveal many jump errors, which can be corrected by Wang's Gray-code method [24] and by the proposed method. Using the reconstruction depth of Wang's Gray-code method as the quasi-true value, the RMSE of the conventional Gray-code encoding strategy without jump error correction is 1.7500 mm, while the RMSE of the proposed method is 0.0633 mm. Compared to the conventional Gray-code encoding strategy, the proposed method suppresses the jump error efficiently, so the reconstruction result is smoother and its experimental accuracy is closer to the quasi-true value. The low modulation inevitably introduces more random measurement noise, which may reduce the accuracy of the proposed method compared to Wang's Gray-code method. However, the number of projected patterns in the proposed method is significantly reduced compared to the N + 6 frames required by Wang's Gray-code method, resulting in a significant increase in absolute measurement efficiency with some compromise on accuracy. In this experiment, with 4 frames of projected phase-shifting patterns, the projection efficiency of the proposed method is improved by a factor of 1.5 compared to Wang's method.
Finally, the robustness is verified by measuring a standard stair, and Fig. 15 shows the corresponding reconstruction process. Figure 15(a)-(f) show one captured pattern, the wrapped phase, the modulation, the template codewords, the fringe order and the reconstructed result, respectively. Close to the edge of the shaded area, it is possible that only a small part of a codeword is present, but it can be corrected based on the position of the adjacent consecutive codewords in the LUT. By simply adding or subtracting 1 from the order solved from the adjacent consecutive codewords, the fringe order can be corrected efficiently. After this simple order correction, the shape of objects with discontinuous surfaces can be successfully reconstructed. The height of each step of the standard stair is 29 mm, and the reconstruction results using the proposed method give an average step height of approximately 29.1714 mm. It can be concluded that the proposed encoding strategy based on space division multiplexing is feasible for the measurement of standard stepped objects.
4. Conclusion
An absolute phase retrieval method based on fringe amplitude encoding without any additional auxiliary pattern is proposed. By utilizing amplitude modulation, the four-level grey code used to extract the fringe order is modulated onto the amplitude of the phase-shifting sinusoidal fringe patterns to form composite phase-shifting fringe patterns. To reduce decoding difficulty, an encoding strategy based on space division multiplexing is chosen to build a lookup table for decoding the codewords and achieving absolute phase retrieval. Consequently, the four-level codewords can be demodulated from the modulation of the captured composite deformed patterns. Furthermore, morphological processing is employed in the segmentation of the modulation, the result of which is matched and filled using half-period single-connected phase domains to generate a codeword template whose period edges roughly coincide with those of the wrapped phase. In practice, the proposed method decodes the codewords accurately based on the LUT to obtain the fringe order, which is combined with a segmented phase correction to suppress jump errors. Compared to conventional TPU methods, the proposed method enables efficient 3D measurement by simultaneously obtaining the wrapped phase and the fringe order from the composite deformed patterns without projecting any additional auxiliary patterns. The feasibility and effectiveness of the proposed method have been verified by successfully measuring multiple isolated continuous objects and multiple isolated complex objects with abrupt edges.
Funding
National Natural Science Foundation of China (No.62375188).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data will be made available on request.
References
1. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: A review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]
2. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]
3. S. Van der Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016). [CrossRef]
4. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]
5. C. Zuo, S. Feng, L. Huang, et al., “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]
6. X. Su and W. Chen, “Fourier transform profilometry: A review,” Opt. Lasers Eng. 35(5), 263–284 (2001). [CrossRef]
7. Y. Wu, Y. Cao, Z. Huang, et al., “Improved composite Fourier transform profilometry,” Opt. Laser Technol. 44(7), 2037–2042 (2012). [CrossRef]
8. K. Qian, F. Shu, and X. Wu, “Determination of the best phase step of the Carré algorithm in phase shifting interferometry,” Meas. Sci. Technol. 11(8), 1220–1223 (2000). [CrossRef]
9. G. Stoilov and T. Dragostinov, “Phase-stepping interferometry: five-frame algorithm with an arbitrary step,” Opt. Lasers Eng. 28(1), 61–69 (1997). [CrossRef]
10. D. J. Bone, “Fourier fringe analysis: the two-dimensional phase unwrapping problem,” Appl. Opt. 30(25), 3627–3632 (1991). [CrossRef]
11. S. Li, X. Wang, X. Su, et al., “Two-dimensional wavelet transform for reliability-guided phase unwrapping in optical fringe pattern analysis,” Appl. Opt. 51(12), 2026–2034 (2012). [CrossRef]
12. K. Chen, J. T. Xi, and Y. G. Yu, “Quality-guided spatial phase unwrapping algorithm for fast three-dimensional measurement,” Opt. Commun. 294, 139–147 (2013). [CrossRef]
13. H. An, Y. Cao, H. Wu, et al., “Spatial-temporal phase unwrapping algorithm for fringe projection profilometry,” Opt. Express 29(13), 20657–20672 (2021). [CrossRef]
14. J. M. Huntley and H. Saldner, “Temporal phase-unwrapping algorithm for automated interferogram analysis,” Appl. Opt. 32(17), 3047–3052 (1993). [CrossRef]
15. Y. Wan, Y. Cao, X. Liu, et al., “High-frequency color-encoded fringe-projection profilometry based on geometry constraint for large depth range,” Opt. Express 28(9), 13043–13058 (2020). [CrossRef]
16. C. Zuo, J. Qian, S. Feng, et al., “Deep learning in optical metrology: a review,” Light: Sci. Appl. 11(1), 39 (2022). [CrossRef]
17. D. Ghiglia and L. A. Romero, “Minimum Lp-norm two-dimensional phase unwrapping,” J. Opt. Soc. Am. A 13(10), 1999–2013 (1996). [CrossRef]
18. S. Lian, H. Yang, and H. Kudo, “Simple phase unwrapping method with continuous convex minimization,” Opt. Express 30(18), 33395–33411 (2022). [CrossRef]
19. S. Zhang, X. Li, and S. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46(1), 50–57 (2007). [CrossRef]
20. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42(3), 245–261 (2004). [CrossRef]
21. C. Zuo, L. Huang, M. Zhang, et al., “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]
22. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]
23. Z. Wu, W. Guo, Y. Li, et al., “High-speed and high-efficiency three-dimensional shape measurement based on Gray-coded light,” Photonics Res. 8(6), 819–829 (2020). [CrossRef]
24. L. Wang, Y. Cao, and H. An, “Gray-code fringe order jump error self-correction based on shifted phase encoding for phase measuring profilometry,” Opt. Commun. 524, 128763 (2022). [CrossRef]
25. Y. Wang and S. Zhang, “Novel phase-coding method for absolute phase retrieval,” Opt. Lett. 37(11), 2067–2069 (2012). [CrossRef]
26. H. An, Y. Cao, Y. Zhang, et al., “Phase-Shifting Temporal Phase Unwrapping Algorithm for High-Speed Fringe Projection Profilometry,” IEEE Trans. Instrum. Meas. 72, 1–9 (2023). [CrossRef]
27. H. Li, Y. Cao, Y. Wan, et al., “An improved temporal phase unwrapping based on super-grayscale multi-frequency grating projection,” Opt. Lasers Eng. 153, 106990 (2022). [CrossRef]
28. H. Zhang, Y. Cao, H. Li, et al., “Real-time computer-generated frequency-carrier Moire profilometry with three-frequency heterodyne temporal phase unwrapping,” Opt. Lasers Eng. 161(1), 109201 (2023). [CrossRef]
29. Y. Zheng, Y. Jin, M. Duan, et al., “Joint coding strategy of the phase domain and intensity domain for absolute phase retrieval,” IEEE Trans. Instrum. Meas. 70, 1–11 (2021). [CrossRef]
30. H. Wu, Y. Cao, Y. Dai, et al., “Ultra-fast 3D imaging by a big codewords space division multiplexing binary coding,” Opt. Lett. 48(11), 2793–2796 (2023). [CrossRef]
31. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107(1), 28–37 (2018). [CrossRef]
32. Y. Hu, M. Duan, Y. Jin, et al., “Shading-based absolute phase unwrapping,” Opt. Lett. 46(8), 1955–1958 (2021). [CrossRef]
33. H. An, Y. Cao, N. Yang, et al., “Absolute phase retrieval using one coding pattern for the dynamic 3-D measurement,” Opt. Lasers Eng. 159, 107213 (2022). [CrossRef]
34. Y. Zhang, N. Fan, Y. Wu, et al., “Four-pattern, phase-step non-sensitive phase shifting method based on Carré algorithm,” Measurement 171(6), 108762 (2021). [CrossRef]
35. R. Porras-Aguilar, K. Falaggis, and R. Ramos-Garcia, “Optimum projection pattern generation for grey-level coded structured light illumination systems,” Opt. Lasers Eng. 91, 242–256 (2017). [CrossRef]
36. R. Porras-Aguilar and K. Falaggis, “Absolute phase recovery in structured light illumination systems: sinusoidal vs. intensity discrete patterns,” Opt. Lasers Eng. 84, 111–119 (2016). [CrossRef]
37. R. Porras-Aguilar, K. Falaggis, and R. Ramos-Garcia, “Error correcting coding-theory for structured light illumination systems,” Opt. Lasers Eng. 93, 146–155 (2017). [CrossRef]
38. Q. Ma, Y. Cao, C. Chen, et al., “Intrinsic feature revelation of phase-to-height mapping in phase measuring profilometry,” Opt. Laser Technol. 108, 46–52 (2018). [CrossRef]
39. Y. Xiao, Y. Cao, and Y. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012). [CrossRef]