
Comparative analysis of circular and linear fringe projection profilometry: from calibration to 3D reconstruction

Open Access

Abstract

This study compares the accuracy of circular and linear fringe projection profilometry in terms of system calibration and 3D reconstruction. We introduce what we believe to be a novel calibration method and 3D reconstruction technique using circular and radial fringe patterns. Our approach is compared with the traditional linear phase-shifting method through several 2 × 2 experimental setups. Results indicate that our 3D reconstruction method surpasses the linear phase-shifting approach in performance, although our calibration method does not outperform its conventional counterpart. Further analysis reveals that increasing phase sensitivity and phase estimation error contribute to the relative underperformance in calibration. This paper offers insights into the potential and limitations of circular fringe projection profilometry.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP), recognized for its high resolution, high accuracy, and robustness, has been widely applied in various fields for 3D measurement, including entertainment, manufacturing, robotics, and so forth [1–4]. The measurement accuracy of FPP is largely influenced by system calibration [5] and fringe pattern direction [6,7].

The calibration methods for a typical fringe projection system consisting of a single camera-projector pair generally fall into three categories: phase-height mapping [8–10], least-squares approaches [11,12], and triangular stereo models [13,14]. This research primarily concentrates on triangular stereo models. A critical step of these models is enabling the projector to identify feature points like a camera. A prevalent approach is to map feature points detected by the camera to the projector with the assistance of phase. Up to now, many triangular stereo calibration methods [13,15,16] have been investigated to obtain more precise calibration parameters. However, most of these methods rely on horizontal and vertical fringe patterns for phase acquisition and for mapping camera feature points onto the projector. Departing from conventional horizontal and vertical fringe patterns for calibration, Li et al. [17] recommended calibrating the system using the optimal fringe angle to increase measurement accuracy. Given this, it is imperative to consider the fringe pattern direction as an important contributing factor to the calibration and measurement accuracy of FPP.

Over the years, many researchers have investigated the impact of fringe pattern direction on the accuracy of FPP. Wang et al. [18] proposed to determine the optimal fringe angle by projecting a set of horizontal and vertical fringe patterns onto a step-height object and analyzing the phase difference between the top and bottom faces of the object. Yu et al. [7] employed a similar method to obtain the optimal fringe angle and optimal fringe frequency. Despite the feasibility and simplicity of implementation of the above method, the underlying principles guiding the selection of the optimal fringe angle remain largely unexplored. Drawing from the principles of triangulation and stereo vision, Zhou et al. [19] indicated that the optimal fringe direction is parallel to the baseline connecting the projector and camera in FPP. However, in practice, it is challenging to confirm the direction of the camera-projector baseline. Building upon epipolar geometry, Zhang et al. [6] formulated phase sensitivity as a function of the angle between the fringe direction and the epipolar line. They suggested orienting fringe patterns perpendicular to epipolar lines to maximize phase sensitivity and achieve better measurement accuracy. The perpendicular orientation of the optimal fringe angle to the epipolar line is also supported by the theoretical analysis in further research by Lv et al. [20]. However, Lv et al. [20] emphasized that a fixed fringe angle might not yield optimal phase sensitivity across all image pixels. To address the limitations of fixed fringe angles, circular FPP methods have emerged. Ma et al. [21] introduced a high-accuracy, real-time method employing three circular fringe patterns in which the fringe angle varies with the image pixel. Further advancing this field, Zhang et al. [22] introduced a novel circular fringe projection profilometry and analyzed its 3D sensitivity utilizing extended epipolar geometry. This method effectively circumvents the need to double the number of patterns while enabling real-time operation. However, it requires the generation of four distinct look-up tables and does not account for the effects of projector calibration. Mandapalli et al. [23] introduced a radially symmetric circular fringe pattern for accurate, unambiguous surface profiling of objects with sudden height discontinuities. This approach offered the capability to profile objects with a fourfold increase in dynamic range and at considerably lower fringe frequencies. The existing research demonstrates that, compared to conventional linear fringe patterns, circular fringe patterns have advantages in phase sensitivity and suitability for profiling objects with abrupt depth changes.

Furthermore, circular fringe patterns have been applied in uniaxial or parallel structured light systems using telecentric lenses. Zhao et al. [24] proposed a circular fringe projection profilometry (CFPP) method to determine the height of a point by calculating its distance from the optical center of a projector along the optical axis. However, this approach struggles with reconstructing the height at the nominal zero-phase point and has lower accuracy near this point. Addressing this challenge, Zhang et al. [25] enhanced CFPP by employing a 2D ruler-based or plane constraint-based method to detect the zero-phase point, though this increases the method’s complexity. Subsequently, Zhang et al. [26] proposed a simpler approach that mitigates the zero-phase point issue and does not require calibrating the camera and projector in the system.

Given the advantages and flexibility of circular fringe patterns, it is of high importance to investigate a complete technical solution from system calibration to 3D reconstruction. As circular fringe patterns are used for 3D reconstruction, it is natural to hypothesize that the performance of CFPP may be optimized if the same set of circular fringe patterns is also used for calibration. Based on this hypothesis, this paper aims to develop and evaluate a novel CFPP method incorporating circular fringe patterns for both system calibration and 3D reconstruction. A new projector calibration approach leveraging circular and radial fringe patterns for mapping camera points to projector points is introduced. In addition, a new 3D reconstruction model for CFPP based on the calibrated camera and projector parameters is developed. We compared our method's performance with Zhang and Huang's method [13] and the linear phase-shifting method, conducting several $2\times 2$ experiments on diverse samples for visual assessment. Our findings indicate that circular FPP outperforms traditional linear FPP, while our proposed calibration method slightly underperforms the conventional Zhang and Huang's method. In the discussion section, we present an in-depth analysis of the factors leading to this relative underperformance of our calibration method, identifying and elaborating upon the increasing trends in both sensitivity and estimated phase error. These factors are recognized as the key contributors to the diminished accuracy of our calibration method, providing vital insights for future enhancements in CFPP calibration techniques.

2. Principle of the method

This section will first explain how to calibrate an FPP system using circular and radial fringe patterns. Then, we will introduce the proposed circular fringe projection profilometry for 3D reconstruction.

2.1 System calibration

2.1.1 Pinhole imaging model

In a structured light system, both the camera and projector can be effectively characterized using a conventional pinhole imaging model, as depicted in Fig. 1. The pinhole imaging model can be expressed mathematically as:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_{u} & \gamma & u_{0} \\ 0 & f_{v} & v_{0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_{1} \\ r_{21} & r_{22} & r_{23} & t_{2} \\ r_{31} & r_{32} & r_{33} & t_{3} \end{bmatrix} \begin{bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{bmatrix}.$$

In the model, the rotation matrix $\mathbb{R}=[r_{ij}]_{3\times 3}$ and translation vector $\vec{t} = \left(t_1, t_2, t_3\right)^\top$, which constitute the extrinsic parameters of the system, transform a point $p^w\left(x^w, y^w, z^w\right)$ in the world coordinate system to a point $p^c(x^c, y^c, z^c)$ in the camera coordinate system. Following the transformation, the point $p^c\left(x^c, y^c, z^c\right)$ is projected to a 2D point $\left(u, v\right)$ on the image plane based on the intrinsic parameters $f_u$, $f_v$, $\gamma$, and $\left(u_0, v_0\right)$, where $f_u$ and $f_v$ are the focal lengths of the camera along the $u$ and $v$ directions, $\gamma$ is the skew factor, and $\left(u_0, v_0\right)$ is the principal point in the 2D pixel coordinate system. The intrinsic and extrinsic parameters can be further represented with a projection matrix $\mathbb{M}$,

$$\mathbb{M}= \begin{bmatrix} f_{u} & \gamma & u_{0} \\ 0 & f_{v} & v_{0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_{1} \\ r_{21} & r_{22} & r_{23} & t_{2} \\ r_{31} & r_{32} & r_{33} & t_{3} \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{1} \\ m_{21} & m_{22} & m_{23} & m_{2} \\ m_{31} & m_{32} & m_{33} & m_{3} \end{bmatrix}.$$

The model of the camera can be described as

$$s^c \begin{bmatrix} u^c \\ v^c \\ 1 \end{bmatrix} = \begin{bmatrix} m^c_{11} & m^c_{12} & m^c_{13} & m^c_{14} \\ m^c_{21} & m^c_{22} & m^c_{23} & m^c_{24} \\ m^c_{31} & m^c_{32} & m^c_{33} & m^c_{34} \end{bmatrix} \begin{bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{bmatrix}.$$

Similarly, we consider the projector in a structured light system as an inverse-imaging camera [13], whose mathematical formulation is described as

$$s^p \begin{bmatrix} u^p \\ v^p \\ 1 \end{bmatrix} = \begin{bmatrix} m^p_{11} & m^p_{12} & m^p_{13} & m^p_{14} \\ m^p_{21} & m^p_{22} & m^p_{23} & m^p_{24} \\ m^p_{31} & m^p_{32} & m^p_{33} & m^p_{34} \end{bmatrix} \begin{bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{bmatrix}.$$
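To make the model concrete, the following is a minimal NumPy sketch of Eqs. (1)-(4): building a $3\times 4$ projection matrix from intrinsic and extrinsic parameters and projecting a world point to a pixel. The numeric values are purely illustrative, not calibrated parameters of our system.

```python
import numpy as np

# Illustrative intrinsic matrix [f_u, gamma, u_0; 0, f_v, v_0; 0, 0, 1].
K = np.array([[1600.0,    0.0, 640.0],
              [   0.0, 1600.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                             # rotation, world -> camera
t = np.array([[0.0], [0.0], [500.0]])     # translation (mm)

M = K @ np.hstack([R, t])                 # 3x4 projection matrix, Eq. (2)

p_w = np.array([10.0, -20.0, 30.0, 1.0])  # homogeneous world point
s_uv = M @ p_w                            # s * [u, v, 1]^T, Eq. (3)
u, v = s_uv[:2] / s_uv[2]                 # divide out the scale factor s
print(f"projected pixel: ({u:.2f}, {v:.2f})")
```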

Fig. 1. Pinhole imaging model. The figure is reprinted from [27].

2.1.2 Camera calibration

In this research, we employed Zhang's renowned calibration method [28] in conjunction with the OpenCV camera calibration toolbox [29] to calibrate the camera. Fig. 2(a) illustrates the design of our calibration target, where the centers of the circles act as the feature points. Camera calibration involves two primary components: intrinsic and extrinsic calibration. The intrinsic calibration of the camera aims to estimate its internal parameters, $f_u$, $f_v$, $\gamma$, and $\left(u_0, v_0\right)$. This process involves capturing images of the calibration target in various poses (an example is depicted in Fig. 2(b)). For each image, the feature points (the centers of the circles) are extracted and then used in an iterative optimization process to refine the estimates of the camera's intrinsic parameters.
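As a concrete illustration of this step, the sketch below follows the usual OpenCV calibration flow with an asymmetric circle grid. The grid size and spacing match our target (5 rows × 9 columns, 20 mm), but the file names and the exact object-point layout and pattern-size ordering for the asymmetric grid are assumptions to be verified against the actual target.

```python
import glob
import cv2
import numpy as np

rows, cols, spacing = 5, 9, 20.0   # target geometry (mm)
pattern_size = (5, 9)              # OpenCV pattern size; verify the ordering convention
# Assumed asymmetric-grid layout: every other column offset by half a row.
objp = np.array([[(2 * c + r % 2) * spacing / 2.0, r * spacing, 0.0]
                 for c in range(cols) for r in range(rows)], np.float32)

obj_pts, img_pts, img_size = [], [], None
for fname in sorted(glob.glob("target_pose_*.png")):   # hypothetical file names
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    img_size = img.shape[::-1]
    found, centers = cv2.findCirclesGrid(
        img, pattern_size, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
    if found:                      # keep only poses where all centers are detected
        obj_pts.append(objp)
        img_pts.append(centers)

# Iterative refinement of the intrinsics (and per-pose extrinsics).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img_size, None, None)
print("reprojection RMS (px):", rms)
```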

Fig. 2. Camera calibration and target 3D estimation. (a) Asymmetric circle grid calibration target ($5$ rows $\times$ $9$ columns, 20 mm center-to-center distance); (b) an image of the target at one pose with extracted circle centers; (c) the estimated 3D target orientations.

Extrinsic calibration primarily estimates the rotation ($r_{ij}^{tg}$) and translation ($t_{i}^{tg}$) parameters, which transform points from the planar target coordinate system ($x^{tg}, y^{tg}, 0$) to the camera coordinate system ($x^{c}, y^{c}, z^{c}$). Fig. 2(a) defines the planar target coordinate system, with its origin at the bottom-left circle center. Having aligned the world coordinate system with the camera coordinate system, the rotation ($r_{ij}^{tg}$) and translation ($t_{i}^{tg}$) parameters convert the planar target points ($x^{tg}, y^{tg}, 0$) into three-dimensional points ($x^{wtg}, y^{wtg}, z^{wtg}$) within the world coordinate system, as detailed in Eq. (5). Additionally, Fig. 2(c) illustrates the estimated 3D orientations of the target in various poses.

$$\begin{bmatrix} x^{wtg} \\ y^{wtg} \\ z^{wtg} \end{bmatrix} = \begin{bmatrix} r^{tg}_{11} & r^{tg}_{12} & r^{tg}_{13} & t^{tg}_{1} \\ r^{tg}_{21} & r^{tg}_{22} & r^{tg}_{23} & t^{tg}_{2} \\ r^{tg}_{31} & r^{tg}_{32} & r^{tg}_{33} & t^{tg}_{3} \end{bmatrix} \begin{bmatrix} x^{tg} \\ y^{tg} \\ 0 \\ 1 \end{bmatrix}.$$

2.1.3 Circular and radial fringe patterns for projector calibration

In calibrating a structured light system, which consists of a camera and a projector, the latter can be treated as an inverse camera. Thus, system calibration hinges on enabling the projector to capture feature points in a way similar to a camera. To this end, phase is employed to establish a correspondence between feature points captured by the camera and those "pseudo-captured" by the projector [5]. Zhang and Huang's method [13] utilized horizontal and vertical fringe patterns for correspondence mapping. Different from this method, we utilized circular and radial fringe patterns with their center located at a specific point $O^s\left(u_s, v_s\right)$ to build the correspondence. In this research, the center of the circular and radial fringe patterns $\left(u_s, v_s\right)$ lies outside the camera and projector image planes, and we assigned $u_s = \textit{width} + \textit{height}/2$ and $v_s = \textit{height} + \textit{height}/2$. Here, $\textit{width}$ and $\textit{height}$ are the width and height of the fringe patterns, which are 912 and 1140 pixels, respectively. An example of circular and radial fringe patterns is illustrated in Fig. 3. Specifically, the circular fringe patterns can be generated based on Eq. (6).

$$I^c_i\left(u_p, v_p\right) = A + B \cos\left(\frac{2\pi}{T} r\left(u_p, v_p\right) + 2\pi \frac{i-1}{N}\right),$$
where $I^c_i$ is the grayscale intensity of the pixel in the $i^{th}$ circular fringe pattern at a coordinate $\left(u_p, v_p\right)$ in the projector image plane, $A$ and $B$ are two constants set to $127.5$ (i.e., half of the maximum possible intensity) for 8-bit gray depth, $T$ is the spatial period in units of pixels, $i$ and $N$ are the index and total number of phase shifts, and $r\left(u_p, v_p\right)$ is the radial distance between the pixel $\left(u_p, v_p\right)$ and the center $\left(u_s, v_s\right)$, as given in Eq. (7),
$$r\left(u_p, v_p\right) = \sqrt{\left(u_p-u_s\right)^2 + \left(v_p-v_s\right)^2}.$$

Similarly, we generated radial fringe patterns as described in Eq. (8),

$$I^r_i\left(u_p, v_p\right) = A + B \cos\left(\frac{2\pi}{\Delta \theta} \theta\left(u_p, v_p\right) + 2\pi \frac{i-1}{N}\right),$$
where $\Delta \theta$ is the angle change per spatial period in units of radians, and $\theta\left(u_p, v_p\right)$ is the pixel angle between the pixel $\left(u_p, v_p\right)$ and the center $\left(u_s, v_s\right)$, as given by
$$\theta(u_p, v_p) = \begin{cases} \arctan\left(\frac{v_p - v_s}{u_p - u_s}\right) & \text{if } u_p < u_s, \\ \frac{\pi}{2} & \text{if } u_p = u_s, \\ \pi - \arctan\left(\frac{v_p - v_s}{u_p - u_s}\right) & \text{otherwise}. \end{cases}$$

In addition, we binarized the circular and radial fringe patterns using the average grayscale intensity as the threshold, as shown in Eq. (10),

$$I^b_i(u_p, v_p) = \begin{cases} 255 & \text{if } I_i(u_p, v_p) > I_{avg},\\ 0 & \text{otherwise}. \end{cases}$$
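The pattern generation of Eqs. (6)-(10) is straightforward to vectorize; the sketch below is a minimal NumPy version using the pattern size, center placement, and spatial periods reported in Section 3.2. Because the center lies off the pattern, $u_p < u_s$ holds for every pixel, so only the first branch of Eq. (9) is needed.

```python
import numpy as np

W, H = 912, 1140                     # pattern width and height (px)
N, T = 18, 18                        # phase shifts and circular period (px)
dtheta = 2 * np.pi / 500             # angle change per radial period (rad)
u_s, v_s = W + H / 2, H + H / 2      # pattern center, off the image plane

up, vp = np.meshgrid(np.arange(W), np.arange(H))
r = np.hypot(up - u_s, vp - v_s)               # radial distance, Eq. (7)
theta = np.arctan((vp - v_s) / (up - u_s))     # pixel angle, Eq. (9) (u_p < u_s)

circular, radial = [], []
for i in range(1, N + 1):
    shift = 2 * np.pi * (i - 1) / N
    Ic = 127.5 + 127.5 * np.cos(2 * np.pi / T * r + shift)           # Eq. (6)
    Ir = 127.5 + 127.5 * np.cos(2 * np.pi / dtheta * theta + shift)  # Eq. (8)
    # Binarize around the average intensity, Eq. (10).
    circular.append(np.where(Ic > Ic.mean(), 255, 0).astype(np.uint8))
    radial.append(np.where(Ir > Ir.mean(), 255, 0).astype(np.uint8))
```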

For circular and radial fringe patterns, the relationship of the circular phase $\Phi _c$ and radial phase $\Phi _r$ with radial distance $r\left (u_p, v_p\right )$ and pixel angle $\theta \left (u_p, v_p\right )$ can be described as

$$\left\{ \begin{aligned} r(u_p, v_p) & = \frac{T}{2\pi}\Phi_c, \\ \theta(u_p, v_p) & = \frac{\Delta \theta}{2\pi}\Phi_r. \end{aligned} \right.$$

The phases of the circular and radial fringe patterns, $\Phi_c$ and $\Phi_r$, can be obtained using the phase-shifting method [30] and the Gray-code unwrapping technique [31] based on the fringe images captured by the camera. Thus, we can identify the corresponding feature points on the projector's image plane through Eq. (12),

$$\left\{ \begin{aligned} u^p & ={-}\frac{T}{2\pi}\Phi_c\cos\left(\frac{\Delta \theta}{2\pi}\Phi_r\right) + u_s, \\ v^p & ={-}\frac{T}{2\pi}\Phi_c\sin\left(\frac{\Delta \theta}{2\pi}\Phi_r\right) + v_s. \end{aligned} \right.$$
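For completeness, a minimal sketch of the N-step phase-shifting computation [30] is given below; the Gray-code step [31] that removes the $2\pi$ ambiguity is omitted for brevity. The same routine applies to both the circular and radial image sequences.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N frames with shifts 2*pi*(i-1)/N (cf. Eqs. (6) and (8))."""
    N = len(images)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return -np.arctan2(num, den)   # wrapped phase in (-pi, pi]
```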

Once the projector feature points are identified, we build the correspondence between feature points on the projector image plane and those on the camera image plane. Then, we utilize the OpenCV library [29] for stereo calibration to obtain the intrinsic parameters of the camera and projector as well as the rotation matrix and translation vector relating the projector to the camera.
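A sketch of this final step, assuming the per-pose object points, camera feature points, and mapped projector points have already been gathered as described above:

```python
import cv2

def stereo_calibrate(obj_pts, cam_pts, proj_pts, K_cam, dist_cam,
                     K_proj, dist_proj, proj_size=(912, 1140)):
    """Treat the projector as a second camera whose points come from Eq. (12)."""
    ret, K_cam, dist_cam, K_proj, dist_proj, R, t, E, F = cv2.stereoCalibrate(
        obj_pts, cam_pts, proj_pts, K_cam, dist_cam, K_proj, dist_proj,
        proj_size, flags=cv2.CALIB_USE_INTRINSIC_GUESS)  # jointly refine intrinsics
    # R, t: rotation and translation from the camera frame to the projector frame.
    return K_cam, dist_cam, K_proj, dist_proj, R, t
```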

Fig. 3. An example of fringe patterns for system calibration. (a)-(b) A circular and a radial fringe pattern centered at $\left(u_s, v_s\right)$. $r\left(u_p, v_p\right)$ is the distance between the point $p\left(u_p, v_p\right)$ and the center $O^s\left(u_s, v_s\right)$, and $\theta\left(u_p, v_p\right)$ is the pixel angle. (c)-(d) A horizontal and a vertical fringe pattern employed in Zhang and Huang's method [13].

2.2 Circular FPP for 3D reconstruction

In reconstructing the 3D geometry of samples, we only utilized binarized circular fringe patterns. Combining Eqs. (4), (11) and (12), we get the relationship between the phase $\Phi_c$ and the projection parameters $\mathbb{M}=[m_{ij}]_{3\times 4}$,

$$\left\{ \begin{aligned} u^{p} & ={-}\frac{T}{2\pi} \Phi_c \cos\theta + u_s = \frac{m^p_{11}x^w + m^p_{12}y^w + m^p_{13}z^w + m^p_{14}}{m^p_{31}x^w + m^p_{32}y^w + m^p_{33}z^w + m^p_{34}}, \\ v^{p} & ={-}\frac{T}{2\pi} \Phi_c \sin\theta + v_s = \frac{m^p_{21}x^w + m^p_{22}y^w + m^p_{23}z^w + m^p_{24}}{m^p_{31}x^w + m^p_{32}y^w + m^p_{33}z^w + m^p_{34}}. \end{aligned} \right.$$

We simplify the above equation and rewrite it as

$$\begin{aligned} \left(\frac{T}{2\pi}\right)^2 \Phi_c^2 \left(m^p_{31}x^w + m^p_{32}y^w + m^p_{33}z^w + m^p_{34}\right)^2 &= k_1 x^{w^2} + k_2 y^{w^2} + k_3 z^{w^2}\\ &\quad + k_4 + k_5 x^w y^w + k_6 x^w z^w\\ &\quad + k_7 y^w z^w + k_8 x^w + k_9 y^w + k_{10} z^w, \end{aligned}$$
where the parameters $k_1,\dots,k_{10}$, listed below, can be obtained directly from the projector's calibration parameters. Though the expressions of the parameters $k_i$ are complex, they are actually constants once the projector is calibrated and the center of the circular fringe patterns is fixed.
$$\left\{ \begin{array}{lclcl} k_{1} & = & (m^p_{11} - u_{s}m^p_{31})^2 + (m^p_{21} - v_{s}m^p_{31})^2, \\ k_{2} & = & (m^p_{12} - u_{s}m^p_{32})^2 + (m^p_{22} - v_{s}m^p_{32})^2, \\ k_{3} & = & (m^p_{13} - u_{s}m^p_{33})^2 + (m^p_{23} - v_{s}m^p_{33})^2, \\ k_{4} & = & (m^p_{14} - u_{s}m^p_{34})^2 + (m^p_{24} - v_{s}m^p_{34})^2, \\ k_{5} & = & 2(m^p_{11} - u_{s}m^p_{31})(m^p_{12} - u_{s}m^p_{32}) + 2(m^p_{21} - v_{s}m^p_{31})(m^p_{22} - v_{s}m^p_{32}), \\ k_{6} & = & 2(m^p_{11} - u_{s}m^p_{31})(m^p_{13} - u_{s}m^p_{33}) + 2(m^p_{21} - v_{s}m^p_{31})(m^p_{23} - v_{s}m^p_{33}), \\ k_{7} & = & 2(m^p_{12} - u_{s}m^p_{32})(m^p_{13} - u_{s}m^p_{33}) + 2(m^p_{22} - v_{s}m^p_{32})(m^p_{23} - v_{s}m^p_{33}), \\ k_{8} & = & 2(m^p_{11} - u_{s}m^p_{31})(m^p_{14} - u_{s}m^p_{34}) + 2(m^p_{21} - v_{s}m^p_{31})(m^p_{24} - v_{s}m^p_{34}), \\ k_{9} & = & 2(m^p_{12} - u_{s}m^p_{32})(m^p_{14} - u_{s}m^p_{34}) + 2(m^p_{22} - v_{s}m^p_{32})(m^p_{24} - v_{s}m^p_{34}), \\ k_{10} & = & 2(m^p_{13} - u_{s}m^p_{33})(m^p_{14} - u_{s}m^p_{34}) + 2(m^p_{23} - v_{s}m^p_{33})(m^p_{24} - v_{s}m^p_{34}). \end{array} \right.$$

Equation (14) can be further rewritten as,

$$\vec{l} \vec{x} = 1,$$
where $\vec {x}$ is
$$\vec{x} = {[\Phi_c^2 x^2, \Phi_c^2 y^2, \Phi_c^2 z^2, \Phi_c^2, \Phi_c^2 xy, \Phi_c^2 xz, \Phi_c^2 yz, \Phi_c^2 x, \Phi_c^2 y, \Phi_c^2 z, x^2, y^2, z^2, xy, xz, yz, x, y, z]}^\mathsf{T},$$
and $\vec {l}$ is
$$\vec{l} = [l_1, l_2, l_3,\dots,l_{19}],$$
$$\left\{ \begin{array}{llll} l_{1} = \left(\frac{T}{2\pi}\right)^2\frac{{m^p_{31}}^2}{k_4} & l_{2} = \left(\frac{T}{2\pi}\right)^2\frac{{m^p_{32}}^2}{k_4} & l_{3} = \left(\frac{T}{2\pi}\right)^2\frac{{m^p_{33}}^2}{k_4} & l_{4} = \left(\frac{T}{2\pi}\right)^2\frac{{m^p_{34}}^2}{k_4}, \\ l_{5} = 2\left(\frac{T}{2\pi}\right)^2\frac{m^p_{31}m^p_{32}}{k_4} & l_{6} = 2\left(\frac{T}{2\pi}\right)^2\frac{m^p_{31}m^p_{33}}{k_4} & l_{7} = 2\left(\frac{T}{2\pi}\right)^2\frac{m^p_{32}m^p_{33}}{k_4}, \\ l_{8} = 2\left(\frac{T}{2\pi}\right)^2\frac{m^p_{31}m^p_{34}}{k_4} & l_{9} = 2\left(\frac{T}{2\pi}\right)^2\frac{m^p_{32}m^p_{34}}{k_4} & l_{10} = 2\left(\frac{T}{2\pi}\right)^2\frac{m^p_{33}m^p_{34}}{k_4}, \\ l_{11} ={-}\frac{k_1}{k_4} & l_{12} ={-}\frac{k_2}{k_4} & l_{13} ={-}\frac{k_3}{k_4}, \\ l_{14} ={-}\frac{k_5}{k_4} & l_{15} ={-}\frac{k_6}{k_4} & l_{16} ={-}\frac{k_7}{k_4}, \\ l_{17} ={-}\frac{k_8}{k_4} & l_{18} ={-}\frac{k_9}{k_4} & l_{19} ={-}\frac{k_{10}}{k_4}. \end{array} \right.$$

One point to stress is that the parameters $l_1,\dots, l_{19}$ are all constants once we calibrate the projector, though their expressions are complicated. After calculating the vector $\vec{l}=[l_1,\dots, l_{19}]$, we can formulate the 3D reconstruction by combining Eq. (3) as

$$\mathbb{A} \vec{x}^{'} = \vec{b},$$
where the reconstruction matrix $\mathbb {A}$ consists of
$$\mathbb{A} = [\vec{a_1}; \vec{a_2}; \vec{a_3}],$$
$$\vec{a_1} = \left[m^c_{11} - u^c m^c_{31}, m^c_{12} - u^c m^c_{32}, m^c_{13} - u^c m^c_{33}, 0, \ldots, 0 \right],$$
$$\vec{a_2} = \left[m^c_{21} - v^c m^c_{31}, m^c_{22} - v^c m^c_{32}, m^c_{23} - v^c m^c_{33}, 0, \ldots, 0 \right],$$
$$\begin{aligned}\vec{a_3} = \Bigl[ & l_8 \Phi_c^2 + l_{17},l_9 \Phi_c^2 + l_{18}, l_{10}\Phi_c^2 + l_{19},\\ & l_1\Phi_c^2 + l_{11}, l_2\Phi_c^2 + l_{12}, l_3\Phi_c^2 + l_{13},\\ & l_5 \Phi_c^2 + l_{14}, l_6\Phi_c^2 + l_{15}, l_7\Phi_c^2 + l_{16} \Bigr], \end{aligned}$$

The target $\vec {x}^{'}$ is rewritten as

$$\vec{x}^{'} = \left[x^{w}, y^{w}, z^{w}, {x^{w}}^2, {y^{w}}^2, {z^{w}}^2, x^{w}y^{w}, x^{w}z^{w}, y^{w}z^{w}\right]^\mathsf{T},$$
and
$$\vec{b} = \left[ u^c m^c_{34} - m^c_{14},\ v^c m^c_{34} - m^c_{24},\ 1-l_{4} \Phi_c^2 \right]^\mathsf{T}.$$

To solve Eq. (20) and obtain $[x^w, y^w, z^w]$, we first utilize the MATLAB Symbolic Math Toolbox [32] to derive a symbolic solution and then compute the numeric solution based on the obtained phase $\Phi_c$, the vector $\vec{l}$, and the projection parameters.
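As a numerical alternative to the symbolic route, the sketch below treats the three rows of Eq. (20) as residuals in the unknowns $(x^w, y^w, z^w)$ and solves them with a root finder. Here `Mc` is the camera projection matrix of Eq. (3), `l` the 19-vector of Eq. (19) (0-indexed), and the initial guess is an assumption about the working distance.

```python
import numpy as np
from scipy.optimize import fsolve

def reconstruct_point(uc, vc, Phi_c, Mc, l, guess=(0.0, 0.0, 500.0)):
    P2 = Phi_c ** 2
    a1 = np.r_[Mc[0, :3] - uc * Mc[2, :3], np.zeros(6)]                 # Eq. (22)
    a2 = np.r_[Mc[1, :3] - vc * Mc[2, :3], np.zeros(6)]                 # Eq. (23)
    a3 = np.array([l[7]*P2 + l[16], l[8]*P2 + l[17], l[9]*P2 + l[18],
                   l[0]*P2 + l[10], l[1]*P2 + l[11], l[2]*P2 + l[12],
                   l[4]*P2 + l[13], l[5]*P2 + l[14], l[6]*P2 + l[15]])  # Eq. (24)
    A = np.vstack([a1, a2, a3])
    b = np.array([uc * Mc[2, 3] - Mc[0, 3],
                  vc * Mc[2, 3] - Mc[1, 3],
                  1.0 - l[3] * P2])                                     # Eq. (26)

    def residuals(p):
        x, y, z = p
        xprime = np.array([x, y, z, x*x, y*y, z*z, x*y, x*z, y*z])      # Eq. (25)
        return A @ xprime - b

    return fsolve(residuals, guess)   # (x^w, y^w, z^w)
```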

3. Experimental results

3.1 Experimental setup

We set up a structured light system as illustrated in Fig. 4. A digital complementary-metal-oxide-semiconductor (CMOS) camera (model: FLIR Grasshopper3 GS3-U3-41C6C-C) and a digital-light-processing (DLP) projector (model: Texas Instruments DLP LightCrafter 4500) are employed to capture and project fringe images. The projector resolution is set to $1140 \times 912$ pixels, while the camera resolution is $960 \times 1280$ pixels. The camera is fitted with a lens of 8 mm focal length (model: Computar M0814-MP2).

Fig. 4. Experimental setup.

3.2 Comparative testing of measurement accuracy

To compare the performance of the proposed method with Zhang and Huang's calibration, we also calibrate the structured light system using conventional horizontal and vertical fringe patterns. Fig. 3 depicts the fringe patterns employed in our calibration process: circular, radial, horizontal, and vertical. Both calibration methods use 18-step phase shifting with seven Gray-code patterns to obtain phase information. The spatial periods of the circular and horizontal fringe patterns are 18 pixels, and that of the vertical fringe pattern is 36 pixels. $\Delta \theta$ of the radial fringe pattern is set to $2\pi /500$. The triangulation errors are estimated by comparing the target points $\left(x^{wtg}, y^{wtg}, z^{wtg}\right)$ with the 3D points triangulated using Zhang and Huang's method and the proposed method, and we also overlay the target points and triangulated 3D points. The details of the results are illustrated in Fig. 5. The root-mean-square (RMS) errors for X, Y, and Z are 0.0128 mm, 0.0108 mm, and 0.1766 mm for Zhang and Huang's method, and 0.0194 mm, 0.0159 mm, and 0.2708 mm for our proposed method. Considering the overall calibration volume ($118\,mm\,(X) \times 116\,mm\,(Y) \times 50\,mm\,(Z)$), the triangulation errors of the two methods are both less than $1{\%}$, though the triangulation errors of our proposed method are slightly larger than those of Zhang and Huang's method. A detailed discussion of why the proposed method performs slightly worse than Zhang and Huang's method is provided in the discussion section.
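For reference, a minimal sketch of the error metric used above, assuming it is the per-axis RMS of the differences between the target points and the triangulated points:

```python
import numpy as np

def rms_per_axis(target_pts, triangulated_pts):
    """Both inputs: (N, 3) arrays of (X, Y, Z) coordinates in mm."""
    diff = np.asarray(triangulated_pts) - np.asarray(target_pts)
    return np.sqrt(np.mean(diff ** 2, axis=0))   # [RMS_X, RMS_Y, RMS_Z]
```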

Fig. 5. Evaluation of triangulation error. (a) Overlay of the target points $\left(x^{wtg}, y^{wtg}, z^{wtg}\right)$ (red points) and triangulated points $\left(x^{w}, y^{w}, z^{w}\right)$ (corners of mesh) using Zhang and Huang's method; (b) similar overlay of target points and triangulated points using the proposed method; (c) error between the target points and triangulated points using Zhang and Huang's method, with RMS errors for X, Y, and Z of 0.0128 mm, 0.0108 mm, and 0.1766 mm; (d) similar error using the proposed method, with RMS errors for X, Y, and Z of 0.0194 mm, 0.0159 mm, and 0.2708 mm.

In this research, we also evaluate the measurement accuracies of our proposed calibration and 3D reconstruction method using a standard sphere with a radius of 49.975 mm. To compare these accuracies with Zhang and Huang's calibration method and linear FPP, we conducted a $2\times 2$ experimental setup. This involved reconstructing the 3D profile of the standard sphere using either the proposed circular FPP or linear FPP, combined with calibration parameters derived from either our proposed calibration method or Zhang and Huang's method. Fig. 6(a)-(d) displays the overlays of the standard sphere with the 3D reconstructions. Specifically, (a) represents the results using our proposed calibration in combination with circular FPP; (b) shows the results when the proposed circular FPP method is paired with calibration parameters from Zhang and Huang's method; (c) illustrates the results of linear FPP combined with calibration parameters from our proposed method; and (d) depicts the results of linear FPP with calibration parameters from Zhang and Huang's method. Error maps corresponding to Fig. 6(a)-(d) are presented in Fig. 6(e)-(h), showing the absolute difference between the standard sphere and the 3D reconstructions. The mean absolute errors (MAEs) of the error maps are 0.1796 mm, 0.1794 mm, 0.2961 mm, and 0.2632 mm, respectively, with RMS errors of 0.2307 mm, 0.2288 mm, 0.3442 mm, and 0.3355 mm. Notably, when the proposed 3D reconstruction method with circular fringe projection is used in combination with Zhang and Huang's calibration parameters, the MAE and RMS errors are the lowest among all results. Comparisons among Fig. 6(e)-(h) reveal that when the calibration method is fixed, the proposed circular FPP yields lower RMS errors and MAEs than linear FPP. Conversely, using the same 3D reconstruction method with different calibration parameters shows that Zhang and Huang's method results in smaller RMS errors and MAEs compared to our proposed calibration method, as evidenced by Fig. 6(e), (f), (g), and (h).

Fig. 6. Evaluating measurement accuracy of a standard sphere (radius: 49.975 mm): comparative overlays and error analysis. (a)-(d) Comparative overlays of the standard sphere and 3D reconstructions: (a) our proposed calibration and circular FPP; (b) Zhang and Huang's calibration method with our proposed circular FPP; (c) our proposed calibration method and linear FPP; (d) Zhang and Huang's calibration method and linear FPP. (e)-(h) Error maps corresponding to (a)-(d), showcasing the deviation from the standard sphere. Mean absolute errors are (e) 0.1796 mm, (f) 0.1794 mm, (g) 0.2961 mm, and (h) 0.2632 mm. RMS errors are (e) 0.2307 mm, (f) 0.2288 mm, (g) 0.3442 mm, and (h) 0.3355 mm.

3.3 Qualitative testing

We initially applied a similar 2$\times$2 experimental design to a sample of a Graphics Processing Unit (GPU) board with a fan, with the 3D reconstruction results shown in Fig. 7. Fig. 7(b)-(c) and (d)-(e) represent the results obtained using various combinations of calibration and reconstruction methods: (b) employs our proposed calibration and circular FPP; (c) combines the proposed circular FPP with Zhang and Huang’s calibration; (d) uses linear FPP with our proposed calibration; and (e) applies linear FPP with Zhang and Huang’s calibration. Visually, we observed no significant differences among these 3D reconstruction results.

Fig. 7. 3D reconstruction results of a GPU board with a fan. (a) The GPU board with fan; (b)-(c) 3D reconstruction results using the proposed circular FPP with either the proposed calibration or Zhang and Huang's method; (d)-(e) 3D reconstruction results using linear FPP with either the proposed calibration or Zhang and Huang's method.

Additionally, we extended the $2\times 2$ experimental setup to a GPU cooler sample, which consists of many thin fins with abrupt depth changes. The thickness of the fins is about 0.33 mm, and the depth change between fins is about 20.53 mm. The 3D reconstruction results are presented in Fig. 8. Similarly, Fig. 8(b)-(c) and (d)-(e) are the results obtained using the proposed circular FPP with the proposed calibration, the proposed circular FPP with Zhang and Huang's calibration, linear FPP with the proposed calibration, and linear FPP with Zhang and Huang's calibration, respectively. One point to stress is that median filters of the same size are applied to remove noise during phase unwrapping. A notable observation from Fig. 8(b)-(c) is that the results using our proposed circular FPP exhibit cleaner reconstructions with less noise in the fin area compared to those obtained using linear FPP (Fig. 8(d)-(e)). The abrupt depth changes between thin fins are better captured in Fig. 8(b)-(c) (see the circled area). The reconstruction outcomes of the GPU cooler sample demonstrate that our proposed method can perform on par with or even surpass linear FPP, especially on objects with abrupt depth changes.

Fig. 8. 3D reconstruction results showcasing a GPU cooler with thin fins, each with a thickness of 0.33 mm and a depth change of 20.53 mm between fins. (a) The GPU cooler; (b)-(c) 3D reconstruction results using the proposed circular FPP with either the proposed calibration or Zhang and Huang's method; (d)-(e) 3D reconstruction results using linear FPP with either the proposed calibration or Zhang and Huang's method.

4. Discussion

When comparing the triangulation error and the standard sphere error, the performance of our proposed calibration method with circular and radial patterns is slightly inferior to that of Zhang and Huang's approach. This section explores the underlying reasons, focusing on sensitivity and error analysis.

4.1 Sensitivity analysis of the projector calibration

The fringe pattern is the only difference between our proposed calibration method and Zhang and Huang's approach, which changes the mapping relationship between camera and projector feature points. Based on the relationship between the phase and the projector feature points of the circular and radial fringe patterns (Eq. (12)), we define the sensitivity of the projector feature points with respect to the phase. For the sensitivity along the $u$ axis, denoted as $\mathbf{S^p_{u, \text{proposed}}}$, we define it as:

$$\mathbf{S^p_{u, \text{proposed}}} = \left(\frac{\partial u^p}{\partial \Phi_c}, \frac{\partial u^p}{\partial \Phi_r}\right) = \frac{T}{2\pi}\left(-\cos\left(\frac{\Delta\theta}{2\pi}\Phi_r\right), \frac{\Delta \theta}{2\pi}\Phi_c\sin\left(\frac{\Delta \theta}{2\pi}\Phi_r\right)\right),$$
where $\frac{\partial u^p}{\partial \Phi_c}$ and $\frac{\partial u^p}{\partial \Phi_r}$ represent the partial derivatives of the projector feature points in the $u$ direction with respect to the circular ($\Phi_c$) and radial ($\Phi_r$) phases, respectively. Similarly, the sensitivity along the $v$ axis, denoted as $\mathbf{S^p_{v, \text{proposed}}}$, is given by:
$$\mathbf{S^p_{v, \text{proposed}}}=\left(\frac{\partial v^p}{\partial \Phi_c}, \frac{\partial v^p}{\partial \Phi_r}\right) = \frac{T}{2\pi}\left(-\sin\left(\frac{\Delta\theta}{2\pi}\Phi_r\right), -\frac{\Delta \theta}{2\pi}\Phi_c\cos\left(\frac{\Delta \theta}{2\pi}\Phi_r\right)\right).$$
$\mathbf {S^p_{u, \text {proposed}}}$ and $\mathbf {S^p_{v, \text {proposed}}}$ are vectors, and we can have:
$$\left\{ \begin{aligned} \| \mathbf{S^p_{u, \text{proposed}}} \| & = \frac{T}{2\pi} \sqrt{\cos^2\left( \frac{\Delta\theta}{2\pi} \Phi_r \right) + \left( \frac{\Delta \theta}{2\pi} \Phi_c \sin\left( \frac{\Delta \theta}{2\pi} \Phi_r \right) \right)^2}, \\ \| \mathbf{S^p_{v, \text{proposed}}} \| & = \frac{T}{2\pi} \sqrt{\sin^2\left( \frac{\Delta\theta}{2\pi} \Phi_r \right) + \left( \frac{\Delta \theta}{2\pi} \Phi_c \cos\left( \frac{\Delta \theta}{2\pi} \Phi_r \right) \right)^2}. \end{aligned} \right.$$

In circular fringe patterns, the phase $\Phi _c$ increases progressively in the radial direction. Consequently, this increase in $\Phi _c$ directly influences the magnitudes of the sensitivity vectors $\| \mathbf {S^p_{u, \text {proposed}}} \|$ and $\| \mathbf {S^p_{v, \text {proposed}}} \|$ as a pixel moves outward from the center of the pattern. This radial increase in sensitivity reflects that deviations in projector feature points are more pronounced with increasing distance from the central point.
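This trend can be checked numerically; the sketch below evaluates Eq. (29) over the projector plane using the parameters from Section 3.2 ($T = 18$, $\Delta\theta = 2\pi/500$, off-plane center).

```python
import numpy as np

T, dtheta = 18, 2 * np.pi / 500
u_s, v_s = 912 + 1140 / 2, 1140 + 1140 / 2        # off-plane pattern center
up, vp = np.meshgrid(np.arange(912), np.arange(1140))
Phi_c = 2 * np.pi / T * np.hypot(up - u_s, vp - v_s)              # Eq. (11)
Phi_r = 2 * np.pi / dtheta * np.arctan((vp - v_s) / (up - u_s))   # Eq. (11)

ang = dtheta / (2 * np.pi) * Phi_r                # = theta(u_p, v_p)
amp = dtheta / (2 * np.pi) * Phi_c
S_u = T / (2 * np.pi) * np.sqrt(np.cos(ang)**2 + (amp * np.sin(ang))**2)  # Eq. (29)
S_v = T / (2 * np.pi) * np.sqrt(np.sin(ang)**2 + (amp * np.cos(ang))**2)
print(S_u[-1, -1], S_u[0, 0])   # near-center corner vs. far corner: S_u grows
```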

For Zhang and Huang's method, horizontal and vertical fringe patterns are employed for calibration, so the phase is linear in the pixel index $\left(u, v\right)$, and a similar definition of sensitivity with respect to phase can be given by:

$$\left\{ \begin{aligned} \mathbf{S^p_{u, \text{Zhang}}} & =\frac{d u^p}{d\Phi_v} & =\frac{d}{d\Phi_v}\left(\frac{T_v}{2\pi}\Phi_v\right) & =\frac{T_v}{2\pi}, \\ \mathbf{S^p_{v,\text{Zhang}}} & =\frac{d v^p}{d\Phi_h} & =\frac{d}{d\Phi_h}\left(\frac{T_h}{2\pi}\Phi_h\right) & = \frac{T_h}{2\pi}. \end{aligned} \right.$$

In Zhang and Huang's method, the sensitivities of the projector feature points are thus constant, independent of both the phase and the pixel index. Consequently, any deviation in the projector feature points remains consistent across different pixel indices.

4.2 Phase estimation error in the projector calibration

When calibrating the projector in a structured light system, a crucial step is identifying the projector feature points corresponding to the camera’s observed feature points. As depicted in Fig. 9, the phases of circular and radial fringe patterns ($\Phi _c$ and $\Phi _r$) are used to establish the mapping relationship between camera and projector feature points, which is also described in Eq. (12).

Fig. 9. Feature point mapping schematic: camera to projector.

As illustrated in Fig. 9, the coordinate values $u$ and $v$ of a camera feature point $p\left(u, v\right)$ (e.g., the circle centers in this research) are usually non-integer values, whereas phase values are available only at integer pixel locations. To reduce the error this would otherwise introduce, bilinear interpolation is utilized to estimate the phase of the feature point from the phases of its four neighboring points (highlighted area in Fig. 9). These neighbors are denoted as $p_{11}\left(u_1^c, v_1^c\right)$, $p_{12}\left(u_1^c, v_2^c\right)$, $p_{21}\left(u_2^c, v_1^c\right)$, and $p_{22}\left(u_2^c, v_2^c\right)$, respectively. Here, $u_1^c$ and $v_1^c$ represent the floor values of $u$ and $v$, and $u_2^c$ and $v_2^c$ are their ceiling values. The bilinear interpolation of phases can then be given by

$$\widetilde{\Phi}_c(u, v) = \begin{bmatrix} u_2^c - u \\ u - u_1^c \end{bmatrix}^T \begin{bmatrix} \Phi_c(u_1^c, v_1^c) & \Phi_c(u_1^c, v_2^c) \\ \Phi_c(u_2^c, v_1^c) & \Phi_c(u_2^c, v_2^c) \\ \end{bmatrix} \begin{bmatrix} v_2^c - v \\ v - v_1^c \end{bmatrix},$$
$$\widetilde{\Phi}_r(u, v) = \begin{bmatrix} u_2^c - u \\ u - u_1^c \end{bmatrix}^T \begin{bmatrix} \Phi_r(u_1^c, v_1^c) & \Phi_r(u_1^c, v_2^c) \\ \Phi_r(u_2^c, v_1^c) & \Phi_r(u_2^c, v_2^c) \\ \end{bmatrix} \begin{bmatrix} v_2^c - v \\ v - v_1^c \end{bmatrix},$$
where $\widetilde{\Phi}_c(u, v)$ and $\widetilde{\Phi}_r(u,v)$ are the estimated circular and radial phases obtained by bilinear interpolation, and $\Phi_c(u_i^c, v_j^c)$ and $\Phi_r(u_i^c, v_j^c)$ ($i=1,2$ and $j=1,2$) are the circular and radial phases of the neighbors. Thus, bilinear interpolation introduces a phase estimation error between the estimated phase and the actual phase of the non-integer feature point. Based on Eq. (11), the maximal phase estimation error caused by the interpolation can be defined as
$$\left\{ \begin{aligned} \max\Delta \Phi_c(u, v) & =\frac{2\pi}{T}(\max(r(u_i^c, v_j^c))-\min(r(u_i^c, v_j^c)))\\ \max\Delta \Phi_r(u, v) & =\frac{2\pi}{\Delta \theta}(\max(\theta(u_i^c, v_j^c))-\min(\theta(u_i^c, v_j^c))) , \end{aligned} \right.$$
where $r(u_i^c, v_j^c)$ is the distance between the center of the fringe patterns and the feature point neighbor $p_{ij}(u_i^c, v_j^c)$, $i,j=1,2$, and $\theta(u_i^c, v_j^c)$ is the neighbor point's pixel angle. In this research, the center of the fringe patterns is off the camera image plane, and the partial derivatives $\frac{\partial}{\partial u}\max\Delta \Phi_c(u,v)$ and $\frac{\partial}{\partial v}\max\Delta \Phi_c(u,v)$ are greater than zero. Similarly, the partial derivatives $\frac{\partial}{\partial u}\max\Delta \Phi_r(u,v)$ and $\frac{\partial}{\partial v}\max\Delta \Phi_r(u,v)$ also exceed zero. Therefore, $\max\Delta \Phi_c(u,v)$ and $\max\Delta \Phi_r(u,v)$ increase with respect to $u$ and $v$. The subsequent step of the calibration is to obtain the feature points in the projector image plane, which rely on the estimated phase values. Based on Eq. (12), the projector feature point $q(\widetilde{u^p}, \widetilde{v^p})$ can be given by
$$\left\{ \begin{aligned} \widetilde{u^p} & ={-}\frac{T}{2\pi}\widetilde{\Phi}_c(u,v)\cos\left(\frac{\Delta \theta}{2\pi}\widetilde{\Phi}_r(u,v)\right) + u_s, \\ \widetilde{v^p} & ={-}\frac{T}{2\pi}\widetilde{\Phi}_c(u,v)\sin\left(\frac{\Delta \theta}{2\pi}\widetilde{\Phi}_r(u,v)\right) + v_s. \end{aligned} \right.$$

From the above equation, we can see that the phase estimation error caused by bilinear interpolation introduces feature point error in the projector image plane, which compromises the accuracy of the projector calibration.
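A compact sketch of Eqs. (31)-(34) makes the error source explicit: the phase is interpolated at the non-integer feature point and then mapped to the projector plane. The phase maps are assumed here to be unwrapped and indexed as [u, v] to mirror the notation in the text.

```python
import numpy as np

def map_feature_point(u, v, Phi_c_map, Phi_r_map, T, dtheta, u_s, v_s):
    u1, v1 = int(np.floor(u)), int(np.floor(v))
    u2, v2 = u1 + 1, v1 + 1                          # ceiling neighbors
    wu = np.array([u2 - u, u - u1])
    wv = np.array([v2 - v, v - v1])
    Pc = wu @ Phi_c_map[u1:u2 + 1, v1:v2 + 1] @ wv   # Eq. (31)
    Pr = wu @ Phi_r_map[u1:u2 + 1, v1:v2 + 1] @ wv   # Eq. (32)
    ang = dtheta / (2 * np.pi) * Pr
    up = -T / (2 * np.pi) * Pc * np.cos(ang) + u_s   # Eq. (34)
    vp = -T / (2 * np.pi) * Pc * np.sin(ang) + v_s
    return up, vp
```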

For Zhang and Huang's calibration method, bilinear interpolation is also utilized when mapping feature points from the camera to the projector, as depicted in Fig. 9. Horizontal and vertical fringe patterns are employed in this method, and the phases $\Phi_h\left(u^c, v^c\right)$ and $\Phi_v\left(u^c, v^c\right)$ are linear in the pixel coordinates $\left(u^c, v^c\right)$. Because bilinear interpolation reproduces a linear function exactly, it introduces no additional phase estimation error. The linear nature of the phase-to-pixel correspondence ensures a high level of accuracy in the mapping, thereby enhancing the overall accuracy of the calibration method.

In the analysis of sensitivity and error in projector calibration, it has been observed that the magnitudes of the sensitivity vectors $\| \mathbf{S^p_{u, \text{proposed}}} \|$ and $\| \mathbf{S^p_{v, \text{proposed}}} \|$ demonstrate an increasing trend with the distance from a point to the center of the fringe patterns. Additionally, $\max\Delta \Phi_c(u,v)$ and $\max\Delta \Phi_r(u,v)$ also exhibit a rising tendency with respect to the pixel indices $u$ and $v$. Due to the increasing trend of both the sensitivity and the estimated phase error, the proposed calibration method reveals limitations compared to Zhang and Huang's method. The latter approach, characterized by its linear phase-to-pixel relationship and absence of estimated phase errors in the mapping process, demonstrates higher accuracy and reliability. Given these findings, it is advisable to position the circle grid calibration target within the central region of the camera's and projector's fields of view to mitigate the impact of the growing sensitivity and estimated phase errors. In addition, alternative interpolation methods, such as bicubic interpolation, can be applied for a more accurate phase estimation of the non-integer feature points. Given the nonlinear nature of circular fringe patterns, employing a specialized nonlinear interpolation method might be more effective in avoiding phase estimation errors.

5. Conclusions

In this research, we introduce a novel circular fringe projection profilometry method, encompassing both system calibration and 3D reconstruction. The proposed method is rigorously evaluated through a series of $2 \times 2$ experiments, allowing us to draw comprehensive comparisons between circular FPP and linear FPP, as well as between our proposed calibration method and Zhang and Huang's calibration method. The experimental results indicate that the accuracy of the proposed circular FPP surpasses that of linear FPP techniques, especially on objects with abrupt depth changes. However, when it comes to calibration performance, our calibration method exhibits a slight disadvantage compared to Zhang and Huang's method. We conducted a detailed analysis of the sensitivity and error in projector calibration. This investigation revealed that the relatively lower performance of our proposed calibration method is primarily attributed to the increasing sensitivity and phase estimation errors. Phase estimation errors can be further diminished by nonlinear interpolation methods in future work. This research not only stresses the strengths of the circular FPP method on objects with sharp depth changes but also sheds light on the limitations of using circular and radial patterns for calibration, providing insights for future advancements in circular FPP.

Funding

National Science Foundation (2132773).

Acknowledgment

This work was partially supported by the U.S. National Science Foundation (NSF) under Grant No. 2132773. The views expressed here are those of the authors and are not necessarily those of the NSF.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. X. Su and Q. Zhang, "Dynamic 3-D shape measurement method: a review," Opt. Lasers Eng. 48(2), 191–204 (2010).

2. C. Zuo, S. Feng, L. Huang, et al., "Phase shifting algorithms for fringe projection profilometry: a review," Opt. Lasers Eng. 109, 23–59 (2018).

3. S. Zhang, "High-speed 3D shape measurement with structured light methods: a review," Opt. Lasers Eng. 106, 119–131 (2018).

4. J. Xu and S. Zhang, "Status, challenges, and future perspectives of fringe projection profilometry," Opt. Lasers Eng. 135, 106193 (2020).

5. S. Feng, C. Zuo, L. Zhang, et al., "Calibration of fringe projection profilometry: a comparative review," Opt. Lasers Eng. 143, 106622 (2021).

6. R. Zhang, H. Guo, and A. K. Asundi, "Geometric analysis of influence of fringe directions on phase sensitivities in fringe projection profilometry," Appl. Opt. 55(27), 7675–7687 (2016).

7. J. Yu, N. Gao, Z. Zhang, et al., "High sensitivity fringe projection profilometry combining optimal fringe frequency and optimal fringe direction," Opt. Lasers Eng. 129, 106068 (2020).

8. W.-S. Zhou and X.-Y. Su, "A direct mapping algorithm for phase-measuring profilometry," J. Mod. Opt. 41(1), 89–94 (1994).

9. Z. Zhang, H. Ma, T. Guo, et al., "Simple, flexible calibration of phase calculation-based three-dimensional imaging system," Opt. Lett. 36(7), 1257–1259 (2011).

10. Z. Zhang, S. Huang, S. Meng, et al., "A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system," Opt. Express 21(10), 12218–12227 (2013).

11. L. Huang, P. S. Chua, and A. Asundi, "Least-squares calibration method for fringe projection profilometry considering camera lens distortion," Appl. Opt. 49(9), 1539–1548 (2010).

12. M. Vo, Z. Wang, B. Pan, et al., "Hyper-accurate flexible calibration technique for fringe-projection-based three-dimensional imaging," Opt. Express 20(15), 16926–16941 (2012).

13. S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45(8), 083601 (2006).

14. Z. Li, Y. Shi, C. Wang, et al., "Accurate calibration method for a structured light system," Opt. Eng. 47(5), 053604 (2008).

15. Z. Huang, J. Xi, Y. Yu, et al., "Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images," Appl. Opt. 54(3), 347–356 (2015).

16. W. Zhang, W. Li, L. Yu, et al., "Sub-pixel projector calibration method for fringe projection profilometry," Opt. Express 25(16), 19158–19169 (2017).

17. B. Li and S. Zhang, "Structured light system calibration method with optimal fringe angle," Appl. Opt. 53(33), 7942–7950 (2014).

18. Y. Wang and S. Zhang, "Optimal fringe angle selection for digital fringe projection technique," Appl. Opt. 52(29), 7094–7098 (2013).

19. P. Zhou, X. Liu, and T. Zhu, "Analysis of the relationship between fringe angle and three-dimensional profilometry system sensitivity," Appl. Opt. 53(13), 2929–2935 (2014).

20. S. Lv, D. Tang, X. Zhang, et al., "Fringe projection profilometry method with high efficiency, precision, and convenience: theoretical analysis and development," Opt. Express 30(19), 33515–33537 (2022).

21. Y. Ma, D. Yin, C. Wei, et al., "Real-time 3-D shape measurement based on radial spatial carrier phase shifting from circular fringe pattern," Opt. Commun. 450, 6–13 (2019).

22. G. Zhang, D. L. Lau, B. Xu, et al., "Circular fringe projection profilometry and 3D sensitivity analysis based on extended epipolar geometry," Opt. Lasers Eng. 162, 107403 (2023).

23. J. K. Mandapalli, V. Ravi, S. S. Gorthi, et al., "Single-shot circular fringe projection for the profiling of objects having surface discontinuities," J. Opt. Soc. Am. A 38(10), 1471–1482 (2021).

24. H. Zhao, C. Zhang, C. Zhou, et al., "Circular fringe projection profilometry," Opt. Lett. 41(21), 4951–4954 (2016).

25. C. Zhang, H. Zhao, J. Qiao, et al., "Three-dimensional measurement based on optimized circular fringe projection technique," Opt. Express 27(3), 2465–2477 (2019).

26. J. Zhang, B. Luo, X. Su, et al., "A convenient 3D reconstruction model based on parallel-axis structured light system," Opt. Lasers Eng. 138, 106366 (2021).

27. B. Li, N. Karpinsky, and S. Zhang, "Novel calibration method for structured-light system with an out-of-focus projector," Appl. Opt. 53(16), 3415–3426 (2014).

28. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

29. G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools (2000).

30. D. Malacara, Optical Shop Testing, vol. 59 (John Wiley & Sons, 2007).

31. Y. Wang, S. Zhang, and J. H. Oliver, "3D shape measurement technique for multiple rapidly moving objects," Opt. Express 19(9), 8539–8545 (2011).

32. The MathWorks, Inc., Symbolic Math Toolbox (Natick, Massachusetts, 2019).
