Correcting projector lens distortion in real time with a scale-offset model for structured light illumination

Open Access

Abstract

In fringe projection profilometry, inevitable distortion of optical lenses decreases phase accuracy and degrades the quality of 3D point clouds. For camera lens distortion, existing compensation methods include real-time look-up tables derived from the camera calibration parameters. However, for projector lens distortion, post-undistortion methods that iteratively correct lens distortion are relatively time-consuming, while pre-distortion methods, despite avoiding iteration, are not suitable for binary fringe patterns. In this paper, we aim to achieve real-time phase correction for the projector by means of a scale-offset model that characterizes projector distortion with four correction parameters within a small-enough area, so that the post-undistortion can be sped up by looking up tables. Experiments show that the proposed method can suppress the distortion error by a factor of 20×, i.e., the root-mean-square error is less than 45 µm/0.7‰, while also improving the computation speed by a factor of 50× over traditional iterative post-undistortion.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As a high-accuracy and whole-field [1,2] 3D reconstruction technique, fringe projection profilometry (FPP) is used for mechanical measurement, industrial monitoring [3,4], dental reconstruction, etc. Aside from thermal noise in the sensor, FPP is also challenged by distortion of optical lenses in the projector, which leads to phase distortion and thereby warps the resulting point clouds; the effect is most visible with wide field-of-view lenses, which typically suffer from radial lens distortion. It is, therefore, crucial that an accurate lens model be used and compensation be applied to the phase reconstructions. For camera lens distortion, look-up tables (LUTs) derived from the camera calibration parameters [5–9] can be used to correct the camera distortion in real time, where the elements of the tables are indexed by the integer row and column coordinates of the camera's pixels. However, for projector lens distortion, the projector coordinates of the phase are real-valued, having infinite precision, and hence the projector lens distortion cannot be corrected as straightforwardly as camera distortion.

Many studies have explored the correction of projector distortion, which can be classified as either post-undistortion or pre-distortion. In post-undistortion methods, the camera-projector pair is calibrated as a stereovision system to obtain distortion coefficients. Moreno et al. [10] proposed projecting two groups of orthogonal fringe patterns to recover projector pixel coordinates in both directions, from which the distorted phase can be corrected. Further, by using a stereovision model that considers lens distortion of both camera and projector, Ma et al. [11] proposed computing the undistorted 3D coordinates with an iterative optimization algorithm, reducing the root mean square (RMS) errors by a factor of eight. Wang et al. [12] proposed correcting phase by subpixel-accuracy remapping and interpolation, intuitively representing the distortion characteristics of nearly all pixels of the projector. Lv et al. [13] proposed correcting distorted phase via deep neural networks and improved the measurement accuracy in RMS by $93.52\%$. Although these methods provide effective correction for projector lens distortion, a common deficiency is that the computation is relatively time-consuming.

Among the pre-distortion methods, and based on stereovision calibration [14,15], Li et al. [16] proposed generating pre-distorted fringes that effectively eliminate projector lens distortion. Yang et al. [17] proposed measuring projector distortion independently at each pixel for pre-distorting the patterns, reducing the full-field error to a sub-pixel level. To further improve accuracy, Yang et al. [18] and Wang et al. [19] introduced residual compensation to achieve higher accuracy than Li's method [16]. To simplify the calibration procedure, Peng et al. [20] and Gonzalez et al. [21] proposed different adaptive fringe projection techniques to eliminate the bending of the carrier phase due to projector distortion without complex traditional calibration.

In practice, some types of fringe patterns cannot be pre-distorted conveniently, for instance binary fringe patterns: line [22,23] structured light, multiple-line [24] structured light, projector defocusing technology [25–30], gray code structured light [31–34], etc. Since binary fringe patterns are not encoded by a continuous function, direct pre-distortion is prone to a sawtooth effect that leads to additional errors. In some respects, however, binary fringe patterns outperform other types of patterns: for example, coupled with defocusing they can overcome the gamma nonlinearity of the projector. Binary fringe patterns also show stronger robustness when faced with highly reflective surfaces [35] and can be scanned at high projection speed with digital micromirror devices.

The purpose of this study is to correct the projector lens distortion in real time without pre-distorting the fringe patterns, thereby avoiding a time-consuming iteration scheme while remaining applicable to fringe pattern strategies, like binary gray codes, that cannot be conveniently pre-distorted. In this paper, we propose a novel scale-offset model to characterize projector distortion. First, within a small-enough area, we employ four correction parameters to model projector lens distortion and derive closed-form solutions for computing them. Second, by gridding the projector space, we compute the correction parameters at each grid center and store them as LUTs. Finally, by rounding off the projector coordinates (transformed from distorted phases) to integer-valued indices, we directly access the pre-computed correction parameters and efficiently correct phase distortion in real time.

2. Methods

2.1 Scale-offset model for correcting projector lens distortion

As shown in Fig. 1, a 3D point with coordinate $(X_u^w,~Y_u^w,~Z_u^w)$ is mapped into the camera and projector spaces at $(x_u^c,~y_u^c)$ and $(x_u^p,~y_u^p)$ without lens distortion, and at $(x^c,~y^c)$ and $(x^p,~y^p)$ with distortion. To obtain high-quality 3D point clouds, we intuitively correct the distorted $(x^c,~y^c)$ and $(x^p,~y^p)$ into the undistorted $(x_u^c,~y_u^c)$ and $(x_u^p,~y_u^p)$, respectively. For the projector, the traditional polynomial distortion model [36], with both radial and tangential distortion, can be expressed as

$$\left\{ \begin{aligned}{{{\tilde x}^p}} = {\left[ {1 + K_1^p{\left({ \tilde r^p_u}\right)}^2 + K_2^p{\left({\tilde r^p_u}\right)}^4} \right]\tilde x_u^p + 2P_1^p\tilde x_u^p\tilde y_u^p + P_2^p\left[ {{\left({\tilde r^p_u}\right)}^2 + 2{{\left( {\tilde x_u^p} \right)}^2}} \right]}\\ {{{\tilde y}^p}} = {\left[ {1 + K_1^p{\left({\tilde r^p_u}\right)}^2 + K_2^p{\left({\tilde r^p_u}\right)}^4} \right]\tilde y_u^p + P_1^p\left[ {{\left({\tilde r^p_u}\right)}^2 + 2{{\left( {\tilde y_u^p} \right)}^2}} \right] + 2 P_2^p\tilde x_u^p\tilde y_u^p} \end{aligned} \right. ,$$
where $K_1^p$ and $K_2^p$ are coefficients of radial distortion; $P_1^p$ and $P_2^p$ are coefficients of tangential distortion; $( {\tilde x_u^p,\tilde y_u^p} )$ and $\left ( {\tilde x^p,\tilde y^p} \right )$ are undistorted and distorted projector image coordinates after normalization; and $\tilde \cdot$ denotes normalization. Moreover, we know that ${\tilde x}^p$, ${\tilde y}^p$, and $\tilde r^p_u$ are related by
$$\left\{ \begin{aligned} \tilde x^p & = (x^p - u^p_0) / f^p_x \\ \tilde y^p & = (y^p - v^p_0) / f^p_y \end{aligned} \right. {,~} \left\{ \begin{aligned} \tilde x^p_u & = (x^p_u - u^p_0) / f^p_x \\ \tilde y^p_u & = (y^p_u - v^p_0) / f^p_y \end{aligned} \right. {,~} {\tilde r^p_u} = \sqrt {{{\left( {\tilde x_u^p} \right)}^2} + {{\left( {\tilde y_u^p} \right)}^2}},$$
where $(u^p_0, v^p_0)$ is the pixel coordinate of the principal point in the projector space, while $f_x^p$ and $f_y^p$ are the focal lengths of the projector expressed in pixel units along the $x^p$ and $y^p$ directions, respectively.
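For illustration, a minimal C++ sketch of the forward mapping in Eqs. (1) and (2) is given below. The struct and function names are our own and do not come from any released implementation; the coefficients are assumed to be provided by the projector calibration.

// Minimal sketch of the forward polynomial distortion model (Eqs. (1)-(2)).
#include <cmath>

struct ProjectorIntrinsics {
    double fxp, fyp;   // focal lengths in pixels
    double u0p, v0p;   // principal point
    double K1p, K2p;   // radial distortion coefficients
    double P1p, P2p;   // tangential distortion coefficients
};

// Map an undistorted pixel coordinate (xu, yu) to its distorted location (x, y).
void distortPoint(const ProjectorIntrinsics& c, double xu, double yu,
                  double& x, double& y) {
    const double xn = (xu - c.u0p) / c.fxp;            // normalize (Eq. (2))
    const double yn = (yu - c.v0p) / c.fyp;
    const double r2 = xn * xn + yn * yn;
    const double radial = 1.0 + c.K1p * r2 + c.K2p * r2 * r2;
    const double xd = radial * xn + 2.0 * c.P1p * xn * yn
                    + c.P2p * (r2 + 2.0 * xn * xn);     // Eq. (1), first row
    const double yd = radial * yn + c.P1p * (r2 + 2.0 * yn * yn)
                    + 2.0 * c.P2p * xn * yn;            // Eq. (1), second row
    x = xd * c.fxp + c.u0p;                             // back to pixel units
    y = yd * c.fyp + c.v0p;
}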

Fig. 1. Schematic diagram of a structured light illumination system.

According to the polynomial model in Eq. (1), we can visualize the distortion of the projector in Fig. 2 with a pseudo-color map. It can be seen that the farther from the principal point (located at $(402.1, 639.8)$), the more obvious the error. Consequently, distortion in the upper left and right corners of the projector space is particularly severe. Thus, in FPP it is vital to correct the distortion of a projector with an average-quality optical system; however, the traditional iterative post-undistortion is relatively time-consuming.

Fig. 2. The pixel error induced by projector lens distortion in (a) $x^p$, (b) $y^p$, and (c) Euclidean distance.

To avoid iteratively correcting for distortion, we characterize the distortion of $x^p_u$ and $y^p_u$ within a small-enough area as a superposition of a scale and an offset. As shown in Fig. 3, the undistorted coordinate $\tilde x_u^p$ can be linearized as $\tilde k_x^p$ times $\tilde x^p$ plus an offset $\tilde b_x^p$, with a similar linear transform for $\tilde y_u^p$. Thus, we name it the scale-offset model, which can be expressed as

$$\left\{ \begin{aligned} \tilde x_u^p = \tilde k_x^p \tilde x^p + \tilde b_x^p \\ \tilde y_u^p = \tilde k_y^p \tilde y^p + \tilde b_y^p \end{aligned} \right.,$$
where $\tilde k_y^p$ and $\tilde k_x^p$ are scale coefficients and $\tilde b_y^p$ and $\tilde b_x^p$ are offset terms. We collectively refer to these terms as correction parameters in this paper.

Fig. 3. Scale-offset distortion model.

Assuming that the distortion is a continuous and differentiable function in the projector space, the four correction parameters can be treated as constants in the neighborhood of $(\tilde x^p,\tilde y^p)$. For computing the correction parameters, we select two points near $( \tilde x^p, \tilde y^p )$ as shown in Fig. 4(a), which are $( {\tilde x_1^p,\tilde y_1^p} ) = ( {\tilde x^p} - 0.5 s_0,{\tilde y^p} - 0.5 s_0 )$ and $( {\tilde x_2^p,\tilde y_2^p} ) = ( {\tilde x^p} + 0.5 s_0, {\tilde y^p} + 0.5 s_0 )$, where $s_0$ is a small-enough number. Once we have the undistorted coordinates $(\tilde x_{u1}^p,\tilde y_{u1}^p)$ and $(\tilde x_{u2}^p,\tilde y_{u2}^p)$ corresponding to the distorted coordinates $(\tilde x_{1}^p, \tilde y_{1}^p)$ and $(\tilde x_{2}^p, \tilde y_{2}^p)$, the four correction parameters can be determined as shown in Fig. 4(b). Accordingly, we can list four linear equations as

$$\left[\begin{array}{c} {\tilde x_{u1}^p} \\ {\tilde x_{u2}^p} \\ {\tilde y_{u1}^p} \\ {\tilde y_{u2}^p} \end{array}\right] = \left[\begin{array}{cccc} {\tilde x_1^p} & 1 & 0 & 0 \\ {\tilde x_2^p} & 1 & 0 & 0 \\ 0 & 0 & {\tilde y_1^p} & 1 \\ 0 & 0 & {\tilde y_2^p} & 1 \end{array}\right] \left[\begin{array}{c} {\tilde k_x^p} \\ {\tilde b_x^p} \\ {\tilde k_y^p} \\ {\tilde b_y^p} \end{array}\right],$$
from which we can compute the four correction parameters. When $s_0$ approaches 0, we further obtain the closed-form solutions for the unknowns in Eq. (3) as
$$\left\{ \begin{aligned} \tilde k_x^p \left(\tilde x^p,\tilde y^p\right) & = \lim_{s_0 \to 0}\frac{\tilde x_{u2}^p - \tilde x_{u1}^p}{\tilde x_2^p - \tilde x_1^p} = \frac{\partial \tilde x_u^p}{\partial \tilde x^p} + \frac{\partial \tilde x_u^p}{\partial \tilde y^p}\\ \tilde k_y^p \left(\tilde x^p,\tilde y^p\right) & = \lim_{s_0 \to 0}\frac{\tilde y_{u2}^p - \tilde y_{u1}^p}{\tilde y_2^p - \tilde y_1^p} = \frac{\partial \tilde y_u^p}{\partial \tilde x^p} + \frac{\partial \tilde y_u^p}{\partial \tilde y^p}\\ \tilde b_x^p \left(\tilde x^p,\tilde y^p\right) & = \lim_{s_0 \to 0}{\left[\tilde x_{u1}^p - \frac{\tilde x_1^p\left( \tilde x_{u2}^p - \tilde x_{u1}^p \right)}{\tilde x_2^p - \tilde x_1^p}\right]} = \tilde x_u^p - \left( {\frac{\partial \tilde x_u^p}{\partial \tilde x^p} + \frac{\partial \tilde x_u^p}{\partial \tilde y^p}} \right)\tilde x^p \hfill \\ \tilde b_y^p \left(\tilde x^p,\tilde y^p\right) & = \lim_{s_0 \to 0}{\left[\tilde y_{u1}^p - \frac{\tilde y_1^p \left( \tilde y_{u2}^p - \tilde y_{u1}^p \right)}{\tilde y_2^p - \tilde y_1^p}\right]} = \tilde y_u^p - \left( {\frac{\partial \tilde y_u^p}{\partial \tilde x^p} + \frac{\partial \tilde y_u^p}{\partial \tilde y^p}} \right)\tilde y^p \hfill \end{aligned} \right..$$
The derivation procedure is listed in the Appendix.

Fig. 4. (a) Neighborhood of $\left (\tilde x^p,\tilde y^p\right )$ and (b) distortion of $\left (\tilde x_{1}^p, \tilde y_{1}^p\right )$ and $\left (\tilde x_{2}^p, \tilde y_{2}^p\right )$.

For deriving closed-form expressions for the correction parameters, we only need to have ${\partial \tilde x_u^p}/{\partial \tilde x^p}$, ${\partial \tilde x_u^p}/{\partial \tilde y^p}$, ${\partial \tilde y_u^p}/{\partial \tilde x^p}$ and ${\partial \tilde y_u^p}/{\partial \tilde y^p}$, which exactly compose the Jacobian matrix of Eq. (1) and can be computed as

$$J= \left( {\begin{array}{cc} \frac{\partial \tilde x_u^p}{\partial {\tilde x^p}} & \frac{\partial \tilde x_u^p}{\partial {\tilde y^p}} \\ \frac{\partial \tilde y_u^p}{\partial {\tilde x^p}} & \frac{\partial \tilde y_u^p}{\partial {\tilde y^p}} \end{array}} \right) = \left[ {\begin{array}{cc} \frac{ - {G_y}}{F_xG_y - F_yG_x} & \frac{F_y}{F_xG_y - F_yG_x} \\ \frac{G_x}{ F_xG_y - F_yG_x} & \frac{ - F_x}{F_xG_y - F_yG_x} \end{array}} \right],$$
where
$$\left\{ \begin{aligned} {F_x} & ={-} 1 - K_1^p\left[ {3{{\left( {{\tilde x^p_u}} \right)}^2} + {{\left( {{\tilde y^p_u}} \right)}^2}} \right] - K_2^p\left[ {{{\left( {{\tilde y^p_u}} \right)}^4} + 5{{\left( {{\tilde x^p_u}} \right)}^4} + 6{{\left( {{\tilde x^p_u}{\tilde y^p_u}} \right)}^2}} \right] - 2P_1^p{\tilde y^p_u} - 6P_2^p{\tilde x^p_u}\\ {F_y} & ={-} 2K_1^p \tilde x^p_u \tilde y^p_u - 4K_2^p\left[ {{{{\tilde y^p_u}\left({{\tilde x^p_u}} \right)}^3} + {\tilde x^p_u}{{\left( {{\tilde y^p_u}} \right)}^3}} \right] - 2P_1^p{\tilde x^p_u} - 2P_2^p{\tilde y^p_u}\\ {G_x} & ={-} 2K_1^p \tilde x^p_u \tilde y^p_u - 4K_2^p\left[ {{{{\tilde y^p_u}\left({{\tilde x^p_u}} \right)}^3} + {\tilde x^p_u}{{\left( {{\tilde y^p_u}} \right)}^3}} \right] - 2P_1^p{\tilde x^p_u} - 2P_2^p{\tilde y^p_u}\\ {G_y} & ={-} 1 - K_1^p\left[ {{{\left( {{\tilde x^p_u}} \right)}^2} + 3{{\left( {{\tilde y^p_u}} \right)}^2}} \right] - K_2^p\left[ {5{{\left( {{\tilde y^p_u}} \right)}^4} + {{\left( {{\tilde x^p_u}} \right)}^4} + 6{{\left( {{\tilde x^p_u}{\tilde y^p_u}} \right)}^2}} \right] - 6P_1^p{\tilde y^p_u} - 2P_2^p{\tilde x^p_u} \end{aligned} \right..$$
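As a sketch of Eqs. (5)–(7), the following C++ fragment evaluates the Jacobian entries and the four correction parameters at one normalized point. It reuses the ProjectorIntrinsics struct from the earlier sketch; all other names are illustrative. The corresponding undistorted normalized coordinate is assumed to be already known (it is only needed offline when the tables are built).

// Sketch of Eqs. (5)-(7): correction parameters from the Jacobian.
// (xn, yn): distorted normalized coordinate; (xun, yun): its undistorted
// counterpart (obtained offline, e.g. by a one-time iterative undistortion).
struct CorrectionParams { double kx, bx, ky, by; };

CorrectionParams correctionParams(const ProjectorIntrinsics& c,
                                  double xn, double yn,
                                  double xun, double yun) {
    const double x2 = xun * xun, y2 = yun * yun, xy = xun * yun;
    // Eq. (7)
    const double Fx = -1.0 - c.K1p * (3.0 * x2 + y2)
                      - c.K2p * (y2 * y2 + 5.0 * x2 * x2 + 6.0 * xy * xy)
                      - 2.0 * c.P1p * yun - 6.0 * c.P2p * xun;
    const double Fy = -2.0 * c.K1p * xy
                      - 4.0 * c.K2p * (yun * x2 * xun + xun * y2 * yun)
                      - 2.0 * c.P1p * xun - 2.0 * c.P2p * yun;
    const double Gx = Fy;  // identical expression in Eq. (7)
    const double Gy = -1.0 - c.K1p * (x2 + 3.0 * y2)
                      - c.K2p * (5.0 * y2 * y2 + x2 * x2 + 6.0 * xy * xy)
                      - 6.0 * c.P1p * yun - 2.0 * c.P2p * xun;
    const double det = Fx * Gy - Fy * Gx;
    // Eq. (6): Jacobian of the undistorted w.r.t. the distorted coordinate
    const double dxu_dx = -Gy / det, dxu_dy =  Fy / det;
    const double dyu_dx =  Gx / det, dyu_dy = -Fx / det;
    // Eq. (5)
    CorrectionParams p;
    p.kx = dxu_dx + dxu_dy;
    p.ky = dyu_dx + dyu_dy;
    p.bx = xun - p.kx * xn;
    p.by = yun - p.ky * yn;
    return p;
}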

For an arbitrary projector coordinate $(\tilde x^p, \tilde y^p)$, we can calculate the four correction parameters by substituting the corresponding undistorted coordinate $(\tilde x^p_u, \tilde y^p_u)$ into Eq. (5). We naturally grid the projector plane with resolution $H^{\text {LUT}} \times W^{\text {LUT}}$, where $H^{\text {LUT}}$ and $W^{\text {LUT}}$ are the dimensions of the LUTs and are set manually according to actual needs: the greater $H^{\text {LUT}}$ and $W^{\text {LUT}}$, the higher the accuracy. In practice, they are usually set equal to $H^p$ and $W^p$ (the projector resolution) because the resulting accuracy is high enough. We then compute and store the LUTs of correction parameters as

$$\left\{ \begin{aligned} {k^p_y}\left( {{I_x},{I_y}} \right) & = \tilde k_y^p\left( {\tilde x_N^p,\tilde y_N^p} \right) \\ {b^p_y}\left( {{I_x},{I_y}} \right) & = f^p_y \tilde b_y^p\left( {\tilde x_N^p,\tilde y_N^p} \right) \end{aligned} \right. ~\text{and/or}~ \left\{ \begin{aligned} {k^p_x}\left( {{I_x},{I_y}} \right) & = \tilde k_x^p\left( {\tilde x_N^p,\tilde y_N^p} \right) \\ {b^p_x}\left( {{I_x},{I_y}} \right) & = f^p_x \tilde b_x^p\left( {\tilde x_N^p,\tilde y_N^p} \right) \end{aligned} \right.,$$
for horizontally and/or vertically shifting fringes, respectively, where $I_x$ and $I_y$ are the integer-valued indices of the LUTs and range from 0 to $W^{\text {LUT}}$ and $H^{\text {LUT}}$, respectively. The coordinate $( {\tilde x_N^p,\tilde y_N^p})$ can then be computed as
$$\left\{ \begin{aligned} \tilde x_N^p = \frac{1}{f^p_x}\left(\frac{W^p I_x}{W^{\text{LUT}}} - u^p_0\right) \\ \tilde y_N^p = \frac{1}{f^p_y}\left(\frac{H^p I_y}{H^{\text{LUT}}} - v^p_0\right) \end{aligned} \right..$$
The four LUTs are visualized in Fig. 5(a), (b), (c), and (d) with pseudo color maps.
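A possible LUT-building loop over the grid centers, following Eqs. (8) and (9), could look as follows. For clarity this sketch stores the normalized offsets $\tilde b$ directly (Eq. (8) pre-scales them by the focal lengths instead), and `undistortOnce` stands for any offline routine that inverts Eq. (1) for a grid center, e.g. the iterative routine sketched later in subsection 3.4; it is run only once per cell when the tables are built, so its cost does not affect the real-time path.

// Sketch of Eqs. (8)-(9): build four LUTs of correction parameters over an
// H_LUT x W_LUT grid of the projector plane.
#include <vector>

struct LUTs {
    int W = 0, H = 0;
    std::vector<double> kx, bx, ky, by;  // row-major, W*H entries each
};

// Offline inversion of Eq. (1) for one normalized point (assumed helper).
using UndistortFn = void (*)(const ProjectorIntrinsics&, double, double,
                             double&, double&);

LUTs buildLUTs(const ProjectorIntrinsics& c, int Wp, int Hp,
               int W_LUT, int H_LUT, UndistortFn undistortOnce) {
    LUTs t;
    t.W = W_LUT;  t.H = H_LUT;
    const int n = W_LUT * H_LUT;
    t.kx.resize(n); t.bx.resize(n); t.ky.resize(n); t.by.resize(n);
    for (int Iy = 0; Iy < H_LUT; ++Iy) {
        for (int Ix = 0; Ix < W_LUT; ++Ix) {
            // Grid center in normalized (distorted) coordinates, Eq. (9).
            const double xn = (double(Wp) * Ix / W_LUT - c.u0p) / c.fxp;
            const double yn = (double(Hp) * Iy / H_LUT - c.v0p) / c.fyp;
            double xun, yun;
            undistortOnce(c, xn, yn, xun, yun);         // offline, once per cell
            const CorrectionParams p = correctionParams(c, xn, yn, xun, yun);
            const int i = Iy * W_LUT + Ix;
            t.kx[i] = p.kx;  t.bx[i] = p.bx;            // normalized scale/offset
            t.ky[i] = p.ky;  t.by[i] = p.by;
        }
    }
    return t;
}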

Fig. 5. Visualization of the LUTs for the four correction parameters: (a) $k^p_x$, (b) $b^p_x$, (c) $k^p_y$, (d) $b^p_y$.

In some specific systems, two-direction scanning [37] is effective for addressing the directional issue of fringes compared with one-direction scanning. For two-direction scanning, after obtaining the distorted coordinates $x^p$ and $y^p$ from the phases, we can directly compute the undistorted coordinate as

$$\left\{ \begin{aligned} x_u^p = {k^p_x}\left( {{I_x},{I_y}} \right) x^p + {b^p_x}\left( {{I_x},{I_y}} \right) \\ y_u^p = {k^p_y}\left( {{I_x},{I_y}} \right) y^p + {b^p_y}\left( {{I_x},{I_y}} \right)\end{aligned} \right.,$$
and the index for looking up tables can be computed as
$$\left( I_x, I_y \right) = \left[ \text{round}\left(\frac{W^{\text{LUT}} x^p}{W^p}\right) ,\text{round}\left(\frac{H^{\text{LUT}} y^p}{H^p}\right) \right],$$
where $\text {round}(\cdot )$ is the rounding function.
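A minimal C++ sketch of this two-direction correction (Eqs. (10)–(11)) is given below, reusing the types and headers from the sketches above. It applies the scale-offset correction of Eq. (3) in normalized coordinates and converts back to pixels (Eq. (10) states the relation directly in pixel units); the border clamp is our own guard against rounding at the image edges.

// Sketch of Eqs. (10)-(11): real-time correction for two-direction scanning.
#include <algorithm>
#include <cmath>

void correctTwoDirection(const ProjectorIntrinsics& c, const LUTs& t,
                         int Wp, int Hp, double xp, double yp,
                         double& xup, double& yup) {
    int Ix = (int)std::lround(double(t.W) * xp / Wp);   // Eq. (11)
    int Iy = (int)std::lround(double(t.H) * yp / Hp);
    Ix = std::clamp(Ix, 0, t.W - 1);                    // guard borders
    Iy = std::clamp(Iy, 0, t.H - 1);
    const int i = Iy * t.W + Ix;
    const double xn = (xp - c.u0p) / c.fxp;             // normalize
    const double yn = (yp - c.v0p) / c.fyp;
    const double xun = t.kx[i] * xn + t.bx[i];          // Eq. (3)
    const double yun = t.ky[i] * yn + t.by[i];
    xup = xun * c.fxp + c.u0p;                          // back to pixels
    yup = yun * c.fyp + c.v0p;
}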

In the common case of one-direction scanning, only $x^p$ or $y^p$ is obtained. Thus, we need to compute the other coordinate with the epipolar constraint in Eq. (11) of Ref. [38] to obtain the indices of the LUTs. For instance, if we only have $y^p$, we compute $\hat x^p$ as

$${\hat x^p} = \frac{{\left( {x_0^p - x_e^p} \right)}}{{\left( {y_0^p - y_e^p} \right)}}\left( {{y^p} - y_e^p} \right) + x_e^p,$$
where $(x^p_e, y^p_e)$ and $(x^p_0, y^p_0)$ are the epipole in the projector space and the phase pole (defined in the extended epipolar geometry of Ref. [38]), respectively. The error in $\hat x^p$ is automatically suppressed in the subsequent calculation, as explained in the next subsection. We then look up the pre-computed tables with index
$$\left( \hat I_x, I_y \right) = \left[ \text{round}\left(\frac{W^{\text{LUT}} \hat x^p}{W^p}\right) ,\text{round}\left(\frac{H^{\text{LUT}} y^p}{H^p}\right) \right],$$
and compute the undistorted coordinate with the second equation in Eq. (10). Alternatively, if we only have $x^p$, the same procedure applies. Finally, we reconstruct [37–39] the undistorted point clouds with high accuracy. In summary, the procedures of the proposed method with one-direction scanning and two-direction scanning are shown in Fig. 6(a) and (b), respectively.
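For the one-direction case, the following sketch combines Eqs. (12) and (13), again reusing the types and headers from the sketches above. The epipole $(x_e, y_e)$ and the phase pole $(x_0, y_0)$ are assumed to come from the extended epipolar calibration of Ref. [38]; the estimated $\hat x^p$ is used solely to index the LUTs.

// Sketch of Eqs. (12)-(13): one-direction scanning (vertically shifting
// fringes give only y^p); the missing coordinate is estimated on the
// epipolar line through the phase pole.
void correctOneDirectionY(const ProjectorIntrinsics& c, const LUTs& t,
                          int Wp, int Hp, double yp,
                          double xe, double ye, double x0, double y0,
                          double& yup) {
    const double xhat = (x0 - xe) / (y0 - ye) * (yp - ye) + xe;  // Eq. (12)
    int Ix = (int)std::lround(double(t.W) * xhat / Wp);          // Eq. (13)
    int Iy = (int)std::lround(double(t.H) * yp / Hp);
    Ix = std::clamp(Ix, 0, t.W - 1);
    Iy = std::clamp(Iy, 0, t.H - 1);
    const int i = Iy * t.W + Ix;
    const double yn = (yp - c.v0p) / c.fyp;
    yup = (t.ky[i] * yn + t.by[i]) * c.fyp + c.v0p;  // second row of Eq. (10)
}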

Fig. 6. Flow chart of proposed method for (a) one-direction scanning and (b) two-direction scanning.

Notably, the correction parameters of our method can be determined not only from the traditional polynomial model but also from other distortion models. For global lens distortion models, we calculate the Jacobian matrix through the exact equations and then conveniently obtain the correction parameters to build up the LUTs. For per-pixel or local lens distortion models, there is no global mathematical model for calculating the Jacobian matrix, but as long as the distortion characteristics of at least two points in the neighborhood of each grid center can be obtained, we can calculate the distortion correction parameters by using Eq. (4) to build up the LUTs, as sketched below.
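For such local models, a minimal sketch of the finite-difference route through Eq. (4): given two neighboring distorted/undistorted pairs (e.g., taken from a per-pixel distortion measurement), the 4 × 4 system decouples into two independent 1D slope-intercept solves. The function name and the CorrectionParams type are the illustrative ones introduced above.

// Sketch of Eq. (4) for local/per-pixel distortion data: compute the four
// correction parameters from two sample points near a grid center, without
// any global distortion model. Inputs are normalized coordinates:
// (x1,y1)->(xu1,yu1) and (x2,y2)->(xu2,yu2) distorted/undistorted pairs.
CorrectionParams correctionParamsFromSamples(double x1, double y1,
                                             double xu1, double yu1,
                                             double x2, double y2,
                                             double xu2, double yu2) {
    CorrectionParams p;
    p.kx = (xu2 - xu1) / (x2 - x1);   // slope in x (first two rows of Eq. (4))
    p.bx = xu1 - p.kx * x1;
    p.ky = (yu2 - yu1) / (y2 - y1);   // slope in y (last two rows of Eq. (4))
    p.by = yu1 - p.ky * y1;
    return p;
}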

2.2 Error analysis compared with the iterative post-undistortion

In practice, the correction parameters obtained from the LUTs deviate slightly from their true values; therefore, the undistorted coordinates computed with these correction parameters also deviate slightly from the results of iterative post-undistortion. The deviation can be analytically calculated as

$$\left\{ \begin{aligned} \Delta \tilde x_u^p = \left[ {\tilde k_x^p \left(\tilde x^p, \tilde y^p\right) - \tilde k_x^p \left(\tilde x^p_N, \tilde y^p_N\right) } \right]\tilde x^p + \left[ {\tilde b_x^p \left(\tilde x^p, \tilde y^p\right) - \tilde b_x^p \left(\tilde x^p_N, \tilde y^p_N\right)} \right]\\ \Delta \tilde y_u^p = \left[ {\tilde k_y^p \left(\tilde x^p, \tilde y^p\right) - \tilde k_y^p \left(\tilde x^p_N, \tilde y^p_N\right) } \right]\tilde y^p + \left[ {\tilde b_y^p \left(\tilde x^p, \tilde y^p\right) - \tilde b_y^p \left(\tilde x^p_N, \tilde y^p_N\right)} \right] \end{aligned} \right..$$
We compute the first-order Taylor expansion of Eq. (14) as
$$\left\{ \begin{aligned} \Delta \tilde x_u^p & = \left[ \frac{\partial \tilde k_x^p}{\partial \tilde x^p}{\tilde x^p } + \frac{\partial \tilde b_x^p}{\partial \tilde x^p} \right]\Delta \tilde x^p + \left[ {\frac{\partial \tilde k_x^p}{\partial \tilde y^p}{\tilde x^p} + \frac{\partial \tilde b_x^p}{\partial \tilde y^p}} \right]\Delta \tilde y^p \\ \Delta \tilde y_u^p & = \left[ {\frac{\partial \tilde k_y^p}{\partial \tilde x^p}{\tilde y^p} + \frac{\partial \tilde b_y^p}{\partial \tilde x^p}} \right]\Delta \tilde x^p + \left[ {\frac{\partial \tilde k_y^p}{\partial \tilde y^p}{\tilde y^p} + \frac{\partial \tilde b_y^p}{\partial \tilde y^p}} \right]\Delta \tilde y^p \end{aligned} \right.,$$
where $\Delta \tilde y^p = (\tilde y^p_N - \tilde y^p)$ and $\Delta \tilde x^p = (\tilde x^p_N - \tilde x^p)$. Meanwhile, we calculate the partial derivatives of Eq. (3) with respect to $\tilde x^p$ and $\tilde y^p$, respectively; then, using Eq. (5), we have
$$\left\{ \begin{aligned} & \frac{\partial \tilde k_x^p}{\partial \tilde x^p}\tilde x^p + \frac{\partial \tilde b_x^p}{\partial \tilde x^p} ={-} \frac{\partial \tilde x_u^p}{\partial \tilde y^p} \\ & \frac{\partial \tilde k_x^p}{\partial \tilde y^p}\tilde x^p + \frac{\partial \tilde b_x^p}{\partial \tilde y^p} = \frac{\partial \tilde x_u^p}{\partial \tilde y^p} \end{aligned} \right. ~\text{and}~ \left\{ \begin{aligned} & \frac{\partial \tilde k_y^p}{\partial \tilde x^p}\tilde y^p + \frac{\partial \tilde b_y^p}{\partial \tilde x^p} = \frac{\partial \tilde y_u^p}{\partial \tilde x^p}\\ & \frac{\partial \tilde k_y^p}{\partial \tilde y^p}\tilde y^p + \frac{\partial \tilde b_y^p}{\partial \tilde y^p} ={-} \frac{\partial \tilde y_u^p}{\partial \tilde x^p} \end{aligned} \right..$$
Therefore, we finally put Eq. (16) into Eq. (15) and concisely have
$$\left\{ \begin{aligned} \Delta x_u^p & = \frac{\partial \tilde x_u^p}{\partial \tilde y^p}\left( {\Delta y^p - \Delta x^p} \right)\\ \Delta y_u^p & = \frac{\partial \tilde y_u^p}{\partial \tilde x^p}\left( {\Delta x^p - \Delta y^p} \right) \end{aligned} \right..$$
Finally, we can compute the theoretical difference as
$$\Delta r^p_u = \sqrt{\left(\Delta x_u^p\right)^2 + \left(\Delta y_u^p\right)^2 } = \sqrt{\left(\frac{\partial \tilde x_u^p}{\partial \tilde y^p}\right)^2 + \left(\frac{\partial \tilde y_u^p}{\partial \tilde x^p}\right)^2 }\vert\Delta x^p - \Delta y^p\vert,$$
in which $\sqrt {(\partial \tilde x_u^p / \partial \tilde y^p)^2 + (\partial \tilde y_u^p/\partial \tilde x^p)^2}$ is commonly less than $10^{-2}$ in practice, hence the error of $\vert \Delta x^p - \Delta y^p\vert$ is automatically suppressed. We have $\vert \Delta x^p - \Delta y^p\vert <1$ if the LUTs of correction parameters are built up with a resolution of $H^p\times W^p$, hence our method can achieve a satisfactory accuracy, i.e., the error is less than $10^{-2}$ pixel.

3. Experiments

Our experimental FPP system, shown in Fig. 7, consists of an AVT Prosilica GC640C camera with $640 \times 480$ resolution and a Casio XJ-M140 projector with $800 \times 600$ resolution. For calibrating the system, a ceramic calibration board with $8 \times 7$ discrete circles, whose horizontal and vertical center-to-center intervals are 10 mm, is scanned at 10 different positions, from which we obtain 560 point pairs composed of camera and projector coordinates. We then use OpenCV to calibrate the camera and projector as a stereovision system. The re-projection error is $0.0826$ pixel.

Fig. 7. Experimental setup.

For investigating the performance in accuracy and speed, we compare our method with existing methods in several experiments by conducting 1) a naive reconstruction with two-direction scanning as in Ref. [37] (without correcting the projector distortion); 2) iterative post-undistortion [11] (we iteratively calculate $(x^p_u, y^p_u)$ rather than $(X_u^w,~Y_u^w,~Z_u^w)$ as a simplification of Ref. [11]); 3) pre-distortion [16]; 4) the proposed method with two-direction scanning; and 5) the proposed method with one-direction scanning. To focus on projector distortion, for all five methods we use the method in Ref. [5] to calibrate the camera and then build up LUTs to compensate the camera distortion. The strategies for computing 3D point clouds are as follows: we use Eq. (7) of Ref. [39] for one-direction scanning and Eq. (5) of Ref. [37] for two-direction scanning.

3.1 Measuring standard planes

For evaluating the accuracy of our method, we conduct a comprehensive comparison in which we employ the five methods mentioned above to scan a small ($7~\text {cm}\times 8~\text {cm}$) and a large ($20~\text {cm}\times 30~\text {cm}$) standard plane at about 30 cm and 50 cm away from the camera, respectively, and each standard plane is placed at five different positions for scanning. Note that we scan with fringe patterns of settings $f = \{1, 6, 32\}$ and $N = \{20, 20, 20\}$ in all subsequent experiments, where $f$ lists the spatial frequency of each group of fringes and $N$ lists the number of phase shifts in each group, and we use the traditional temporal phase unwrapping method [3] to obtain the absolute phase. The flatness of the small and the large standard planes is better than 1 $\mathrm {\mu }$m and 10 $\mathrm {\mu }$m, respectively.

The reconstruction results are shown in Fig. 8 and Fig. 9, where it can be seen that the uncorrected point cloud is deformed while all the corrected point clouds are flat. We compute the RMS error from each point cloud to its datum plane, where the datum plane is the least-squares fitting plane of the point cloud. Table 1 lists the RMS and peak-to-valley (PV) values of the errors in all five point clouds. The results demonstrate that the RMS errors are reduced by more than 7$\times$ by the four correction methods. Meanwhile, the result of our method with two-direction scanning is highly consistent with iterative post-undistortion, which shows that our method can achieve the same accuracy as iterative post-undistortion.

Fig. 8. Measured results of a standard plane at a distance of 30 cm: (a) grayscale image, (b) naive reconstruction, (c) iterative post-undistortion [11], (d) pre-distortion [16], (e) the proposed with two-direction scanning, and (f) the proposed with one-direction scanning.

Fig. 9. Measured results of a standard plane at a distance of 50 cm: (a) grayscale image, (b) naive reconstruction, (c) iterative post-undistortion [11], (d) pre-distortion [16], (e) the proposed with two-direction scanning, and (f) the proposed with one-direction scanning.

Table 1. Distortion error when measuring standard planes at distances of 30 cm and 50 cm in different locations (unit: $\mathrm {\mu }$m).

We also compute the histograms of the errors, as shown in Fig. 10 and Fig. 11, where the histograms of the four corrected point clouds visually present a normal distribution. In addition, we conduct the Lilliefors test [40] on all five error distributions. For the null hypothesis that the error values come from a distribution in the normal family, the results show that the error distributions of the four correction methods do not reject the null hypothesis, while the error distribution of the naive method rejects it at the 0.05 significance level. The results indicate that, in the four correction methods, Gaussian noise is the main component of the error, induced by unstable ambient light, camera/projector flicker, quantization error, and sensor noise of the camera and projector [2]. In other words, the lens distortion error is well suppressed by the four correction methods. Collectively, the performance of the four correction methods is extremely close when measuring the standard plane.

Fig. 10. The histograms of errors when measuring a standard plane at a distance of 30 cm by (a) naive reconstruction, (b) iterative post-undistortion [11], (c) pre-distortion [16], (d) the proposed with two-direction scanning, and (e) the proposed with one-direction scanning.

Fig. 11. The histograms of errors when measuring a standard plane at a distance of 50 cm by (a) naive reconstruction, (b) iterative post-undistortion [11], (c) pre-distortion [16], (d) the proposed with two-direction scanning, and (e) the proposed with one-direction scanning.

For investigating the accuracy of dimensional measurement, we employ the five methods to scan a calibration board with a size of $7~\text {cm}\times 8~\text {cm}$ at about 30 cm away from the camera, located at five different positions. The flatness of the standard plane is better than 1 $\mathrm {\mu }$m. We compute the distances between circle centers with a ground truth of 70 mm, as shown in Fig. 12, with a total of 35 circle-center distances calculated over the five positions. We compute the mean value and RMS of the error for the 35 measured distances, as shown in Table 2; the results indicate that our method achieves accurate results with errors of less than 45 $\mathrm {\mu }$m/0.7‰ and suppresses the error by 20$\times$ in RMS for dimensional measurement, which is highly consistent with iterative post-undistortion. Collectively, the performance of all four correction methods is extremely close in dimensional measurement.

Fig. 12. Circle center distances with ground truth of 70 mm.

Table 2. Measuring the distance between circle centers with ground truth of 70 mm for 35 times (unit: mm).

According to the experimental results of measuring standard planes, we conclude that the proposed method can: 1) suppress the deformation of a measured plane induced by lens distortion by more than 7$\times$ in RMS and 2) reduce the absolute error by more than 20$\times$, achieving highly accurate results with errors of less than 45 $\mathrm {\mu }$m/0.7‰ in RMS for dimensional measurement. For two-direction scanning, the proposed method achieves the same accuracy as iterative post-undistortion; for one-direction scanning, it achieves an extremely close result.

3.2 Measuring a standard sphere

For further confirming the accuracy of our method, we employ the five methods to scan a plaster sphere, with a ground-truth radius of 85 mm, from five different directions at 30 cm away from the camera. For each group of reconstructed point clouds, we fit spheres to compute the mean radius, as shown in Table 3, and then compute the error of the measured radius at each pixel. The results are shown in Fig. 13, where we can see that the error of the measured radius is suppressed by a factor of more than 20$\times$ by all four correction methods. The measurement results of the proposed method with two-direction scanning are highly consistent with the iterative post-undistortion.

Fig. 13. Error maps of measuring radius of standard sphere by (a) naive reconstruction, (b) iterative post-undistortion [11], (c) pre-distortion [16], (d) the proposed with two-direction scanning, and (e) the proposed with one-direction scanning.

Table 3. Measuring a standard sphere in five different directions at a distance of 30 cm (unit: mm).

3.3 Pixel error compared with the iterative post-undistortion

For confirming our error analysis in subsection 2.2, we scan a white wall along both horizontal and vertical directions and then apply the iterative post-undistortion and our method, respectively, from which we compute the discrepancy between the correction results of our method and the iterative post-undistortion, as shown in Fig. 14(a). Note that, in subsections 3.1–3.3, we stop the iteration when the step length is less than $10^{-13}$ pixels to ensure the accuracy of the iterative post-undistortion. We further compute the theoretical discrepancy at each pixel by Eq. (18), as shown in Fig. 14(b); accordingly, we can see that the discrepancy distribution in the experiment is consistent with the theory. The RMSs of the theoretical and experimental discrepancies are $6.54\times 10^{-4}$ pixel and $6.32\times 10^{-4}$ pixel, respectively. The experimental results are in good agreement with the theoretical calculations, which proves that the discrepancy between our method and iterative post-undistortion is less than $10^{-3}$ pixel in RMS. Thus, we confirm that the proposed algorithm can achieve almost the same accuracy as the iterative post-undistortion.

Fig. 14. Discrepancy between correction result of our method and the iterative post-undistortion (a) in experiment and (b) in theory computed by Eq. (18).

3.4 Time consumption

For demonstrating our advantage in speed, we implement our method and the iterative post-undistortion in single-threaded C++ and run the programs on an Intel i5-10500 CPU at 3.1 GHz. It should be noted that we use Newton iteration to calculate the undistorted coordinate. In subsection 3.3, we showed that our method deviates from the iterative post-undistortion by only $6.32\times 10^{-4}$ pixel in RMS; hence, we stop the iteration when the step length is less than $10^{-3}$ pixel to ensure a fair speed comparison in this experiment.
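For reference, a minimal sketch of an iterative post-undistortion baseline is given below, reusing the ProjectorIntrinsics struct from Section 2. For brevity it shows the common fixed-point form of inverting Eq. (1) rather than the Newton step used in our timing experiment, with the same step-length stopping rule of $10^{-3}$ pixel; the function name and loop cap are our own.

// Sketch of an iterative post-undistortion baseline (fixed-point inversion
// of Eq. (1)); stops when the step length drops below tolPix pixels.
#include <cmath>

void undistortIterative(const ProjectorIntrinsics& c, double xp, double yp,
                        double& xup, double& yup, double tolPix = 1e-3) {
    const double xd = (xp - c.u0p) / c.fxp;   // distorted, normalized
    const double yd = (yp - c.v0p) / c.fyp;
    double xu = xd, yu = yd;                  // initial guess
    for (int it = 0; it < 100; ++it) {
        const double r2 = xu * xu + yu * yu;
        const double radial = 1.0 + c.K1p * r2 + c.K2p * r2 * r2;
        const double tx = 2.0 * c.P1p * xu * yu + c.P2p * (r2 + 2.0 * xu * xu);
        const double ty = c.P1p * (r2 + 2.0 * yu * yu) + 2.0 * c.P2p * xu * yu;
        const double xn = (xd - tx) / radial; // invert Eq. (1), first row
        const double yn = (yd - ty) / radial; // invert Eq. (1), second row
        const double step = std::hypot((xn - xu) * c.fxp, (yn - yu) * c.fyp);
        xu = xn;  yu = yn;
        if (step < tolPix) break;
    }
    xup = xu * c.fxp + c.u0p;                 // back to pixel units
    yup = yu * c.fyp + c.v0p;
}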

We run our program to correct the phase maps 3000 times to obtain the average time consumption, as shown in Table 4. For the phase map with a resolution of $640\times 480$, the time consumptions of the iterative post-undistortion, our method with two-direction scanning, and our method with one-direction scanning are $155.62$ ms, $2.85$ ms, and $2.94$ ms, respectively. For the phase map with a resolution of $1280\times 960$, obtained by upsampling, they are $616.88$ ms, $11.83$ ms, and $11.35$ ms, respectively. Thus, we conclude that the proposed method improves on the iterative post-undistortion by a factor of $50\times$ in terms of speed. Considering a camera with a resolution of $1280\times 960$, the iterative post-undistortion takes 616.88 ms to correct 1.23 million phase values, which makes real-time calculation difficult; our method takes only 11.83 ms, so a graphics processing unit (GPU) is not needed to speed up the calculation.

Table 4. Time consumption of correction.

4. Conclusion and future work

In this paper, we propose a novel scale-offset model that characterizes the projector distortion by four correction parameters within a small-enough area, from which we can avoid iteration and achieve real-time post-undistortion for the projector. The experimental results show that our method provides several advantages over others. Compared to iterative post-undistortion, the proposed method improves the computation speed by a factor of at least $50\times$ while achieving the same accuracy, i.e., the RMS error is suppressed by a factor of 20$\times$ and is less than 45 $\mathrm {\mu }$m/0.7‰. Compared with pre-distortion, apart from achieving the same accuracy, the proposed method is applicable to fringe pattern strategies (e.g., binary fringe patterns) that cannot be pre-distorted conveniently. To sum up, our method achieves high-accuracy and real-time correction for projector lens distortion and, in particular, is suitable for improving the accuracy of FPP systems that suffer from evident lens distortion error while reducing the dependence on GPUs. In the future, we will apply our method to suppress higher orders of residual distortions by refining the calibration method to a per-pixel level.

Appendix: correction parameters

According to Eq. (1), we can see that $\tilde x_u^p$ and $\tilde y_u^p$ are determined by $\tilde x^p$ and $\tilde y^p$; thus, we have $\tilde x_{u1}^p = \tilde x_u^p(\tilde x^p_1,\tilde y^p_1)$ and $\tilde x_{u2}^p = \tilde x_u^p(\tilde x^p_2,\tilde y^p_2)$. For computing $\tilde k^p_x$, we have

$$\begin{aligned}\tilde k_x^p &= \lim_{s_0 \to 0} \dfrac{\tilde x_{u2}^p - \tilde x_{u1}^p}{\tilde x_2^p - \tilde x_1^p} = \lim_{s_0 \to 0} \dfrac{\tilde x_{u}^p \left(\tilde x^p_1 + s_0, \tilde y^p_1 + s_0\right) - \tilde x_{u}^p \left(\tilde x^p_1, \tilde y^p_1 \right)}{\tilde x_1^p + s_0 - \tilde x_1^p} \\ &=\lim_{s_0 \to 0} \dfrac{\tilde x_{u1}^p + \frac{\partial \tilde x_u^p}{\partial \tilde x^p}{s_0} + \frac{\partial \tilde x_u^p}{\partial \tilde y^p}{s_0} + O\left( {s_0^2} \right) - \tilde x_{u1}^p}{s_0} = \frac{\partial \tilde x_u^p}{\partial \tilde x^p} + \frac{\partial \tilde x_u^p}{\partial \tilde y^p}, \end{aligned}$$
similarly, we can compute $k^p_y$.

For computing $b^p_x$, we have

$$\begin{aligned}\tilde b_x^p &= \lim_{s_0 \to 0}{\left[\tilde x_{u1}^p - \frac{\tilde x_1^p\left( {\tilde x_{u2}^p - \tilde x_{u1}^p} \right)}{\tilde x_2^p - \tilde x_1^p}\right]} = \lim_{s_0 \to 0} {\left[\tilde x_{u1}^p - \tilde x_1^p \left(\dfrac{\tilde x_{u1}^p + \frac{\partial \tilde x_u^p}{\partial \tilde x^p} s_0 + \frac{\partial \tilde x_u^p}{\partial \tilde y^p} s_0 + O\left( s_0^2 \right) - \tilde x_{u1}^p}{\tilde x_1^p + {s_0} - \tilde x_1^p}\right)\right]} \\ &= \tilde x_{u1}^p - \left( \frac{\partial \tilde x_u^p}{\partial \tilde x^p} + \frac{\partial \tilde x_u^p}{\partial \tilde y^p} \right)\tilde x_1^p, \end{aligned}$$
similarly, we can compute $b^p_y$.

Funding

Sichuan Province Science and Technology Support Program (2022YFG0233); Sichuan University (2020SCUNL204).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020). [CrossRef]  

2. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

3. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

4. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

5. J. Wang, F. Shi, J. Zhang, and Y. Liu, “A new calibration model of camera lens distortion,” Pattern Recognit. 41(2), 607–615 (2008). [CrossRef]  

6. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

7. M. T. El-Melegy and A. A. Farag, “Nonmetric lens distortion calibration: Closed-form solutions, robust estimation and model selection,” in Computer Vision, IEEE International Conference on, vol. 2 (IEEE Computer Society, 2003), p. 554.

8. A. Albarelli, E. Rodolà, and A. Torsello, “Robust camera calibration using inaccurate targets,” IEEE Trans. Pattern Anal. Machine Intell. 31(2), 376–383 (2009). [CrossRef]  

9. M. N. Vo, Z. Wang, L. Luu, and J. Ma, “Advanced geometric camera calibration for machine vision,” Opt. Eng. 50(11), 110503 (2011). [CrossRef]  

10. D. Moreno and G. Taubin, “Simple, accurate, and robust projector-camera calibration,” in 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, (IEEE, 2012), pp. 464–471.

11. S. Ma, R. Zhu, C. Quan, L. Chen, C. J. Tay, and B. Li, “Flexible structured-light-based three-dimensional profile reconstruction method considering lens projection-imaging distortion,” Appl. Opt. 51(13), 2419–2428 (2012). [CrossRef]  

12. Z. Wang, M. Liu, S. Yang, S. Huang, X. Bai, X. Liu, J. Zhu, X. Liu, and Z. Zhang, “Precise full-field distortion rectification and evaluation method for a digital projector,” Opt. Rev. 23(5), 746–752 (2016). [CrossRef]  

13. S. Lv, Q. Sun, Y. Zhang, Y. Jiang, J. Yang, J. Liu, and J. Wang, “Projector distortion correction in 3D shape measurement using a structured-light system by deep neural networks,” Opt. Lett. 45(1), 204–207 (2020). [CrossRef]  

14. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

15. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008). [CrossRef]  

16. K. Li, J. Bu, and D. Zhang, “Lens distortion elimination for improving measurement accuracy of fringe projection profilometry,” Opt. Lasers Eng. 85, 53–64 (2016). [CrossRef]  

17. S. Yang, M. Liu, J. Song, S. Yin, Y. Guo, Y. Ren, and J. Zhu, “Flexible digital projector calibration method based on per-pixel distortion measurement and correction,” Opt. Lasers Eng. 92, 29–38 (2017). [CrossRef]  

18. S. Yang, M. Liu, J. Song, S. Yin, Y. Ren, J. Zhu, and S. Chen, “Projector distortion residual compensation in fringe projection system,” Opt. Lasers Eng. 114, 104–110 (2019). [CrossRef]  

19. J. Wang, Z. Zhang, R. K. Leach, W. Lu, and J. Xu, “Predistorting projected fringes for high-accuracy 3-D phase mapping in fringe projection profilometry,” IEEE Trans. Instrum. Meas. 70, 1–9 (2021). [CrossRef]  

20. J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Opt. Express 24(19), 21846–21860 (2016). [CrossRef]  

21. A. Gonzalez and J. Meneses, “Accurate calibration method for a fringe projection system by projecting an adaptive fringe pattern,” Appl. Opt. 58(17), 4610–4615 (2019). [CrossRef]  

22. Y. Long, S. Wang, W. Wu, X. Yang, G. Jeon, and K. Liu, “Decoding line structured light patterns by using fourier analysis,” Opt. Eng. 54(7), 073109 (2015). [CrossRef]  

23. Y. Long, S. Wang, W. Wu, and K. Liu, “Robust and efficient decoding scheme for line structured light,” Opt. Lasers Eng. 75, 88–94 (2015). [CrossRef]  

24. H. Bian and K. Liu, “Robustly decoding multiple-line-structured light in temporal fourier domain for fast and accurate three-dimensional reconstruction,” Opt. Eng. 55(9), 093110 (2016). [CrossRef]  

25. S. Lei and S. Zhang, “Flexible 3-d shape measurement using projector defocusing,” Opt. Lett. 34(20), 3080–3082 (2009). [CrossRef]  

26. S. Zhang, “Flexible 3D shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 35(7), 934–936 (2010). [CrossRef]  

27. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51(19), 4477–4490 (2012). [CrossRef]  

28. J. Dai, B. Li, and S. Zhang, “High-quality fringe pattern generation using binary pattern optimization through symmetry and periodicity,” Opt. Lasers Eng. 52, 195–200 (2014). [CrossRef]  

29. Y. Hu, Q. Chen, S. Feng, T. Tao, H. Li, and C. Zuo, “Real-time microscopic 3d shape measurement based on optimized pulse-width-modulation binary fringe projection,” Meas. Sci. Technol. 28(7), 075010 (2017). [CrossRef]  

30. Y. Shi, C. Chang, X. Liu, N. Gao, Z. Meng, and Z. Zhang, “Infrared phase measuring deflectometry by using defocused binary fringe,” Opt. Lett. 46(13), 3091–3094 (2021). [CrossRef]  

31. R. J. Valkenburg and A. M. McIvor, “Accurate 3D measurement using a structured light system,” Image and Vis. Computing 16(2), 99–110 (1998). [CrossRef]  

32. J. Salvi, J. Pages, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognit. 37(4), 827–849 (2004). [CrossRef]  

33. Z. Wu, C. Zuo, W. Guo, T. Tao, and Q. Zhang, “High-speed three-dimensional shape measurement based on cyclic complementary gray-code light,” Opt. Express 27(2), 1283–1297 (2019). [CrossRef]  

34. X. He, D. Zheng, Q. Kemao, and G. Christopoulos, “Quaternary gray-code phase unwrapping for binary fringe projection profilometry,” Opt. Lasers Eng. 121, 358–368 (2019). [CrossRef]  

35. Z. Song, H. Jiang, H. Lin, and S. Tang, “A high dynamic range structured light means for the 3D measurement of specular surface,” Opt. Lasers Eng. 95, 8–16 (2017). [CrossRef]  

36. D. C. Brown, “Close-range camera calibration,” Photogramm. Eng. Remote Sens. 37, 855–866 (1971).

37. K. Liu, J. Song, D. L. Lau, X. Zheng, C. Zhu, and X. Yang, “Reconstructing 3D point clouds in real time with look-up tables for structured light scanning along both horizontal and vertical directions,” Opt. Lett. 44(24), 6029–6032 (2019). [CrossRef]  

38. K. Liu, K. Zhang, J. Wei, J. Song, D. L. Lau, C. Zhu, and B. Xu, “Extending epipolar geometry for real-time structured light illumination,” Opt. Lett. 45(12), 3280–3283 (2020). [CrossRef]  

39. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express 18(5), 5229–5244 (2010). [CrossRef]  

40. W. Hubert Lilliefors, “On the Kolmogorov-Smirnov test for normality with mean and variance unknown,” J. Am. Stat. Assoc. 62(318), 399–402 (1967). [CrossRef]  

