Optica Publishing Group

System nonlinearity correction based on a multi-output support vector regression machine

Open Access

Abstract

In a fringe projection profilometry system, the phase error introduced by the projector's gamma distortion is the main source of error. To overcome this problem, we present a phase compensation scheme that predicts multi-dimensional harmonic coefficients with a multi-output support vector regression machine (M-SVR). The scheme first establishes a characteristic relationship between the phase probability density function (PDF) and the multi-harmonic coefficients of the phase, generates simulation data without a priori knowledge, builds a data set of suitable sample size, and then trains the M-SVR model. The trained M-SVR model captures the latent features of the experimentally distorted phase and outputs the multi-dimensional harmonic parameters while accounting for their nonlinear interdependence; the distorted phase is then compensated with a fixed-point iteration algorithm to correct the system nonlinearity. We demonstrate the validity and stability of the model through simulation and experimental trials. Most importantly, with reasonable sample and hyperparameter settings the pre-trained M-SVR model can also participate in the error correction of other measurement experiments, which greatly reduces the time and cost of repeated experiments.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In the field of 3D measurement, fringe projection profilometry (FPP) is a method of obtaining more accurate information about the shape of an object by actively projecting a pattern with "known" information onto the object, thus adding more texture to it. The encoded structured light was initially generated by a coherent laser, but such structured light patterns are highly susceptible to laser speckle and the phase shift cannot be precisely controlled [1,2]. The advent of LCD and DMD projectors has solved this problem: a computer can generate a pattern with any precise phase shift, which is then fed into a projection device and projected onto the surface of the object to be measured [3]. At the same time, the projector can project patterns of arbitrary shapes generated by the computer, which has enabled the development of other kinds of structured-light patterns (such as circular [4] and triangular [5] patterns) in the field of three-dimensional measurement. Unfortunately, the use of commercial projectors introduces nonlinear errors into the entire system, because manufacturers deliberately impose a nonlinear relationship between the projector's input and output to achieve better visual effects [6]. As a result, the projected fringes no longer have a strictly sinusoidal intensity distribution, which degrades the reconstruction accuracy [7].

In order to reduce the impact of projector nonlinear distortion on phase acquisition, researchers have made various efforts. The proposed schemes fall into two main categories. The first directly uses a few gray-level patterns to generate sinusoidal fringe patterns and thereby overcome the nonlinearity problem, which we call the direct method. Ayubi et al. [8] generated a time series of binary patterns by weighting and averaging the standard fringe images to obtain gray-level sinusoidal patterns with maximum contrast; the measurement range had to be limited to the depth of field of the system in order to avoid encoding errors due to defocus. Later, Ayubi et al. [9] proposed a scheme to generate binary sinusoidal patterns using the sinusoidal pulse width modulation (SPWM) technique. Zuo et al. [10] combined SPWM with the four-step phase shift method to suppress the main harmonics in the system, and simultaneously used tripolar sinusoidal pulse width modulation so that an ideal sinusoidal pattern is obtained when the system is nearly in focus. The other category is based on calibration and, depending on how the compensation is implemented, is divided into active and passive compensation methods. The active compensation method mainly pre-calibrates the γ-value of the whole system before measurement and pre-distorts the fringes during encoding, so that the nonlinear effects are cancelled after passing through the measurement system. Liu et al. [11] derived a mathematical model of the phase error and proposed a method to complete phase error compensation by calibrating γ from multiple phase shift patterns. Zheng et al. [12] used least-squares polynomial fitting to obtain function curves relating gamma and phase error, and then determined the optimal gamma value. Hoang et al. [7] expressed the gamma model of the system in terms of two coefficients and determined the final precoding value from two three-step phase shifts. Zhang et al. [13] developed a generalized gamma model to obtain a quantitative relationship between the harmonic coefficients and the two factors of gamma distortion and defocus, from which an accurate estimate of gamma can be obtained, extending Liu's [11] results, which were calculated only for special cases.

The passive compensation method mainly relies on the correspondence between the nonlinearity of the projector and the phase error it introduces in the measurement, and therefore allows this error to be compensated directly. Zhang et al. [14,15] created a look-up table (LUT) that stores the phase error obtained by statistically analyzing the phase map, and used it for error compensation without any mathematical gamma model. Pan et al. [16] applied an iterative method that accumulates the phase errors one by one until the difference between two adjacent results falls below a set threshold, completing the phase compensation, but the parameters were not determined robustly enough. Liu et al. [17,18] used the PDF, which is another form of histogram [19], to describe the phase distribution from a statistical point of view and constructed root-mean-square error (RMSE) curves between the ideal and experimental PDF distributions. The compensation is best when the RMSE reaches its minimum, which yields the preferred harmonic coefficients; however, the traversal-type optimization search makes the computational effort considerable.

In the field of 3D measurement, a deep learning model trained on simulation data was first used to remove fringe noise by Yan et al. [20], and was subsequently extended by others [21,22]. Feng et al. [23] trained a one-level cascaded deep convolutional neural network (CNN) for fringe analysis and obtained the wrapped phase from a single fringe pattern. Wu et al. [24] used a CNN model to learn the feature relationship between the deformed fringe pattern and the ideal phase map, and obtained the corrected wrapped phase map experimentally. In contrast, Suresh et al. [25] used the phase map of Fourier transform profilometry (FTP) as input and the ideal phase map as output to train a phase map enhancement network (PMENet) and obtain more desirable results.

Obviously, the aforementioned scholars have achieved desirable results in fringe filtering or phase reconstruction based on deep learning. However, such end-to-end model training on 2D images is limited by the locality of feature extraction, in addition to consuming considerable time and computational cost.

In this paper, we first analyze the theoretical model of the phase-shifting profilometry system, introduce the nonlinearity of the projector into the classical formula, extract the distribution characteristics of the principal phase using the probability density function, and propose a phase error compensation method based on a multi-output support vector regression machine (M-SVR). The scheme is divided into two stages. The first stage is pre-processing: a simulation model is used to obtain a data set of sufficient sample size for training the M-SVR model, yielding a multi-input multi-output M-SVR model that can analyze the phase characteristics. The second stage is phase correction: the trained model analyzes the PDF curve of the distorted phase and outputs the highly correlated set of harmonic coefficients, after which an iterative algorithm compensates the phase and produces the optimized phase.

2. Principle

2.1 Phase shift profilometry (PSP)

A classical phase-shifting profilometry system usually consists of an industrial camera, a digital projector, and a computer, as shown in Fig. 1. Ideally, the intensity of the fringe patterns projected by the projector is expressed as

$$I_n^p(x,y) = A(x,y) + B(x,y)\cos [2\pi fx + {\delta _n}]$$
where A and B are experimentally preset constants representing the average intensity and the modulation intensity of the fringe image, respectively; x is the horizontal pixel coordinate; f is the frequency of the fringe patterns; δn is the phase shift of the n-th pattern (typically δn = 2πn/N, n = 0, 1, …, N − 1); and N is the number of phase shift steps.
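To make Eq. (1) concrete, the short sketch below generates N equally spaced phase-shifted fringe patterns, assuming δn = 2πn/N; the image size and the values of A, B, and the fringe period are illustrative choices, not the exact settings of this paper.

```python
import numpy as np

def generate_fringe_patterns(width=1920, height=1080, A=127.5, B=127.5,
                             period=40, N=3):
    """Generate N ideal phase-shifted sinusoidal fringe patterns (Eq. 1),
    with delta_n = 2*pi*n/N and spatial frequency f = 1/period."""
    x = np.arange(width)
    patterns = []
    for n in range(N):
        delta_n = 2 * np.pi * n / N
        row = A + B * np.cos(2 * np.pi * x / period + delta_n)
        patterns.append(np.tile(row, (height, 1)))  # fringes vary along x only
    return np.stack(patterns)                       # shape (N, height, width)
```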

Fig. 1. Measurement system.

In a real measurement system, both the camera and the projector are nonlinear, but the nonlinearity of the camera can be neglected compared with the effect of the projector on the measurement. For the sake of accuracy, however, the nonlinearity in the following description refers to the nonlinearity of the entire camera–projector system.

When the system's nonlinear distortion is not considered, the intensity of the fringe images obtained by projecting the patterns onto a 3D diffuse object is:

$$I_n^c(x,y) = A(x,y) + B(x,y)\cos [{\phi _0}(x,y) + {\delta _n}] $$
where ϕ0(x, y) is the principal phase value containing the object information and $I_n^c(x,y)$ is the light intensity of each pixel captured by the camera. Using the least-squares solution, the principal phase value of the object can be expressed as
$${\phi _0}(x,y) ={-} \arctan \left[ {\frac{{\sum\limits_{n = 0}^{N - 1} {I_n^c\sin ({\delta_n})} }}{{\sum\limits_{n = 0}^{N - 1} {I_n^c\cos ({\delta_n})} }}} \right] $$

The ideal phase value obtained from the arctangent is wrapped with modulus 2π and bounded between −π and π. The absolute phase can be determined by monitoring the phase difference between adjacent points and adding an integer multiple of 2π to the continuous phase segments after each jump point.
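Equation (3) can be evaluated directly on a stack of captured images. A minimal sketch, assuming equally spaced phase shifts δn = 2πn/N and using arctan2 so that the result is already wrapped:

```python
import numpy as np

def wrapped_phase(images):
    """Least-squares wrapped phase of Eq. (3) from N phase-shifted images.
    `images` has shape (N, H, W); the phase shifts are delta_n = 2*pi*n/N."""
    N = images.shape[0]
    delta = 2 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(delta), images, axes=(0, 0))  # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(delta), images, axes=(0, 0))  # sum_n I_n cos(delta_n)
    return -np.arctan2(num, den)                            # principal value in [-pi, pi)
```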

2.2 Nonlinearity of the camera–projector system

Among the many sources of external disturbance (machine noise, defocus, ambient light, exposure, etc.), the nonlinearity of the system is the most significant factor affecting the experimental measurement results. In the following, the gamma effect of the system is taken into account in the derivation. If the overall nonlinearity of the system is expressed as a single nonlinear function, the complex pixel-intensity changes of the projection and acquisition process can be modeled by approximating the captured deformed fringe intensity with higher harmonics; that is, in the nonlinear environment Eq. (2) is expanded as a Fourier series:

$$\begin{aligned} I_n^{\prime c}(x,y) &= f[I_n^c(x,y)]\\ &\cong {k_0}(x,y) + {k_1}(x,y)\cos [{\phi _0}(x,y) + {\delta _n}]\\ &+ \sum\limits_{i = 2}^m {{k_i}(x,y)\cos \{{i[{\phi_0}(x,y) + {\delta_n}]} \}} \end{aligned} $$
where k0 denotes the DC component, k1 denotes the coefficient of the fundamental component, ki denotes the coefficient of the i-th order harmonic, and m denotes the maximum harmonic order. Obviously, for standard fringe images, the coefficients of the higher-order harmonics are all zero and the remaining coefficients are nonzero, as shown in Fig. 2(b). Similarly, it can be observed from Fig. 2(d) and (f) that the spectrum of the nonlinearly distorted fringe pattern contains high-frequency components at the second-order harmonic and above, whose intensity gradually increases with the gamma value.

Fig. 2. Contrast of the fringe pattern under different nonlinearities. (a) Fringe pattern at gamma = 1. (b) Spectrum of panel (a). (c) Distorted fringe pattern at gamma = 1.6. (d) Spectrum of panel (c). (e) Distorted fringe pattern at gamma = 3.2. (f) Spectrum of panel (e). (g) Cross-sections of fringe pattern in red boxes of (a), (c), and (e). (h) is the folded phase diagram corresponding to (g).

Therefore, the true principal phase value of the object corresponds to

$$\phi (x,y) ={-} \arctan \left[ {\frac{{\sum\limits_{n = 0}^{N - 1} {\left\{ {{k_0} + {k_1}\cos [{\phi_0}(x,y) + {\delta_n}] + \sum\limits_{i = 2}^m {{k_i}} \cos \{{i[{\phi_0}(x,y) + {\delta_n}]} \}} \right\}\sin ({\delta_n})} }}{{\sum\limits_{n = 0}^{N - 1} {\left\{ {{k_0} + {k_1}\cos [{\phi_0}(x,y) + {\delta_n}] + \sum\limits_{i = 2}^m {{k_i}} \cos \{{i[{\phi_0}(x,y) + {\delta_n}]} \}} \right\}\cos ({\delta_n})} }}} \right] $$
where ϕ(x, y) is the principal phase value of the object when considering the system’s nonlinearity.

When the time cost is not considered, it is usually possible to use large-step phase shifts to reduce phase errors [11,26], to avoid the effects of higher harmonics due to nonlinearities and even to suppress random noise in some systems. In real applications, however, we are often faced with the need for fast 3D reconstruction, so large-step phase shifts are inappropriate. In experimental studies on nonlinear phase error compensation, the minimum number of phase shift steps (i.e., N = 3) is usually used so that the optimization effect of the algorithm can be fully demonstrated. At the same time, we do not need to take all of the harmonics of all orders into account; harmonics above the fifth order have very little effect on the measurement results and can be ignored [11].

When we consider only harmonics below the sixth order and simplify the phase, we can obtain:

$$\begin{aligned} \phi (x,y) &={-} \arctan \left[ {\frac{{\sum\limits_{n = 0}^{N - 1} {\left\{ {{k_0} + {k_1}\cos [{\phi_0}(x,y) + {\delta_n}] + \sum\limits_{i = 2}^5 {{k_i}} \cos \{{i[{\phi_0}(x,y) + {\delta_n}]} \}} \right\}\sin ({\delta_n})} }}{{\sum\limits_{n = 0}^{N - 1} {\left\{ {{k_0} + {k_1}\cos [{\phi_0}(x,y) + {\delta_n}] + \sum\limits_{i = 2}^5 {{k_i}} \cos \{{i[{\phi_0}(x,y) + {\delta_n}]} \}} \right\}\cos ({\delta_n})} }}} \right]\\ &= \arctan \left[ {\frac{{{k_1}\sin [{\phi_0}(x,y)] - {k_2}\sin [2{\phi_0}(x,y)] + {k_4}\sin [4{\phi_0}(x,y)] - {k_5}\sin [5{\phi_0}(x,y)]}}{{{k_1}\cos [{\phi_0}(x,y)] + {k_2}\cos [2{\phi_0}(x,y)] + {k_4}\cos [4{\phi_0}(x,y)] + {k_5}\cos [5{\phi_0}(x,y)]}}} \right] \end{aligned} $$

Because of this nonlinearity of the fringe patterns' pixel intensity, a nonlinear phase error is produced, which can be expressed as

$$\Delta \phi (x,y) = \phi (x,y) - {\phi _0}(x,y) $$

Substituting Eq. (5) and Eq. (6) into Eq. (7) yields

$$\begin{aligned} &\Delta \phi (x,y) = \arctan \left[ {\frac{{ - ({k_2} - {k_4})\sin [3{\phi_0}(x,y)] - {k_5}\sin [6{\phi_0}(x,y)]}}{{{k_1} + ({k_2} + {k_4})\cos [3{\phi_0}(x,y)] + {k_5}\cos [6{\phi_0}(x,y)]}}} \right]\\ &\cong{-} {c_1}\sin [3{\phi _0}(x,y)] - {c_2}\sin [6{\phi _0}(x,y)] \end{aligned} $$
where c1 and c2 represent the first and second harmonic coefficients of the phase error harmonic formula, respectively.

It should be noted that the above results are derived for a phase shift step number of N = 3 and a highest harmonic order of m = 5. The derivation can easily be extended to the general case by assigning different values to N and m, in which case the coefficients in the equation are written as ci, i = 1, 2, …, l.
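The error model of Eq. (8) can be checked numerically. The sketch below distorts ideal three-step fringes with a simple power-law gamma curve (an assumed nonlinearity model), recomputes the wrapped phase with the `generate_fringe_patterns` and `wrapped_phase` helpers from the sketches above, and fits the residual error to the two harmonic terms; the gamma value is an arbitrary illustration.

```python
import numpy as np

def gamma_distort(patterns, gamma=2.2):
    """Illustrative projector nonlinearity: I_out = 255 * (I_in / 255)**gamma."""
    return 255.0 * (patterns / 255.0) ** gamma

# ideal 3-step fringes -> gamma distortion -> distorted wrapped phase
ideal = generate_fringe_patterns(N=3)
phi0 = wrapped_phase(ideal)                        # reference phase (gamma = 1)
phi = wrapped_phase(gamma_distort(ideal, 2.2))     # distorted phase
dphi = np.angle(np.exp(1j * (phi - phi0)))         # phase error, re-wrapped

# least-squares fit of Eq. (8): dphi ~ -c1*sin(3*phi0) - c2*sin(6*phi0)
A = np.column_stack([-np.sin(3 * phi0.ravel()), -np.sin(6 * phi0.ravel())])
c1, c2 = np.linalg.lstsq(A, dphi.ravel(), rcond=None)[0]
print(f"fitted harmonic coefficients: c1 = {c1:.4f}, c2 = {c2:.4f}")
```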

2.3 Nonlinear correction of the system based on M-SVR

2.3.1 M-SVR nonlinear parameter calibration

According to the literature [17], the formula for the PDF can be expressed as

$$F({m_0}) = P\left\{ {\frac{{2\pi }}{M} \cdot {m_0} - \pi \le \phi < \frac{{2\pi }}{M} \cdot ({m_0} + 1) - \pi } \right\} $$
where P{·} represents the probability of satisfying the condition in braces, M represents the number of sampling points (bins), and m0 is the bin index, m0 = 0, 1, …, M − 1. According to Eq. (3), the ideal wrapped phase is constrained to (−π, π). Since the ideal wrapped phase is uniformly distributed, the frequency with which the phase value falls in each equal-width sub-interval is the same, as shown in Fig. 3(a). When the system is nonlinear, the phase distribution changes, and the PDF changes accordingly, as shown in Fig. 3(b). The authors of [27] analyze how nonlinearity changes the probability density distribution by affecting the phase distribution.

Fig. 3. Calculation principle of the PDF. (a) Calculation of the ideal-phase PDF. (b) Calculation of the distorted-phase PDF.

When the number of phase shift steps is N = 3, the phase error is predominantly distributed around the phase values of −2π/3, 0, and 2π/3 [27]; thus, in order to fully reflect the distribution characteristics of the phase values, the number of sampling points M = 51 is adopted in this paper, in accordance with the sampling theorem.
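In practice, the PDF feature of Eq. (9) is simply a normalized histogram of the wrapped phase over M equal-width bins spanning (−π, π); a minimal sketch:

```python
import numpy as np

def phase_pdf(phi_wrapped, M=51):
    """PDF feature of Eq. (9): the fraction of pixels whose wrapped phase
    falls into each of M equal-width bins covering (-pi, pi)."""
    counts, _ = np.histogram(phi_wrapped.ravel(), bins=M, range=(-np.pi, np.pi))
    return counts / counts.sum()   # the M values sum to 1
```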

From Fig. 4(a), we can observe that the probability density curve becomes gradually steeper as the system's γ-value increases, and there is a one-to-one correspondence between the system gamma value and the coefficients in Eq. (8). Therefore, the same correspondence exists between the phase error coefficients and the probability density function (PDF). As in many active compensation methods, the relationship between the probability density curve and the nonlinear coefficients can be expressed through specific mathematical model assumptions [11], but environmental factors such as camera noise, ambient light, and defocus make such model-based calculations imprecise.

Fig. 4. Global distribution of the wrapped-phase PDF and the relationship with gamma. (a) Probability density distribution for γ values from 1 to 3.2 spaced by 0.2. (b) Global distribution of the distortion-wrapped-phase PDF for γ = 3.2.

Recently, machine learning models based on nonparametric statistics have become a hot topic of research; they can simulate the relationship between inputs and outputs by fitting a flexible model to existing data. The trained model can then be used to predict new nonlinear parameters as the basis for subsequent phase compensation. Currently, researchers often use trained neural networks [26,28] to obtain a "black box" between two or more variables, but neural networks perform poorly when labeled samples are insufficient, and they are prone to overfitting or becoming trapped in local optima during training.

Support vector machines are built on small-sample statistical theory and structural risk minimization, and are another tool for modeling linear and nonlinear input–output relationships, compensating for some of the deficiencies of neural networks. In the 1960s, Vapnik began his research on statistical learning theory; he summarized this work and systematically proposed support vector machines in 1995 [29]. Initially, support vector machines were used to solve binary classification problems, but as research progressed they were extended to regression, giving SVMs the ability to perform data prediction and function fitting.

The goal of the standard support vector regression machine (SVR) [30] is to find a function $f(x )= {\Phi ^T}(x )w + b$ that minimizes the structural risk for a given set of independent samples with N-dimensional input vectors ${x_i} \in {R^N}$ and observable outputs ${y_i} \in R$. The classical support vector regression machine can only solve one-dimensional regression estimation problems; when multiple outputs are involved, the usual practice is to train several single-output regression machines separately. Such a strategy ignores the nonlinear relationship between the output variables (the harmonic coefficients ${c_i}$), which can make the prediction results deviate seriously from the true values. Therefore, we extend the single-output support vector regression machine to a multi-output support vector regression machine by introducing a multi-dimensional ε-insensitive cost function.

The value of the PDF curve was used as the input to the multi-output support vector machine, denoted as ${x_j},j = 1,2,\ldots ,n$; its corresponding harmonic coefficients were used as the real output vector, denoted as ${y_j},j = 1,2,\ldots ,n$. Then, the training sample set of this multi-output support vector regression machine can be expressed as

$$\{{({{x_j},{y_j}} )\in {R^M} \times {R^l},j = 1,2,\ldots ,n} \} $$
where M is the dimension of the input variable ${x_j}$, i.e., the number of PDF sampling points; l is the dimension of the output variable, i.e., the number of nonlinear coefficients to be predicted; and n is the total number of training samples. Since the input samples are linearly inseparable in the original sample space, it is necessary to map them into a high-dimensional, approximately linearly separable feature space,
$$\Phi :{x_j} \to \Phi ({{x_j}} ) $$
where $\Phi ({\cdot} )$ represents the nonlinear transformation into a high-dimensional Hilbert space H, whose explicit form is usually intractable. By Mercer's theorem, when K is a kernel satisfying Mercer's condition, the explicit nonlinear mapping can be avoided by expressing the transformation as an inner product of the form $K({{x_j},{x_k}} )= \langle \Phi ({{x_j}} ),\Phi ({{x_k}} )\rangle$. For a multi-output support vector regression machine, it is necessary to find a function
$$f({{x_j}} )= {\Phi ^T}({{x_j}} )W + B $$
to minimize the structural risk ${R_{reg}}(f )$, where $W = [{w^1},{w^2},\ldots ,{w^l}]$, $B = [{b^1},{b^2},\ldots ,{b^l}]$, ${w^k}$ is the weight vector of the $k$-th output, and ${b^k}$ is the bias of the $k$-th output, $k = 1,2,\ldots ,l$.

Therefore, based on the support vector principle, the multi-output M-SVR can be formulated as a constrained optimization problem, i.e., the objective function is

$$\begin{aligned} &L({W,B} )= \frac{1}{2}{\sum\limits_{k = 1}^l {||{{w^k}} ||} ^2} + C\sum\limits_{j = 1}^n {L({\triangle {u_j}} )} \\ &s.t.||{{y_j} - {\Phi ^T}({{x_j}} )W - B} ||\le \varepsilon + {\xi _j}\\ &{\xi _j} \ge 0\textrm{ j} = 1,2,\ldots n \end{aligned} $$
where C is the regularization parameter that balances model complexity and error frequency, ${\xi _j}$ is the slack variable, and
$$\begin{aligned} \Delta {u_j} &= ||{{e_j}} ||= \sqrt {({{e_j}^T{e_j}} )} ,\\ {e_j}^T &= y_j^T - \varphi ({{x_j}} )W - {B^T} \end{aligned}$$
where $L({\cdot} )$ is the ε-insensitive loss function. In single-output support vector machines, a one-dimensional ε-insensitive loss is normally used. For the multi-output support vector regression machine, we adapt this loss by extending it to multiple dimensions through an L2-norm formulation, so that a single ε constrains all output dimensions jointly [31],
$$L({\Delta u} )= \left\{ {\begin{array}{cc} 0&{\textrm{ }\Delta u < \varepsilon \textrm{ }}\\ {{({\Delta u - \varepsilon } )}^2}&{\Delta u \ge \varepsilon \textrm{ }} \end{array}} \right. $$

It is worth noting that, when ε = 0 in Eq. (14), the optimization problem is equivalent to solving an independent ridge-regression problem for each output; when $\varepsilon \ne 0$, the loss considers all output dimensions simultaneously, the relationships between the output variables are included in the regression optimization, and a single shared set of support vectors is obtained, yielding a more reliable prediction output.
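For reference, the multi-dimensional ε-insensitive loss of Eq. (14), summed over all samples, can be written in a few lines (a sketch; the variable names are ours):

```python
import numpy as np

def multi_eps_insensitive_loss(E, eps):
    """Quadratic eps-insensitive loss of Eq. (14), summed over samples.
    E is the (n, l) matrix of residual vectors e_j; Delta u_j = ||e_j||_2.
    With eps = 0 this reduces to a plain sum of squared residuals
    (the ridge-regression-like case noted above)."""
    u = np.linalg.norm(E, axis=1)
    return float(np.sum(np.where(u < eps, 0.0, (u - eps) ** 2)))
```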

To solve Eq. (13) for the optimal regression variables W and B that minimize the objective, we adopt an iterative reweighted least-squares (IRWLS) method based on a Newton-type scheme [32], which reaches the optimal solution in fewer iterations and thus reduces the computational effort.

To create the conditions for iteration, we introduced the second-order Taylor expansion of the cost function. Thus, Eq. (13) can be expressed as

$$L^{\prime\prime}({W,B} )= \frac{1}{2}{\sum\limits_{k = 1}^l {||{{w^k}} ||} ^2} + \frac{1}{2}\sum\limits_{j = 1}^n {{a_\textrm{j}}\triangle u_j^2} + CV,{a_\textrm{j}} = \left\{ {\begin{array}{cc} 0&{\triangle u_j^t < \varepsilon }\\ \frac{{2C({\triangle u_j^t - \varepsilon } )}}{{\triangle u_j^t}}\textrm{ }&{\triangle u_j^t \ge \varepsilon \textrm{ }} \end{array}\textrm{ }} \right. $$
where the superscript t denotes the iteration index of the algorithm and CV denotes a constant term that does not depend on W and B.

Equation (15) is equivalent to a weighted least-squares problem, which the IRWLS procedure optimizes by performing a line search from the previous solution along its descent direction to obtain the next solution. The goal of the iteration is thus to find the optimal W and B. The specific steps are as follows (a simplified code sketch is given after the steps):

Step 1: Initialization: set $t = 0$, ${W^t} = 0$, ${B^t} = 0$; calculate $\triangle u_j^t$ and ${a_j}$.

Step 2: Minimize Eq. (15) and denote the solution by ${W^s}$ and ${B^s}$. The descent direction of Eq. (13) is then ${P^t} = \left[ {\begin{array}{c} {({{W^s} - {W^t}} )}\\ {({{B^s} - {B^t}} )} \end{array}} \right]$.

Step 3: Compute the next solution $\left[ {\begin{array}{c} {{W^{t + 1}}}\\ {{{({{B^{t + 1}}} )}^T}} \end{array}} \right] = \left[ {\begin{array}{c} {{W^t}}\\ {{{({{B^t}} )}^T}} \end{array}} \right] + {\eta ^t}{P^t}$, where the step size ${\eta ^t}$ is obtained with a backtracking line search.

Step 4: Calculate $\triangle u_j^{t + 1}$ and ${a_j}$, set $t = t + 1$, and return to Step 2 until convergence; then substitute the resulting W and B into Eq. (12) to obtain the predicted values. Since $\triangle u_j^t$ is computed from every dimension of the output ${y_j}$, each regression variable contains information from all outputs, which improves the prediction accuracy.
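The sketch below is a heavily simplified version of this procedure: it recomputes the weights a_j of Eq. (15) at each pass and directly re-solves the resulting weighted kernel system, omitting the Newton line search of Steps 2 and 3 and replacing the bias B by output centering. It is meant only to illustrate the idea, not to reproduce the exact IRWLS of Ref. [32].

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel matrix: K[i, j] = exp(-||a_i - b_j||^2 / (2 * sigma**2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class SimpleMSVR:
    """Minimal multi-output SVR trained by a simplified IRWLS-style loop."""

    def __init__(self, C=10.0, eps=1e-3, sigma=1.0, n_iter=30):
        self.C, self.eps, self.sigma, self.n_iter = C, eps, sigma, n_iter

    def fit(self, X, Y):
        K = rbf_kernel(X, X, self.sigma)
        self.X_, self.y_mean_ = X, Y.mean(axis=0)
        Yc = Y - self.y_mean_                   # center outputs instead of fitting B
        Beta = np.zeros_like(Yc, dtype=float)   # dual coefficients, one column per output
        for _ in range(self.n_iter):
            E = Yc - K @ Beta                   # residual vectors e_j
            u = np.linalg.norm(E, axis=1)       # Delta u_j = ||e_j||
            a = np.where(u < self.eps, 0.0,
                         2.0 * self.C * (u - self.eps) / np.maximum(u, 1e-12))
            sv = a > 0                          # samples outside the eps-tube
            if not sv.any():
                break
            Beta = np.zeros_like(Yc, dtype=float)
            A = K[np.ix_(sv, sv)] + np.diag(1.0 / a[sv])
            Beta[sv] = np.linalg.solve(A, Yc[sv])   # weighted kernel ridge solve on the SVs
        self.Beta_ = Beta
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X_, self.sigma) @ self.Beta_ + self.y_mean_
```

In this simplified form, every sample whose residual norm exceeds ε acts as a support vector shared by all outputs, and the coupling between the output dimensions enters through the common weights a_j.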

2.3.2 Gamma phase error compensation

According to the theoretical analysis in the previous section, we replace the harmonic coefficient prediction with an optimization problem under constraints and use the IRWLS method to obtain the optimal result. After obtaining the coefficients ${\hat{c}_i} = \{ {\hat{c}_1},{\hat{c}_2},{\hat{c}_3},\ldots ,{\hat{c}_l}\}, i = 1,2,\ldots ,l$ of each harmonic order of the distorted phase, the relationship between the phase before and after optimization must be established so that the distorted phase can be optimally compensated with the following iterative algorithm.

$${\phi _i}^{(t^{\prime} + 1)}(x,y) = {\phi _{i - 1}}(x,y) + {\hat{c}_i}\sin [{({3i} ){\phi_i}^{(t^{\prime})}(x,y)} ] $$
where $t^{\prime}$ denotes the $t^{\prime}$-th iteration of the phase update. For $i = 1$, both ${\phi _0}(x,y)$ and the initial value of ${\phi _1}(x,y)$ are set to the distorted wrapped phase $\phi (x,y)$ to be corrected.
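A direct transcription of the fixed-point update of Eq. (16), applying each predicted coefficient in turn; the iteration count and the tolerance are illustrative choices.

```python
import numpy as np

def compensate_phase(phi_distorted, c_hat, n_iter=20, tol=1e-6):
    """Fixed-point compensation of Eq. (16): each coefficient c_i is applied
    in turn, iterating phi_i <- phi_{i-1} + c_i * sin(3*i*phi_i) to convergence."""
    phi_prev = phi_distorted.copy()          # phi_0: the wrapped phase to be corrected
    for i, c in enumerate(c_hat, start=1):
        phi_i = phi_prev.copy()              # initial value of the inner iteration
        for _ in range(n_iter):
            phi_new = phi_prev + c * np.sin(3 * i * phi_i)
            if np.max(np.abs(phi_new - phi_i)) < tol:
                phi_i = phi_new
                break
            phi_i = phi_new
        phi_prev = phi_i                     # becomes phi_{i-1} for the next coefficient
    return phi_prev
```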

Integrating the above model derivation, we can build the whole framework of M-SVR nonlinear compensation, as shown in Fig. 5, and the steps of the process can be described as follows:

Fig. 5. Framework of M-SVR nonlinear compensation algorithm.

Step 1: Training samples are created to train the M-SVR model. We generate harmonic-coefficient training samples of the required dimension according to the accuracy requirements and calculate the corresponding wrapped-phase PDF vectors; naturally, the higher the dimension of the target coefficients, the greater the training cost of the vector machine. Using the PDF of M sampling points as features and the corresponding harmonic coefficients as labels, we obtain the training samples $\{{({{x_j},{y_j}} )\in {R^M} \times {R^l},j = 1,2,\ldots ,n} \}$. An appropriate proportion of random noise can also be added to the training samples to avoid overfitting.

Step 2: Select the same number of sampling points M and calculate the PDF of the experimental wrapped phase.

Step 3: Phase error harmonic coefficient prediction. The M-dimensional feature vector sampled from the experimental phase PDF is input into the multi-output support vector machine to obtain the predicted harmonic coefficients ${\hat{c}_i}$.

Step 4: Phase compensation. Using the fixed-point iteration of Eq. (16), the relationship between the phase before and after compensation is constructed; the iteration is terminated either after a set number of iterations or when the phase change between successive iterations falls below a threshold, yielding the optimized phase.

This completes the compensation of the phase error caused by the projector's nonlinearity using the M-SVR algorithm framework.
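To show how the four steps fit together, the sketch below wires the earlier snippets into a single pipeline; the coefficient grid, the synthetic ideal-phase field, and the hyperparameters are illustrative assumptions, and `phase_pdf`, `SimpleMSVR`, `wrapped_phase`, and `compensate_phase` are the hypothetical helpers defined in the sketches above.

```python
import numpy as np
from itertools import product

# Step 1: build the training set from simulated distorted phases (Eq. 8)
phi_ideal = np.linspace(-np.pi, np.pi, 100_000, endpoint=False)  # uniform ideal wrapped phase
c_grid = product(np.linspace(0.01, 0.5, 8),    # c1 range from Section 3 (coarser grid here)
                 np.linspace(-0.1, 0.1, 8),    # c2
                 np.linspace(0.0, 0.01, 8))    # c3
X_train, Y_train = [], []
for c in c_grid:
    dphi = -sum(ci * np.sin(3 * (k + 1) * phi_ideal) for k, ci in enumerate(c))
    phi_distorted = np.angle(np.exp(1j * (phi_ideal + dphi)))     # re-wrap into (-pi, pi]
    X_train.append(phase_pdf(phi_distorted, M=51))
    Y_train.append(c)
X_train, Y_train = np.array(X_train), np.array(Y_train)

# Steps 2-3: train the regressor, then predict coefficients from an experimental phase
model = SimpleMSVR(C=100.0, eps=1e-4, sigma=0.05).fit(X_train, Y_train)
# phi_exp = wrapped_phase(captured_images)                        # experimental wrapped phase
# c_hat = model.predict(phase_pdf(phi_exp, M=51)[None, :])[0]

# Step 4: compensate the experimental phase with the predicted coefficients
# phi_corrected = compensate_phase(phi_exp, c_hat)
```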

3. Simulation

In the previous section, we theoretically showed that M-SVR can be used for the prediction of harmonic coefficients and that the obtained harmonic coefficients can be phase-compensated by an iterative method. Therefore, in this section, we demonstrate through simulation that the use of M-SVR can effectively correct the nonlinearity of the 3D measurement system.

Before we evaluated the performance of the M-SVR, we allowed the algorithm to learn the correspondence between the input and output vectors. First, we established a training sample dataset $D = \{{({{x_j},{y_j}} )\in {R^M} \times {R^l},j = 1,2,\ldots ,n} \}$, where the dimension of the input sample vector ${x_j}$ was set to M = 51 to avoid an excessive data volume. Considering the final compensation effect, the dimension l of the sample label vector was set to 3. Based on the typical magnitudes of the harmonic coefficients and our prior experience, the ranges were set to ${c_1} \in [0.01,0.5]$; ${c_2} \in [-0.1,0.1]$, sampled at 20 points; and ${c_3} \in [0,0.01]$, sampled at 40 points, so the total number of training samples is n = 16000. In summary, the training sample set was $D = \{{({{x_j},{y_j}} )\in {R^{51}} \times {R^3},j = 1,2,\ldots ,16000} \}$; this grid-based training sample design not only avoids randomness in sample selection but also ensures that the resulting trained regressor is applicable to different experimental settings.

The generalization ability of support vector regression algorithms is highly correlated with the kernel function and the choice of parameters. In the absence of a priori knowledge, the radial basis function (RBF) kernel usually offers better prediction results [33,34], and Table 1 shows that, for the experimental samples in this paper, the radial basis kernel had the smallest root-mean-square error and its test values were closest to the true values. After selecting the kernel function, we needed to determine the optimal values of the three hyperparameters: the kernel width σ, the insensitivity coefficient ε, and the regularization parameter C. The kernel width σ mainly reflects the distribution of the training samples and describes the width of the input sample space (a wider sample distribution calls for a larger σ), while the insensitivity coefficient ε determines the width of the insensitive region. In selecting specific parameters, we used K-fold cross-validation to randomly divide the training samples into K different subsets, using K − 1 of them as the training set and the remaining one as the test set. After K rounds of repeated training, we averaged the root-mean-square error obtained from the K validations to obtain $RMSE = \frac{1}{K}\sum\limits_{k = 1}^K {\sqrt {\sum\limits_{i = 1}^{{{n^{\prime}}_k}} {\sum\limits_{j = 1}^l {{{({{y_{ij}} - {{\hat{y}}_{ij}}} )}^2}} } } }$; the parameter set corresponding to the minimum RMSE has the best generalization ability. Typically, we set the number of folds K to 5. When searching the hyperparameter space, we used a Bayesian optimization algorithm so that the best-performing hyperparameter combination could be found with as little computation as possible. In addition, to reduce the impact of correlations in the training data on the experimental results, we also recommend normalizing the samples in advance.

Table 1. Estimated performance of different kernel functions.
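The K-fold RMSE score described above can be computed in a few lines; the sketch below assumes the `SimpleMSVR` class and the `X_train`/`Y_train` arrays from the earlier sketches and, for brevity, scores a single candidate setting rather than running the Bayesian optimization used in the paper.

```python
import numpy as np

def kfold_rmse(X, Y, make_model, K=5, seed=0):
    """Average K-fold RMSE of a multi-output regressor, i.e. the score that
    the hyperparameter search minimizes; make_model() returns an unfitted model."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, K)
    scores = []
    for k in range(K):
        test = folds[k]
        train = np.hstack([folds[j] for j in range(K) if j != k])
        model = make_model().fit(X[train], Y[train])
        err = Y[test] - model.predict(X[test])
        scores.append(np.sqrt(np.sum(err ** 2)))   # sqrt of summed squared errors per fold
    return float(np.mean(scores))

# score one candidate hyperparameter setting (X_train, Y_train from the pipeline sketch)
score = kfold_rmse(X_train, Y_train, lambda: SimpleMSVR(C=100.0, eps=1e-4, sigma=0.05))
```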

To confirm the validity and generality of the method, we used the computer to generate the object shown in Fig. 6(a); projecting the encoded fringes onto it produced the modulated sinusoidal fringe image shown in Fig. 6(b). When the system nonlinearity was set to γ = 2.1, it can be observed from Fig. 6(c) that the nonlinearity causes periodic undulations in the object phase. After the corresponding PDF values were calculated, the parameters were estimated using the trained multi-output support vector machine, and the resulting phase error harmonic parameters were ${c_1} = 0.269195$, ${c_2} = -0.038527$, and ${c_3} = 0.010511$. Figures 6(d) and (e) show phase comparisons before and after compensation for two different rows, and Fig. 6(f) shows that the maximum phase error of the object decreases from 0.27229 before compensation to 0.005196 after compensation. The method therefore shows a good compensation effect for different objects in simulation. In addition, since the number of harmonic coefficients used by the method is a compromise between performance and time cost, the achievable accuracy is limited to a certain extent.

Fig. 6. Phase compensation of the object using the M-SVR nonlinear calibration frame. (a) Object to be measured; (b) distorted sinusoidal fringe images after modulation. (c) The unfolded phase of the object distortion. (d) Comparison of the before and after phase correction of the 250th row. (e) Comparison of the before and after phase correction of the 600th row. (f) Phase error of the object before and after correction.

To demonstrate the accuracy advantage of the algorithm, i.e., the ability of the M-SVR to accurately predict coefficients in multiple dimensions, we created training samples with label dimensions varying from 1 to 3; the highest harmonic order covered in this way is well beyond the level at which harmonics can be ignored, so the simulation under this condition is informative. We considered the prediction data and the compensation effect for the different harmonic dimensions in environments with gamma values of 1.2, 1.6, 2.1, and 2.8.

As shown in Table 2, predictions of different dimensions compensate the original error to different degrees; in general, as the dimension of the harmonic coefficients increases, the maximum residual phase error gradually decreases and a better compensation effect is obtained. Additionally, comparing the groups with different gamma values shows that the iterative algorithm [16] converges better for smaller gamma and keeps the residual phase error within a small range.

Table 2. Compensation results for different phase error parameter dimensions.

In summary, the M-SVR models trained by adding different gamma values to the simulation environment were able to predict the harmonic coefficients and control the phase error within a small range after compensation. In the range of model accuracy, the residual phase error decreases as the dimension of the predicted coefficients increases. Therefore, the method has good prediction and compensation effects for different nonlinear experimental environments.

4. Experiment

In this section, we build an FPP 3D measurement system to experimentally verify the feasibility of the method. The system mainly consists of a commercial Light and Shadow W7 series projector (resolution of 1920 × 1080) for the structured-light projection, a HIKROBOT MV-CE120-10UM CMOS camera (resolution of 4000 × 3036, sensor size of 1/1.7", minimum lens working distance of 100 mm) for image acquisition, and an HP LAPTOP-T6S8USRQ computer to process the acquired images; the object to be measured is then placed in the scene. The experimental device is shown in Fig. 7.

Fig. 7. Experimental device.

First, the projection system projected the computer-generated sinusoidal grating fringes; the grating period was 40 pixels, the number of phase shift steps was N = 3, the horizontal distance from the projection device to the acquisition device was 130 mm, and the distance from the acquisition system to the reference plane was 300 mm. The following experiments were conducted in this measurement system. From Section 3, it can be seen that setting the harmonic coefficient dimension to 3 is the best choice. We measured a smooth ellipsoidal model; the fringes projected onto the model are shown in Fig. 8(a), and the unwrapped phase obtained under normal experimental conditions is shown in Fig. 8(b), which exhibits periodic "water ripples" on the phase surface. When the PDF of the experimental phase was analyzed with the M-SVR obtained from the previous training, the predicted coefficients were ${c_1} = 0.188001$, ${c_2} = -0.0139432$, and ${c_3} = 0.00378175$; the compensated object phase is shown in Fig. 8(c), which shows that the distorted phase is greatly improved. To quantify the compensation effect, we used the phase obtained from an eighteen-step phase shift as the ideal phase and computed the phase error before and after compensation. The average phase error of the object before compensation was 0.1853 rad, and the proposed method reduced it to 0.01432 rad.

Fig. 8. Experimental results of the ellipsoidal model. (a) Stripe projection onto the ellipsoidal model; (b) phase before correction; (c) the corrected phase. (d) Comparison of the phase error before and after correction. (e) Phase comparison of the objects before and after correction.

To demonstrate the robustness of the method, we performed additional nonlinear compensation experiments on more complex models. Figures 9(a) and (e) show the folded phases of the plaster head and the lion model to be tested, respectively. It should be noted that the experimental settings remained the same for both groups. Following the measurement method above, the directly measured results are shown in Fig. 9(b) and (f). The predicted harmonic coefficients of the plaster statue are ${c_1} = 0.163025$, ${c_2} = -0.0128851$, and ${c_3} = 0.00293318$, and those of the lion model are ${c_1} = 0.181029$, ${c_2} = -0.0103604$, and ${c_3} = 0.00454597$. The phases were subsequently compensated using the iterative method, yielding the results in Fig. 9(d) and (h). From the phase details of the objects it can be observed that the phases become smoother and are significantly improved.

Fig. 9. Experimental results for the plaster statue and the lion model. (a) and (e) are the folded phases of the plaster statue and the lion; (b) and (f) are the phase diagrams of the uncompensated objects; (c) and (g) are the phase diagrams of the object after compensation by Pan's method; and (d) and (h) are the phase diagrams after compensation by the method proposed in this paper.

In addition, we reproduced Pan's [16] correction scheme; the measured results are shown in Fig. 9(c) and (g). The results show that, under the experimental conditions set in this paper, the correction of our proposed scheme is slightly better than that of Pan's method. To observe the change in phase error more intuitively, we selected the pixels in the 2000th row of the phase error maps, as shown in Fig. 10, where (a) and (b) show the remaining phase error of the plaster statue and the lion model, respectively. According to the statistics, the average phase error of the plaster head decreased from 0.1641 rad to 0.01498 rad after compensation by our method, about 11 times lower than before compensation; the phase error of the lion model decreased from 0.1901 rad to 0.01382 rad, more than 13 times lower. In both cases the phase error of the object is controlled within an acceptable range.

Fig. 10. Comparison of the residual phase errors of different methods. Comparison of the phase error before and after correction of the (a) plaster statue and (b) lion model.

5. Conclusion and discussion

In 3D measurement experiments with digital structured-light projection, the nonlinearity of the projector causes errors that make the final reconstructed point cloud deviate from the true 3D coordinates of the object. In this paper, we introduce a flexible and reliable nonlinearity compensation scheme. The first stage is pre-processing, which is mainly responsible for training the model. We use PDF curves to describe the phase characteristics and phase-error harmonic coefficients to quantify the phase deviation; since the changes of the two are strongly correlated, we use them as the inputs and outputs of the model. Prior to the experiments, the training samples were obtained entirely from the simulation model, thus reducing time and labor costs. Although the preprocessing time is longer than that of other methods [10,13,35,36], this is sensible in view of the model's potential to be applied to other experiments. The second stage is compensation. We analyze the experimental data with an optimized M-SVR model that is able to analyze PDF curves and output multi-dimensional harmonic coefficients while taking the nonlinear relationship between the outputs into account. The obtained nonlinear coefficients are then used in the fixed-point iteration to complete the phase compensation.

At the same time, the proposed method has some limitations. In theory, the method can obtain nearly perfect nonlinear compensation by extending the dimensionality of the higher-order phase error coefficients used for training and prediction, but this expectation is limited, on the one hand, by the prediction accuracy of the vector machine, which depends on the training samples and on the hyperparameters obtained from the optimization search. On the other hand, the discrepancy between the nonlinearity model of the actual experimental system and the ideal nonlinearity model assumed in the theoretical derivation is also a major factor affecting the compensation effect. Therefore, our future research will be directed toward improving the prediction accuracy of the M-SVR and optimizing the nonlinear parameter model.

Funding

National Key Research and Development Program of China (No. 2022YFB3606300); National Natural Science Foundation of China (No. U2230129).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. Quan, R. Zhu, K. Qian, Q. Song, R. Zhu, A. Asundi, and J. Li, "Three-dimensional shape measurement with wavelength-modulated laser based on fiber optic interferometer projection," presented at the International Conference on Optics in Precision Engineering and Nanotechnology (icOPEN 2013), 2013.

2. M. Halioua and H. C. Liu, “Optical three-dimensional sensing by phase measuring profilometry,” Opt. Laser Eng. 11(3), 185–215 (1989). [CrossRef]  

3. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Laser Eng. 48(2), 133–140 (2010). [CrossRef]  

4. H. Zhao, C. Zhang, C. Zhou, K. Jiang, and M. Fang, “Circular fringe projection profilometry,” Opt. Lett. 41(21), 4951–4954 (2016). [CrossRef]  

5. P. Jia, J. Kofman, and C. English, “Error compensation in two-step triangular-pattern phase-shifting profilometry,” Opt. Laser Eng. 46(4), 311–320 (2008). [CrossRef]  

6. Z. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Laser Eng. 48(2), 218–225 (2010). [CrossRef]  

7. T. Hoang, B. Pan, D. Nguyen, and Z. Y. Wang, “Generic gamma correction for accuracy enhancement in fringe-projection profilometry,” Opt. Lett. 35(12), 1992–1994 (2010). [CrossRef]  

8. G. A. Ayubi, J. M. Di Martino, J. R. Alonso, A. Fernandez, C. D. Perciante, and J. A. Ferrari, “Three-dimensional profiling with binary fringes using phase-shifting interferometry algorithms,” Appl. Opt. 50(2), 147–154 (2011). [CrossRef]  

9. G. A. Ayubi, J. A. Ayubi, J. M. Di Martino, and J. A. Ferrari, “Pulse-width modulation in defocused three-dimensional fringe projection,” Opt. Lett. 35(21), 3682–3684 (2010). [CrossRef]  

10. C. Zuo, Q. Chen, S. J. Feng, F. Feng, G. H. Gu, and X. B. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51(19), 4477–4490 (2012). [CrossRef]  

11. K. Liu, Y. C. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Gamma model and its analysis for phase measuring profilometry,” J. Opt. Soc. Am. A 27(3), 553–562 (2010). [CrossRef]  

12. D. Zheng and F. Da, “Gamma correction for two step phase shifting fringe projection profilometry,” Optik 124(13), 1392–1397 (2013). [CrossRef]  

13. X. Zhang, L. M. Zhu, Y. F. Li, and D. W. Tu, “Generic nonsinusoidal fringe model and gamma calibration in phase measuring profilometry,” J. Opt. Soc. Am. A 29(6), 1047–1058 (2012). [CrossRef]  

14. K. G. Harding, S. Zhang, and P. S. Huang, "Phase error compensation for a 3-D shape measurement system based on the phase-shifting method," presented at Two- and Three-Dimensional Methods for Inspection and Metrology III, 2005.

15. S. Zhang and S. T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. 46(1), 36–43 (2007). [CrossRef]  

16. B. Pan, Q. Kemao, L. Huang, and A. Asundil, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. 34(4), 416–418 (2009). [CrossRef]  

17. Y. Liu, X. Yu, J. Xue, Q. Zhang, and X. Su, “A flexible phase error compensation method based on probability distribution functions in phase measuring profilometry,” Opt. Laser Technol. 129, 106267 (2020). [CrossRef]  

18. X. Yu, Y. Liu, N. Liu, M. Fan, and X. Su, “Flexible gamma calculation algorithm based on probability distribution function in digital fringe projection system,” Opt. Express 27(22), 32047–32057 (2019). [CrossRef]  

19. H. W. Guo, H. T. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. 43(14), 2906–2914 (2004). [CrossRef]  

20. K. Yan, Y. Yu, C. Huang, L. Sui, K. Qian, and A. Asundi, “Fringe pattern denoising based on deep learning,” Opt. Commun. 437, 148–152 (2019). [CrossRef]  

21. B. Lin, S. Fu, C. Zhang, F. Wang, and Y. Li, “Optical fringe patterns filtering based on multi-stage convolution neural network,” Opt. Lasers Eng. 126, 105853 (2020). [CrossRef]  

22. W. Xiaoxi, Y. Yingjie, and H. Jianbin, “Aliasing fringe pattern denoising based on deep learning,” Proc. SPIE 12069, 120690S (2021). [CrossRef]  

23. S. Feng, Q. Chen, G. Gu, et al., “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 025001 (2019). [CrossRef]  

24. S. J. Wu and Y. Zhang, “Gamma correction by using deep learning,” Proc. SPIE 11571, 115710V (2020). [CrossRef]  

25. V. Suresh, Y. Zheng, and B. Li, “PMENet: phase map enhancement for Fourier transform profilometry using deep learning,” Meas. Sci. Technol. 32(10), 105001 (2021). [CrossRef]  

26. S. Feng, C. Zuo, L. Zhang, W. Yin, and Q. Chen, “Generalized framework for non-sinusoidal fringe analysis using deep learning,” Photonics Res. 9(6), 1084–1098 (2021). [CrossRef]  

27. X. Yu, S. Lai, Y. Liu, W. Chen, J. Xue, and Q. Zhang, “Generic nonlinear error compensation algorithm for phase measuring profilometry,” Chin. Opt. Lett. 19(10), 101201 (2021). [CrossRef]  

28. Y. Yang, Q. Hou, Y. Li, Z. Cai, X. Liu, J. Xi, and X. Peng, “Phase error compensation based on Tree-Net using deep learning,” Opt. Laser Eng. 143, 106628 (2021). [CrossRef]  

29. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning 20(3), 273–297 (1995). [CrossRef]  

30. A. J. Smola and B. Scholkopf, “A tutorial on support vector regression,” Statist. Comput. 14(3), 199–222 (2004). [CrossRef]  

31. M. Sanchez-Fernadez, M. de-Prado-Cumplido, J. Arenas-Garcia, and F. Perez-Cruz, “SVM multiregression for nonlinear channel estimation in multiple-input multiple-output systems,” IEEE Trans. Signal Process. 52(8), 2298–2307 (2004). [CrossRef]  

32. D. Tuia, J. Verrelst, L. Alonso, F. Perez-Cruz, and G. Camps-Valls, “Multioutput Support Vector Regression for Remote Sensing Biophysical Parameter Estimation,” IEEE Geosci. Remote Sensing Lett. 8(4), 804–808 (2011). [CrossRef]  

33. A. L. Haywood, J. Redshaw, M. W. D. Hanson-Heine, A. Taylor, A. Brown, A. M. Mason, T. Gartner, and J. D. Hirst, “Kernel Methods for Predicting Yields of Chemical Reactions,” J. Chem. Inf. Model. 62(9), 2077–2092 (2022). [CrossRef]  

34. C. J. C. Burges, “A tutorial on Support Vector Machines for pattern recognition,” Data Mining Knowl. Discov. 2(2), 121–167 (1998). [CrossRef]  

35. J. H. Wang and Y. X. Yang, “Triple N-Step Phase Shift Algorithm for Phase Error Compensation in Fringe Projection Profilometry,” IEEE Trans. Instrum. Meas. 70, 1–9 (2021). [CrossRef]  

36. C. Jiang, S. Xing, and H. W. Guo, “Fringe harmonics elimination in multi-frequency phase-shifting fringe projection profilometry,” Opt. Express 28(3), 2838–2856 (2020). [CrossRef]  



