
Phase error analysis and compensation for motion in high-speed phase measurement profilometry


Abstract

High-speed three-dimensional (3D) measurement is increasingly important in many fields. Phase measurement profilometry (PMP) based on the binary defocusing technique has been applied to high-speed 3D measurement scenes owing to its high measurement resolution and precision and its ability to break the speed limitation of the projector. However, because PMP needs three phase-shifting (3-PS) patterns, motion error is inevitable when measuring dynamic objects. In this research, we construct a complete high-speed 3-PS PMP system and re-derive two motion error models that are clearer than those in Weise’s research [Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2007), pp. 1]. We then theoretically analyze the effect of the truncation error on the model accuracy, especially when the motion error is large. To this end, a polynomial-based motion error model, obtained by fitting a coefficient matrix to pre-simulation data, is proposed, and a corresponding error compensation method based on local-domain estimation with the Nelder-Mead algorithm is developed. Finally, both simulations and quantitative and qualitative experiments verify the accuracy and effectiveness of the proposed method and demonstrate that it improves on Weise’s research.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) plays an important role in many academic and applied fields, such as product inspection, reverse engineering, and computer animation, because of its advantages of being noncontact, nondestructive, full field, and of high precision [1–5]. Recently, with the development of high-speed imaging sensors and digital projection technology (e.g., the digital-light-processing module developed by Texas Instruments), it has become possible to reach a higher level of quality and speed [6,7]. For this reason, researchers have started to expand the application domain of FPP, e.g., biomechanics, on-line inspection, human-computer interaction, robot navigation, and solid mechanics [8].

In general, FPP has two main representative methods: Fourier-transform profilometry (FTP) and phase measurement profilometry (PMP). For FTP, only a single fringe pattern is needed to compute the phase map, so it is well suited to high-speed three-dimensional (3D) reconstruction [9]. Essentially, FTP acts as a band-pass filter that accurately retrieves the phase information of a continuous surface. Unfortunately, in practice it may produce a large reconstruction error, since the measured surface usually contains discontinuities, sharp edges, or high reflectivity. PMP, for its part, has the advantages of higher spatial resolution and measurement accuracy, as well as robustness towards ambient illumination and varying surface reflectivity [10–12]. Generally, the more steps used, the better the accuracy and stability PMP achieves [13]. Even when used for high speed, PMP requires a minimum of three fringe patterns; therefore, motion-induced phase distortion artifacts are unavoidable when reconstructing dynamic scenes. In other words, when the scanned object is moving, it violates the basic phase-shifting (PS) assumption that corresponding pixels in the three phase images depict the same surface point [14].

For the past few years, dynamic 3D reconstruction with PMP has become one of the hotspots in the 3D reconstruction field, and the strategies can generally be categorized into hardware methods, frame-reduction methods and compensation methods. (1) For the hardware methods, researchers focus on the binary defocusing technique, which allows the projector to cast binary fringe patterns, thanks to the development of projector technology. For instance, the digital-light-processing (DLP) module developed by Texas Instruments (TI) not only realizes precise synchronization for the time modulation of the digital-micromirror-device (DMD) technique, but also alleviates optical nonlinearity and greatly increases the refresh rate (kHz [15] or even 10 kHz [16,17]) in the 1-bit mode [18]. (2) For the frame-reduction methods, temporal phase unwrapping algorithms of PMP are utilized to recover the absolute phase [19]; that is, additional patterns (e.g., gray-coding patterns or multi-wavelength fringe patterns) are essential. Normally, this additional information contributes nothing to the reconstruction accuracy; on the contrary, it limits the acquisition speed. Therefore, researchers have studied ways to reduce the number of patterns required per reconstruction [20–22]. It is noted that the number of patterns required per reconstruction in these methods is still more than three.

As discussed above, the binary defocusing technique with high-speed hardware and the reduction of the number of patterns required per reconstruction assist PMP, to some extent, in reconstructing dynamic scenes. However, the challenge that motion leads to non-corresponding pixels in the three phase-shifting (3-PS) patterns still remains. To this end, (3) researchers have developed post-processing algorithms to improve the dynamic reconstruction capability. Lu et al. [23] developed an iterative least-squares algorithm to calculate the motion phase error of a rigid object, but non-rigid objects are not covered. Liu et al. [24] proposed a compensation method for motion phase error utilizing the projector pinhole model. Although the method applies to both rigid and non-rigid motion, the iterative process is too complicated to be practical. Zhang et al. [25,26] proposed applying different strategies to different regions (i.e., a single-frame method for motion regions and a multi-frame method for static regions). This method is goal-oriented, but it is easily affected by ambient light noise. Li et al. [27] developed a hybrid method that combines FTP with PMP, whereas the accuracy is limited since the phase is unwrapped by FTP. Weise et al. [14] derived two closed-form expressions for the motion error based on Taylor series expansion and exploited least-squares fitting to estimate the motion error. However, the truncation error of the introduced Taylor expansion affects the model accuracy, especially for larger motion error (this paper will discuss the influence of the truncation error). Cong et al. [28] proposed a Fourier-assisted PMP approach which corrects the phase error by differentiating the phase maps of two successive fringe images; since FTP is used to retrieve the phase, the accuracy is limited. Feng et al. [29] proposed a motion-compensated PMP, which is suitable for the N-step PS algorithm and compensates using the statistical nature of the fringes. The method is a novel compensation strategy, but more than three PS patterns must be projected for absolute phase unwrapping since bi-frequency tripolar pulse-width-modulation fringe projection [21] is adopted; hence, the real-time performance is somewhat limited.

Our paper mainly contains three contributions: (1) we re-derive two motion error models, which are clearer than Weise’s [14]; (2) we theoretically analyze and verify the effects of the truncation error on the model accuracy, especially for larger motion error; (3) we propose a polynomial-based motion error model obtained by fitting a coefficient matrix to pre-simulation data, and develop an error compensation method based on local-domain estimation with the Nelder-Mead algorithm. Both simulations and quantitative and qualitative experiments verify the accuracy and effectiveness of the proposed method and demonstrate that it improves on Weise’s research [14].

This paper is organized as follows: Section 2 explains a complete high-speed 3-PS PMP system. Section 3 analyzes the motion error and describes the compensation method. Section 4 presents the simulations and the experiments. Section 5 concludes this paper.

2. High-speed phase measurement profilometry

In this section, we construct a complete high-speed 3-PS PMP system, which mainly includes four components: (1) the 3-PS algorithm is used because it is fast and accurate; (2) a multilevel quality-guided phase unwrapping algorithm is utilized because it requires only the minimum number of recorded images to unwrap the phase; (3) a calibration method based on a nonlinear phase-to-height mapping is introduced, since the projector is defocused; (4) the digital binary defocusing technique is introduced in order to break the speed limitation of the projector.

2.1 Three-step phase-shifting algorithm

Figure 1 shows a schematic system for the 3D measurement. The projector projects fringe patterns into the measurement volume, the camera captures the images distorted by the surface profile, and these images are then processed to obtain the 3D information. Among existing approaches, PS algorithms have gained great popularity owing to their speed and accuracy [30]. These algorithms aim at computing the wrapped phase, since the 3D information is encoded in the phase. Here, for high-speed applications, we adopt the 3-PS algorithm, whose fringe patterns can be mathematically expressed as

$${I_1}(x,y) = a(x,y) + b(x,y)\cos [{\varphi (x,y) - {{2\pi } / 3}} ],$$
$${I_2}(x,y) = a(x,y) + b(x,y)\cos [{\varphi (x,y)} ],$$
$${I_3}(x,y) = a(x,y) + b(x,y)\cos [{\varphi (x,y) + {{2\pi } / 3}} ],$$
where a(x, y) is the average intensity and b(x, y) denotes the intensity modulation [31]. The data modulation r(x, y) = b(x, y)/a(x, y), which is usually used as a map for background masking, represents the quality map and can be expressed as
$$r(x,y) = {{\sqrt {3{{({{I_1} - {I_3}} )}^2} + {{({2{I_2} - {I_1} - {I_3}} )}^2}} } / {({{I_1} + {I_2} + {I_3}} )}}.$$
φ(x, y) indicates the phase, which can be calculated by
$$\varphi (x,y) = {\tan ^{ - 1}}\left[ {{{\sqrt 3 ({{I_1} - {I_3}} )} / {({2{I_2} - {I_1} - {I_3}} )}}} \right].$$

Fig. 1. Schematic experimental system.

In Eq. (5), note that “tan−1” denotes the four-quadrant inverse tangent, which returns values in the interval [−π, π]; i.e., the wrapped phase φ(x, y) ranges over [−π, π] and contains 2π discontinuities. Thus, an unwrapping algorithm is needed to obtain a continuous phase (i.e., the absolute or unwrapped phase).
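To make Eqs. (1)–(5) concrete, the following minimal Python/NumPy sketch computes the wrapped phase and the data modulation from three phase-shifted images; the function name and the synthetic fringe parameters are our own illustrative choices, not part of the original system.

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Wrapped phase [Eq. (5)] and data modulation [Eq. (4)] from three
    2*pi/3 phase-shifted fringe images."""
    num = np.sqrt(3.0) * (I1 - I3)
    den = 2.0 * I2 - I1 - I3
    phi = np.arctan2(num, den)  # four-quadrant inverse tangent, wrapped phase
    r = np.sqrt(3.0 * (I1 - I3) ** 2 + den ** 2) / (I1 + I2 + I3)
    return phi, r

# Synthetic test: ideal fringes with a = 0.5, b = 0.4 (illustrative values)
x = np.linspace(0, 4 * np.pi, 512)
I1 = 0.5 + 0.4 * np.cos(x - 2 * np.pi / 3)
I2 = 0.5 + 0.4 * np.cos(x)
I3 = 0.5 + 0.4 * np.cos(x + 2 * np.pi / 3)
phi, r = three_step_phase(I1, I2, I3)  # phi wraps x into [-pi, pi]
```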

2.2 Multilevel quality-guided phase unwrapping algorithm

Phase unwrapping finds the integer number n(x, y) in the following:

$$\Phi (x,y) = \varphi (x,y) + 2\pi \cdot n(x,y),$$
where Ф(x, y) denotes the unwrapped phase. Many unwrapping methods have been proposed in the past. Here, since using the minimum number of recorded images makes high-speed PMP easy to realize and our proposed model (see Section 3) is based on the 3-PS algorithm, we introduce Zhang’s phase unwrapping approach [32]. The algorithm is surprisingly robust in practice, needing no supplementary information to unwrap the phase, and works well on surfaces such as human faces. However, it is pointed out that the algorithm unwraps only a single connected patch.

Firstly, in order to reduce the phase unwrapping region, the data modulation [see Eq. (4)] is utilized as a mask to remove the background. Secondly, a quality map, which defines the quality or goodness of each phase value, guides the phase unwrapping path. The map can be defined from a maximum phase gradient map, which is determined as

$$Q(i,j) = \max \{{{\Delta ^x},{\Delta ^y}} \}, \textrm{with} \;\;\;Q(i,j) \in [{0,1} ),$$
where Q(i, j) represents the quality map and
$$\begin{array}{l} {\Delta ^x}\textrm{ = max}\left\{ \begin{array}{l} |{{\textbf W}\{{{{[{\varphi (i,j) - \varphi (i,j - 1)} ]} / {2\pi }}} \}} |,\\ |{{\textbf W}\{{{{[{\varphi (i,j + 1) - \varphi (i,j)} ]} / {2\pi }}} \}} |\end{array} \right\},\\ {\Delta ^y}\textrm{ = max}\left\{ \begin{array}{l} |{{\textbf W}\{{{{[{\varphi (i,j) - \varphi (i - 1,j)} ]} / {2\pi }}} \}} |,\\ |{{\textbf W}\{{{{[{\varphi (i + 1,j) - \varphi (i,j)} ]} / {2\pi }}} \}} |\end{array} \right\}. \end{array}$$

In Eq. (8), W is an operator that estimates the true gradient by wrapping the differences of the wrapped phase; Δx and Δy are the maximum partial derivatives of the phase in the x and y directions, respectively. In Eq. (7), the larger Q(i, j) is, the worse the data quality. Thirdly, the unwrapping process starts from a high-quality point, whose four neighbors are examined, unwrapped and stored in a list. Then an iterative process of (1) removing the highest-quality pixel from the list, (2) unwrapping its four neighbors and (3) inserting them into the list continues until all the pixels have been unwrapped. The algorithm can naturally be viewed as a region-growing or flood-fill method. In this way, the high-quality regions are unwrapped first and the low-quality regions last.
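As a rough illustration of this region-growing process, the sketch below implements a simplified, single-level variant of the quality-guided flood fill; the multilevel organization and the background mask of Ref. [32] are omitted, and the function name and data layout are assumptions of ours.

```python
import heapq
import numpy as np

def quality_guided_unwrap(phi, quality, seed):
    """Flood-fill phase unwrapping guided by a quality map (smaller = better),
    starting from a high-quality seed pixel (row, col)."""
    H, W = phi.shape
    unwrapped = np.zeros_like(phi)
    done = np.zeros((H, W), dtype=bool)
    unwrapped[seed] = phi[seed]
    done[seed] = True
    heap = []  # min-heap keyed on quality, so the best pixels unwrap first

    def push_neighbors(r, c):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and not done[rr, cc]:
                heapq.heappush(heap, (quality[rr, cc], rr, cc, r, c))

    push_neighbors(*seed)
    while heap:
        _, r, c, pr, pc = heapq.heappop(heap)
        if done[r, c]:
            continue
        # remove the 2*pi jump relative to the already-unwrapped neighbor
        d = phi[r, c] - phi[pr, pc]
        unwrapped[r, c] = unwrapped[pr, pc] + d - 2 * np.pi * np.round(d / (2 * np.pi))
        done[r, c] = True
        push_neighbors(r, c)
    return unwrapped
```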

2.3 Calibration method with a binary defocusing technique

Calibration with the binary defocusing technique poses a particular challenge: the projector is defocused rather than in focus. Thus, our research adopts an accurate calibration method [33] that avoids calibrating the projector. In this method, a reference-plane-based nonlinear phase-to-height mapping [34] is adopted, which can be described as

$$\frac{1}{{H(x,y)}} = \alpha (x,y) + \frac{{\beta (x,y)}}{{\Delta \Phi (x,y)}} + \frac{{\gamma (x,y)}}{{\Delta {\Phi ^2}(x,y)}},$$
where ΔΦ(x, y) is the unwrapped phase relative to the reference plane, and α(x, y), β(x, y) and γ(x, y) represent the calibration parameters, which can be obtained by lifting a flat board through known heights. Through Eq. (9) the height H can be calculated, and through the camera model the transverse coordinates can also be obtained. Nevertheless, it is noted that our research only calculates H for simplicity.
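The calibration parameters can be fitted per pixel by ordinary least squares once ΔΦ has been recorded at several known board heights. The sketch below solves Eq. (9) for α, β and γ under an assumed (K, H, W) data layout; the function name is hypothetical, and at least three heights are required.

```python
import numpy as np

def fit_height_mapping(delta_phis, heights):
    """Per-pixel least-squares fit of Eq. (9): 1/H = alpha + beta/dPhi + gamma/dPhi^2.
    delta_phis: (K, H, W) relative unwrapped phases at K known board heights (K >= 3).
    heights:    (K,) known board heights. Returns the alpha, beta, gamma maps."""
    K, H, W = delta_phis.shape
    y = 1.0 / np.asarray(heights, dtype=float)      # target values 1/H
    P = delta_phis.reshape(K, -1)
    params = np.empty((3, H * W))
    for j in range(H * W):                          # independent fit per pixel
        A = np.stack([np.ones(K), 1.0 / P[:, j], 1.0 / P[:, j] ** 2], axis=1)
        params[:, j], *_ = np.linalg.lstsq(A, y, rcond=None)
    return params.reshape(3, H, W)

# Height recovery at measurement time (element-wise over the image):
# H_map = 1.0 / (alpha + beta / dphi + gamma / dphi**2)
```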

2.4 Digital binary defocusing technique

The digital binary defocusing technique can greatly increase the projector refresh rate, achieving high measurement speed in the 1-bit mode. Essentially, the computer generates binary patterns and the defocused optical system blurs them into smooth ones [35]. Mathematically, the defocusing effect can be simplified to a convolution operation,

$$i(x,y) = g(x,y) \otimes {i_d}(x,y),$$
where ⊗ represents convolution, id(x, y) indicates the input binary pattern, i(x, y) denotes the output smooth pattern, and g(x, y) is the point spread function (PSF), which is determined by the pupil function f(u, v) of the optical system [36],
$$g(x,y) = {\left| {\frac{1}{{2\pi }}\int\!\!\!\int_{ - \infty }^{ + \infty } {f(u,v)\exp [{i({xu + yv} )} ]\,du\,dv} } \right|^2}.$$

Simply, g(x, y) can be approximated by a circular Gaussian function [22],

$$g(x,y)\textrm{ = }{{\textrm{exp} [{{{ - ({{x^2} + {y^2}} )} / {({2{\sigma^2}} )}}} ]} / {({2\pi {\sigma^2}} )}},$$
where the standard deviation σ represents the defocusing level. That is to say, the defocused optical system can be viewed as a 2D spatial low-pass filter. As shown in Fig. 1, the DMD in the projector generates the initial binary structured pattern and the defocused optical system blurs it. Then, within the volume of projector defocus (i.e., the measurement volume), the patterns are smoothed to approximate a sinusoidal structure, and the measurement achieves good accuracy.
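The following sketch illustrates the defocusing model of Eqs. (10)–(12) numerically: a 1-bit square-wave fringe is blurred with a Gaussian PSF, attenuating its higher harmonics and leaving a quasi-sinusoidal profile. The DMD size matches the DLP4500 of Section 4.2, while the fringe period and σ are arbitrary demonstration values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# DMD-sized binary fringe (912x1140); period and sigma are illustrative only.
width, height, period, sigma = 912, 1140, 18, 4.0
cols = np.arange(width)
row = (np.cos(2 * np.pi * cols / period) >= 0).astype(float)  # 1-bit pattern
pattern = np.tile(row, (height, 1))

# Defocus modeled as convolution with a circular Gaussian PSF [Eqs. (10), (12)]
smooth = gaussian_filter(pattern, sigma=sigma)

# The blur acts as a low-pass filter: the square wave's higher harmonics are
# strongly attenuated, so each row approaches a sinusoid of the same period.
spec = np.abs(np.fft.rfft(smooth[height // 2] - smooth[height // 2].mean()))
```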

3. Phase error analysis and compensation for motion

Section 2 constructed a complete high-speed 3-PS PMP system in which three recorded fringe images are required to calculate the phase. For a static scene, the pixels among the 3-PS images correspond exactly. For dynamic scenes, however, this correspondence is violated and the computed phase contains an error. In other words, analyzing and compensating the motion error is essentially analyzing and compensating this phase error.

3.1 Phase error analysis for motion

Figure 2 demonstrates a schematic model of the phase error analysis for motion. In Fig. 2(a), a captured light beam l intersects the dynamic surface at points P−1, P0 and P1 and arrives at point c on the CMOS. Said differently, the camera captures points P−1, P0 and P1 on the dynamic surface at three times (t−1, t0 and t1) to form the 3-PS images, and meanwhile P−1, P0 and P1 share the same image point c on the CMOS. Here, two assumptions are introduced: (1) the relation Δt = t0 − t−1 = t1 − t0 holds because the camera runs at a constant frame rate, and (2) the distance |P−1P0| closely approximates |P0P1| on account of the high speed of the system and the small motion within the interval. As shown in Fig. 2(a), d−1, d0 and d1 are the DMD points corresponding to P−1, P0 and P1, respectively. Therefore, from the viewpoint of geometry, the relation Δd = |d−1d0| ≈ |d1d0| holds. In addition, for clarity of analysis, a cross section of the DMD is considered, as shown in Fig. 2(b), where the relation between the points (d−1, d0 and d1) and the phases (φ−1, φ0 and φ1) can be built. That is, the relation Δθ = φ1 − φ0 ≈ φ0 − φ−1 holds, and three mappings can be given: P−1→d−1→φ0 − Δθ, P0→d0→φ0 and P1→d1→φ0 + Δθ. The actual captured intensities can then be described as follows [to simplify the notation, (x, y) is omitted in the following equations]:

$${I_1} = a + b \cdot \cos ({\varphi - \theta + \Delta \theta } ),$$
$${I_2} = a + b \cdot \cos (\varphi ),$$
$${I_\textrm{3}} = a + b \cdot \cos ({\varphi + \theta - \Delta \theta } ),$$
where the shift offset θ is constant at 2π/3. For Eqs. (13)–(15), if Δθ can be estimated, the true undistorted relative phase φt (the correct phase) is calculated as follows:
$${\varphi _t} = {\tan ^{ - 1}}\left\{ {\tan \left[ {\frac{{({\theta - \Delta \theta } )}}{2}} \right] \cdot [{{{({{I_1} - {I_3}} )} / {({2{I_2} - {I_1} - {I_3}} )}}} ]} \right\}.$$

Similarly, the distorted phase φf is given by:

$${\varphi _f} = {\tan ^{ - 1}}\{{\tan [{{{(\theta )} / 2}} ]\cdot [{{{({{I_1} - {I_3}} )} / {({2{I_2} - {I_1} - {I_3}} )}}} ]} \}.$$

Fig. 2. Schematic analysis model of motion error. (a) Diagrammatic view of a plane moving towards the camera and its resulting recordings at three steps. (b) A cross section on the DMD.
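A short numerical sketch of Eqs. (13)–(17) follows: for a known motion offset Δθ and illustrative values of a and b, the synthesized intensities yield the distorted phase φf and, when Δθ is supplied, the corrected phase φt. The function name and parameter values are our own.

```python
import numpy as np

def simulate_motion_phase(phi, dtheta, a=0.5, b=0.4, theta=2 * np.pi / 3):
    """Synthesize Eqs. (13)-(15) for a known motion offset dtheta, then recover
    the distorted phase [Eq. (17)] and the corrected phase [Eq. (16)]."""
    I1 = a + b * np.cos(phi - theta + dtheta)
    I2 = a + b * np.cos(phi)
    I3 = a + b * np.cos(phi + theta - dtheta)
    num, den = I1 - I3, 2 * I2 - I1 - I3
    phi_f = np.arctan2(np.tan(theta / 2) * num, den)             # distorted
    phi_t = np.arctan2(np.tan((theta - dtheta) / 2) * num, den)  # corrected
    return phi_f, phi_t

phi = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
phi_f, phi_t = simulate_motion_phase(phi, dtheta=0.3)
# phi_t reproduces the wrapped ground truth; phi_f - phi_t is the motion error
```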

Then, substituting Eqs. (16) and (17) and θ = 2π/3 into Δφ = φf − φt, where Δφ is the relative phase error, Δφ can be expanded as

$$\begin{array}{c} \Delta \varphi = {\tan ^{ - 1}}\left\{ {\tan \left[ {2\textrm{ + }\frac{3}{{1 + \sqrt 3 \tan ({{{\Delta \theta } / 2}} )}}} \right] \cdot \frac{{({{I_1} - {I_3}} )}}{{({2{I_2} - {I_1} - {I_3}} )}}} \right\} \cdot \\ \frac{1}{{1 - {{\left\{ {{{({{I_1} - {I_3}} )}^2}\left[ {3 - \sqrt 3 \tan ({{{\Delta \theta } / 2}} )} \right]} \right\}} / {\left\{ {{{({2{I_2} - {I_1} - {I_3}} )}^2}\left[ {1 + \sqrt 3 \tan ({{{\Delta \theta } / 2}} )} \right]} \right\}}}}}. \end{array}$$

Equation (18) describes the relative phase error, but the expression is too complicated to be serviceable for error estimation. Thus, we replace Eq. (18) with another form.

Firstly, eliminating (I1−I3)/(2I2−I1−I3) between Eq. (16) and Eq. (17) combines the two into one:

$$\frac{{\tan {\varphi _t}}}{{\tan [{{{({\theta - \Delta \theta } )} / 2}} ]}} = \frac{{\tan {\varphi _f}}}{{\tan ({{\theta / 2}} )}} \Rightarrow {\varphi _f} = {\tan ^{ - 1}}\left\{ {\tan {\varphi_t}\frac{{\tan ({{\theta / 2}} )}}{{\tan [{{{({\theta - \Delta \theta } )} / 2}} ]}}} \right\}.$$

Next, substituting Eq. (19) into Δφ=φf-φt, we can express Δφ as

$$\Delta \varphi = {\tan ^{ - 1}}({A \cdot \tan {\varphi_t}} )- {\varphi _t},$$
where A = tan(θ/2)/tan[(θθ)/2].

Secondly, Eq. (20) can be abstracted as f(A), and f(A) is then Taylor expanded at A = A0:

$$f(A )= f({{A_0}} )+ f^{\prime}({{A_0}} )({A - {A_0}} )+ \frac{1}{2}f^{\prime\prime}({{A_0}} ){({A - {A_0}} )^2} + {R_2}(A ).$$

Next, evaluating Eq. (21) at A0 = 1, we obtain:

$$\Delta \varphi = \frac{1}{2}\sin ({2{\varphi_t}} )({A - 1} )+ \left[ { - \frac{1}{4}\sin ({2{\varphi_t}} )+ \frac{1}{8}\sin ({4{\varphi_t}} )} \right]{({A - 1} )^2} + {R_2}(A ).$$

Finally, ignoring the truncation error in Eq. (22), we obtain a one-rank Taylor polynomial model (hereafter the one-rank model, or ORM) [see Eq. (23)] and a two-rank Taylor polynomial model (hereafter the two-rank model, or TRM) [see Eq. (24)].

$$\Delta {\varphi _1} = \frac{1}{2}\sin ({2{\varphi_t}} )\left[ {\frac{{\tan ({{\theta / 2}} )}}{{\tan [{{{({\theta - \Delta \theta } )} / 2}} ]}} - 1} \right],$$
$$\Delta {\varphi _2} = \Delta {\varphi _1}\textrm{ + }\left[ {\textrm{ - }\frac{\textrm{1}}{\textrm{4}}\sin ({2{\varphi_t}} )+ \frac{1}{8}\sin ({4{\varphi_t}} )} \right]{\left\{ {\frac{{\tan ({{\theta / 2}} )}}{{\tan [{{{({\theta - \Delta \theta } )} / 2}} ]}} - 1} \right\}^2}.$$
Figure 3 displays the curves of Δφ given by Eqs. (18), (23) and (24) for θ = 2π/3 rad, with Δθ = 0.1 rad, 0.3 rad and 0.5 rad. As shown in Fig. 3, the curves of Δφ share the same tendency. Specifically, when Δθ = 0.1 rad, both ORM and TRM approximate the true curve well; when Δθ = 0.3 rad, TRM is better than ORM; when Δθ = 0.5 rad, neither is satisfactory. Table 1 summarizes the statistics of Fig. 3: (1) the fitting degrades as Δθ increases; (2) the lower the truncation error, the closer the true curve is approached; (3) for a given Δθ, reducing the truncation error significantly increases the complexity of the formula.

Fig. 3. Phase error against the approximations of ORM and TRM for θ = 2π/3 rad. (a) Δθ = 0.1 rad. (b) Δθ = 0.3 rad. (c) Δθ = 0.5 rad.

Table 1. Statistics and analysis of Fig. 3 (RMS: root mean square)
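The comparison in Fig. 3 and Table 1 can be reproduced qualitatively from Eqs. (20), (23) and (24), as in the sketch below; the sampling grid and the printed RMS summary are our own choices.

```python
import numpy as np

theta = 2 * np.pi / 3
phi_t = np.linspace(-np.pi / 2, np.pi / 2, 1001)

def amp(dtheta):
    # A = tan(theta/2) / tan((theta - dtheta)/2), the factor in Eq. (20)
    return np.tan(theta / 2) / np.tan((theta - dtheta) / 2)

def dphi_exact(dtheta):
    return np.arctan(amp(dtheta) * np.tan(phi_t)) - phi_t      # Eq. (20)

def dphi_orm(dtheta):
    return 0.5 * np.sin(2 * phi_t) * (amp(dtheta) - 1)         # Eq. (23)

def dphi_trm(dtheta):
    corr = -0.25 * np.sin(2 * phi_t) + 0.125 * np.sin(4 * phi_t)
    return dphi_orm(dtheta) + corr * (amp(dtheta) - 1) ** 2    # Eq. (24)

for dt in (0.1, 0.3, 0.5):                                     # cases of Fig. 3
    for name, f in (("ORM", dphi_orm), ("TRM", dphi_trm)):
        rms = np.sqrt(np.mean((f(dt) - dphi_exact(dt)) ** 2))
        print(f"dtheta={dt}: {name} RMS residual = {rms:.4f} rad")
```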

3.2 Phase error compensation for motion

In Eqs. (23) and (24), the relative phase error Δφ is caused by Δθ. Therefore, in order to eliminate Δθ, we adopt an optimization strategy to estimate it, i.e., an objective function can be introduced,

$$\mathop {\min }\limits_{\Delta \theta } {\sum {||{{\varphi_f} - [{{\varphi_t} + \Delta \varphi ({{\varphi_t},\Delta \theta } )} ]} ||} ^2},$$
where Δφ(φt, Δθ) abstracts Eq. (23) or (24). However, it is important to note that Eq. (25) cannot be used for estimation directly, because φt is unknown. Thus, Eq. (25) is modified in two ways: (1) φt is approximated by a linear relation in a local neighborhood of each pixel along the x direction (i.e., φt = hm + g, where h and g respectively represent the slope and intercept, and m is the x coordinate of the pixel); (2) Δφ(φt, Δθ) is translated into Δφ(φf, Δθ) by substituting Eq. (19) into Eq. (23) and Eq. (24). We can respectively express Δφ1 and Δφ2 as
$$\Delta {\varphi _1}^\prime ={-} \frac{1}{2}\sin ({2{\varphi_f}} )\left[ {\frac{{\tan [{{{({\theta - \Delta \theta } )} / 2}} ]}}{{\tan ({{\theta / 2}} )}} - 1} \right],$$
$$\Delta {\varphi _2}^\prime = \Delta {\varphi _1}^\prime \textrm{ - }\left[ {\textrm{ - }\frac{\textrm{1}}{\textrm{4}}\sin ({2{\varphi_f}} )+ \frac{1}{8}\sin ({4{\varphi_f}} )} \right]{\left\{ {\frac{{\tan [{{{({\theta - \Delta \theta } )} / 2}} ]}}{{\tan ({{\theta / 2}} )}} - 1} \right\}^2}.$$

Combining the two ways above, Eq. (25) can be transformed into

$$\mathop {\min }\limits_{h,g,\Delta \theta } {\sum {||{{\varphi_f} - [{({hm + g} )+ \Delta \varphi ({{\varphi_f},\Delta \theta } )} ]} ||} ^2},$$
where Eq. (26) or Eq. (27) is used in place of Δφ(φf, Δθ) in Eq. (28). The parameters h, g and Δθ in Eq. (28) can then be estimated with an optimization algorithm, e.g., the Nelder-Mead method. In addition, note that (1) the initial values of h and g are the slope and intercept of a straight-line fit in the local neighborhood, and (2) the initial value of Δθ is set to zero.

As above, combining Eq. (26) or Eq. (27) with Eq. (28) can estimate Δθ. However, as discussed in Sec. 3.1, the truncation error is unavoidable since a Taylor expansion is applied; as shown in Fig. 3 and Table 1, it has to be considered when Δθ ≥ 0.3 rad. Therefore, to remove the influence of the truncation error, we propose a novel phase error model based on a fitting coefficient matrix (FCM). The core of this method is to write Δφ’(φf, Δθ) as

$$\Delta \varphi ^{\prime}({{\varphi_f},\Delta \theta } )= \sum\limits_{k = 0}^N {\left( {\sum\limits_{l = 0}^M {{c_{k,l}} \cdot \Delta {\theta^l}} } \right)} \cdot \varphi _f^k,$$
where ck,l is a constant coefficient matrix. In FCM, pre-simulation data of φf, Δθ and Δφ are used to fit ck,l. First of all, the Δφ–φf curve of each Δθ is computed. The curve for Δθ = 1 (the most seriously distorted) is taken as the analysis object. Polynomials of different orders are used to fit this curve, and the RMS error of the polynomial fit is calculated. As shown in Fig. 4, the 20th-order polynomial has the smallest error, i.e., N = 20. After fitting the Δφ–φf curve of each Δθ, a coefficient matrix is obtained. Similarly, in order to shrink the matrix, a polynomial fit is performed on each column and its RMS error computed; as shown in Fig. 4, the 5th-order polynomial has the smallest error, i.e., M = 5. Finally, at least N = 20 and M = 5 are needed to fit ck,l, i.e., the size of the obtained ck,l is 6×21. Then, substituting Eq. (29) into Eq. (28), Δθ can be estimated without truncation error.

Fig. 4. RMS error of polynomial fitting with different orders.
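A sketch of the pre-simulation fit is given below: we generate the exact Δφ–φf curves from the closed form implied by Eq. (19) (equivalent to Eq. (18) in the phase domain) and perform the two nested polynomial fits. The grid endpoints and the (N+1)×(M+1) storage layout, transposed relative to the 6×21 matrix quoted above, are our own implementation choices.

```python
import numpy as np

theta, N, M = 2 * np.pi / 3, 20, 5
dthetas = np.arange(-1.0, 1.0 + 1e-9, 0.01)       # Step 1: grid of dtheta
phi_f = np.linspace(-np.pi / 2 + 0.01, np.pi / 2 - 0.01, 501)

# Step 1: exact error curve dphi(phi_f) for each dtheta, fitted by a
# 20th-order polynomial in phi_f (coefficients stored lowest order first).
coef = np.empty((len(dthetas), N + 1))
for i, dt in enumerate(dthetas):
    A = np.tan(theta / 2) / np.tan((theta - dt) / 2)
    dphi = phi_f - np.arctan(np.tan(phi_f) / A)   # exact error, via Eq. (19)
    coef[i] = np.polynomial.polynomial.polyfit(phi_f, dphi, N)

# Step 2: 5th-order fit of each phi_f coefficient as a function of dtheta,
# giving the coefficient matrix c_{k,l} of Eq. (29).
c = np.empty((N + 1, M + 1))
for k in range(N + 1):
    c[k] = np.polynomial.polynomial.polyfit(dthetas, coef[:, k], M)

def delta_phi_fcm(phi_f, dtheta):
    """Evaluate Eq. (29) with the pre-fitted matrix c."""
    poly_in_phi = c @ np.power(dtheta, np.arange(M + 1))
    return np.polynomial.polynomial.polyval(phi_f, poly_in_phi)
```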

In practice, the main steps of the proposed FCM method are as follows:

Step 1. The range and step size of Δθ are set to [−1, 1] and 0.01, respectively. The corresponding Δφ–φf curve of each Δθ is calculated via Eq. (18) [note that the spatial domain of Eq. (18) needs to be converted into the phase domain]. Then, a 20th-order polynomial is used to fit the Δφ–φf curve of each Δθ.

Step 2. A 5th-order polynomial is utilized to fit the coefficients obtained by Step 1, and ultimately the coefficient matrix ck,l is obtained.

Step 3. In the measurement, Δθ of each pixel is estimated by the Nelder-Mead algorithm and the following objective formula:

$$\mathop {\min }\limits_{h,g,\Delta \theta } {\sum {\left\|{{\varphi_f} - \left[ {({hm + g} )+ \sum\limits_{k = 0}^5 {\left( {\sum\limits_{l = 0}^{20} {{c_{k,l}} \cdot \Delta {\theta^l}} } \right)} \cdot \varphi_f^k} \right]} \right\|} ^2}.$$
Step 4. Finally, Δθ is substituted into Eq. (29) to obtain Δφ, i.e., φf can be compensated by φt = φf − Δφ.
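The following sketch shows Steps 3 and 4 for one pixel, as a Python analog of the MATLAB fminsearch pipeline used in Section 4: a straight-line fit initializes h and g, Nelder-Mead minimizes Eq. (30), and the estimated Δθ compensates φf. The window extraction and function names are hypothetical; `delta_phi_fcm` is the FCM model of Eq. (29) from the previous sketch, and the same structure applies to ORM or TRM by swapping in Eq. (26) or (27).

```python
import numpy as np
from scipy.optimize import minimize

def estimate_dtheta(phi_f_win, m, delta_phi_fcm):
    """Estimate (h, g, dtheta) for one pixel from a local window of the
    distorted phase along the x direction by minimizing Eq. (30)."""
    h0, g0 = np.polyfit(m, phi_f_win, 1)      # straight-line initial guess
    def cost(p):
        h, g, dt = p
        model = h * m + g + delta_phi_fcm(phi_f_win, dt)
        return np.sum((phi_f_win - model) ** 2)
    res = minimize(cost, x0=[h0, g0, 0.0], method='Nelder-Mead')
    return res.x

# Step 4: with the estimated dtheta, compensate the pixel's phase:
# h, g, dt = estimate_dtheta(phi_f_win, m, delta_phi_fcm)
# phi_t = phi_f - delta_phi_fcm(phi_f, dt)
```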

4. Simulations and experiments

Both simulations and quantitative and qualitative experiments verify the effectiveness and accuracy of the proposed method.

4.1 Simulations

Case 1. Estimation for a constant Δθ

Firstly, a pixel is selected, the Δθ to be estimated is set to 0.1 (i.e., φf is determined), and the initial values of h and g are calculated as 0.0491 and 0, respectively. Secondly, substituting Δθ and φf into Eqs. (26), (27) and (29), the phase errors Δφ1, Δφ2 and Δφ’ are obtained. Next, these phase errors are substituted into Eq. (28) to construct three objective functions. Finally, combining these functions with the fminsearch function of MATLAB’s Optimization Toolbox, we obtain the estimation results listed in Table 2. From Table 2, (1) our proposed FCM works best, while TRM works better than ORM; (2) the three models require essentially the same number of iterations (ORM: 321; TRM: 326; FCM: 335).

Table 2. Estimation for a constant Δθ (actual values: h = 0.0491; g = 0; Δθ = 0.1)

Case 2. Estimation for different Δθ

For this case, most of the process follows Case 1. The main difference is that a series of different Δθ (±0.1, ±0.2, ±0.3, ±0.4, ±0.5 and ±0.6 rad) are estimated using ORM, TRM and our proposed FCM, respectively. In Table 3, the first column lists the actual values of Δθ; the second to fourth columns show the calculated initial values of h and g (the initial value of Δθ is set to zero); the fifth to seventh columns display the values of Δθ estimated with ORM, TRM and our proposed FCM. Table 3 shows that (1) the actual and estimated values of Δθ share the same variation trend for all three methods; (2) across the different Δθ, the values estimated with our proposed FCM are nearest to the actual values, while TRM works better than ORM. To see the trend of Table 3 more clearly, Fig. 5 illustrates the absolute differences between the actual and estimated values of Δθ for the three methods. Again, the accuracy and stability of our proposed FCM are superior to the other two methods.

Fig. 5. Absolute differences between the actual and estimated Δθ with ORM, TRM and FCM.

Table 3. Estimation for different Δθ.

4.2 Experiments

According to the requirements for theoretical verification, an experimental system was established. The system mainly consists of a projector (TI LightCrafter DLP 4500) and a digital high-speed CMOS camera (Optronis CP80-3-m-540). The camera has a resolution of 1708×1696 pixels at a frame rate of 2000 frames/s and is fitted with a 16 mm focal-length, 10-megapixel lens (VOC 16mm10MP). The DLP module has a resolution of 912×1140 with a projection distance of 500–2000 mm. In our experiment, the level of defocusing is set by manually adjusting the focal distance of the projector. A red line laser is used to determine the initial position of phase unwrapping. A white plate serving as the reference plane is attached to a motorized linear translation stage with 250 mm of travel. A dual-channel signal generator (MingHe JDS 6600) acts as an external trigger to synchronize the camera and the projector. Moreover, the DLP module and the high-speed CMOS camera are set to 2000 Hz in binary mode and in 8-bit gray mode, respectively. The experimental system and the arrangement of the main devices are shown in Fig. 6. In addition, note that 5×5 spatial Gaussian smoothing is applied to the resulting geometry.

Fig. 6. Experimental system and arrangement picture.

Quantitative evaluation: Reconstruction of simple motion scenes

For quantitative evaluation, a 250×250 mm² flat surface was introduced, and three tests were performed.

  • (1) The flat surface is measured at 19 positions between about 850 mm and 1100 mm. At each position, the 3-PS patterns are projected onto the flat surface, as shown in Fig. 7, and the compensated (with FCM) and uncompensated RMS phase errors of the residuals are calculated, as shown in Fig. 9(a). The results show that our proposed FCM does not degrade the reconstruction accuracy for static scenes, and the values are relatively stable.
  • (2) The 3-PS patterns (I1, I2 and I3) are cast onto the flat surface at three positions, which form a group. In each group, the interval between I1 and I2 equals that between I2 and I3. We selected 11 intervals from 0 to 25 mm, i.e., 11 groups, as shown in Fig. 8. The compensated (with FCM) and uncompensated RMS phase errors are then obtained, as shown in Fig. 9(b). From the figure: (a) at the interval of 0 mm, the results are not disturbed by compensation; (b) for intervals from 2.5 mm to 22.5 mm, our method effectively removes the effect of motion; (c) at the interval of 25 mm, our method starts to fail.
  • (3) Similarly, we performed a comparative experiment among our method, Weise’s method [14], Cong’s method [28] and the uncompensated result, with five intervals selected from 0 to 10 mm. Figure 9(c) compares the results. All of the compensation methods can effectively suppress the effect of motion, and the motion phase errors increase as the intervals grow. In addition, the compensation effect of our method is better than that of the other two methods, and the difference becomes more obvious as the intervals increase.

Fig. 7. 3-PS patterns projected onto the flat surface at 19 positions between about 850 mm and 1100 mm.

Fig. 8. 3-PS patterns cast onto the flat surface at three positions with equal intervals.

Fig. 9. Results of the quantitative evaluation. (a) Compensated and uncompensated phase error of the flat surface at 19 positions. (b) Compensated and uncompensated phase error of the flat surface moving 0 to 25 mm in intervals of 2.5 mm. (c) Comparison of our method, Weise’s method [14], Cong’s method [28] and the uncompensated result.

Then, a fast-moving tablet and a fluttering tissue were captured in the measurement volume. We captured a moment of each and reconstructed the compensated (with FCM) and uncompensated results, as shown in Figs. 10 and 11, respectively. Figure 10(a) displays the test moment of the fast-moving tablet (note: the picture was captured with a mobile phone). In Fig. 10(a), the red, yellow and green dashed boxes represent the tablet in the three captured PS patterns, respectively, and the wrapped phase is shown in the top right-hand corner. Figures 10(b) and (c) show section A-B of Fig. 10(a); Figs. 10(d) and (e) show section C-D. For section A-B, the pixel distance from point A (272, 528) to point B (371, 528) is 100 pixels, containing about 2.43 fringes; thus the phase change from A to B is approximately 2.43×2π ≈ 15.27 rad. In Fig. 10(b), the uncompensated phase curve shows an obvious wave shape, whereas the compensated phase curve is smooth. In addition, from Fig. 10(c), the uncompensated and compensated RMS height errors are 0.287 mm and 0.099 mm, respectively, which demonstrates the effectiveness and accuracy of our proposed compensation method. Similarly, for section C-D, the pixel distance from point C (591, 690) to point D (640, 690) is 50 pixels, where the phase change is approximately 1.10×2π ≈ 6.91 rad. In Fig. 10(d), the compensated and uncompensated curves show no noticeable difference since section C-D lies on the static background; likewise, in Fig. 10(e), the two RMS height errors (0.035 mm and 0.023 mm) are almost the same. Moreover, Fig. 11(b) shows the test moment of the fluttering tissue, with the dashed box marking the region of interest (ROI). Figures 11(a) and (c) show the uncompensated and compensated results of the ROI, respectively. It can be clearly seen that our FCM is effective, i.e., the ripples are significantly reduced.

Fig. 10. Test of a fast-moving tablet. (a) A moment of the fast-moving tablet. (b) Compensated and uncompensated phase curve of section A-B. (c) Compensated and uncompensated height curve of section A-B. (d) Compensated and uncompensated phase curve of section C-D. (e) Compensated and uncompensated height curve of section C-D.

Fig. 11. Test of a fluttering tissue. (a) Uncompensated result of the ROI. (b) A moment of the fluttering tissue and the ROI. (c) Compensated result of the ROI.

Qualitative evaluation: Reconstruction of complex motion scenes

For qualitative evaluation, complex motion scenes are reconstructed. Figure 12 displays a waving cloth. The reconstructed surfaces are relatively smooth, i.e., the motion ripples are effectively compensated. Figure 13 shows a face with a rapidly changing expression; again, few motion artifacts are visible. It is pointed out that the tiny stripes on the reconstructed surface are caused by incomplete binary defocusing, since they also appear in the static areas (e.g., the forehead).

Fig. 12. Reconstruction of a waving cloth.

Fig. 13. Reconstruction of a person with a twisted expression.

5. Conclusions and discussions

In this paper, we first constructed a complete high-speed PMP system, which includes the 3-PS algorithm, a multilevel quality-guided phase unwrapping algorithm, a nonlinear polynomial calibration technique and the digital binary defocusing technique. We then re-derived two phase error expressions (ORM and TRM), which are clearer than the ones in Ref. [14]. For ORM and TRM, the effect of the truncation error on accuracy is demonstrated, especially when the motion offset Δθ exceeds 0.3 rad. On this account, we proposed a polynomial-based motion error model built on a coefficient matrix fitted to pre-simulation data, and developed an error compensation method based on local-domain estimation with the Nelder-Mead algorithm. Finally, the simulation cases demonstrate that our proposed FCM is superior to ORM and TRM (i.e., the models in Ref. [14]), and the quantitative and qualitative experiments show the effectiveness and accuracy of our proposed FCM. In practical applications, moving objects are reconstructed with high accuracy and the motion ripples are correctly compensated, to a considerable extent. Future work will consider more advanced models and attempt to realize real-time, online compensation of motion error using graphics processing unit (GPU) computation.

Acknowledgments

The authors would like to thank Xing Yanfeng of SUES, whose lab supported this research.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Chen, G. M. Brown, and M. Song, “Overview of 3-D shape measurement using optical methods,” Opt. Eng. 39(1), 8–22 (2000).

2. F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13(1), 231–243 (2004).

3. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).

4. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012).

5. L. Huang, P. S. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49(9), 1539 (2010).

6. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010).

7. S. Van der Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016).

8. J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, “Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry,” Opt. Express 27(3), 2713–2731 (2019).

9. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72, 156–160 (1982).

10. J. L. Li, L. G. Hassebrook, and C. Guan, “Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity,” J. Opt. Soc. Am. A 20(1), 106–115 (2003).

11. X. Su, G. V. Bally, and D. Vukicevic, “Phase-stepping grating profilometry: utilization of intensity modulation analysis in complex objects evaluation,” Opt. Commun. 98(1-3), 141–150 (1993).

12. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

13. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014).

14. T. Weise, B. Leibe, and L. Van Gool, “Fast 3D scanning with automatic motion compensation,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2007), pp. 1–8.

15. Q. Zhang and X. Su, “High-speed optical measurement for the drumhead vibration,” Opt. Express 13(8), 3110–3116 (2005).

16. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier Transform Profilometry (µFTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).

17. S. Zhang, D. V. D. Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18(9), 9684 (2010).

18. Y. Xu, L. Ekstrand, J. Dai, and S. Zhang, “Phase error compensation for three-dimensional shape measurement with projector defocusing,” Appl. Opt. 50(17), 2572–2581 (2011).

19. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).

20. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express 18(5), 5229–5244 (2010).

21. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).

22. C. Zuo, Q. Chen, G. Gu, S. Feng, and F. Feng, “High-speed three-dimensional profilometry for multiple objects with complex shapes,” Opt. Express 20(17), 19493–19510 (2012).

23. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39(23), 6715–6718 (2014).

24. Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express 26(10), 12632 (2018).

25. Z. Yang, Z. Xiong, Y. Zhang, J. Wang, and F. Wu, “Depth acquisition from density modulated binary patterns,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2013), pp. 25.

26. Y. Zhang, Z. Xiong, and F. Wu, “Hybrid structured light for scalable depth sensing,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 17.

27. B. Li, Z. Liu, and S. Zhang, “Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry,” Opt. Express 24(20), 23289 (2016).

28. P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3D sensing with Fourier-assisted phase shifting,” IEEE J. Sel. Top. Signal Process. 9(3), 396–408 (2015).

29. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018).

30. H. Zhao, Z. Wang, H. Jiang, Y. Hu, and C. Dong, “Calibration for stereo vision system based on phase matching and bundle adjustment algorithm,” Opt. Lasers Eng. 68, 203–213 (2015).

31. M. Dai, F. Yang, and X. He, “Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities,” Appl. Opt. 51(12), 2062 (2012).

32. S. Zhang, X. Li, and S. T. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46(1), 50 (2007).

33. L. Merner, Y. Wang, and S. Zhang, “Accurate calibration for 3D shape measurement system using a binary defocusing technique,” Opt. Lasers Eng. 51(5), 514–519 (2013).

34. W. Li, X. Su, and Z. Liu, “Large-scale three-dimensional object measurement: a practical coordinate mapping and image data-patching method,” Appl. Opt. 40(20), 3326 (2001).

35. B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. 54, 236–246 (2014).

36. S. L. Ellenberger, “Influence of defocus on measurements in microscope images,” doctoral thesis (TU Delft, 2000).
