Abstract

Acquiring complete and accurate 3D shape measurement results of translucent objects by fringe projection profilometry (FPP) is difficult because of the subsurface scattering effect. The phase offset introduces geometric errors, and the degraded image contrast leads to incomplete measurement results and random errors. In this research, a high-accuracy 3D shape measurement method for translucent objects based on phase-shifting FPP is proposed. The relationship between fringe period and phase error is investigated to determine the fringe periods. Random errors are suppressed by temporal noise reduction, and the robustness of multi-frequency heterodyne phase unwrapping is improved by increasing the interval of fringe periods along with temporal noise reduction. Geometric errors are compensated for by projecting multi-frequency fringe patterns to establish the relationship between fringe period and depth offset. Experimental results show that the proposed method can acquire complete measurement results and significantly reduce the overall error for measuring translucent objects.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical 3D shape measurement is widely applied in several areas [1]. Fringe projection profilometry (FPP) [2,3], one of the most popular 3D shape measurement methods, can acquire full-field, high-resolution, and high-precision 3D point clouds. However, objects composed of translucent materials, such as marbles, synthetic resins, and biological tissues, are difficult to measure accurately by FPP because of the subsurface scattering effect. For translucent materials, the incident light penetrates the surface, scatters, and then exits at different locations around the incident point. Subsurface scattering degrades the modulation of the fringe pattern, which produces incomplete measurement results and large random errors. In addition, subsurface scattering introduces a phase offset, which induces geometric errors.

In recent years, optical 3D shape measurement methods for translucent objects have been widely investigated. Polarizers can be used to separate surface reflection from subsurface scattering to eliminate geometric errors [4]. High-frequency illumination can be utilized to separate direct and indirect illumination because low-frequency indirect illumination, such as subsurface scattering, becomes nearly constant under high-frequency illumination [5]. Low-frequency phase-shifting fringe patterns can be modulated by high-frequency patterns to separate surface reflection from subsurface scattering [6]. Phase-shifting fringe patterns can be limited to a narrow high-frequency band to reduce geometric errors [7].

However, high-frequency illumination reduces the contrast of the projected patterns, which leads to higher phase errors for FPP and binarization errors for binary pattern projection methods. Conventional Gray codes can therefore be substituted with minimum stripe-width codes to achieve robust binarization [8], but large stripe widths increase geometric errors. Sparse dashed lines can measure translucent objects with high accuracy [9]. The frequency of the dashed lines is sufficiently high in the vertical direction to reduce geometric errors, while a large stride in the horizontal direction maintains the image contrast.

Geometric errors can be compensated for by measuring an etalon of the same material as the measured object from different directions [10]. A statistical error model can be fitted by measuring an object of the same material as the measured object before and after coating to compensate for the geometric errors [11]. Another error compensation method requires the bidirectional surface scattering reflectance distribution function of the object and can only handle short-range scattering effects [12]. Epipolar imaging can significantly reduce the impact of subsurface scattering [13–15], but geometric errors cannot be fully eliminated because indirect illumination transported in the epipolar plane remains. The influence of light wavelength on measuring translucent objects has been investigated [16,17], and the results indicate that short-wavelength light can reduce geometric errors.

In this research, a high-accuracy 3D shape measurement method for translucent objects based on phase-shifting FPP is presented. The proposed method copes with measurement errors from different sources. The relationship between fringe period and random errors is established to determine the fringe periods. Random errors induced by modulation degradation are suppressed by temporal noise reduction. The robustness of multi-frequency heterodyne phase unwrapping is improved by increasing the interval of fringe periods along with temporal noise reduction. Geometric errors introduced by the phase offset are compensated for by projecting fringe patterns with multiple fringe periods to fit an empirical error model that describes the relationship between fringe period and depth offset. Experimental results demonstrate that the proposed method can acquire complete 3D shape measurement results of translucent objects and significantly reduce the overall error.

The rest of this paper is organized as follows. Principles of the proposed method, including the relationship between fringe period and phase error, random error suppression, phase unwrapping improvement, and geometric error compensation, are explained in Section 2. Experimental and simulation results are exhibited and discussed in Section 3, and conclusions are presented in Section 4.

2. Principles

2.1 Challenges for measuring translucent objects by FPP

In this research, stereo vision based phase-shifting FPP is utilized [18,19]. The phase-shifting sinusoidal fringe patterns are projected by a projector, and two cameras capture the deformed fringe patterns on the object simultaneously. Phase matching between two cameras is performed to obtain the 3D point cloud [20].

Stereo vision based FPP is resistant to several error sources, such as the nonlinearity of the projector and diffuse interreflections, because these error sources manifest in the same manner in the two cameras. As exhibited in Fig. 1, the exit point of the light is the same as the incident point when opaque objects are illuminated by the projector. However, when translucent objects are illuminated, the incident light penetrates the surface, scatters in the participating media, and then exits at different locations around the incident point. This phenomenon, called subsurface scattering, increases the difficulty of accurately measuring translucent objects by FPP. In this research, four-step phase-shifting FPP is utilized to recover the phase. The intensities of the sinusoidal fringe patterns Ii (i = 0, 1, 2, 3) at each point are described as

$$I_i = A + B\cos\left(\phi + \frac{\pi}{2}i\right), \tag{1}$$
where A is the average intensity and B is the modulation of the sinusoidal fringe pattern, which can be obtained by
$$B = \frac{1}{2}\sqrt{(I_3 - I_1)^2 + (I_0 - I_2)^2}, \tag{2}$$
and ϕ is the phase of the sinusoidal fringe pattern, which can be retrieved by

Fig. 1 Schematics of 3D shape measurement of opaque and translucent objects by FPP. (a) 3D shape measurement of opaque objects by FPP; (b) 3D shape measurement of translucent objects by FPP.

$$\phi = \arctan\frac{I_3 - I_1}{I_0 - I_2}. \tag{3}$$

The intensity noise of the image captured by the camera is assumed to be additive and zero-mean Gaussian distributed with a standard deviation of σn. The standard deviation of the phase error σϕ can be defined as [3]

$$\sigma_\phi = \frac{\sigma_n}{\sqrt{2}B}. \tag{4}$$
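As a concrete illustration of Eqs. (1)–(4), the four-step recovery of B and ϕ can be sketched in a few lines of NumPy. This is a hedged sketch, not the authors' implementation; `np.arctan2` is used so the quadrant of ϕ is resolved automatically, whereas Eq. (3) writes the plain arctangent.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Recover the modulation B (Eq. (2)) and the wrapped phase
    phi (Eq. (3)) from four pi/2-shifted fringe images."""
    B = 0.5 * np.sqrt((I3 - I1) ** 2 + (I0 - I2) ** 2)
    # arctan2 resolves the quadrant, wrapping phi to (-pi, pi]
    phi = np.arctan2(I3 - I1, I0 - I2)
    return B, phi

# synthetic single-pixel check: A = 100, B = 40, phi = 1.0 (Eq. (1))
A, B_true, phi_true = 100.0, 40.0, 1.0
I = [A + B_true * np.cos(phi_true + 0.5 * np.pi * i) for i in range(4)]
B_est, phi_est = four_step_phase(*I)
```

Applied pixel-wise to the four captured images, the same two lines yield the full-field modulation and wrapped-phase maps.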

Three major challenges arise when phase-shifting FPP is utilized to measure translucent objects. An opaque object (ABS resin) and a translucent object (polyamide) are measured for comparison to illustrate the challenges.

  • (1) Random errors: Phase errors caused by intensity noise lead to random errors in the reconstructed 3D point cloud. The random errors for translucent materials are significantly larger than those for opaque materials because the fringe modulation B is degraded by the subsurface scattering effects as exhibited in Fig. 2(a).
  • (2) Phase unwrapping failures: Phase unwrapping is likely to fail because the multi-frequency heterodyne phase unwrapping method [21,22], which is commonly utilized in FPP, is sensitive to intensity noise. Phase unwrapping failure causes incomplete 3D shape measurement results for translucent materials.
  • (3) Geometric errors: Geometric errors are introduced by the phase offset caused by the subsurface scattering effect. Such errors can be regarded as depth offsets that shift the 3D points away from their actual positions. As shown in Fig. 2(b), the measured depth remains constant across various fringe periods for opaque materials but changes for translucent materials, which indicates that the measured depth for translucent materials contains geometric errors. The depth offsets would not affect shape measurement if they were constant across the measured surface. However, the projection and observation directions differ across the measured surface, and the transparency of the surface may be spatially varying. Therefore, the spatially varying depth offset changes the shape of the measured object. More details about geometric errors induced by subsurface scattering in stereo vision based FPP can be found in [10,23,24].

Fig. 2 Comparison of fringe intensity profile and measured depth between opaque and translucent materials. The data are from the measurement results of an opaque object (ABS resin) and a translucent object (polyamide). (a) Fringe intensity profile. The modulation for translucent materials is significantly degraded; (b) Measured depth across various fringe periods. The measured depth is constant for opaque materials but variational for translucent materials (the depth is averaged in a 10 × 10 window to suppress random errors).

In previous research, high-frequency fringe patterns have been used to reduce geometric errors [5–7], but the resulting degradation of fringe modulation increases random errors. Reducing random errors and reducing geometric errors are thus conflicting goals. In addition, the fringe frequency cannot be infinite; hence, geometric errors cannot be completely eliminated. In this research, fringe patterns with relatively low frequencies are combined with a novel geometric error compensation method to reduce random and geometric errors simultaneously.

2.2 Relationship between fringe period and phase error

The relationship between fringe period λ and phase error is investigated to determine the fringe periods. The wrapped phase ϕ, which is computed by Eq. (3), should be unwrapped to remove phase ambiguity. The range of the unwrapped phase Φ is [0, 2πF], where F is defined as the total number of fringe periods in the projected pattern as follows

$$F = W_p f = \frac{W_p}{\lambda}, \tag{5}$$
where Wp is the horizontal resolution of the projector and f is the spatial frequency of the fringe pattern. If the unwrapped phase is scaled to the same range [0, 2π], the standard deviation of the normalized unwrapped phase error σΦ can be expressed as [3]

$$\sigma_\Phi = \frac{\sigma_n}{\sqrt{2}BF}. \tag{6}$$

The modulation B is not constant and is affected by multiple factors, such as the resolution of the optical system, defocusing, and the subsurface scattering effect of translucent materials. Thus, the modulation B can be regarded as a function b(λ) with a variable fringe period λ. Then, Eq. (6) can be rewritten as

$$\sigma_\Phi(\lambda) = \frac{\sigma_n \lambda}{\sqrt{2}\,b(\lambda)W_p}. \tag{7}$$

For opaque materials, the modulation B changes only slightly as the fringe period grows. Thus, random errors can be reduced by decreasing the fringe period. However, decreasing the fringe period also degrades the modulation for translucent materials; hence, whether decreasing the fringe period actually reduces random errors for translucent materials cannot be confirmed without further analysis.

To acquire the modulation function b(λ), the line spread function (LSF) ℒ [25], which can be assumed to be symmetric, can be measured and convolved with the original sinusoidal fringe patterns as follows [26]

$$g(x) = f(x) \ast h(x) = [A + B\cos(\omega_0 x)] \ast \mathcal{L}(x), \tag{8}$$
where g(x) is the convolved fringe pattern, f(x) is the original sinusoidal fringe pattern, h(x) is the convolution kernel, ℒ(x) is the LSF, ∗ is the convolution operator, and ω0 = 2π / λ is the angular frequency of the fringe pattern. According to the convolution theorem, g(x) can also be obtained by
$$g(x) = \mathcal{F}^{-1}[F(\omega)H(\omega)], \tag{9}$$
where ℱ denotes the Fourier transform and ℱ−1 is the inverse Fourier transform, F(ω) and H(ω) are the Fourier spectra of the fringe pattern and the LSF, respectively, and ω is the angular frequency. In the frequency domain, the Fourier spectrum of the fringe pattern F(ω) can be expressed as

$$F(\omega) = 2\pi A\delta(\omega) + \pi B[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)]. \tag{10}$$

Therefore, the Fourier spectrum of the convolved fringe pattern G(ω) can be obtained by

$$G(\omega) = F(\omega)H(\omega) = 2\pi A\delta(\omega)H(0) + \pi B[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)]H(\omega_0). \tag{11}$$

Consequently, the convolved fringe pattern g(x) is given by

$$g(x) = \mathcal{F}^{-1}[G(\omega)] = H(0)A + H(\omega_0)B\cos(\omega_0 x). \tag{12}$$

Hence, the modulation function b(λ) can be expressed as

$$b(\lambda) = H(\omega_0)B. \tag{13}$$

However, the analytical expression of H(ω) is difficult to acquire because the LSF is measured from the discrete image data and the LSF of translucent materials cannot be well modelled by common degradation models, such as the Gaussian kernel. Alternatively, the discrete expression of H(ω) can be obtained by

$$H(k) = \mathrm{DFT}[\mathcal{L}(n)], \tag{14}$$
where DFT denotes the discrete Fourier transform, ℒ(n) (n = 0, 1, 2,…, N − 1) is the measured discrete LSF where N is the number of sample points, and H(k) (k = 0, 1, 2,…, N − 1) is the discrete expression of H(ω). Thus, H(ω0) can be approximated by
$$H(\omega_0) \approx \left|H\!\left(\frac{N}{\lambda_c}\right)\right|, \tag{15}$$
where | | denotes the amplitude of the complex spectrum, and λc is the fringe period in the camera image, which can be obtained by
$$\lambda_c = \frac{f_c}{f_p}\lambda, \tag{16}$$
where fc and fp are the focal lengths expressed in pixel units (the product of the focal length in world units and the number of pixels per world unit) from the intrinsic parameters of the camera and the projector, respectively. Therefore, Eq. (7) can be rewritten according to Eqs. (13) and (15) as follows
$$\sigma_\Phi(\lambda) \approx \frac{\sigma_n \lambda}{\sqrt{2}\left|H\!\left(\frac{N}{\lambda_c}\right)\right| B W_p}, \tag{17}$$
and the standard deviation of the normalized unwrapped phase error σΦ(λ) across various fringe periods can be simulated by Eq. (17).
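The simulation of Eq. (17) from a measured discrete LSF can be sketched as follows. The sketch uses the focal lengths and projector resolution reported in Section 3 as defaults; normalizing the LSF so that H(0) = 1 and interpolating |H(k)| at the generally non-integer bin N/λc are assumptions of this sketch, not steps specified in the text.

```python
import numpy as np

def simulated_phase_error(lsf, periods, f_c=1898.61, f_p=1753.89,
                          W_p=1280, B=1.0, sigma_n=1.0):
    """Simulate sigma_Phi(lambda) (Eq. (17)) from a measured discrete LSF.
    periods: fringe periods in projector pixels."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()          # normalize so H(0) = 1 (assumption)
    H = np.fft.fft(lsf)
    N = lsf.size
    errors = []
    for lam in periods:
        lam_c = (f_c / f_p) * lam  # fringe period in camera pixels (Eq. (16))
        k = N / lam_c              # DFT bin of the fringe frequency (Eq. (15))
        # sample |H| at the non-integer bin by linear interpolation (assumption)
        mag = np.interp(k, np.arange(N), np.abs(H))
        errors.append(sigma_n * lam / (np.sqrt(2.0) * mag * B * W_p))
    return np.array(errors)
```

Dividing the returned values by their minimum reproduces the relative phase errors plotted in Fig. 6(b).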

2.3 Random error suppression by temporal noise reduction

For translucent materials, one of the most effective approaches to suppress random errors is to reduce intensity noise, according to Eq. (6). The intensity noise, which is additive and zero-mean, can be reduced by temporal noise reduction. Each fringe pattern is captured K times, during which both the measured object and the measurement system remain stationary. For each point, the intensities of all captured images Ii (i = 1, 2, …, K) are averaged to obtain the mean intensity $\bar{I}$ as follows

$$\bar{I} = \frac{1}{K}\sum_{i=1}^{K} I_i. \tag{18}$$

The standard deviation of intensity noise can be reduced as follows

$$\sigma_n' = \frac{\sigma_n}{\sqrt{K}}, \tag{19}$$
where σn′ denotes the standard deviation of the intensity noise after temporal noise reduction and σn denotes the standard deviation of the single-image intensity noise. According to Eq. (4), the standard deviation of the phase error σϕ is divided by √K as well after temporal noise reduction.
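A quick numerical check of Eqs. (18) and (19) with synthetic zero-mean Gaussian noise (the image size, K, and noise level here are illustrative, not the experimental values):

```python
import numpy as np

def temporal_average(frames):
    """Average K repeated captures of the same fringe pattern (Eq. (18));
    zero-mean Gaussian intensity noise drops by a factor of sqrt(K)."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

# synthetic check of the sqrt(K) law (Eq. (19))
rng = np.random.default_rng(0)
K, sigma_n = 16, 2.0
clean = np.full((200, 200), 128.0)                      # noise-free image
frames = clean + rng.normal(0.0, sigma_n, size=(K, 200, 200))
residual = temporal_average(frames) - clean
# residual.std() should be close to sigma_n / sqrt(K) = 0.5
```

In practice the K captures replace each single fringe image before Eqs. (2) and (3) are applied.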

2.4 Robustness improvement for multi-frequency heterodyne phase unwrapping

In this research, multi-frequency heterodyne phase unwrapping [21,22] is utilized for FPP because the measurement results of different fringe periods can be averaged to further reduce random errors. However, multi-frequency heterodyne phase unwrapping is sensitive to intensity noise, and the modulation of the fringe pattern is low for translucent materials, which increases the phase error. Suppose that two-frequency fringe patterns with fringe periods λ1 and λ2 (λ1 < λ2) are utilized, and ∆λ = λ2 − λ1 is the interval of fringe periods. The equivalent fringe period λeq is then given by

$$\lambda_{eq} = \frac{\lambda_1 \lambda_2}{\lambda_2 - \lambda_1}, \tag{20}$$
and the equivalent phase ϕeq corresponding to λeq can be obtained by
$$\phi_{eq} = \begin{cases} \phi_1 - \phi_2, & \phi_1 - \phi_2 > 0 \\ \phi_1 - \phi_2 + 2\pi, & \phi_1 - \phi_2 < 0, \end{cases} \tag{21}$$
where ϕ1 is the phase corresponding to the fringe period λ1, and ϕ2 is the phase corresponding to λ2. The equivalent phase ϕeq is unambiguous in the entire range of the projector, but it cannot be directly regarded as the unwrapped phase because it is noisy. ϕeq is only used to assist phase unwrapping. Hence, the unwrapped phase of λ1 can be obtained by

$$\Phi_1 = \phi_1 + 2\pi\,\mathrm{round}\left[\frac{1}{2\pi}\left(\frac{\lambda_{eq}}{\lambda_1}\phi_{eq} - \phi_1\right)\right]. \tag{22}$$

In Eq. (22), the round operation can eliminate errors less than 0.5. To correctly unwrap the phase, the deviation of the equivalent phase $\Delta\phi_{eq} = \hat{\phi}_{eq} - \phi_{eq}$, where $\hat{\phi}_{eq}$ is the measured equivalent phase and $\phi_{eq}$ is the actual equivalent phase, should satisfy

$$\frac{\lambda_{eq}}{\lambda_1}\Delta\phi_{eq} = \frac{\lambda_2}{\lambda_2 - \lambda_1}\Delta\phi_{eq} < \pi. \tag{23}$$

Fig. 3 Part of the unwrapped phase maps of different intervals of fringe periods ∆λ. The phase maps are from the measurement results of a jade statue. (a) ∆λ = 1 pixel (fringe periods are 13, 14, and 15 pixels), K = 12; (b) ∆λ = 4 pixels (fringe periods are 16, 20, 24, 28, 32, and 36 pixels), K = 12.

Multi-frequency heterodyne phase unwrapping is sensitive to intensity noise because the deviation of the equivalent phase ∆ϕeq is multiplied by λ2 / (λ2 − λ1) in Eq. (23), which makes the errors difficult to eliminate by the round operation in Eq. (22). The conclusion is similar for three or more frequencies [27]. In general, adjacent natural numbers are selected as the fringe periods because fewer fringe periods are then required to cover the entire range of the projector. For instance, 13, 14, and 15 are chosen as the fringe periods when the horizontal resolution of the projector Wp = 1280 (the equivalent fringe period λeq = 1365). In this case, ∆ϕeq is multiplied by 14 and 15 during phase unwrapping. For opaque materials, this may not be a problem because the phase error is low. However, the phase error is large when translucent objects are measured; thus, the interval between λ1 and λ2 should be large enough to correctly unwrap the phase. In summary, increasing the interval of fringe periods ∆λ improves the robustness of multi-frequency heterodyne phase unwrapping for translucent materials, as demonstrated in Fig. 3. In addition, the temporal noise reduction described in Section 2.3 improves the robustness of phase unwrapping because the intensity noise is reduced.
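The two-frequency case of Eqs. (20)–(22) can be sketched as follows. This is a minimal illustration only; the experiments use six fringe periods and the cascaded multi-frequency scheme of [21,22], and `np.mod` realizes the case distinction of Eq. (21) in one step.

```python
import numpy as np

def heterodyne_unwrap(phi1, phi2, lam1, lam2):
    """Two-frequency heterodyne unwrapping (Eqs. (20)-(22)).
    phi1, phi2: wrapped phases in [0, 2*pi) for periods lam1 < lam2."""
    lam_eq = lam1 * lam2 / (lam2 - lam1)       # equivalent period (Eq. (20))
    phi_eq = np.mod(phi1 - phi2, 2.0 * np.pi)  # equivalent phase (Eq. (21))
    # fringe order from the (noisy) equivalent phase, rounded per Eq. (22)
    order = np.round(((lam_eq / lam1) * phi_eq - phi1) / (2.0 * np.pi))
    return phi1 + 2.0 * np.pi * order

# noiseless check over one equivalent period (lam_eq = 80 for 16 and 20)
x = np.linspace(0.0, 79.0, 200)
phi1 = np.mod(2.0 * np.pi * x / 16.0, 2.0 * np.pi)
phi2 = np.mod(2.0 * np.pi * x / 20.0, 2.0 * np.pi)
Phi1 = heterodyne_unwrap(phi1, phi2, 16.0, 20.0)
```

With noise added to phi1 and phi2, the rounding fails exactly when Eq. (23) is violated, which is why a large ∆λ helps.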

2.5 Geometric error compensation

In this research, a novel geometric error compensation method that does not require any etalons or known material parameters is proposed. Fringe patterns with multiple fringe periods are projected to establish the relationship between fringe period and depth offset. Meanwhile, the measured object and the measurement system remain stationary during the projection. For each point of the object, the depth offset ε(λ) corresponding to the fringe period λ is defined as

$$\varepsilon(\lambda) = \hat{z}(\lambda) - z, \tag{24}$$
where $\hat{z}(\lambda)$ is the measured depth that contains the depth offset, and z is the ground-truth depth. On the basis of the measured depth displayed in Fig. 2(b), the depth offset ε(λ) can be approximated by the following empirical expression
$$\varepsilon(\lambda) = p_1\lambda^2 + p_2\lambda, \tag{25}$$
where p1 and p2 are the coefficients of the second-order polynomial. The depth offset ε(λ) cannot be directly obtained from Eq. (24) because the ground-truth depth is unknown. Nevertheless, the ground-truth depth can be eliminated by subtracting the depth offset of one fringe period from that of another to acquire the relative depth offset, which can also be expressed as the depth difference between two fringe periods as follows
$$\varepsilon(\lambda) - \varepsilon(\lambda_1) = [\hat{z}(\lambda) - z] - [\hat{z}(\lambda_1) - z] = \hat{z}(\lambda) - \hat{z}(\lambda_1), \tag{26}$$
where λ1 is the fringe period of the highest-frequency fringe pattern in the multi-frequency fringe patterns. The relative depth offsets can be obtained from the measurement results of various fringe periods. According to Eq. (25), the relative depth offset can also be expressed as

$$\varepsilon(\lambda) - \varepsilon(\lambda_1) = p_1(\lambda^2 - \lambda_1^2) + p_2(\lambda - \lambda_1). \tag{27}$$

Combining Eq. (26) with Eq. (27), p1 and p2 can be estimated by projecting multi-frequency fringe patterns and solving the following linear least-squares problem

$$\begin{bmatrix} \lambda_2^2 - \lambda_1^2 & \lambda_2 - \lambda_1 \\ \lambda_3^2 - \lambda_1^2 & \lambda_3 - \lambda_1 \\ \vdots & \vdots \\ \lambda_N^2 - \lambda_1^2 & \lambda_N - \lambda_1 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} \hat{z}(\lambda_2) - \hat{z}(\lambda_1) \\ \hat{z}(\lambda_3) - \hat{z}(\lambda_1) \\ \vdots \\ \hat{z}(\lambda_N) - \hat{z}(\lambda_1) \end{bmatrix}, \tag{28}$$
where λi represents the ith fringe period (i = 2, 3,…, N, N > 2). The fringe periods utilized in multi-frequency heterodyne phase unwrapping can be directly applied to solve Eq. (28); hence, no additional fringe patterns need to be projected. Combining with Eq. (24), the depth offsets of all fringe periods are compensated for and averaged to obtain the compensated depth z′ as follows
$$z' = \frac{1}{N}\sum_{i=1}^{N}\left[\hat{z}(\lambda_i) - \varepsilon(\lambda_i)\right] = \frac{1}{N}\sum_{i=1}^{N}\left[\hat{z}(\lambda_i) - \hat{p}_1\lambda_i^2 - \hat{p}_2\lambda_i\right], \tag{29}$$
where $\hat{p}_1$ and $\hat{p}_2$ are the values of p1 and p2 estimated by solving the linear least-squares problem. The compensated depth of each fringe period is computed separately, and then all of them are averaged to further reduce random errors. In practice, the relative depth offsets (depth differences) are smoothed to mitigate the impact of random errors on geometric error compensation.
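Per point, Eqs. (28) and (29) reduce to a small linear least-squares problem. A hedged sketch follows; the smoothing of the depth differences mentioned above is omitted, and the function name is illustrative.

```python
import numpy as np

def compensate_depth(z_hat, periods):
    """Per-point geometric error compensation (Eqs. (25)-(29)).
    z_hat: measured depths for the N fringe periods, shape (N,);
    periods: the N fringe periods, periods[0] = lambda_1 (highest frequency)."""
    lam = np.asarray(periods, dtype=float)
    z_hat = np.asarray(z_hat, dtype=float)
    # design matrix and right-hand side of Eq. (28)
    A = np.column_stack([lam[1:] ** 2 - lam[0] ** 2, lam[1:] - lam[0]])
    b = z_hat[1:] - z_hat[0]
    (p1, p2), *_ = np.linalg.lstsq(A, b, rcond=None)
    # subtract the fitted offsets and average over all periods (Eq. (29))
    return np.mean(z_hat - p1 * lam ** 2 - p2 * lam), (p1, p2)
```

For noiseless depths that exactly follow the quadratic model of Eq. (25), the fit recovers p1 and p2 and the compensated depth equals the ground truth; with real data, the averaging over N periods further suppresses random errors.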

3. Experiments

The experimental system consists of a digital projector with a resolution of 1280 × 800 and two monochrome CCD cameras (Basler acA1600-20gm) with a resolution of 1626 × 1236. The projector is equipped with a blue LED light source at 465 nm. A sphere made of polyamide (nylon), a type of synthetic resin, and a jade statue are investigated in the experiment, as exhibited in Fig. 4. The nominal diameter of the sphere is 25.4 mm, and the material of the region in the red circle is nearly opaque, as shown in Fig. 4(b).

Fig. 4 Measured objects. (a) Polyamide sphere; (b) Jade statue (the material of the region in the red circle is near opaque).

The LSFs of the measured translucent materials are first acquired to establish the relationship between fringe period and phase error. To measure the LSF, a light stripe with a width of 1 pixel is projected onto the object, and an image is captured by the camera. To eliminate the impact of ambient illumination, an image without the projected light stripe is captured and subtracted from the image with the stripe to acquire the difference image. The intensity profile of a scanline in the difference image is regarded as the measured discrete LSF ℒ(n), and each pixel of the scanline is a sample point of the LSF. The sample points far away from the peak are discarded. The LSF of an opaque material is also measured for comparison. The measured LSFs and their DFT spectra are shown in Fig. 5. The number of sample points N is 21 for the opaque material, 141 for polyamide, and 201 for jade.
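The LSF acquisition procedure above can be sketched as follows; the function and parameter names are illustrative, not from the paper, and peak detection by a simple argmax is an assumption of this sketch.

```python
import numpy as np

def extract_lsf(stripe_img, ambient_img, row, half_width):
    """Estimate a discrete LSF from a 1-pixel projected stripe:
    subtract the ambient-only image, take one scanline, and keep
    only the samples around the peak."""
    diff = stripe_img.astype(float) - ambient_img.astype(float)
    line = diff[row]                       # intensity profile of one scanline
    peak = int(np.argmax(line))            # assumed stripe center
    lo = max(peak - half_width, 0)
    hi = min(peak + half_width + 1, line.size)
    return line[lo:hi]                     # N = 2*half_width + 1 samples
```

The returned profile plays the role of ℒ(n) in Eq. (14); half_width controls N (e.g., 10 for the opaque material, 70 for polyamide, 100 for jade).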

Fig. 5 Measured LSFs and their DFT spectra. (a) LSF of opaque material; (b) LSF of polyamide; (c) LSF of jade; (d) DFT spectrum of (a); (e) DFT spectrum of (b); (f) DFT spectrum of (c).

Then, the modulation function b(λ) and the unwrapped phase error can be simulated. The modulation function is simulated by Eqs. (13) and (15) (λ = 10, 11, 12, …, 70), and the undegraded modulation B is set at 1 because the value of B is unknown. In Eq. (16), fc is 1898.61 and fp is 1753.89. To validate the simulation, the modulation function is also directly measured by projecting four-step phase-shifting fringe patterns with different fringe periods onto the translucent objects and computing the modulation corresponding to each fringe period by Eq. (2). For comparison, the simulated and measured modulations are divided by their respective maximum modulations across the various fringe periods to acquire the relative modulations, as depicted in Fig. 6(a).

Fig. 6 Comparison of the modulation and normalized unwrapped phase error between measurement and simulation. The modulations are divided by the maximum, and the phase errors are divided by the minimum to acquire the relative modulations and phase errors for comparison between measurement and simulation. (a) Measured and simulated relative modulations across various fringe periods; (b) Measured and simulated relative normalized unwrapped phase errors across various fringe periods.

Once the simulated modulation is acquired, the standard deviation of the simulated normalized unwrapped phase error can be computed by Eq. (7), with the standard deviation of the intensity noise σn set at 1 because σn is not measured in the experiment. The simulated phase error is compared with the measured phase error, which is represented by the repeatability of the normalized unwrapped phase. For comparison, the simulated and measured phase errors are divided by their respective minimum phase errors across the various fringe periods to acquire the relative phase errors, as displayed in Fig. 6(b).

According to the measurement and simulation results displayed in Fig. 6, the normalized unwrapped phase error always increases with the fringe period for opaque materials. For translucent materials, the modulation is approximately proportional to the fringe period. However, the phase error shows no obvious regularity, and the ratio of the maximum phase error to the minimum phase error is quite small. Considering that objects with both translucent and opaque materials may be measured, the fringe period should not be very large. Therefore, the fringe periods are chosen to be 16, 20, 24, 28, 32, and 36 pixels in the following experiments. Hence, ∆λ = 4 pixels and the equivalent fringe period λeq = 2016 pixels.

To prove that temporal noise reduction can suppress random errors, the phase error corresponding to various numbers of averaged frames K (K = 1, 2, 3, …, 12) is acquired by measuring the repeatability of the phase between two independent measurements, with the fringe period λ = 16 pixels. The standard deviation of the simulated phase error is computed by Eq. (19). Given that the standard deviation of the intensity noise σn is unknown, the measured and simulated phase errors are divided by their respective maximum phase errors across the various K to acquire the relative phase errors, as displayed in Fig. 7. The measurement and simulation results in Fig. 7 demonstrate that temporal noise reduction can significantly reduce random errors.

Fig. 7 Measured and simulated relative phase errors across various K.

To verify that increasing the interval of fringe periods and temporal noise reduction can improve the robustness of multi-frequency heterodyne phase unwrapping, the jade statue is measured with different intervals of fringe periods ∆λ and numbers of averaged frames K, as exhibited in Fig. 8. The intervals of fringe periods ∆λ are 1 pixel (fringe periods are 13, 14, and 15 pixels) and 4 pixels (fringe periods are 16, 20, 24, 28, 32, and 36 pixels), and the numbers of averaged frames K are 1 and 12. The measurement results demonstrate that increasing either the interval of fringe periods or the number of averaged frames improves the robustness of phase unwrapping. Moreover, several regions in Figs. 8(a)–8(c) can be measured because their materials are nearly opaque, as shown in Fig. 4(b).

Fig. 8 3D shape measurement results of the jade statue by different ∆λ and K. (a) ∆λ = 1 pixel, K = 1; (b) ∆λ = 4 pixels, K = 1; (c) ∆λ = 1 pixel, K = 12; (d) ∆λ = 4 pixels, K = 12.

To evaluate the effectiveness of geometric error compensation, the polyamide sphere and the jade statue are measured by the proposed method. Multi-frequency four-step phase-shifting fringe patterns are projected to acquire 3D point clouds separately. The fringe periods are the same as those used in multi-frequency heterodyne phase unwrapping. Hence, N = 6 in Eqs. (28) and (29). The number of averaged frames is 4 for the polyamide sphere and 12 for the jade statue. For each point, p1 and p2 are estimated using Eq. (28), and the depth offsets are compensated by Eq. (29) to acquire the compensated point cloud.

Fig. 9 Deviations of sphere fitting. The data in the central region is missing due to the highlight. (a) Deviations of the uncompensated point cloud (λ = 16 pixels); (b) Deviations of the uncompensated point cloud (λ = 36 pixels); (c) Deviations of the uncompensated point cloud (average of all fringe periods); (d) Deviations of the compensated point cloud.

To evaluate the accuracy of measuring the polyamide sphere, the uncompensated and compensated point clouds are fitted by a sphere. The deviations of sphere fitting are exhibited in Fig. 9, and the corresponding diameters of the fitted spheres, mean absolute errors (MAEs), and root mean square errors (RMSEs) of sphere fitting are displayed in Table 1. The data in the central region are missing because of the highlight. The deviations shown in Fig. 9 and the diameters of the fitted spheres displayed in Table 1 demonstrate that the proposed geometric compensation method can significantly reduce the geometric errors and that larger fringe periods induce larger geometric errors. In addition, comparing Figs. 9(c) and 9(d) with Figs. 9(a) and 9(b) shows that random errors are suppressed by averaging the measurement results of the multi-frequency fringe patterns.

Table 1. Fitting sphere diameter and errors (mm)

Fig. 10 Deviations between the measured point clouds and the reference. (a) Deviations between the uncompensated point cloud (λ = 16 pixels) and the reference; (b) Deviations between the uncompensated point cloud (λ = 36 pixels) and the reference; (c) Deviations between the compensated point cloud and the reference. The errors in the opaque regions are larger than those in the translucent regions because the translucent regions occupy most of the jade statue, and ICP registration is performed between the measurement point cloud and the reference point cloud for comparison.

To evaluate the measurement accuracy of the jade statue, the statue is coated with powder to acquire the reference point cloud. The uncompensated and compensated point clouds are then compared with the reference point cloud to evaluate the accuracy, as exhibited in Fig. 10, and the corresponding MAEs and RMSEs are displayed in Table 2. Because the measured object is removed from the measurement position before coating, the uncompensated and compensated point clouds are aligned to the reference point cloud by the iterative closest point (ICP) algorithm [28,29] for comparison. The errors in the opaque region [red circle in Fig. 4(b)] should be small. However, Figs. 10(a) and 10(b) show that the errors in the opaque regions are larger than those in the translucent regions because the translucent regions occupy most of the jade statue, and ICP registration between the measured point cloud and the reference point cloud reduces the deviations in the translucent regions. The deviations shown in Fig. 10 do not express the depth offset but indicate the changes of the shape. The deviations and measurement errors displayed in Fig. 10 and Table 2 demonstrate that the geometric errors are significantly reduced in the compensated point cloud and that a large fringe period increases the geometric errors.

Table 2. Measurement errors of the jade statue (mm)

4. Conclusion

This research presents a high-accuracy 3D shape measurement method for translucent objects based on FPP, which copes with measurement errors from different sources. The relationship between fringe period and phase error is investigated to determine the fringe periods. Random errors are suppressed by temporal noise reduction. The robustness of multi-frequency heterodyne phase unwrapping is improved by increasing the interval of fringe periods along with temporal noise reduction. Geometric errors are compensated for by establishing the relationship between fringe period and depth offset. Experimental results demonstrate that the proposed method can obtain complete measurement results and significantly reduce the overall error for measuring translucent objects. Compared with high-frequency pattern projection methods, the proposed method uses fringe patterns of relatively low frequency to maintain the fringe modulation. Compared with other geometric error compensation methods, the proposed method requires neither etalons of the same material as the measured object nor material parameters. In addition, the hardware of the proposed method is the same as that of conventional stereo vision based FPP; therefore, a conventional stereo vision based FPP system can acquire complete and accurate 3D shape measurement results of translucent objects without hardware modification.

Funding

National Natural Science Foundation of China (NSFC) (61735003, 61875007); Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R02); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc).

References

1. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000). [CrossRef]  

2. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

3. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

4. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1829–1836. [CrossRef]  

5. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” in Proceedings of ACM SIGGRAPH (ACM, 2006), pp. 935–944. [CrossRef]  

6. T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

7. M. Gupta and S. K. Nayar, “Micro phase shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

8. M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “A practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus,” Int. J. Comput. Vis. 102(1–3), 33–55 (2013). [CrossRef]  

9. T. Kobayashi, T. Higo, M. Yamasaki, K. Kobayashi, and A. Katayama, “Accurate and practical 3D measurement for translucent objects by dashed lines and complementary Gray code projection,” in Proceedings of International Conference on 3D Vision (IEEE, 2015), pp. 189–197. [CrossRef]  

10. P. Lutzke, P. Kühmstedt, and G. Notni, “Measuring error compensation on three-dimensional scans of translucent objects,” Opt. Eng. 50(6), 063601 (2011). [CrossRef]  

11. S. N. Jensen, J. Wilm, and H. Aanæs, “An error analysis of structured light scanning of biological tissue,” in Proceedings of the Scandinavian Conference on Image Analysis (Springer, 2017), pp. 135–145. [CrossRef]  

12. L. Rao and F. Da, “Local blur analysis and phase error correction method for fringe projection profilometry systems,” Appl. Opt. 57(15), 4267–4276 (2018). [CrossRef]   [PubMed]  

13. M. O’Toole, J. Mather, and K. N. Kutulakos, “3D shape and indirect appearance by structured light transport,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253. [CrossRef]  

14. M. O’Toole, S. Achar, S. G. Narasimhan, and K. N. Kutulakos, “Homogeneous codes for energy-efficient illumination and imaging,” in Proceedings of ACM SIGGRAPH (ACM, 2015), paper 35. [CrossRef]  

15. H. Zhao, Y. Xu, H. Jiang, and X. Li, “3D shape measurement in the presence of strong interreflections by epipolar imaging and regional fringe projection,” Opt. Express 26(6), 7117–7131 (2018). [CrossRef]   [PubMed]  

16. C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “Wavelength dependency of optical 3D measurements at translucent objects using fringe pattern projection,” Proc. SPIE 10220, 1022007 (2017). [CrossRef]  

17. S. Heist, C. Zhang, K. Reichwald, P. Kühmstedt, G. Notni, and A. Tünnermann, “5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light,” Opt. Express 26(18), 23366–23379 (2018). [CrossRef]   [PubMed]  

18. C. Reich, R. Ritter, and J. Thesing, “3-D shape measurement of complex objects by combining photogrammetry and fringe projection,” Opt. Eng. 39(1), 224–231 (2000). [CrossRef]  

19. X. Han and P. Huang, “Combined stereovision and phase shifting method: A new approach for 3D shape measurement,” Proc. SPIE 7389, 73893C (2009). [CrossRef]  

20. D. Li, H. Zhao, and H. Jiang, “Fast phase-based stereo matching method for 3D shape measurement,” in Proceedings of International Symposium on Optomechatronic Technologies (IEEE, 2011), pp. 1–5.

21. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

22. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]  

23. P. Lutzke, P. Kühmstedt, and G. Notni, “Fast error simulation of optical 3D measurements at translucent objects,” Proc. SPIE 8493, 84930U (2012). [CrossRef]  

24. P. Lutzke, S. Heist, P. Kühmstedt, R. Kowarschik, and G. Notni, “Monte Carlo simulation of three-dimensional measurements of translucent objects,” Opt. Eng. 54(8), 084111 (2015). [CrossRef]  

25. K. Happel, E. Dörsam, and P. Urban, “Measuring isotropic subsurface light transport,” Opt. Express 22(8), 9048–9062 (2014). [CrossRef]   [PubMed]  

26. Y. Wang, H. Zhao, H. Jiang, and X. Li, “Defocusing parameter selection strategies based on PSF measurement for square-binary defocusing fringe projection profilometry,” Opt. Express 26(16), 20351–20367 (2018). [CrossRef]   [PubMed]  

27. C. Reich, R. Ritter, and J. Thesing, “White light heterodyne principle for 3D-measurement,” Proc. SPIE 3100, 236–244 (1997). [CrossRef]  

28. S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in Proceedings of International Conference on 3-D Digital Imaging and Modeling (IEEE, 2001), pp. 145–152. [CrossRef]  

29. T. Zinßer, J. Schmidt, and H. Niemann, “A refined ICP algorithm for robust 3-D correspondence estimation,” in Proceedings of International Conference on Image Processing (IEEE, 2003), pp. 695–698. [CrossRef]  

[Crossref]

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” in Proceedings of ACM SIGGRAPH (ACM, 2006), pp. 935–944.
[Crossref]

T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

M. Gupta and S. K. Nayar, “Micro phase shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

S. N. Jensen, J. Wilm, and H. Aanæs, “An error analysis of structured light scanning of biological tissue,” in Proceedings of the Scandinavian Conference on Image Analysis (Springer, 2017), pp. 135–145.
[Crossref]

T. Kobayashi, T. Higo, M. Yamasaki, K. Kobayashi, and A. Katayama, “Accurate and practical 3D measurement for translucent objects by dashed lines and complementary Gray code projection,” in Proceedings of International Conference on 3D Vision (IEEE, 2015), pp. 189–197.
[Crossref]



Figures (10)

Fig. 1 Schematics of 3D shape measurement of opaque and translucent objects by FPP. (a) 3D shape measurement of opaque objects by FPP; (b) 3D shape measurement of translucent objects by FPP.

Fig. 2 Comparison of fringe intensity profile and measured depth between opaque and translucent materials. The data are from the measurement results of an opaque object (ABS resin) and a translucent object (polyamide). (a) Fringe intensity profile; the modulation for the translucent material is significantly degraded. (b) Measured depth across various fringe periods; the measured depth is constant for the opaque material but varies with fringe period for the translucent material (the depth is averaged in a 10 × 10 window to suppress random errors).

Fig. 3 Part of the unwrapped phase maps for different intervals of fringe periods ∆λ. The phase maps are from the measurement results of a jade statue. (a) ∆λ = 1 pixel (fringe periods of 13, 14, and 15 pixels), K = 12; (b) ∆λ = 4 pixels (fringe periods of 16, 20, 24, 28, 32, and 36 pixels), K = 12.

Fig. 4 Measured objects. (a) Polyamide sphere; (b) Jade statue (the material of the region in the red circle is nearly opaque).

Fig. 5 Measured LSFs and their DFT spectra. (a) LSF of the opaque material; (b) LSF of polyamide; (c) LSF of jade; (d) DFT spectrum of (a); (e) DFT spectrum of (b); (f) DFT spectrum of (c).

Fig. 6 Comparison of the modulation and normalized unwrapped phase error between measurement and simulation. The modulations are divided by their maximum, and the phase errors by their minimum, to obtain relative values that can be compared between measurement and simulation. (a) Measured and simulated relative modulations across various fringe periods; (b) Measured and simulated relative normalized unwrapped phase errors across various fringe periods.

Fig. 7 Measured and simulated relative phase errors across various K.

Fig. 8 3D shape measurement results of the jade statue for different ∆λ and K. (a) ∆λ = 1 pixel, K = 1; (b) ∆λ = 4 pixels, K = 1; (c) ∆λ = 1 pixel, K = 12; (d) ∆λ = 4 pixels, K = 12.

Fig. 9 Deviations of sphere fitting. The data in the central region are missing because of the highlight. (a) Deviations of the uncompensated point cloud (λ = 16 pixels); (b) deviations of the uncompensated point cloud (λ = 36 pixels); (c) deviations of the uncompensated point cloud (average over all fringe periods); (d) deviations of the compensated point cloud.

Fig. 10 Deviations between the measured point clouds and the reference. (a) Deviations between the uncompensated point cloud (λ = 16 pixels) and the reference; (b) deviations between the uncompensated point cloud (λ = 36 pixels) and the reference; (c) deviations between the compensated point cloud and the reference. The errors in the opaque regions are larger than those in the translucent regions because the translucent regions occupy most of the jade statue, so the ICP registration performed between the measured and reference point clouds is dominated by them.

Tables (2)

Table 1 Fitting sphere diameter and errors (mm)

Table 2 Measurement errors of the jade statue (mm)

Equations (29)

$$I_i = A + B\cos\left(\phi + \frac{\pi}{2}i\right), \tag{1}$$

$$B = \frac{1}{2}\sqrt{(I_3 - I_1)^2 + (I_0 - I_2)^2}, \tag{2}$$

$$\phi = \arctan\frac{I_3 - I_1}{I_0 - I_2}. \tag{3}$$

$$\sigma_\phi = \frac{\sigma_n}{\sqrt{2}\,B}. \tag{4}$$

$$F = W_p f = \frac{W_p}{\lambda}, \tag{5}$$

$$\sigma_\Phi = \frac{\sigma_n}{\sqrt{2}\,B F}. \tag{6}$$

$$\sigma_\Phi(\lambda) = \frac{\sigma_n \lambda}{\sqrt{2}\,b(\lambda)\,W_p}. \tag{7}$$

$$g(x) = f(x) * h(x) = \left[A + B\cos(\omega_0 x)\right] * L(x), \tag{8}$$

$$g(x) = \mathcal{F}^{-1}\left[F(\omega)H(\omega)\right], \tag{9}$$

$$F(\omega) = 2\pi A\,\delta(\omega) + \pi B\left[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)\right]. \tag{10}$$

$$G(\omega) = F(\omega)H(\omega) = 2\pi A\,\delta(\omega)H(0) + \pi B\left[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)\right]H(\omega_0). \tag{11}$$

$$g(x) = \mathcal{F}^{-1}\left[G(\omega)\right] = H(0)A + H(\omega_0)B\cos(\omega_0 x). \tag{12}$$

$$b(\lambda) = H(\omega_0)B. \tag{13}$$

$$H(k) = \mathrm{DFT}\left[L(n)\right], \tag{14}$$

$$H(\omega_0) \approx \left|H\!\left(\frac{N_1}{\lambda_c}\right)\right|, \tag{15}$$

$$\lambda_c = \frac{f_c}{f_p}\lambda, \tag{16}$$

$$\sigma_\Phi(\lambda) \approx \frac{\sigma_n \lambda}{\sqrt{2}\left|H\!\left(\dfrac{N_1}{\lambda_c}\right)\right| B\, W_p}, \tag{17}$$

$$\bar{I} = \frac{1}{K}\sum_{i=1}^{K} I_i. \tag{18}$$

$$\sigma_n' = \frac{\sigma_n}{\sqrt{K}}, \tag{19}$$

$$\lambda_{\mathrm{eq}} = \frac{\lambda_1 \lambda_2}{\lambda_2 - \lambda_1}, \tag{20}$$

$$\phi_{\mathrm{eq}} = \begin{cases} \phi_1 - \phi_2, & \phi_1 - \phi_2 > 0 \\ \phi_1 - \phi_2 + 2\pi, & \phi_1 - \phi_2 < 0 \end{cases}, \tag{21}$$

$$\Phi_1 = \phi_1 + 2\pi\,\mathrm{round}\!\left[\frac{1}{2\pi}\left(\frac{\lambda_{\mathrm{eq}}}{\lambda_1}\phi_{\mathrm{eq}} - \phi_1\right)\right]. \tag{22}$$

$$\frac{\lambda_{\mathrm{eq}}}{\lambda_1}\Delta\phi_{\mathrm{eq}} = \frac{\lambda_2}{\lambda_2 - \lambda_1}\Delta\phi_{\mathrm{eq}} < \pi. \tag{23}$$

$$\varepsilon(\lambda) = \hat{z}(\lambda) - z, \tag{24}$$

$$\varepsilon(\lambda) = p_1\lambda^2 + p_2\lambda, \tag{25}$$

$$\varepsilon(\lambda) - \varepsilon(\lambda_1) = \left[\hat{z}(\lambda) - z\right] - \left[\hat{z}(\lambda_1) - z\right] = \hat{z}(\lambda) - \hat{z}(\lambda_1), \tag{26}$$

$$\varepsilon(\lambda) - \varepsilon(\lambda_1) = p_1\left(\lambda^2 - \lambda_1^2\right) + p_2\left(\lambda - \lambda_1\right). \tag{27}$$

$$\begin{bmatrix} \lambda_2^2 - \lambda_1^2 & \lambda_2 - \lambda_1 \\ \lambda_3^2 - \lambda_1^2 & \lambda_3 - \lambda_1 \\ \vdots & \vdots \\ \lambda_N^2 - \lambda_1^2 & \lambda_N - \lambda_1 \end{bmatrix}\begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} \hat{z}(\lambda_2) - \hat{z}(\lambda_1) \\ \hat{z}(\lambda_3) - \hat{z}(\lambda_1) \\ \vdots \\ \hat{z}(\lambda_N) - \hat{z}(\lambda_1) \end{bmatrix}, \tag{28}$$

$$z = \frac{1}{N}\sum_{i=1}^{N}\left[\hat{z}(\lambda_i) - \varepsilon(\lambda_i)\right] = \frac{1}{N}\sum_{i=1}^{N}\left[\hat{z}(\lambda_i) - \hat{p}_1\lambda_i^2 - \hat{p}_2\lambda_i\right], \tag{29}$$
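The four-step phase-shifting relations at the head of the equation list — the intensity model $I_i = A + B\cos(\phi + \tfrac{\pi}{2}i)$ together with the modulation and arctangent formulas — can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the synthetic fringe parameters (A = 128, B = 50, period 32 pixels) are assumptions chosen for the demo.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Recover the modulation B and the wrapped phase phi from four
    fringe images shifted in phase by pi/2 each."""
    B = 0.5 * np.hypot(I3 - I1, I0 - I2)   # B = (1/2) sqrt((I3-I1)^2 + (I0-I2)^2)
    phi = np.arctan2(I3 - I1, I0 - I2)     # quadrant-aware arctan, wrapped to (-pi, pi]
    return B, phi

# Synthetic demo: one row of a fringe pattern (values are assumptions).
x = np.arange(256)
A_true, B_true, lam = 128.0, 50.0, 32.0
phi_true = 2.0 * np.pi * x / lam
frames = [A_true + B_true * np.cos(phi_true + np.pi / 2.0 * i) for i in range(4)]
B, phi = four_step_phase(*frames)
```

Using `arctan2` rather than a plain arctangent keeps the quadrant information, so the recovered phase is the wrapped phase directly.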
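The geometric-error compensation at the end of the equation list — fitting the depth-offset model $\varepsilon(\lambda) = p_1\lambda^2 + p_2\lambda$ to depth differences between fringe periods, then averaging the compensated depths — reduces to a small linear least-squares problem. The sketch below is a hypothetical helper (`compensate_depth` is not a name from the paper); the true depth and coefficient values are made-up test data, while the fringe periods 16–36 pixels match those listed in Fig. 3.

```python
import numpy as np

def compensate_depth(lams, z_hat):
    """Estimate p1, p2 of the offset model eps(lam) = p1*lam**2 + p2*lam from
    depths measured at several fringe periods, and return the compensated depth.
    Differencing against the first period cancels the unknown true depth z."""
    lams = np.asarray(lams, dtype=float)
    z_hat = np.asarray(z_hat, dtype=float)
    M = np.column_stack([lams[1:] ** 2 - lams[0] ** 2,   # lam_i^2 - lam_1^2
                         lams[1:] - lams[0]])            # lam_i   - lam_1
    d = z_hat[1:] - z_hat[0]                             # z_hat(lam_i) - z_hat(lam_1)
    (p1, p2), *_ = np.linalg.lstsq(M, d, rcond=None)     # least-squares fit of p1, p2
    z = np.mean(z_hat - p1 * lams ** 2 - p2 * lams)      # average compensated depth
    return z, p1, p2

# Synthetic check (all values are assumptions): offsets generated exactly
# from the quadratic model around a true depth of 10 mm.
lams = np.array([16.0, 20.0, 24.0, 28.0, 32.0, 36.0])
z_true, p1_true, p2_true = 10.0, 2e-4, 3e-3
z_hat = z_true + p1_true * lams ** 2 + p2_true * lams
z, p1, p2 = compensate_depth(lams, z_hat)
```

With noise-free synthetic data the fit recovers the coefficients exactly, so the compensated depth equals the true depth; on real measurements the fit averages out the per-period residuals instead.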
