3D shape measurement of translucent objects based on Fourier single-pixel imaging in projector-camera system

Open Access

Abstract

3D shape measurement by structured light is a popular technique for recovering object surfaces. However, the structured light technique assumes that scene points are directly illuminated by the light source(s). Consequently, global illumination effects, such as subsurface scattering in translucent objects, may cause errors in the recovered 3D shapes. In this research, we propose a 3D shape measurement method for translucent objects based on the Fourier single-pixel imaging (FSI) technique. The 3D shapes of translucent objects are reconstructed through stereo matching of the direct illumination light, which is separated from the subsurface scattering light on the surface. Experimental results show that the proposed method can separate the direct illumination light from the subsurface scattering light. The feasibility and accuracy of the method are analyzed, and qualitative and quantitative results are provided.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Structured light is one of the most popular optical 3D shape measurement techniques [1]. A structured light pattern is projected onto the surface of the object, and the scene is captured by a calibrated camera-projector pair [2]. The 3D information of the object surface can be reconstructed by establishing the correspondence between camera and projector pixels. However, for translucent objects, the incident light scatters inside the material and the directly reflected signal at the surface is weak, which makes 3D reconstruction of the object surface extremely difficult.

Several studies on 3D shape measurement of translucent objects have been reported. In the structured light technique, the presence of subsurface scattering and interreflections hinders the detection of the light that directly interacts with the objects. Nayar et al. used high-frequency illumination to separate the direct and global components of a scene [3]. Several subsequent methods that separate indirect illumination by phase shifting [4,5] build on this idea: high-frequency sinusoidal patterns are used to modulate low-frequency patterns and thereby suppress subsurface scattering. Gu et al. studied spatial frequency multiplexing of illumination patterns to separate the direct and global light paths of different illumination sources [4]. Chen et al. used this approach for 3D scanning by phase shifting, in which the effect of subsurface scattering is reduced by using high-frequency patterns [5]. Gupta et al. proposed Micro Phase Shifting to avoid the effects of global illumination [6]: all frequencies are placed in a narrow band so that their amplitudes are approximately the same and can be treated as a single unknown. However, high-frequency patterns cannot completely eliminate the effect of subsurface scattering. These methods can therefore be combined with polarization difference imaging (PDI), which exploits the fact that multiply scattered light becomes depolarized [7]. However, PDI adds complexity to the experimental setup.

Meanwhile, the 3D reconstruction errors of translucent objects can be reduced by establishing an error compensation model. Two main approaches exist: one measures various materials before and after coating and establishes error compensation models for different roughnesses [8]; the other requires the bidirectional surface scattering reflectance distribution function (BSSRDF) of the object [9]. The former assumes that the measured translucent material is homogeneous, whereas the latter can only deal with short-range scattering effects.

In the single-pixel imaging (SPI) technique, a programmable spatial light modulator (SLM) is used to display patterns, and a single-pixel detector without spatial resolution is used to capture the modulated information of a scene [10,11]. Zhang et al. presented the Fourier single-pixel imaging (FSI) technique, which obtains high-quality images by using four-step phase-shifting sinusoidal illumination to acquire the Fourier spectrum of the desired image [12].

In this research, we propose a 3D shape measurement method for translucent objects based on the Fourier single-pixel imaging (FSI) technique. We treat each camera pixel as a single-pixel detector that obtains the Fourier spectrum of the corresponding scene point. The inverse discrete Fourier transform (IDFT) is then applied to the obtained spectrum to reconstruct the final image, which consists of the light transport coefficients. The direct illumination and the subsurface scattering light can be separated from these coefficients, and the 3D shape can be reconstructed through stereo matching of the direct illumination light. The experimental results show that the proposed method acquires complete 3D shape measurements of translucent objects and significantly reduces measurement errors.

The rest of this paper is organized as follows. Section 2 explains the principles of the proposed method: phase analysis for measuring translucent objects by fringe projection profilometry (FPP), calculation of the light transport coefficients of translucent objects by FSI, separation of the direct illumination light and the subsurface scattering light on the translucent object surface, and subpixel localization of the direct illumination coordinates in FSI. Experimental results are presented and discussed in Section 3. Conclusions are given in Section 4.

2. Principles

The structured light 3D shape measurement system consists of a camera and a projector. The phase deviation caused by translucency is analyzed here using FPP as an example. A projector pixel directly illuminates a scene point, which is imaged at a camera pixel, and the correspondences between camera and projector pixels are established by projecting fringe patterns onto the scene. For translucent objects, phase offsets are introduced on the surface because of subsurface scattering.

2.1 Phase analysis for measuring translucent objects by FPP

In FPP, a calibrated projector-camera system is utilized: the projector projects fringe patterns onto the surface of the object to be measured, and the captured deformed fringe patterns contain the 3D shape information of the object [13-16]. N-step phase-shifting sinusoidal fringe patterns ${P_i}\;(i = 0,1,\ldots ,N - 1,\; N \ge 3)$ with spatial frequency f are expressed as

$${P_i}(x,y) = A(x,y) + B(x,y)\cos \left[ {{\phi_f}(x,y) + \frac{{2\pi }}{N}i} \right],$$
where $(x,y)$ represents the 2D Cartesian coordinates in the scene, $A(x,y)$ is the average intensity that is also known as the DC term, $B(x,y)$ is the amplitude, and ${\phi _f}(x,y)$ is the wrapped phase with spatial frequency f.

For opaque objects, as exhibited in Fig. 1(a), the total intensity response on the scene point can be described as

$${I_i}(x,y) = O(x,y) + R(x,y){P_i}(x,y),$$
where $O(x,y)$ represents the effect of environmental illumination and $R(x,y)$ represents the reflectance of the object surface in the scene, which can be retrieved by
$$R(x,y) = \frac{2}{N}\sqrt {{{\left[ {\sum\nolimits_{i = 0}^{N - 1} {{I_i}(x,y)\sin \left( {\frac{{2{\pi }}}{N}i} \right)} } \right]}^2} + {{\left[ {\sum\nolimits_{i = 0}^{N - 1} {{I_i}(x,y)\cos \left( {\frac{{2{\pi }}}{N}i} \right)} } \right]}^2}} ,$$
and the wrapped phase ${\phi _f}(x,y)$, which provides the 3D shape information, can be obtained by
$${\phi _f}(x,y) = -\arctan \left[ {\frac{{\sum\nolimits_{i = 0}^{N - 1} {{I_i}(x,y)\sin \left( {\frac{{2\pi }}{N}i} \right)} }}{{\sum\nolimits_{i = 0}^{N - 1} {{I_i}(x,y)\cos \left( {\frac{{2\pi }}{N}i} \right)} }}} \right].$$
Then the phase ambiguity is removed by phase unwrapping [17,18] and the absolute phase is used for 3D reconstruction [19].
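As a minimal illustration of Eqs. (3) and (4), the following Python/NumPy sketch (with hypothetical function and variable names, assuming a stack of N captured fringe images) recovers the reflectance and the wrapped phase per pixel; it is a generic implementation of standard N-step phase-shifting demodulation, not the authors' code.

```python
import numpy as np

def fpp_demodulate(images):
    """Recover the reflectance R and the wrapped phase phi_f from N-step
    phase-shifted fringe images I_i, following Eqs. (3) and (4)."""
    N = images.shape[0]                                  # images: (N, H, W) stack
    i = np.arange(N).reshape(-1, 1, 1)
    s = np.sum(images * np.sin(2 * np.pi * i / N), axis=0)
    c = np.sum(images * np.cos(2 * np.pi * i / N), axis=0)
    R = (2.0 / N) * np.sqrt(s**2 + c**2)                 # Eq. (3): surface reflectance
    phi = -np.arctan2(s, c)                              # Eq. (4): wrapped phase in (-pi, pi]
    return R, phi
```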

Fig. 1. Schematics of 3D shape measurement of opaque and translucent objects by projector-camera system. (a) For opaque materials, the pixel q on the digital micromirror device (DMD) plane of the projector illuminates point o on the object surface and then the light enters the camera pixel c; (b) For translucent materials, the illumination at points o1 and o3 diffuses inside the translucent object, overlaps with the direct illumination at point o2, and enters the camera pixel c.

However, for translucent objects, the incident light penetrates the surface, scatters, and then exits at various points around the point of incidence because of the subsurface scattering effect. As exhibited in Fig. 1(b), subsurface scattering causes a phase offset, which leads to geometric errors. In this situation, the total intensity response at a scene point can be written as

$${I^{\prime}_i}(x,y) = O(x,y) + R(x,y){P_i}(x,y) + \int\!\!\!\int_\Omega {h(x,y;{x_j},{y_j}){P_i}({x_j},{y_j})d{x_j}d{y_j}} ,$$
where $({x_j},{y_j})$ are the scene points around $(x,y)$ that scatter light toward $(x,y)$ due to the subsurface scattering effect, and $h(x,y;{x_j},{y_j})$ is the light transport coefficient that represents the attenuation from scene point $({x_j},{y_j})$ to $(x,y)$. $\Omega $ is the set of all scene points; ${P_i}$ is 0 if a scene point is not illuminated, and h is 0 if subsurface scattering has no effect on $(x,y)$.

For translucent objects, the phase ${\phi ^{\prime}_f}(x,y)$ can be expressed as

$$\begin{aligned} {\phi ^{\prime}_f}(x,y) &={-} \arctan \left[ {\frac{{\sum\nolimits_{i = 0}^{N - 1} {{{I^{\prime}}_i}(x,y)\sin \left( {\frac{{2\pi }}{N}i} \right)} }}{{\sum\nolimits_{i = 0}^{N - 1} {{{I^{\prime}}_i}(x,y)\cos \left( {\frac{{2\pi }}{N}i} \right)} }}} \right]\\ &= \arctan \left[ {\frac{{R(x,y)\sin {\phi _f}(x,y) + \int\!\!\!\int_\Omega {h(x,y;{x_j},{y_j})\sin {\phi _f}({x_j},{y_j})d{x_j}d{y_j}} }}{{R(x,y)\cos {\phi _f}(x,y) + \int\!\!\!\int_\Omega {h(x,y;{x_j},{y_j})\cos {\phi _f}({x_j},{y_j})d{x_j}d{y_j}} }}} \right]. \end{aligned}$$
Therefore, when FPP is used to measure translucent objects, the calculated phase is an aliased mixture of multiple phases, and subsequent multi-frequency heterodyne phase unwrapping is likely to fail. The failure of phase unwrapping leads to incomplete 3D shape measurements of translucent objects, and the phase offset shifts the reconstructed 3D points away from their actual positions. In this research, the FSI technique is introduced to solve this phase aliasing problem.
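To make the phase-aliasing effect of Eq. (6) concrete, the following sketch adds the direct phasor and a few subsurface-scattering phasors and compares the biased phase with the true one. All numbers are illustrative assumptions, not measured values.

```python
import numpy as np

# Direct component at the surface point: reflectance R and true wrapped phase.
R, phi_true = 0.3, 1.2                      # weak direct reflection (illustrative values)

# Subsurface scattering: neighbouring points contribute with transport
# coefficients h_j and their own fringe phases phi_j, as in Eq. (6).
h_j = np.array([0.10, 0.08, 0.05])
phi_j = np.array([1.0, 1.5, 0.7])

num = R * np.sin(phi_true) + np.sum(h_j * np.sin(phi_j))
den = R * np.cos(phi_true) + np.sum(h_j * np.cos(phi_j))
phi_measured = np.arctan2(num, den)

print(phi_true, phi_measured)               # the recovered phase is biased by the scattering
```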

2.2 Calculation of the light transport coefficients for translucent objects by FSI

FSI is based on the Fourier transform. This method uses phase-shifting sinusoidal fringe patterns to illuminate the scene and a detector without spatial resolution to collect the light and obtain the Fourier spectrum of the scene image [20,21].

In FSI, the scene is illuminated by four-step phase-shifting sinusoid fringe patterns ${P_i}(i = 0,1,2,3)$ with spatial frequency $({f_x},{f_y})$, which can be described as

$${P_i}(x,y,{f_x},{f_y}) = A(x,y) + B(x,y)\cos \left( {2\pi {f_x}x + 2\pi {f_y}y + \frac{{\pi }}{2}i} \right),$$
where $(x,y)$ represents the 2D Cartesian coordinate of a scene point, $A(x,y)$ is the average intensity that is also known as the DC term, and $B(x,y)$ is the amplitude of the sinusoidal fringe pattern. A digital camera is used to collect the light emitted from the scene. Thus, ${I^{\prime}_i}$ in Eq. (5) can be expressed as
$${I^{\prime}_i}(x,y,{f_x},{f_y}) = O(x,y) + \int\!\!\!\int_\Omega {{P_i}({x_j},{y_j},{f_x},{f_y})h(x,y;{x_j},{y_j})d{x_j}d{y_j}} ,$$
where we assume that $R(x,y) = h(x,y;x,y)$. The Fourier coefficient of the spatial frequency $({f_x},{f_y})$ can be obtained by projecting four-step phase-shifting fringe patterns ${P_i}$ with the same spatial frequency and a constant phase shift $\Delta \phi = {\pi / 2}$ between two adjacent patterns. The influence of the environmental illumination O and the DC term $A(x,y)$ is eliminated by the four-step phase shifting. The light transport coefficient $h(x,y;{x_j},{y_j})$ can be obtained by
$$\begin{aligned} h(x,y;{x_j},{y_j}) &= \frac{1}{{2B(x,y)}}{F^{ - 1}}\{ [{{I^{\prime}_0}}(x,y,{f_x},{f_y}) - {{I^{\prime}_2}}(x,y,{f_x},{f_y})]\\ &\quad + j[{{I^{\prime}_1}}(x,y,{f_x},{f_y}) - {{I^{\prime}_3}}(x,y,{f_x},{f_y})]\} \\ &= \frac{1}{{2B(x,y)}}{F^{ - 1}}\left[ {\int\!\!\!\int_\Omega {h(x,y;{x_j},{y_j})\exp [j2\pi ({f_x}{x_j} + {f_y}{y_j})]d{x_j}d{y_j}} } \right], \end{aligned}$$
where j is the imaginary unit and ${F^{ - 1}}$ represents the 2D inverse discrete Fourier transform (IDFT) algorithm. By analyzing the light transport coefficient between scene point $({x_j},{y_j})$ and $(x,y)$, direct illumination and subsurface scattering light on the translucent object surface can be separated.
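The following sketch outlines how Eq. (9) can be evaluated for a single camera pixel. It assumes, purely for illustration, that the captured intensities have already been rearranged into an array responses[fy, fx, i] (the pixel value under the i-th phase-shifted pattern of frequency (fx, fy)), that B is uniform, and that the frequencies are ordered as NumPy's FFT expects; the names are hypothetical, not the authors' code.

```python
import numpy as np

def light_transport_for_pixel(responses, B=1.0):
    """Light transport coefficients seen by one camera pixel, following Eq. (9).

    responses: real array of shape (Fy, Fx, 4), where responses[fy, fx, i] is
    the intensity recorded at this pixel under the i-th four-step pattern of
    spatial frequency (fx, fy)."""
    I0, I1, I2, I3 = (responses[..., k] for k in range(4))
    spectrum = (I0 - I2) + 1j * (I1 - I3)      # complex Fourier coefficient per frequency
    h = np.fft.ifft2(spectrum) / (2.0 * B)     # 2D inverse DFT, Eq. (9)
    return np.real(h)                          # h is real; drop the numerical residue
```

The peak of the returned array then indicates the projector pixel that directly illuminates the corresponding scene point, and the surrounding non-zero values correspond to subsurface scattering, as discussed in Section 2.3.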

2.3 Separation of direct illumination and subsurface scattering light on translucent object surface

For digital projectors and cameras, due to the focusing principle of lenses, we can assume that each camera pixel or projector pixel corresponds to a scene point, which can be described as

$$\left\{ {\begin{array}{c} {(u,v) = C(x,y)}\\ {(m,n) = P(x,y)} \end{array}} \right.,$$
where C represents the mapping from a scene point to the camera image plane, and P represents the mapping from the projector DMD plane to a scene point.

In this research, we treat each camera pixel as a single-pixel detector. By FSI, each camera pixel obtains the Fourier spectrum of the corresponding scene point, and the IDFT is applied to the obtained spectrum to reconstruct the final image. The result comprises the light transport coefficients of that camera pixel, which describe the light transported from every scene point illuminated by the projector to the scene point corresponding to the camera pixel, as derived in Section 2.2. Therefore, for each camera pixel $(u,v)$, $h(x,y;{x_j},{y_j})$ can be obtained, where $(x,y)$ is the corresponding scene point and $({x_j},{y_j})$ is a scene point illuminated by the projector.

For opaque objects, the light transport coefficients of a camera pixel contain only a single non-zero value, as depicted in Fig. 2(a). For translucent objects, however, the light transport coefficients of a camera pixel contain many non-zero values spread over a region of approximately 5×5 pixels, as depicted in Fig. 2(b). These values include the direct illumination and the subsurface scattering light from other locations. Therefore, the coordinate with the maximal value among the light transport coefficients of a camera pixel is the projector pixel coordinate that corresponds to the direct illumination light.

Fig. 2. FSI result of camera pixel $(u,v)$. (a) Result of an opaque object; (b) Result of a translucent object. (I) represents the image plane, where U×V is the resolution of the camera; (II) represents the DMD plane, where M×N is the resolution of the projector; (III) is the zoomed result in the red box. The results above are normalized.

The aforementioned analysis shows that a 3D point can be reconstructed by the triangulation principle in the projector-camera system from the correspondence between camera pixel $(u,v)$ and projector pixel $(m,n)$.

In practice, we do not need to normalize the results of FSI; we only need to find the point with the maximal gray value among the light transport coefficients of a camera pixel. Moreover, because the direct illumination light satisfies the epipolar constraint, we can scan along the epipolar line to improve computational efficiency, as displayed in Fig. 3.

Fig. 3. Imaging result of camera pixel c as single pixel detector. The image is taken from the perspective of the projector. The red box contains the pixels corresponding to the light received by the camera pixel c, including the direct illumination and subsurface scattering light. The white line indicates the epipolar line on the focal plane of the projector.

For the light transport coefficients of each camera pixel, a threshold d is set near the epipolar line, and the corresponding projector pixel coordinate is obtained by scanning along the epipolar line within the threshold range to find the maximal gray value. If the epipolar line is given by $am + bn + c = 0$, then the procedure can be represented as

$$\mathop {\arg \max }\limits_{(m,n):\,|D |\le d} h(m,n),$$
where $D = \frac{{am + bn + c}}{{\sqrt {{a^2} + {b^2}} }}$ is the signed Euclidean distance from the projector pixel to the epipolar line, $h(m,n)$ is the light transport coefficient at projector pixel $(m,n)$, and $(m,n)$ is the projector pixel coordinate.

Therefore, we obtain the projector pixel coordinate $(m,n)$ that corresponds to the camera pixel coordinate $(u,v)$. 3D points can then be computed on the basis of the stereo triangulation principle from the correspondences between projector pixels and camera pixels.
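A minimal sketch of the search described by Eq. (11) is given below, assuming h is the light-transport image of one camera pixel indexed as h[n, m] and (a, b, c) are the epipolar line coefficients with am + bn + c = 0. The masking strategy and the default band half-width d are illustrative assumptions, one straightforward way to implement the scan rather than the authors' exact procedure.

```python
import numpy as np

def find_direct_pixel(h, a, b, c, d=2.0):
    """Pixel-level projector coordinate (m, n) of the direct illumination:
    the maximum of h restricted to a band of half-width d around the epipolar
    line a*m + b*n + c = 0.  h is indexed as h[n, m] (rows n, columns m)."""
    n_idx, m_idx = np.indices(h.shape)
    dist = np.abs(a * m_idx + b * n_idx + c) / np.hypot(a, b)   # |D| of Eq. (11)
    candidates = np.where(dist <= d, h, -np.inf)                # discard pixels outside the band
    n0, m0 = np.unravel_index(np.argmax(candidates), h.shape)
    return m0, n0
```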

2.4 Subpixel localization of direct illumination light coordinates in FSI

In the previous section, the obtained corresponding projector pixel coordinates $(m,n)$ of direct illumination light are at pixel level. To obtain accurate 3D measurement results, the pixel coordinates of direct illumination light should be localized at subpixel level.

The imaging result of a camera pixel used as a single-pixel detector is shown in Fig. 4. The position of the direct illumination light is determined at pixel level as described in Section 2.3, and its subpixel position must then be calculated. We use the grayscale centroid method to determine the subpixel position: the weighted average of the pixel coordinates is computed by using the gray values of the pixels as weights, as follows

$$\left\{ \begin{array}{l} {x_0} = \frac{{{m_{10}}}}{{{m_{00}}}} = \frac{{\sum\limits_{(i,j) \in S} {i{w_{i,j}}} }}{{\sum\limits_{(i,j) \in S} {{w_{i,j}}} }}\\ {y_0} = \frac{{{m_{01}}}}{{{m_{00}}}} = \frac{{\sum\limits_{(i,j) \in S} {j{w_{i,j}}} }}{{\sum\limits_{(i,j) \in S} {{w_{i,j}}} }} \end{array} \right.,$$
where $({x_{0}},{y_{0}})$ is the subpixel coordinate of the direct illumination light, ${w_{i,j}}$ is the weight, i.e., the gray value of pixel $(i,j)$, and S is the set of pixels in the imaging spot in FSI.
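The grayscale centroid of Eq. (12) can be computed directly from the pixel-level result. The sketch below assumes the spot S is taken as a small window (e.g., 5×5) around the pixel-level maximum, which is one reasonable choice rather than a prescription from the paper; the function and parameter names are hypothetical.

```python
import numpy as np

def subpixel_centroid(h, m0, n0, half=2):
    """Sub-pixel position of the direct-illumination spot via Eq. (12),
    taking S as a (2*half+1) x (2*half+1) window around the pixel-level
    maximum (m0, n0).  h is indexed as h[n, m]; the spot is assumed to lie
    away from the image border."""
    window = h[n0 - half:n0 + half + 1, m0 - half:m0 + half + 1]
    n_idx, m_idx = np.indices(window.shape)
    w = np.clip(window, 0.0, None)                      # gray values used as weights
    m_sub = m0 - half + np.sum(m_idx * w) / np.sum(w)   # weighted mean column
    n_sub = n0 - half + np.sum(n_idx * w) / np.sum(w)   # weighted mean row
    return m_sub, n_sub
```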

Fig. 4. Imaging result of camera pixel as single-pixel detector.

3. Experiments

The experimental setup is shown in Fig. 5. A digital projector with a resolution of 1920×1080 is utilized to project patterns onto the scene and the wavelength of the light source is 465 nm. The reflected light is collected by a monochrome CMOS camera (Basler acA1920-155um) with a resolution of 1920×1200, and each camera pixel is treated as a single-pixel detector. The object is placed in the common depth of field of the projector and the camera.

Fig. 5. Experimental setup. The digital projector projects fringe patterns onto the object, and the object is located 0.5 m away from the experimental system. The reflected light is collected by the camera, and each pixel of the camera is used as a single-pixel detector. The computer is used to process data.

The number of projected fringe patterns can be reduced because of the conjugate symmetry of the Fourier transform. To obtain each Fourier coefficient, four-step phase-shifting fringe patterns are projected by the digital projector onto the scene, and the reflected light is detected sequentially by the camera, in which each pixel is treated as a single-pixel detector. The four-step phase-shifting technique eliminates the effect of environmental illumination and increases measurement accuracy. The total number of projected patterns is 1,036,800, and the measurement time is 1.8 h.

As explained in Section 2.3, the FSI technique is combined with the structured light technique. According to the FSI principle, the projector pixel coordinate $(m,n)$ that corresponds to the camera pixel coordinate $(u,v)$ is obtained, and the subpixel coordinate of the direct illumination light is localized as described in Section 2.4. 3D points are determined from the correspondences between projector and camera pixels by combining the triangulation principle and the calibration parameters of the projector and the camera.
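For completeness, a standard linear (DLT) triangulation step is sketched below. It assumes the 3×4 projection matrices of the calibrated camera and projector are available and is a generic implementation of the triangulation principle, not the authors' specific routine.

```python
import numpy as np

def triangulate(P_cam, P_proj, uv, mn):
    """Linear (DLT) triangulation of a 3D point from a camera pixel (u, v)
    and its matching projector pixel (m, n), given the 3x4 projection
    matrices P_cam and P_proj of the calibrated camera and projector."""
    u, v = uv
    m, n = mn
    # Each correspondence contributes two rows of the homogeneous system A X = 0.
    A = np.stack([
        u * P_cam[2] - P_cam[0],
        v * P_cam[2] - P_cam[1],
        m * P_proj[2] - P_proj[0],
        n * P_proj[2] - P_proj[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null-space vector = homogeneous 3D point
    return X[:3] / X[3]
```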

To verify the feasibility of the proposed method, we measure several translucent vegetables, including a white onion and a wax gourd slice. A marble carving with a complex shape is also measured. The measurement results of the onion, the wax gourd slice, and the marble carving obtained by traditional FPP, by modified FPP, which captures each fringe pattern K times (K = 12) to suppress random errors [22], and by FSI are exhibited in Fig. 6. The results demonstrate that FSI is more effective for measuring the 3D shape of translucent objects than traditional and modified FPP; the point clouds measured by FSI are complete and dense.

Fig. 6. Measurement results of onion, wax gourd slice and marble statue by different approaches.

To evaluate the precision of the measurement results, we measure a sphere and a statue, as exhibited in Fig. 7. The sphere is made of polyamide (nylon), a synthetic resin, and has a diameter of 25.4 mm; the statue is made of jade.

Fig. 7. Measured objects. (a) Polyamide sphere; (b) Jade statue.

The measurement results of the sphere and the jade statue are shown in Figs. 8(a) and 8(c), respectively. To evaluate the measurement accuracy of the polyamide sphere, we fit a sphere to the measurement result, and the deviations of the sphere fitting are exhibited in Fig. 8(e). To evaluate the measurement accuracy of the jade statue, we coat the jade statue with powder to acquire a reference measurement result; the measurement result obtained by FSI is then compared with this reference, as exhibited in Fig. 8(g). We also acquire the measurement results of the polyamide sphere and the jade statue without subpixel localization, as shown in Figs. 8(b), 8(d), 8(f), and 8(h). The diameter of the fitted sphere, the mean absolute errors (MAEs), and the root mean square errors (RMSEs) are listed in Table 1.

Fig. 8. Measurement results, deviations of sphere fitting, and deviations between measured point clouds and reference. (a) Measurement result of the polyamide sphere; (b) Measurement result of the polyamide sphere without subpixel localization; (c) Measurement result of the jade statue; (d) Measurement result of the jade statue without subpixel localization; (e) Deviations of the sphere fitting; (f) Deviations of the sphere fitting without subpixel localization; (g) Deviations between the measured point clouds and the reference; (h) Deviations between the measured point clouds and the reference without subpixel localization.

Table 1. Fitting sphere diameter and fitting errors (mm)

4. Conclusion

In this research, we propose an FSI-based 3D shape measurement method for translucent objects, and the feasibility of the proposed method is proven by experiments. We analyze the causes of the failure of 3D shape measurement of translucent objects by FPP and explain, from a theoretical perspective, why the proposed method succeeds. We measure the light transport coefficients of the scene points, and the combination with the epipolar constraint accurately separates the direct illumination and the subsurface scattering light, which guarantees measurement accuracy. We also compare the measurement results of FPP with those of the proposed method through experiments: whereas the FPP results are incomplete, the results of the proposed method are accurate and dense. Finally, we measure a standard polyamide sphere and a jade statue to evaluate the accuracy of the proposed method; the results demonstrate high measurement accuracy.

Funding

National Natural Science Foundation of China (61735003, 61875007); Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R02); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc).

Disclosures

The authors declare no conflicts of interest.

References

1. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 8–22 (2000).

2. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43(8), 2666–2680 (2010).

3. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006).

4. J. Gu, T. Kobayashi, M. Gupta, and S. K. Nayar, “Multiplexed illumination for scene recovery in the presence of global illumination,” in Proceedings of International Conference on Computer Vision (IEEE, 2011), pp. 691–698.

5. T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.

6. M. Gupta and S. K. Nayar, “Micro phase shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.

7. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1829–1836.

8. P. Lutzke, P. Kühmstedt, and G. Notni, “Measuring error compensation on three-dimensional scans of translucent objects,” Opt. Eng. 50(6), 063601 (2011).

9. L. Rao and F. Da, “Local blur analysis and phase error correction method for fringe projection profilometry systems,” Appl. Opt. 57(15), 4267–4276 (2018).

10. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008).

11. S. M. M. Khamoushi, Y. Nosrati, and S. H. Tavassoli, “Sinusoidal ghost imaging,” Opt. Lett. 40(15), 3452–3455 (2015).

12. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).

13. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010).

14. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

15. C. Chen, N. Gao, X. Wang, Z. Zhang, F. Gao, and X. Jiang, “Generic exponential fringe model for alleviating phase error in phase measuring profilometry,” Opt. Lasers Eng. 110, 179–185 (2018).

16. Z. Wang, Z. Zhang, N. Gao, Y. Xiao, F. Gao, and X. Jiang, “Single-shot 3D shape measurement of discontinuous objects based on a coaxial fringe projection system,” Appl. Opt. 58(5), A169–A178 (2019).

17. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).

18. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).

19. S. Zhang and P. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

20. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017).

21. H. Jiang, H. Liu, X. Li, and H. Zhao, “Efficient regional single-pixel imaging for multiple objects based on projective reconstruction theorem,” Opt. Lasers Eng. 110, 33–40 (2018).

22. Y. Xu, H. Zhao, H. Jiang, and X. Li, “High-accuracy 3D shape measurement of translucent objects by fringe projection profilometry,” Opt. Express 27(13), 18421–18434 (2019).
