Abstract
3D shape measurement by structured light is a popular technique for recovering object surfaces. However, the structured light technique assumes that scene points are directly illuminated by the light source(s). Consequently, global illumination effects, such as subsurface scattering in translucent objects, may cause measurement errors in the recovered 3D shapes. In this research, we propose a 3D shape measurement method for translucent objects based on the Fourier single-pixel imaging (FSI) technique. The 3D shapes of the translucent objects are reconstructed through stereo matching of the direct illumination light, which is separated from the subsurface scattering light. Experimental results show that the proposed method can separate the direct illumination light and the subsurface scattering light. The feasibility and accuracy of the method are analyzed, and qualitative and quantitative results are provided.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
The structured light technique is one of the most popular optical 3D shape measurement methods [1]. A structured light pattern is projected onto the surface of the object, and the deformed pattern is captured using a calibrated camera-projector pair [2]. The 3D information of the object surface can be reconstructed by establishing correspondences between camera and projector pixels. However, for translucent objects, the incident light scatters inside the material and the direct reflection signal at the surface is weak, which makes 3D reconstruction of the object surface extremely difficult.
Several studies on 3D shape measurement of translucent objects have been proposed. In the structured light technique, the presence of subsurface scattering and interreflections hinders reliable detection of the light that interacts with the objects. Nayar et al. used high-frequency illumination to separate the direct and global components of a scene [3]. Several subsequent phase-shifting methods for separating indirect illumination [4,5] build on this idea. These techniques use high-frequency sinusoidal patterns to modulate low-frequency patterns to suppress subsurface scattering. Gu et al. studied spatial frequency multiplexing of illumination patterns to separate the direct and global light paths of different illumination sources [4]. Chen et al. used this method for 3D scanning by phase-shifting, in which the effect of subsurface scattering is reduced by using high-frequency patterns [5]. Gupta et al. proposed Micro Phase Shifting to avoid the effects of global illumination [6]: all frequencies are placed in a narrow band so that the amplitudes of all frequencies are approximately the same and can be treated as a single unknown. However, high-frequency patterns cannot completely eliminate the effect of subsurface scattering. Thus, these methods can be combined with polarization difference imaging (PDI), which exploits the fact that multiply scattered light becomes depolarized [7]. However, PDI adds complexity to the experimental setup.
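For context, the separation of Nayar et al. [3] can be summarized by the standard relations below, assuming an ideal high-frequency pattern in which half of the source pixels are activated (a sketch, not the exact equations of [3]):

```latex
% With a half-lit high-frequency pattern, each scene point is observed
% once lit (L^+) and once unlit (L^-); the direct component L_d and the
% global component L_g then follow immediately:
L^{+} = L_d + \tfrac{1}{2} L_g, \qquad L^{-} = \tfrac{1}{2} L_g
\quad\Longrightarrow\quad
L_d = L^{+} - L^{-}, \qquad L_g = 2 L^{-}
```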
Meanwhile, the 3D reconstruction errors of translucent objects can be reduced by establishing an error compensation model. Two main approaches exist: one measures various materials before and after coating and establishes error compensation models for different roughness values [8]; the other requires the bidirectional surface scattering reflectance distribution function (BSSRDF) of the object [9]. The former assumes that the measured translucent material is homogeneous, whereas the latter can only deal with short-range scattering effects.
In the single-pixel imaging (SPI) technique, a programmable spatial light modulator (SLM) is used to display patterns, and a single-pixel detector without spatial resolution is used to capture the modulated information of a scene [10,11]. Zhang et al. presented the Fourier single-pixel imaging (FSI) technique, which obtains high-quality images by using four-step phase-shifting sinusoidal illumination to acquire the Fourier spectrum of the desired image [12].
In this research, we propose a 3D shape measurement method for translucent objects based on the FSI technique. We treat each camera pixel as a single-pixel detector that obtains the Fourier spectrum of the corresponding scene point. The inverse discrete Fourier transform (IDFT) is then applied to the obtained spectrum to reconstruct the final image, which comprises the light transport coefficients. The direct illumination and subsurface scattering light can be separated using these coefficients, and the 3D shape can be reconstructed through stereo matching of the direct illumination light. The experimental results show that the proposed method can acquire complete 3D shape measurement results of translucent objects and significantly reduce measurement errors.
The rest of this paper is organized as follows. Section 2 explains the principles of the proposed method: phase analysis for measuring translucent objects by fringe projection profilometry (FPP), calculation of the light transport coefficients of translucent objects by FSI, separation of direct illumination and subsurface scattering light on the translucent object surface, and subpixel localization of the direct illumination light coordinates in FSI. Experimental results are exhibited and discussed in Section 3. Conclusions are presented in Section 4.
2. Principles
The structured light 3D shape measurement system consists of a camera and a projector. The phase deviation caused by translucency is analyzed by using FPP as an example. A projector pixel directly illuminates a scene point, which is imaged at a camera pixel, and the correspondences between camera pixels and projector pixels are established by projecting fringe patterns onto the scene. For translucent objects, phase offsets are introduced on the surface because of subsurface scattering.
2.1 Phase analysis for measuring translucent objects by FPP
In FPP, a calibrated projector-camera system is utilized: the projector projects fringe patterns onto the surface of the object to be measured, and the collected deformed fringe patterns contain the 3D shape information of the object [13–16]. N-step phase-shifting sinusoidal fringe patterns ${P_i}\;(i = 0,1,\ldots,N-1,\; N \ge 2)$ with spatial frequency f are expressed as
where $(x,y)$ represents the 2D Cartesian coordinates in the scene, $A(x,y)$ is the average intensity, also known as the DC term, $B(x,y)$ is the amplitude, and ${\phi _f}(x,y)$ is the wrapped phase with spatial frequency f. For opaque objects, as exhibited in Fig. 1(a), the total intensity response at a scene point can be described as
where $O(x,y)$ represents the effect of environmental illumination and $R(x,y)$ represents the reflectance of the object surface in the scene, which can be retrieved by

However, for translucent objects, the incident light penetrates the surface, scatters, and then exits at various points around the point of incidence because of the subsurface scattering effect. As exhibited in Fig. 1(b), subsurface scattering causes a phase offset, which leads to geometric errors. In this situation, the total intensity response at a scene point can be written as
For translucent objects, the phase ${\phi ^{\prime}_f}(x,y)$ can be expressed as
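The display equations of this section are not reproduced above. For reference, the standard N-step phase-shifting relations consistent with the symbol definitions in this section (a textbook sketch, not necessarily the authors' exact equations) are:

```latex
% N-step phase-shifting fringe patterns (standard form):
P_i(x, y) = A(x, y) + B(x, y)\cos\!\Big( \phi_f(x, y) + \frac{2\pi i}{N} \Big)

% Total intensity response at an opaque scene point:
I_i(x, y) = O(x, y) + R(x, y)\, P_i(x, y)

% Wrapped phase retrieved from the N intensity measurements:
\phi_f(x, y) = -\arctan
  \frac{\sum_{i=0}^{N-1} I_i(x, y) \sin(2\pi i / N)}
       {\sum_{i=0}^{N-1} I_i(x, y) \cos(2\pi i / N)}
```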
2.2 Calculation of the light transport coefficients for translucent objects by FSI
FSI is based on the theory of the Fourier transform. This method uses phase-shifting sinusoidal fringe patterns to illuminate the scene and a detector without spatial resolution to collect the light and obtain the Fourier spectrum of the scene image [20,21].
In FSI, the scene is illuminated by four-step phase-shifting sinusoidal fringe patterns ${P_i}\;(i = 0,1,2,3)$ with spatial frequency $({f_x},{f_y})$, which can be described as
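The four-step acquisition can be illustrated with a short numerical sketch (the pattern normalization and the helper name `fsi_coefficient` are illustrative assumptions, not the authors' code). Each single-pixel measurement is the inner product of the scene with one shifted pattern, and the four responses combine into one complex Fourier coefficient:

```python
import numpy as np

def fsi_coefficient(scene, fx, fy):
    """Acquire one complex Fourier coefficient of `scene` at spatial
    frequency (fx, fy) by four-step phase-shifting illumination."""
    H, W = scene.shape
    y, x = np.mgrid[0:H, 0:W]
    theta = 2 * np.pi * (fx * x / W + fy * y / H)
    responses = []
    for k in range(4):                                 # shifts 0, pi/2, pi, 3pi/2
        pattern = 0.5 + 0.5 * np.cos(theta + k * np.pi / 2)
        responses.append((scene * pattern).sum())      # single-pixel measurement
    I0, I1, I2, I3 = responses
    # Four-step demodulation: the DC term (and any constant ambient light)
    # cancels in the differences, leaving the Fourier coefficient.
    return (I0 - I2) + 1j * (I1 - I3)
```

For a real-valued scene this reproduces the corresponding entry of the discrete Fourier spectrum, so conjugate symmetry allows half of the coefficients to be inferred rather than measured.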
2.3 Separation of direct illumination and subsurface scattering light on translucent object surface
For digital projectors and cameras, due to the focusing principle of lenses, we can assume that each camera pixel or projector pixel corresponds to a scene point, which can be described as
where C represents the mapping from a scene point to the image plane, and P represents the mapping from the DMD plane to a scene point.

In this research, we treat each camera pixel as a single-pixel detector. By FSI, each camera pixel obtains the Fourier spectrum of the corresponding scene point, and the IDFT algorithm is applied to the obtained spectrum to reconstruct the final image. The result comprises the light transport coefficients of the camera pixel, which include the information of light transport from all scene points illuminated by the projector to the scene point that corresponds to the camera pixel, as described in Section 2.2. Therefore, for each camera pixel $(u,v)$, $h(x,y;{x_j},{y_j})$ can be obtained, where $(x,y)$ is the corresponding scene point and $({x_j},{y_j})$ is a scene point illuminated by the projector.
For opaque objects, the light transport coefficients of a camera pixel have only one non-zero value, as depicted in Fig. 2(a). However, for translucent objects, the light transport coefficients of a camera pixel have many non-zero values within a range of approximately 5×5 pixels. These values include the direct illumination light and the subsurface scattering light at other locations. Therefore, the coordinate with the maximal value in the light transport coefficients of a camera pixel is the projector pixel coordinate that corresponds to the direct illumination light.
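Under these assumptions, recovering the light transport coefficients of one camera pixel and locating the direct-light projector pixel can be sketched as follows (a hypothetical helper; the acquired spectrum is assumed to be arranged as a 2D array of projector-frequency coefficients):

```python
import numpy as np

def direct_light_pixel(spectrum):
    """Reconstruct the light transport coefficients of one camera pixel from
    its acquired Fourier spectrum and return the projector pixel (m, n) of
    the direct illumination light (the maximal coefficient)."""
    h = np.real(np.fft.ifft2(spectrum))          # light transport coefficients
    n, m = np.unravel_index(np.argmax(h), h.shape)
    return m, n
```

A small direct-illumination peak surrounded by weaker subsurface-scattering values, as in Fig. 2(b), is recovered by the `argmax` step.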
The aforementioned analysis shows that a 3D point can be reconstructed by the triangulation principle in the projector-camera system from the correspondence between camera pixel $(u,v)$ and projector pixel $(m,n)$.
In practice, we do not need to normalize the results of FSI; we only need to find the point with the maximal gray value in the light transport coefficients of a camera pixel. Meanwhile, considering that the direct illumination light satisfies the epipolar constraint, we can scan along the epipolar line to improve computational efficiency, as displayed in Fig. 3.
For the light transport coefficients of each camera pixel, a threshold d is set near the epipolar line, and the corresponding projector pixel coordinate is obtained by scanning along the epipolar line within the threshold range to find the maximal gray value. If the equation of the epipolar line is $am + bn + c = 0$, then the procedure can be represented as
where $D = \frac{|am + bn + c|}{\sqrt {{a^2} + {b^2}}}$ is the Euclidean distance from the pixel to the epipolar line, and $(m,n)$ is the projector pixel coordinate. Therefore, we obtain the projector pixel coordinate $(m,n)$ that corresponds to the camera pixel coordinate $(u,v)$. 3D points can be computed on the basis of the stereo triangulation principle from the correspondences between projector pixels and camera pixels.
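The epipolar-constrained search described above can be sketched as follows (an illustrative implementation; the name `match_along_epipolar` and the array layout `h[n, m]` are assumptions):

```python
import numpy as np

def match_along_epipolar(h, a, b, c, d=2.0):
    """Find the maximal light transport coefficient in h[n, m] among
    projector pixels within distance d of the epipolar line a*m + b*n + c = 0."""
    N, M = h.shape
    m, n = np.meshgrid(np.arange(M), np.arange(N))
    D = np.abs(a * m + b * n + c) / np.hypot(a, b)   # point-to-line distance
    masked = np.where(D <= d, h, -np.inf)            # discard off-line pixels
    row, col = np.unravel_index(np.argmax(masked), h.shape)
    return col, row                                   # projector pixel (m, n)
```

Restricting the scan to the band $D \le d$ both speeds up the search and rejects spurious maxima caused by scattering far from the epipolar line.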
2.4 Subpixel localization of direct illumination light coordinates in FSI
In the previous section, the obtained corresponding projector pixel coordinates $(m,n)$ of the direct illumination light are at the pixel level. To obtain accurate 3D measurement results, the pixel coordinates of the direct illumination light should be localized at the subpixel level.
The imaging result of a camera pixel as a single-pixel detector is shown in Fig. 4. The position of the direct illumination light can be determined at the pixel level as described in Section 2.3, and its subpixel position must then be calculated. We use the grayscale centroid method to determine the subpixel position: the weighted average of the pixel coordinates is computed by using the gray values of the pixels as the weights, as follows
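A minimal sketch of the grayscale centroid refinement, assuming the light transport coefficients are stored as an array `h[n, m]` and using a hypothetical (2r+1)×(2r+1) window:

```python
import numpy as np

def subpixel_centroid(h, m, n, r=2):
    """Refine the pixel-level peak (m, n) of h[n, m] with a gray-weighted
    centroid over a (2r+1) x (2r+1) window."""
    window = h[n - r:n + r + 1, m - r:m + r + 1]
    mm, nn = np.meshgrid(np.arange(m - r, m + r + 1),
                         np.arange(n - r, n + r + 1))
    w = window.sum()
    # Weighted average of the pixel coordinates, weights = gray values.
    return (mm * window).sum() / w, (nn * window).sum() / w
```

The window radius r = 2 matches the roughly 5×5-pixel extent of the non-zero coefficients noted in Section 2.3.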
3. Experiments
The experimental setup is shown in Fig. 5. A digital projector with a resolution of 1920×1080 is utilized to project patterns onto the scene and the wavelength of the light source is 465 nm. The reflected light is collected by a monochrome CMOS camera (Basler acA1920-155um) with a resolution of 1920×1200, and each camera pixel is treated as a single-pixel detector. The object is placed in the common depth of field of the projector and the camera.
The number of projected fringes can be reduced because of the conjugate symmetry of the Fourier transform. To obtain each coefficient, four-step phase-shifting fringe patterns are projected by the digital projector onto the scene, and the reflected light is detected sequentially by the camera, in which each pixel is treated as a single-pixel detector. The four-step phase-shifting technique eliminates the effect of environmental illumination and increases measurement accuracy. The total number of projected patterns is 1036800, and the measurement time is 1.8 h.
As explained in Section 2.3, the FSI technique is combined with the structured light technique. According to the FSI principle, the projector pixel coordinate $(m,n)$ that corresponds to the camera pixel coordinate $(u,v)$ is obtained, and the subpixel coordinate of the direct illumination light is localized as described in Section 2.4. 3D points are determined from the correspondence between projector and camera pixels by combining the triangulation principle and the calibration parameters of the projector and the camera.
To verify the feasibility of the proposed method, we measure several translucent vegetables, including a white onion and a wax gourd slice. A marble carving with a complex shape is also measured. The measurement results of the onion, the wax gourd slice, and the marble carving obtained by traditional FPP, by modified FPP, which repetitively captures each fringe pattern K times (K = 12) to suppress random errors [22], and by FSI are exhibited in Fig. 6. The measurement results demonstrate that FSI is more effective in measuring the 3D shape of translucent objects than traditional FPP and modified FPP: the point clouds measured by FSI are complete and dense.
To evaluate the precision of the measurement results, we investigate a sphere and a statue in the experiment, as exhibited in Fig. 7. The sphere, with a diameter of 25.4 mm, is made of polyamide (nylon), a kind of synthetic resin; the statue is made of jade.
The measurement results of the sphere and the jade statue are shown in Figs. 8(a) and 8(c), respectively. To evaluate the measurement accuracy for the polyamide sphere, we fit a sphere to its measurement result; the deviations of the sphere fitting are exhibited in Fig. 8(e). To evaluate the measurement accuracy for the jade statue, we coat it with powder to acquire a reference measurement result, and the measurement result obtained by FSI is compared with this reference, as exhibited in Fig. 8(g). We also acquire the measurement results of the polyamide sphere and the jade horse without subpixel localization, as shown in Figs. 8(b), 8(d), 8(f), and 8(h). The diameter of the fitted sphere, the mean absolute errors (MAEs), and the root mean square errors (RMSEs) are displayed in Table 1.
4. Conclusion
In this research, we propose an FSI-based 3D shape measurement method for translucent objects, and the feasibility of the proposed method is verified by experiments. We analyze the causes of the failure of 3D shape measurement of translucent objects by FPP and explain, from a theoretical perspective, why the proposed method succeeds. We measure the light transport coefficients of the scene points and, combined with the epipolar constraint, can accurately separate the direct illumination and the subsurface scattering light, which guarantees measurement accuracy. Meanwhile, we compare the measurement results of FPP with those of the proposed method through experiments: the measurement results of FPP are incomplete, whereas those of the proposed method are accurate and dense. We also measure several standard objects and a jade horse to evaluate the accuracy of the proposed method, demonstrating high measurement accuracy.
Funding
National Natural Science Foundation of China (61735003, 61875007); Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R02); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc).
Disclosures
The authors declare no conflicts of interest.
References
1. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 8–22 (2000). [CrossRef]
2. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43(8), 2666–2680 (2010). [CrossRef]
3. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006). [CrossRef]
4. J. Gu, T. Kobayashi, M. Gupta, and S. K. Nayar, “Multiplexed illumination for scene recovery in the presence of global illumination,” in Proceedings of International Conference on Computer Vision (IEEE, 2011), pp. 691–698.
5. T. Chen, H. P. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3D scanning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3839–3846.
6. M. Gupta and S. K. Nayar, “Micro phase shifting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813–820.
7. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1829–1836.
8. P. Lutzke, P. Kühmstedt, and G. Notni, “Measuring error compensation on three-dimensional scans of translucent objects,” Opt. Eng. 50(6), 063601 (2011). [CrossRef]
9. L. Rao and F. Da, “Local blur analysis and phase error correction method for fringe projection profilometry systems,” Appl. Opt. 57(15), 4267–4276 (2018). [CrossRef]
10. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]
11. S. M. M. Khamoushi, Y. Nosrati, and S. H. Tavassoli, “Sinusoidal ghost imaging,” Opt. Lett. 40(15), 3452–3455 (2015). [CrossRef]
12. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]
13. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]
14. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]
15. C. Chen, N. Gao, X. Wang, Z. Zhang, F. Gao, and X. Jiang, “Generic exponential fringe model for alleviating phase error in phase measuring profilometry,” Opt. Lasers Eng. 110, 179–185 (2018). [CrossRef]
16. Z. Wang, Z. Zhang, N. Gao, Y. Xiao, F. Gao, and X. Jiang, “Single-shot 3D shape measurement of discontinuous objects based on a coaxial fringe projection system,” Appl. Opt. 58(5), A169–A178 (2019). [CrossRef]
17. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]
18. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]
19. S. Zhang and P. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]
20. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]
21. H. Jiang, H. Liu, X. Li, and H. Zhao, “Efficient regional single-pixel imaging for multiple objects based on projective reconstruction theorem,” Opt. Lasers Eng. 110, 33–40 (2018). [CrossRef]
22. Y. Xu, H. Zhao, H. Jiang, and X. Li, “High-accuracy 3D shape measurement of translucent objects by fringe projection profilometry,” Opt. Express 27(13), 18421–18434 (2019). [CrossRef]