## Abstract

This paper presents a method for accurately reconstructing transparent objects using an area source. The method, called polarized light measurements (PLM), combines two reconstruction techniques: polarization analysis and light-path triangulation. The originality of this study lies in the fact that PLM extracts radiometric cues and geometric cues simultaneously during surface reconstruction. To validate its performance, a series of comparison experiments was conducted on objects of diverse thickness, material, and curvature radius. Subsequent error analyses were applied to evaluate the method, and the error distribution can be clearly observed in the results. PLM offers a more efficient process and higher accuracy than traditional reconstruction of transparent objects by polarization analysis or triangulation used alone.

© 2017 Optical Society of America

## 1. Introduction

The 3D reconstruction of transparent objects is a hot topic in the field of machine vision. Researchers have put forward various measuring methods over the past two decades, and all have made great progress recently [1–6]. These methods include shape from distortion, reflectance-based reconstruction, scanning from heating, light-path triangulation, polarization analysis, etc. Among these techniques, light-path triangulation and polarization analysis attract the most attention due to their flexible applications and strong robustness.

Polarization analysis has the merit of high accuracy [7–10]. It estimates the surface shape of a transparent object by analyzing the polarization state of the light, using iterative computation to recover the shape. Polarization can simplify image processing and analysis, so polarization analysis has been extensively researched in machine vision [11]. However, the depth cues can only be calculated by iteration, since the degree of polarization provides only radiometric information [12], and the introduction of iteration inevitably reduces measurement efficiency. Light-path triangulation [13,14], on the other hand, has the merit of a simple principle and easy accessibility [15–17]. According to the geometric cues of the calibrated system, the method finds the diverse depth points along the light path, but the surface normal cannot be determined directly. Nearly all the information used to determine the depth cues relies on the system calibration [18,19]; these external factors introduce many errors into the whole system. Hence, the accuracy is generally low.

In this paper, we combine the two methods, light-path triangulation and polarization analysis, and propose a reconstruction method for transparent objects called polarized light measurements (PLM). PLM acquires both radiometric cues and geometric cues of the surface simultaneously using an area source. Using the proven degree-of-polarization technique, we obtain the radiometric information and the normal vector of the surface under test at the same time. To avoid the iterative procedure when obtaining the point cloud of the surface profile, we introduce light-path triangulation to obtain the geometric cues. The use of an area source is another innovation of this paper: it significantly increases the measuring speed, but it also brings some challenges. The unit under test (UUT) is a transparent object, so incident light from the source reflects at both its front and back surfaces, and with an area source the reflected rays from the two surfaces overlap. Hence, we propose the imaging separation method (ISM) to overcome this problem.

Compared with traditional polarization analysis, our method changes the acquisition process of the depth information and simplifies the data processing. The algorithm differs because the iteration is replaced: we apply light-path triangulation to acquire the depth points with certainty, so, unlike in polarization analysis, the camera and the light source must have fixed positions. Unlike in the iterative method, the points we calculate are not disturbed by surrounding data; they are all calculated independently, and our method avoids heavy computational demands. Compared with traditional triangulation, our method solves for the normal vector through polarization analysis instead of geometric cues from calibration, so PLM reduces the dependence on system calibration.

The rest of the paper is organized as follows: the first part describes the principle of PLM. The second part presents the experimental setup and the imaging separation method. The last part is dedicated to evaluating our method through a series of comparison experiments concerning thickness, material, and curvature radius. The paper ends with a conclusion, which presents our contribution as well as some insights for future work.

## 2. Measuring principle

The reconstruction system we propose comprises a camera, a polarizer, and an area source; the system layout is shown in Fig. 1. For the reconstruction of transparent objects, the normal vector and the depth points are the two main parameters. To solve for the surface normal, we use the principle of polarization analysis and obtain the normal vector by the traditional process. For the depth information, we introduce triangulation to replace the iteration used in polarization analysis: we calculate the light path by means of the surface normal and determine the depth points directly. A brief description of the process follows.

Unpolarized light, after reflection at a specular surface, generally becomes partially linearly polarized with a degree of polarization (DOP). In PLM, each pixel is an independent unit of reconstruction, which eliminates confusion between distinct polarization states: we calculate the DOP for each single pixel, and pixels do not interfere with each other. The DOP can be expressed as a function of the zenithal angle via the Fresnel equations, and the azimuth angle can be determined from the angle of polarization using the Fresnel reflectance model. The normal vector (see Fig. 2) is then calculated from the known zenithal angle ${\theta}_{r}$ and azimuth angle $\varphi$. We solve for the incidence vector via the normal vector and the reflection vector, which are acquired through camera calibration. Finally, using the spatial positions of the experimental components, the intersection of the incident ray and the reflected ray gives the depth information we need.
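The final step, intersecting the incident ray with the reflected ray, reduces to finding the closest point between two lines in space (in practice the rays are skew due to noise). A minimal sketch in Python with NumPy; the function name and ray parameterization are illustrative, not the paper's:

```python
import numpy as np

def triangulate_depth(p1, d1, p2, d2):
    """Closest point between two rays: p1 + t*d1 (incident ray from the
    source) and p2 + s*d2 (reflected ray through the camera pixel).
    Returns the midpoint of the shortest connecting segment.
    Rays are assumed non-parallel."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimise |(p1 + t*d1) - (p2 + s*d2)|^2 over t and s.
    b = np.dot(d1, d2)
    w = p1 - p2
    denom = 1.0 - b * b
    t = (b * np.dot(d2, w) - np.dot(d1, w)) / denom
    s = (np.dot(d2, w) - b * np.dot(d1, w)) / denom
    c1 = p1 + t * d1   # closest point on the incident ray
    c2 = p2 + s * d2   # closest point on the reflected ray
    return 0.5 * (c1 + c2)
```

When the two rays truly intersect, the midpoint coincides with the intersection; otherwise the gap between `c1` and `c2` is a useful per-point quality indicator.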

#### 2.1 The polarized light analysis

Polarization effects occur at an interface between two materials of different refractive index and can be treated by the Fresnel equations. A non-polarized light wave becomes partially polarized according to the normal of the surface and the refractive index of the material at the point of incidence [20]. For a given material, part of the wave is transmitted and the remainder reflected. The reflected proportions differ for the p-wave and the s-wave, and for dielectric materials they can be expressed in terms of the angle of incidence ${\theta}_{1}$ and the angle of refraction ${\theta}_{2}$, as shown in Eqs. (1) and (2).
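Eqs. (1) and (2) are not reproduced here; for a dielectric they are presumably the standard Fresnel intensity reflectances for the s- and p-waves. A sketch assuming that standard form (the function name is our own):

```python
import numpy as np

def fresnel_reflectance(theta1, n):
    """Intensity reflectances F_s and F_p at an air-dielectric interface,
    assuming the standard Fresnel forms for Eqs. (1) and (2).
    theta1: incidence angle in radians (0 < theta1 < pi/2);
    n: refractive index of the dielectric."""
    theta2 = np.arcsin(np.sin(theta1) / n)   # Snell's law
    F_s = (np.sin(theta1 - theta2) / np.sin(theta1 + theta2)) ** 2
    F_p = (np.tan(theta1 - theta2) / np.tan(theta1 + theta2)) ** 2
    return F_s, F_p
```

At the Brewster angle $\arctan(n)$ the p-wave reflectance vanishes, which is the source of the DOP = 1 point discussed below.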

The brightness $I$ as a function of the angle ${\theta}_{pol}$ of the transmission axis is given by [12]:

$$I({\theta}_{pol})=\frac{{I}_{\mathrm{max}}+{I}_{\mathrm{min}}}{2}+\frac{{I}_{\mathrm{max}}-{I}_{\mathrm{min}}}{2}\cos (2{\theta}_{pol}-2\phi ). \tag{3}$$

We can observe the maximum ${I}_{\mathrm{max}}$ and the minimum ${I}_{\mathrm{min}}$ of the reflected light in Fig. 3 [21].

The zenith angle and the azimuth angle are the two key parameters of the normal vector. We connect the zenithal angle to the definition of the DOP $\rho$, expressed by Eq. (4), where ${I}_{\mathrm{max}}$ and ${I}_{\mathrm{min}}$ are acquired from Eq. (3):

$$\rho =\frac{{I}_{\mathrm{max}}-{I}_{\mathrm{min}}}{{I}_{\mathrm{max}}+{I}_{\mathrm{min}}}. \tag{4}$$

As shown in Fig. 4, a single DOP corresponds to two possible zenithal angles, except at the point where the DOP equals 1, i.e., where the zenithal angle is the Brewster angle. However, in our triangulation, the opening angle $\alpha$ between the area source and the camera is smaller than twice the Brewster angle ${\theta}_{B}$ because of the layout constraints. Thus the zenithal angle $\theta$ must be less than the Brewster angle due to the limitation of the opening angle in the light paths.
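Since the DOP increases monotonically with the zenithal angle below the Brewster angle, the measured DOP can be inverted by a simple bisection restricted to that interval. A sketch assuming the standard specular-reflection DOP model $\rho = (F_s - F_p)/(F_s + F_p)$; the function names are illustrative:

```python
import numpy as np

def dop_specular(theta, n):
    """DOP of specularly reflected light as a function of the zenithal
    angle theta (radians), assuming the standard Fresnel-based model."""
    theta2 = np.arcsin(np.sin(theta) / n)
    F_s = (np.sin(theta - theta2) / np.sin(theta + theta2)) ** 2
    F_p = (np.tan(theta - theta2) / np.tan(theta + theta2)) ** 2
    return (F_s - F_p) / (F_s + F_p)

def zenith_from_dop(rho, n, tol=1e-10):
    """Invert rho(theta) by bisection on (0, theta_B); the system layout
    guarantees the solution lies below the Brewster angle, which removes
    the two-solution ambiguity."""
    lo, hi = 1e-6, np.arctan(n) - 1e-6   # search strictly below Brewster
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dop_specular(mid, n) < rho:
            lo = mid   # DOP increases monotonically up to Brewster angle
        else:
            hi = mid
    return 0.5 * (lo + hi)
```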

The azimuth angle is the other key parameter determining the normal vector in space. On the basis of the theory of Wolff [23], the azimuth angle can be obtained using the Fresnel reflectance model; the relationship between the azimuth angle $\varphi$ and the angle of polarization $\phi$ is shown in the equation below.

We then detect the minimum brightness at the angle ${\theta}_{pol}$ of the transmission axis, where $\phi -{\theta}_{pol}=90°$. At this point ${\theta}_{pol}$ is a known quantity, and the azimuth angle can be calculated from it. We restrict the azimuth angle $\varphi$ to the range 0° to 180°, since the incident light has a fixed range of incidence, as shown in Fig. 5; the ambiguity of the azimuth angle is thus resolved by the constraints of the experimental system.

The normal vector $\overrightarrow{n}$ [24] can be expressed by Eq. (9), and the relationship of the related parameters is depicted in Fig. 5.
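Assuming Eq. (9) is the usual spherical parameterization of a unit normal by the zenithal and azimuth angles (the exact axis convention depends on the camera frame), the computation is:

```python
import numpy as np

def normal_from_angles(theta, phi):
    """Unit surface normal from zenithal angle theta and azimuth angle phi
    (spherical parameterization; z is taken along the viewing direction,
    an assumption that depends on the camera coordinate convention)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```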

#### 2.2 The geometric cues processing

Since the normal vector and the calibrated camera are in different coordinate systems, we have to map them into a unified world coordinate system to measure the depth points. Through the rotation transformation matrix $R(\epsilon )$, the transformation between the normal-vector coordinate system and the world coordinate system is determined by:
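As an illustration, for a rotation $R(\epsilon)$ about a single axis (here the y-axis; the actual axis and sign convention follow the extrinsic calibration of the system), the mapping is:

```python
import numpy as np

def rotation_y(eps):
    """Rotation matrix R(eps) about the y-axis; the axis choice here is an
    assumption for illustration, not the paper's calibrated R(epsilon)."""
    c, s = np.cos(eps), np.sin(eps)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def normal_to_world(n_cam, eps):
    """Express a normal measured in the camera frame in the world frame."""
    return rotation_y(eps) @ n_cam
```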

## 3. Experiments

The method we present in this paper reconstructs the exterior surface of a transparent object with a homogeneous interior and a smooth surface, such as glass, a lens, or crystal. We used a 1360 × 1024 pixel SVS-Vistek SVS285MUCP camera for image acquisition, a manually rotated linear polarizer, and a white LED area source, as shown in Fig. 6. We adopted the pinhole model for the camera (smallest aperture and large focal length). The experimental system we built was completely calibrated, including the interior and exterior parameters of the camera.

Since the area source we use here is an LED area source whose light-emitting area is 100 × 100 mm, it is impossible to calibrate it through computer control, yet the position of the LED area source influences the incident ray vector. Hence, we adopted an area source simulation to obtain its spatial position, and the experimental results showed that the simulation served the subsequent measurements well.

With the non-polarized light incident on the object, the polarizer is rotated from $0°$ to $180°$ in 5° steps, collecting 36 images. After de-noising, we fitted the brightness variation of each pixel and solved for the degree of polarization. Then we calculated the normal vector and obtained the depth points from the geometric cues of the system. The result is shown in Fig. 7.
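The per-pixel brightness fit can be sketched as a linear least-squares fit of the transmitted radiance sinusoid to the 36 samples; the function name and the omission of de-noising are our own simplifications:

```python
import numpy as np

def fit_dop(angles_deg, intensities):
    """Least-squares fit of I(t) = a0 + a1*cos(2t) + b1*sin(2t) to one
    pixel's 36 polarizer samples; returns the DOP and the phase angle
    (angle of polarization)."""
    t = np.deg2rad(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    a0, a1, b1 = np.linalg.lstsq(A, np.asarray(intensities, dtype=float),
                                 rcond=None)[0]
    amp = np.hypot(a1, b1)              # (I_max - I_min) / 2
    rho = amp / a0                      # (I_max - I_min) / (I_max + I_min)
    phase = 0.5 * np.arctan2(b1, a1)    # angle of maximum transmission
    return rho, phase
```

Because the model is linear in its three coefficients, no iterative nonlinear fit is needed, and each pixel is processed independently, as the text describes.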

In the process of reconstructing a transparent object, we must cope with the challenge of light reflected from the back surface: the correct brightness is overlaid by extra reflected brightness from the back surface. We therefore propose a solution, the imaging separation method (ISM). In the ISM, we consider only the situation in which light reflects once off the back surface. At an air-glass interface, specular reflection is characterized by a 4% reflection coefficient [24], so the back-surface light wave undergoes two transmissions and one reflection. For the intensity received by the camera, the ratio of the light intensity from the back surface to that from the front surface is $(96\%\times 4\%\times 96\%)/4\%$: the brightness is 92.16% of the reflected brightness from the front surface (see Fig. 8(a)). For the situation in which light reflects twice off the back surface, the light wave undergoes two transmissions and three reflections, and the ratio is $(96\%\times 4\%\times 4\%\times 4\%\times 96\%)/4\%$: the brightness of these multiple reflections is approximately 0.15% of the brightness of the front-surface reflection (see Fig. 8(b)), which is insufficient to interfere with the front-surface reconstruction.
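The two intensity ratios above are plain arithmetic and can be checked directly, assuming, as in the text, a 4% reflection coefficient (hence 96% transmission) at each air-glass interface:

```python
# Relative brightness of back-surface reflections, normalized by the
# front-surface reflection R, under a 4% per-interface reflection model.
T, R = 0.96, 0.04

# One back-surface bounce: transmit in, reflect once, transmit out.
single_bounce = (T * R * T) / R          # = 0.9216, i.e. 92.16%

# Two extra internal reflections before exiting (three reflections total).
triple_bounce = (T * R * R * R * T) / R  # ~ 0.0015, i.e. about 0.15%

print(f"{single_bounce:.2%}, {triple_bounce:.2%}")
```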

We consider the received brightness to be the superposition of two partially polarized waves. The ISM separates the two waves based on their respective transmitted radiance sinusoids, expressed by Eqs. (13) and (14); we extract the partially polarized light wave from the front surface to complete the subsequent reconstruction.

## 4. Results and precision analyses

Owing to the choice of an area light source and to the reconstruction process, our method improves the speed and efficiency of reconstruction to a certain extent. Verification experiments show the performance of the two traditional reconstruction methods. Polarization analysis efficiently obtains accurate information about the surface normal, but the subsequent iteration is comparatively slow and computationally tedious for a curved surface: when the initial iteration is the true shape, the root-mean-square (RMS) error is 0.55 mm, and when the initial iteration is the previously reconstructed result, the RMS error is 0.67 mm [7]. For light-path triangulation, we find the diverse depth points along a light path in the experimental verification: we need to calculate the distance between the incident ray and the reflected ray and find the minimum of the differences, and efficiency suffers while the different depth points are detected. The RMS error is 1.47 mm for the reconstruction of transparent objects [17]. Our method, by contrast, combines polarization analysis and light-path triangulation and remedies the deficiencies of each method used alone; it further enhances the measurement accuracy and guarantees the robustness of the system.

To further validate the polarized light measurements method for different features of the measured objects, a series of comparison experiments was designed for thickness, material, and curvature radius, fixing two of these three features in each case. We chose different types of glass plates and lenses as the objects. As an evaluation criterion, we assessed the precision through the distribution of the point-cloud RMS error and the error map, where the RMS errors are the error values between the point-cloud data and the ideal surface.

#### 4.1 3D reconstruction on transparent objects with different thicknesses

To verify the impact of thickness on the PLM method, we fixed the other features, the material and the curvature radius, and reconstructed glass plates with various thicknesses. Figures 11 and 12 show the accuracy for thicknesses of 5 mm, 7 mm, 10 mm, and 15 mm.

We observe that the measuring accuracy improves continuously as the thickness increases. This is mainly because the greater the thickness, the longer the optical path in the object: more of the disturbing light that causes interference on the measured surface is absorbed and filtered out during transmission, and the precision of reconstruction improves as the disturbing light decreases. Compared with the 5 mm plate, the RMS error for the 15 mm thickness was 40.41% lower.

#### 4.2 3D reconstruction on transparent objects with different materials

Objects of different materials mean diverse refractive indices in our method. To explore the effect of the material, we chose four lenses made of F2, F4, F5, and BK7 glass, with refractive indices of 1.62004, 1.61659, 1.60342, and 1.51680, respectively. Figures 13 and 14 show the measuring accuracy for the different materials.

In the experiments, the accuracy improves slightly with a larger refractive index. We can express the reflectivity $R$ of natural light as a function of the incident angle ${\theta}_{1}$ and the refraction angle ${\theta}_{2}$ using the Fresnel coefficients, as shown in Eq. (15), and ${\theta}_{2}$ is connected to ${\theta}_{1}$ by Eq. (16):

$$R=\frac{1}{2}\left[\frac{{\sin }^{2}({\theta}_{1}-{\theta}_{2})}{{\sin }^{2}({\theta}_{1}+{\theta}_{2})}+\frac{{\tan }^{2}({\theta}_{1}-{\theta}_{2})}{{\tan }^{2}({\theta}_{1}+{\theta}_{2})}\right], \tag{15}$$

$$\sin {\theta}_{1}=n\sin {\theta}_{2}. \tag{16}$$

The larger the refractive index, the smaller the refraction angle ${\theta}_{2}$, which leads to a higher reflectivity $R$. The increased light intensity received by the camera reduces the error in photoelectric conversion and decreases the error in the subsequent numerical calculation.
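Under the assumption that Eq. (15) is the average of the two Fresnel intensity reflectances and Eq. (16) is Snell's law, the trend can be checked numerically (the incidence angle of 30° is an arbitrary illustration):

```python
import numpy as np

def natural_light_reflectivity(theta1, n):
    """Reflectivity R of unpolarized light, assuming the standard Fresnel
    average for Eq. (15), with theta2 from Snell's law (Eq. (16))."""
    theta2 = np.arcsin(np.sin(theta1) / n)
    F_s = (np.sin(theta1 - theta2) / np.sin(theta1 + theta2)) ** 2
    F_p = (np.tan(theta1 - theta2) / np.tan(theta1 + theta2)) ** 2
    return 0.5 * (F_s + F_p)

# Higher index -> higher reflectivity at the same incidence angle.
for n in (1.51680, 1.60342, 1.61659, 1.62004):   # BK7, F5, F4, F2
    print(n, natural_light_reflectivity(np.deg2rad(30.0), n))
```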

#### 4.3 3D reconstruction on transparent objects with different curvature radius

The PLM method was evaluated for different curvature radii using lenses whose curvature radii were 161.87 mm, 268.67 mm, 315.75 mm, 354.17 mm, and infinity (plano). We designed the experiment to obtain their precision respectively.

Figure 15 illustrates that an increasing curvature radius agrees with improving reconstruction accuracy. The reason is that a smaller curvature radius can prevent the cues from being entirely acquired by the camera, and the large change of the normal vector introduces errors when the information is processed and computed. The error maps of the reconstructed surfaces are shown in Figs. 16(a) and 16(b). Due to the diverse laying angles of the objects, the low-precision areas differ, but they generally concentrate on the edge of the measured surface.

## 5. Summary

This paper proposes a method to reconstruct transparent objects based on polarized light measurements (PLM), which combines polarization analysis and light-path triangulation. The method distinctively uses a triangulation system applying both radiometric cues and geometric cues through an area source, and it overcomes the deficiency of insufficient information and the dependence on geometric calibration. The use of an area source both accelerates and simplifies the reconstruction process. The technique is proved accurate on transparent models and can efficiently enhance the determinacy of the depth points and light paths. A series of comparison experiments was designed to verify the performance of the method; the experimental results for objects with different features (thickness, material, and curvature radius) are distinctly observed in the error maps. The thorough precision analyses further demonstrate that PLM outperforms traditional methods for the reconstruction of transparent objects.

Further research related to the present work will be focused on the reconstruction of both front and back surfaces for transparent objects.

## Funding

National Natural Science Foundation of China (NSFC) (61605016, 61505201); Project 111 (D17017); Science and Technology Development Plan of Jilin Province of China (20160520175JH).

## References and links

**1. **I. Ihrke, K. N. Kutulakos, H. P. A. Lensch, M. Magnor, and W. Heidrich, “Transparent and specular object reconstruction,” Comp. Graphics Forum **29**(8), 2400–2426 (2010).

**2. **M. Benezra and S. K. Nayar, “What does motion reveal about transparency?” IEEE International Conference on Computer Vision (IEEE, 2003), pp. 1025–1032.

**3. **N. Alt, P. Rives, and E. Steinbach, “Reconstruction of transparent objects in unstructured scenes with a depth camera,” in IEEE International Conference on Image Processing (IEEE, 2013), pp. 4131–4135.

**4. **G. Eren, O. Aubreton, F. Meriaudeau, L. A. Sanchez Secades, D. Fofi, A. T. Naskali, F. Truchetet, and A. Ercil, “Scanning from heating: 3D shape estimation of transparent objects from local surface heating,” Opt. Express **17**(14), 11457–11468 (2009). [PubMed]

**5. **X. Gong and S. Bansmer, “3-D ice shape measurements using mid-infrared laser scanning,” Opt. Express **23**(4), 4908–4926 (2015). [PubMed]

**6. **N. J. W. Morris and K. N. Kutulakos, “Reconstructing the surface of inhomogeneous transparent scenes by scatter-trace photography,” IEEE International Conference on Computer Vision (IEEE, 2007), pp. 1–8.

**7. **D. Miyazaki and K. Ikeuchi, “Shape estimation of transparent objects by using polarization analyses,” IPSJ Digital Courier **29**(2), 407–427 (2012).

**8. **D. Miyazaki, M. Saito, Y. Sato, and K. Ikeuchi, “Determining surface orientations of transparent objects based on polarization degrees in visible and infrared wavelengths,” J. Opt. Soc. Am. A **19**(4), 687–694 (2002). [PubMed]

**9. **D. Miyazaki and K. Ikeuchi, “Inverse polarization raytracing: estimating surface shapes of transparent objects,” in *Computer Vision and Pattern Recognition* (IEEE, 2005), pp. 910–917.

**10. **T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

**11. **M. Iqbal, O. Morel, and F. Mériaudeau, “Extract information of polarization imaging from local matching stereo,” International Conference on Intelligent and Advanced Systems (IEEE, 2010), pp. 1–6.

**12. **O. Morel, C. Stolz, F. Meriaudeau, and P. Gorria, “Active lighting applied to three-dimensional reconstruction of specular metallic surfaces by polarization imaging,” Appl. Opt. **45**(17), 4062–4068 (2006). [PubMed]

**13. **V. Chari and P. Sturm, “A theory of refractive photo-light-path triangulation,” IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1438–1445.

**14. **K. N. Kutulakos and E. Steger, “A theory of refractive and specular 3D shape by light-path triangulation,” Int. J. Comput. Vis. **76**(1), 13–29 (2007).

**15. **K. Han, K. Y. K. Wong, and M. Liu, “A fixed viewpoint approach for dense reconstruction of transparent objects,” IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 4001–4008.

**16. **D. E. Zongker, D. M. Werner, B. Curless, and D. H. Salesin, “Environment matting and compositing,” in Proceedings of SIGGRAPH (ACM, 1999), pp. 205–214.

**17. **M. Yamazaki, S. Iwata, and G. Xu, “Dense 3D reconstruction of specular and transparent objects using stereo cameras and phase-shift method,” in Asian Conference on Computer Vision, (Springer-Verlag, 2007), pp. 570–579.

**18. **N. J. W. Morris and K. N. Kutulakos, “Dynamic refraction stereo,” in Tenth IEEE International Conference on Computer Vision (IEEE, 2011), pp. 1573–1580.

**19. **P. C. Seitz, “3D measurement with active triangulation for spectacle lens optimization and individualization,” Proc. SPIE **9528**, 952806 (2015).

**20. **F. Drouet, C. Stolz, O. Laligant, and O. Aubreton, “3D measurement of both front and back surfaces of transparent objects by polarization imaging,” Proc. SPIE **9205**, 92050N (2015).

**21. **G. A. Atkinson and E. R. Hancock, “Recovery of surface orientation from diffuse polarization,” IEEE Trans. Image Process. **15**(6), 1653–1664 (2006). [PubMed]

**22. **M. Ferraton, C. Stolz, and F. Mériaudeau, “Optimization of a polarization imaging system for 3D measurements of transparent objects,” Opt. Express **17**(23), 21077–21082 (2009). [PubMed]

**23. **L. B. Wolff, “Polarization vision: a new sensory approach to image understanding,” Image Vis. Comput. **15**(2), 81–93 (1997).

**24. **M. Vedel, N. Lechocinski, and S. Breugnot, “3D shape reconstruction of optical element using polarization,” Proc. SPIE **7672**(1), 92–96 (2010).