Abstract

This paper presents a method for accurately reconstructing transparent objects using an area source. The method, called polarized light measurements (PLM), combines two reconstruction techniques: polarization analysis and light-path triangulation. The originality of this study lies in the PLM method's ability to extract radiometric cues and geometric cues simultaneously during surface reconstruction. To validate its performance, a series of comparison experiments was conducted on objects differing in thickness, material, and curvature radius of the unit under test. Subsequent error analyses were applied to evaluate the method, and the error distribution is clearly visible in the results. PLM achieves a more efficient process and higher accuracy than traditional reconstruction of transparent objects using polarization analysis or triangulation alone.

© 2017 Optical Society of America

1. Introduction

The 3D reconstruction of transparent objects is a hot topic in the field of machine vision. Various measuring methods have been put forward by researchers over the past two decades, and considerable progress has been made recently [1–6]. These methods include shape from distortion, reflectance-based reconstruction, scanning from heating, light-path triangulation, polarization analysis, and others. Among these techniques, light-path triangulation and polarization analysis have attracted the most attention owing to their flexible application and strong robustness.

Polarization analysis has the merit of high accuracy [7–10]. It estimates the surface shape of a transparent object by analyzing the polarization state of the reflected light, typically using an iterative computation to recover the shape. Polarization simplifies the image processing and analysis, so polarization analysis has been extensively researched in machine vision [11]. However, the depth cues can only be calculated by iteration, since the degree of polarization provides only radiometric information [12], and the iteration inevitably reduces measurement efficiency. Light-path triangulation [13,14], on the other hand, has the merits of a simple principle and easy implementation [15–17]. Using the geometric cues of the calibrated system, the method finds candidate depth points along each light path, but the surface normal cannot be determined directly. Nearly all the information used to determine the depth cues relies on system calibration [18,19], and these external factors introduce considerable error into the whole system. Hence, the accuracy is generally low.

In this paper, we combine light-path triangulation and polarization analysis into a reconstruction method for transparent objects called polarized light measurements (PLM). PLM acquires both radiometric cues and geometric cues of the surface simultaneously using an area source. Using the proven degree-of-polarization technique, we obtain the radiometric information and the normal vector of the surface under test at the same time. To avoid the iterative procedure while obtaining the point cloud of the surface profile, we introduce light-path triangulation to obtain the geometric cues. The use of an area source is another innovation of this paper: it significantly increases the measuring speed, but it also brings challenges. The unit under test (UUT) is a transparent object, so incident light from the source reflects at both its front and back surfaces, and with an area source the reflected rays from the two surfaces overlap. Hence, we propose the imaging separation method (ISM) to overcome this problem.

Compared with traditional polarization analysis, our method changes how the depth information is acquired and simplifies the data processing. The algorithm differs because the iteration is replaced: we apply light-path triangulation to acquire the depth points with certainty. Consequently, the camera and the light source must have fixed positions, unlike in pure polarization analysis. Unlike the iterative method, the points we calculate are not disturbed by neighboring data; each is computed independently, which avoids the heavy computational burden. Compared with traditional triangulation, our method solves for the normal vector through polarization analysis instead of through geometric cues from calibration. Thus, PLM reduces the dependence on system calibration.

The rest of the paper is organized as follows. The first part describes the principle of PLM. The second part presents the experimental setup and the imaging separation method. The last part is dedicated to evaluating our method through a series of comparison experiments with respect to thickness, material, and curvature radius. The paper ends with a conclusion presenting our contribution as well as some insights for future work.

2. Measuring principle

The reconstruction system we propose comprises a camera, a polarizer, and an area source; the system layout is shown in Fig. 1. For the reconstruction of transparent objects, the normal vector and the depth points are the two main parameters. To solve for the surface normal, we use the principle of polarization analysis and obtain the normal vector by the traditional process. For the depth information, we introduce triangulation to replace the iteration used in conventional polarization analysis: we compute the light path from the surface normal and determine the depth points directly. A brief description of the process follows.

 

Fig. 1 Structure of the 3D reconstruction for transparent objects.


Unpolarized light, after reflection at a specular surface, generally becomes partially linearly polarized, characterized by a degree of polarization (DOP). In PLM, each pixel is an independent unit of reconstruction, which eliminates confusion between distinct polarization states: the DOP is calculated per pixel, and pixels do not interfere with each other. The DOP can be expressed as a function of the zenith angle via the Fresnel equations, and the azimuth angle can be determined from the angle of polarization using the Fresnel reflectance model. The normal vector (see Fig. 2) is then calculated from the known zenith angle θr and azimuth angle ϕ. We solve for the incidence vector via the normal vector and the reflection vector, which is acquired through camera calibration. Finally, using the spatial positions of the experimental components, the intersection of the incident ray and the reflected ray gives the depth information we need.

 

Fig. 2 Specular reflection for the light wave.


2.1 The polarized light analyses

Polarization effects occur at an interface between two materials of different refractive index and can be treated by the Fresnel equations. A non-polarized light wave becomes partially polarized according to the surface normal and the refractive index of the material at the point of incidence [20]. For a given material, part of the wave is transmitted and the remainder is reflected. The reflected proportions of the p-wave and s-wave differ, and for dielectric materials they can be expressed in terms of the angle of incidence θ1 and the angle of refraction θ2, as shown in Eqs. (1) and (2).

$$R_p = \frac{\tan^2(\theta_1 - \theta_2)}{\tan^2(\theta_1 + \theta_2)} \tag{1}$$

$$R_s = \frac{\sin^2(\theta_1 - \theta_2)}{\sin^2(\theta_1 + \theta_2)} \tag{2}$$
According to Fig. 1, by rotating the polarizer, the transmitted radiance of the partially polarized light as a function of the angle θpol of the transmission axis follows the sinusoid [12]:

$$I(\theta_{pol}) = \frac{I_{max} + I_{min}}{2} + \frac{I_{max} - I_{min}}{2}\cos(2\theta_{pol} - 2\varphi) \tag{3}$$

where I is the light intensity and φ is the angle of polarization.

The maximum Imax and minimum Imin of the reflected light can be observed from Fig. 3 [21].
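Since Eq. (3) is a sinusoid in 2θpol, Imax, Imin, and φ can be recovered per pixel by a linear least-squares fit. The sketch below (our illustrative Python, not the authors' implementation; the function name is hypothetical) rewrites the model as I = a0 + a1·cos 2θ + a2·sin 2θ and solves for the three coefficients:

```python
import numpy as np

def fit_transmitted_radiance(theta_pol, intensity):
    """Fit Eq. (3): I = (Imax+Imin)/2 + (Imax-Imin)/2 * cos(2*theta - 2*phi).

    Rewritten as the linear model I = a0 + a1*cos(2t) + a2*sin(2t), so a
    least-squares solve recovers Imax, Imin, and the polarization angle phi.
    """
    t = np.asarray(theta_pol, dtype=float)
    A = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    a0, a1, a2 = np.linalg.lstsq(A, np.asarray(intensity, float), rcond=None)[0]
    amp = np.hypot(a1, a2)                 # (Imax - Imin) / 2
    phi = 0.5 * np.arctan2(a2, a1)         # angle of polarization
    return a0 + amp, a0 - amp, phi         # Imax, Imin, phi

# Synthetic check: 36 polarizer angles from 0 to 175 deg in 5 deg steps,
# matching the acquisition described in the experiments.
angles = np.deg2rad(np.arange(0, 180, 5))
I = 0.6 + 0.3 * np.cos(2 * angles - 2 * np.deg2rad(40.0))
Imax, Imin, phi = fit_transmitted_radiance(angles, I)
```

With noise-free synthetic data the fit returns the exact generating parameters (Imax = 0.9, Imin = 0.3, φ = 40°).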

 

Fig. 3 Variation curve of the fitting intensity.


The zenith angle and the azimuth angle are the two key parameters of the normal vector. We connect the zenith angle to the definition of the DOP ρ, which is expressed by Eq. (4), where Imax and Imin are acquired from Eq. (3).

$$\rho = \frac{I_{max} - I_{min}}{I_{max} + I_{min}} = \frac{R_s - R_p}{R_s + R_p} \tag{4}$$
Applying Snell's law and Eqs. (1) and (2), Eq. (4) can be rearranged into Eq. (5). This function reveals the relationship between ρ and the zenith angle θ for a transparent object [22]:

$$\rho = f(\theta) = \frac{2\sin\theta\tan\theta\sqrt{n^2 - \sin^2\theta}}{n^2 - \sin^2\theta + \sin^2\theta\tan^2\theta} \tag{5}$$

where n is the relative refractive index of the object.

As shown in Fig. 4, one DOP value corresponds to two zenith angles, except at the point where the DOP equals 1, which corresponds to the Brewster angle. However, in our triangulation, the opening angle α between the area source and the camera is kept smaller than twice the Brewster angle θB by the layout constraints, as expressed in Eq. (6). Thus the zenith angle θ must be less than the Brewster angle, which removes the ambiguity.

 

Fig. 4 Schematic of the degree of polarization.


$$0 < 2\theta = \alpha < 2\theta_B \tag{6}$$
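As a sketch of how the constrained inversion might look (illustrative Python, not the authors' implementation), Eq. (5) is monotonically increasing on [0, θB), so with the constraint θ < θB a simple bisection recovers the zenith angle from the DOP without ambiguity:

```python
import numpy as np

def dop(theta, n):
    """Eq. (5): degree of polarization of specular reflection at zenith theta."""
    s2 = np.sin(theta) ** 2
    return (2 * np.sin(theta) * np.tan(theta) * np.sqrt(n**2 - s2)
            / (n**2 - s2 + s2 * np.tan(theta) ** 2))

def zenith_from_dop(rho, n, iters=60):
    """Invert Eq. (5) by bisection on [0, theta_B), where dop() increases
    monotonically; the constraint theta < theta_B = arctan(n) removes the
    two-solution ambiguity shown in Fig. 4."""
    lo, hi = 0.0, np.arctan(n) - 1e-9      # Brewster angle theta_B = arctan(n)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dop(mid, n) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = dop(0.5, 1.5)                        # forward: DOP at zenith 0.5 rad, n = 1.5
theta_rec = zenith_from_dop(rho, 1.5)      # inverse recovers the zenith angle
```

A round trip through `dop` and `zenith_from_dop` returns the original zenith angle, confirming the inversion is unique below the Brewster angle.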

The azimuth angle is the other key parameter needed to determine the normal vector in space. On the basis of the theory of Wolff [23], the azimuth angle can be obtained using the Fresnel reflectance model. The relationship between the azimuth angle ϕ and the angle of polarization φ is given by the equation below.

$$\phi = \varphi \pm 90^\circ \tag{7}$$

We then detect the minimum brightness at the transmission-axis angle θpol, for which φ − θpol = 90°. At this point θpol is a known quantity, and the azimuth angle can be calculated by:

$$\phi = 90^\circ + \theta_{pol} \pm 90^\circ \tag{8}$$

We restrict the azimuth angle ϕ to the range 0° to 180°, since the incident light has a fixed range of incidence, as shown in Fig. 5. The ambiguity of the azimuth angle is therefore resolved by the constraints of the experimental system.

 

Fig. 5 Schematic of the normal vector for the measured surface.


The normal vector n [24] can be expressed by Eq. (9); the relationship between the related parameters is depicted in Fig. 5.

$$\mathbf{n} = \begin{bmatrix} c \\ d \\ 1 \end{bmatrix} = \begin{bmatrix} \tan\theta\cos\phi \\ \tan\theta\sin\phi \\ 1 \end{bmatrix} \tag{9}$$
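A minimal sketch of Eq. (9) in Python (illustrative only; the final unitization is our addition for the later vector arithmetic, and the function name is hypothetical):

```python
import numpy as np

def normal_from_angles(theta, phi):
    """Eq. (9): surface normal from zenith angle theta and azimuth angle phi.

    The un-normalized form is [tan(theta)cos(phi), tan(theta)sin(phi), 1];
    we unitize it afterwards for use in the light-path computations.
    """
    n = np.array([np.tan(theta) * np.cos(phi),
                  np.tan(theta) * np.sin(phi),
                  1.0])
    return n / np.linalg.norm(n)

n_flat = normal_from_angles(0.0, 0.0)        # flat surface -> [0, 0, 1]
n_tilt = normal_from_angles(np.pi / 4, 0.0)  # 45 deg zenith, azimuth 0
```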

2.2 The geometric cues processing

Since the normal vector and the calibrated camera are in different coordinate systems, we must map them into a unified world coordinate system to measure the depth points. Through the rotation transformation matrix R(ε), the transformation between the normal-vector coordinate system and the world coordinate system is:

$$\begin{bmatrix} c_{world} \\ d_{world} \\ e_{world} \end{bmatrix} = R(\varepsilon) \begin{bmatrix} c \\ d \\ 1 \end{bmatrix} \tag{10}$$
Unlike conventional polarization analysis, our method applies light-path triangulation, in which the depth points are calculated from the geometrical relationships of the system. Since specular reflection is a reversible process, the incident light, the reflected light, and the normal are coplanar according to Snell's law. We obtain the reflection vector R2 for any pixel through the calibrated camera, and the normal vector n is calculated by Eq. (10). After unitizing the vectors, we can easily solve for the incidence vector R1 via the equations below.
$$\mathbf{R}_1 - \mathbf{R}_2 = -2(\mathbf{n} \cdot \mathbf{R}_2)\,\mathbf{n} \tag{11}$$

$$\mathbf{R}_1 = \mathbf{R}_2 - 2(\mathbf{n} \cdot \mathbf{R}_2)\,\mathbf{n} \tag{12}$$
With the positions of the camera and the simulated area source in the world coordinate system, the depth point is obtained by solving for the intersection of the incident and reflected rays.
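The two steps above, solving Eq. (12) for the incidence vector and intersecting the incident and reflected rays, can be sketched as follows (illustrative Python, not the authors' code; the intersection uses the standard midpoint of the common perpendicular, so it also tolerates rays that are nearly, but not exactly, intersecting):

```python
import numpy as np

def incidence_from_reflection(R2, n):
    """Eq. (12): incident-ray direction from the reflected direction R2 and
    the unit surface normal n (specular reflection, vectors unitized)."""
    R2 = R2 / np.linalg.norm(R2)
    n = n / np.linalg.norm(n)
    return R2 - 2.0 * np.dot(n, R2) * n

def ray_intersection(p1, d1, p2, d2):
    """Depth point as the midpoint of the common perpendicular between the
    rays (p1, d1) and (p2, d2); this equals the exact intersection when the
    rays meet, and stays well-behaved when noise makes them skew."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = np.dot(d1, d2)
    r = p2 - p1
    t = (np.dot(r, d1) - b * np.dot(r, d2)) / (1.0 - b * b)
    s = (b * np.dot(r, d1) - np.dot(r, d2)) / (1.0 - b * b)
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Reflection off a horizontal surface (n = z-axis) flips the z component.
R1 = incidence_from_reflection(np.array([0.0, 0.6, 0.8]),
                               np.array([0.0, 0.0, 1.0]))
# Two rays constructed to meet at (1, 2, 3).
depth_point = ray_intersection(np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0]),
                               np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 3.0]))
```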

3. Experiments

The method presented in this paper reconstructs the exterior surface of a transparent object with a homogeneous interior and a smooth surface, such as glass, a lens, or a crystal. We used a 1360 × 1024 pixel SVS-Vistek SVS285MUCP camera for image acquisition, a manually rotated linear polarizer, and a white LED area source, as shown in Fig. 6. We adopted the pinhole model for the camera (smallest aperture and large focal length). The experimental system was completely calibrated, including the interior and exterior parameters of the camera.

 

Fig. 6 Experimental system.


Since the area source is an LED panel with a light-emitting area of 100 × 100 mm, it cannot be calibrated through computer control, yet its position influences the incident ray vector. Hence, we adopted an area-source simulation to obtain its spatial position, and the experimental results showed that the simulation served the subsequent measurements well.

With the non-polarized light incident on the object, the polarizer is rotated from 0° to 180° in 5° steps, collecting 36 images. After de-noising, we fitted the brightness variation for each pixel and solved for the degree of polarization. We then calculated the normal vector and obtained the depth points from the geometric cues of the system. The result is shown in Fig. 7.

 

Fig. 7 Reconstructed point cloud for a spherical mirror with a radius of curvature of 161.874 mm.


In reconstructing the transparent object, we must cope with the challenge of reflected light from the back surface: the correct brightness is overlaid by reflected brightness from the back surface. We therefore propose the imaging separation method (ISM). In the ISM, we consider only light reflected once at the back surface. At an air-glass interface, specular reflection is characterized by a 4% reflection coefficient [24]. Light returning from the back surface undergoes two transmissions and one reflection, so the ratio of its intensity to that from the front surface is (96% × 4% × 96%)/4%; the back-surface brightness is thus 92.16% of the front-surface reflected brightness (see Fig. 8(a)). For light reflected twice at the back surface, the wave undergoes two transmissions and three reflections, giving a ratio of (96% × 4% × 4% × 4% × 96%)/4%; this brightness is approximately 0.15% of the front-surface reflection (see Fig. 8(b)) and is insufficient to interfere with the front-surface reconstruction.
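The brightness ratios above reduce to simple products of the transmission and reflection coefficients; a short check (illustrative, assuming the paper's 4% coefficient and neglecting absorption):

```python
# Relative brightness of back-surface reflections versus the front-surface
# reflection, assuming a 4% air-glass reflection coefficient per interface.
r = 0.04                        # reflection coefficient at each interface
t = 1.0 - r                     # transmission coefficient

front = r                                   # single reflection at the front surface
back_once = t * r * t                       # transmit, reflect once at back, transmit
back_thrice = t * r * r * r * t             # two transmissions, three reflections

ratio_once = back_once / front              # = t**2 = 0.9216 (92.16%)
ratio_thrice = back_thrice / front          # = t**2 * r**2, about 0.15%
```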

 

Fig. 8 Analysis diagram of the ISM. (a) Cause of formation for the imaging. (b) Multiple reflections in the transparent object.


We consider the received brightness to be the superposition of two partially polarized waves. The ISM separates the two waves based on their respective transmitted radiance sinusoids, expressed by Eqs. (13) and (14). We extract the partially polarized wave from the front surface to complete the subsequent reconstruction.

$$I_{front}(\theta_{pol}) = \frac{I_{f,max} + I_{f,min}}{2} + \frac{I_{f,max} - I_{f,min}}{2}\cos(2(\theta_{pol} - \varphi_1)) \tag{13}$$

$$I_{back}(\theta_{pol}) = \frac{I_{b,max} + I_{b,min}}{2} + \frac{I_{b,max} - I_{b,min}}{2}\cos(2(\theta_{pol} - \varphi_2)) \tag{14}$$

The separated sinusoids and the effects we obtained are shown in Figs. 9 and 10.

 

Fig. 9 Separated sinusoid.


 

Fig. 10 Working sketches of the ISM. (a) Imaging of the reflected light from the back surface. (b), (c) Point cloud including the imaging. (d), (e) Point cloud with the imaging removed.


4. Results and precision analyses

Owing to the choice of the area light source and the reconstruction process, our method improves the speed and efficiency of reconstruction to a certain extent. Verification experiments show the performance of the two traditional reconstruction methods. Polarization analysis is efficient at obtaining accurate surface-normal information, but the subsequent iteration is slow and computationally tedious for curved surfaces: when the iteration is initialized with the true shape, the root-mean-square (RMS) error is 0.55 mm; when initialized with the previous reconstructed result, the RMS error is 0.67 mm [7]. For light-path triangulation, candidate depth points must be found along each light path in the experimental verification: we calculate the distance between the incident ray and the reflected ray and find the minimum difference, which lowers efficiency, and the RMS error is 1.47 mm for the reconstruction of transparent objects [17]. Our method combines polarization analysis and light-path triangulation and remedies the deficiencies of each method used alone. It further enhances the measurement accuracy and guarantees the robustness of the system.

To further validate the polarized light measurements method for different features of the measured objects, a series of comparison experiments was designed, varying one of thickness, material, and curvature radius while holding the other two fixed. We chose different types of glass plates and lenses as test objects. As an evaluation criterion, we assessed the precision through the distribution of the point-cloud RMS error and the error maps, where the RMS error is the deviation between the point-cloud data and the ideal surface.

4.1 3D reconstruction on transparent objects with different thicknesses

To verify the impact of thickness on the PLM method, we fixed the other features (material and curvature radius) and reconstructed glass plates of various thicknesses. Figures 11 and 12 show the accuracy for thicknesses of 5 mm, 7 mm, 10 mm, and 15 mm.

 

Fig. 11 Error distribution at different thicknesses.


 

Fig. 12 Comparison experiments of thicknesses.


We observe that the measuring accuracy improves continuously as the thickness increases. This is mainly because the greater the thickness, the longer the optical path inside the object: more of the disturbing light that interferes with the measured surface is absorbed and filtered out during transmission, and the precision of reconstruction improves as the disturbing light decreases. Compared to 5 mm, the RMS error was 40.41% lower at a thickness of 15 mm.

4.2 3D reconstruction on transparent objects with different materials

Different materials mean different refractive indices in our method. To explore the effect of material, we chose four lenses made of F2, F4, F5, and BK7 glass, with refractive indices of 1.62004, 1.61659, 1.60342, and 1.51680, respectively. Figures 13 and 14 show the measuring accuracy for the different materials.

 

Fig. 13 Error distribution at different materials.


 

Fig. 14 Comparison experiments of materials.


In these experiments, the accuracy improves slightly with a larger refractive index. The reflectivity R of natural light can be expressed as a function of the incidence angle θ1 and the refraction angle θ2 using the Fresnel coefficients, as shown in Eq. (15), and θ2 is related to θ1 by Eq. (16).

$$R = \frac{1}{2}(R_s + R_p) = \frac{1}{2}\left(\frac{\sin^2(\theta_1 - \theta_2)}{\sin^2(\theta_1 + \theta_2)} + \frac{\tan^2(\theta_1 - \theta_2)}{\tan^2(\theta_1 + \theta_2)}\right) \tag{15}$$

$$\frac{1}{n}\sin\theta_1 = \sin\theta_2 \tag{16}$$
We find that when the sum of the incidence angle and the refraction angle is less than 90°, a larger refractive index n gives a smaller refraction angle θ2, which leads to a higher reflectivity R. The increased light intensity received by the camera reduces the error in the photoelectric conversion and decreases the error in the subsequent numerical calculation.
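This trend can be checked numerically from Eqs. (15) and (16) (illustrative sketch; the 30° incidence angle is our arbitrary choice, and the function name is hypothetical):

```python
import numpy as np

def natural_light_reflectivity(theta1, n):
    """Eqs. (15)-(16): unpolarized reflectivity R = (Rs + Rp)/2 at incidence
    angle theta1 for relative refractive index n (air to dielectric)."""
    theta2 = np.arcsin(np.sin(theta1) / n)     # Snell's law, Eq. (16)
    Rs = np.sin(theta1 - theta2) ** 2 / np.sin(theta1 + theta2) ** 2
    Rp = np.tan(theta1 - theta2) ** 2 / np.tan(theta1 + theta2) ** 2
    return 0.5 * (Rs + Rp)

# At a fixed incidence angle, the higher-index glass reflects more light,
# matching the trend observed across the BK7/F5/F4/F2 lenses.
theta = np.deg2rad(30.0)
R_bk7 = natural_light_reflectivity(theta, 1.51680)
R_f2 = natural_light_reflectivity(theta, 1.62004)
```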

4.3 3D reconstruction on transparent objects with different curvature radius

The PLM method was tested at different curvature radii using lenses with curvature radii of 161.87 mm, 268.67 mm, 315.75 mm, 354.17 mm, and infinity (plano). We designed experiments to obtain the precision for each.

Figure 15 illustrates that increasing curvature radius agrees with improving reconstruction accuracy. The reason is that a smaller curvature radius makes it harder for the camera to acquire the cues entirely, and the large change in the normal vector introduces error into the information processing and computation. Error maps of the reconstructed surfaces are shown in Figs. 16(a) and 16(b). Owing to the diverse laying angles of the objects, the low-precision areas differ between objects but generally concentrate on the edges of the measured surface.

 

Fig. 15 Error distribution at different curvature radii.


 

Fig. 16 Comparison experiments of curvature radii.


5. Summary

This paper proposes a method for reconstructing transparent objects based on polarized light measurements (PLM), which combines polarization analysis and light-path triangulation. The method uses a triangulation system that exploits both radiometric and geometric cues through an area source, overcoming the deficiency of insufficient information and the dependence on geometric calibration. The use of an area source also accelerates and simplifies the reconstruction process. The technique proves accurate on transparent models and efficiently improves the certainty of the depth points and light paths. A series of comparison experiments was designed to verify the performance of the method, and the experimental results for objects with different features (thickness, material, and curvature radius) are clearly visible in the error maps. The thorough precision analyses further demonstrate that PLM outperforms traditional methods for the reconstruction of transparent objects.

Further research related to the present work will be focused on the reconstruction of both front and back surfaces for transparent objects.

Funding

National Natural Science Foundation of China (NSFC) (61605016, 61505201); Project 111 (D17017); Science and Technology Development Plan of Jilin Province of China (20160520175JH).

References and links

1. I. Ihrke, K. N. Kutulakos, H. P. A. Lensch, M. Magnor, and W. Heidrich, “Transparent and specular object reconstruction,” Comp. Graphics Forum 29(8), 2400–2426 (2010).

2. M. Benezra and S. K. Nayar, “What does motion reveal about transparency?” IEEE International Conference on Computer Vision (IEEE, 2003), pp. 1025–1032.

3. N. Alt, P. Rives, and E. Steinbach, “Reconstruction of transparent objects in unstructured scenes with a depth camera,” in IEEE International Conference on Image Processing (IEEE, 2013), pp. 4131–4135.

4. G. Eren, O. Aubreton, F. Meriaudeau, L. A. Sanchez Secades, D. Fofi, A. T. Naskali, F. Truchetet, and A. Ercil, “Scanning from heating: 3D shape estimation of transparent objects from local surface heating,” Opt. Express 17(14), 11457–11468 (2009). [PubMed]  

5. X. Gong and S. Bansmer, “3-D ice shape measurements using mid-infrared laser scanning,” Opt. Express 23(4), 4908–4926 (2015). [PubMed]  

6. N. J. W. Morris and K. N. Kutulakos, “Reconstructing the surface of inhomogeneous transparent scenes by scatter-trace photography,” IEEE International Conference on Computer Vision (IEEE, 2007), pp. 1–8.

7. D. Miyazaki and K. Ikeuchi, “Shape estimation of transparent objects by using polarization analyses,” IPSJ Digital Courier 29(2), 407–427 (2012).

8. D. Miyazaki, M. Saito, Y. Sato, and K. Ikeuchi, “Determining surface orientations of transparent objects based on polarization degrees in visible and infrared wavelengths,” J. Opt. Soc. Am. A 19(4), 687–694 (2002). [PubMed]  

9. D. Miyazaki and K. Ikeuchi, “Inverse polarization raytracing: estimating surface shapes of transparent objects,” in Computer Vision and Pattern Recognition (IEEE, 2005), pp. 910–917.

10. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

11. M. Iqbal, O. Morel, and F. Mériaudeau, “Extract information of polarization imaging from local matching stereo,” International Conference on Intelligent and Advanced Systems (IEEE, 2010), pp. 1–6.

12. O. Morel, C. Stolz, F. Meriaudeau, and P. Gorria, “Active lighting applied to three-dimensional reconstruction of specular metallic surfaces by polarization imaging,” Appl. Opt. 45(17), 4062–4068 (2006). [PubMed]  

13. V. Chari and P. Sturm, “A theory of refractive photo-light-path triangulation,” IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1438–1445.

14. K. N. Kutulakos and E. Steger, “A theory of refractive and specular 3D shape by light-path triangulation,” Int. J. Comput. Vis. 76(1), 13–29 (2007).

15. K. Han, K. Y. K. Wong, and M. Liu, “A fixed viewpoint approach for dense reconstruction of transparent objects,” IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 4001–4008.

16. D. E. Zongker, D. M. Werner, B. Curless, and D. H. Salesin, “Environment matting and compositing,” CiteSeer (1999).

17. M. Yamazaki, S. Iwata, and G. Xu, “Dense 3D reconstruction of specular and transparent objects using stereo cameras and phase-shift method,” in Asian Conference on Computer Vision, (Springer-Verlag, 2007), pp. 570–579.

18. N. J. W. Morris and K. N. Kutulakos, “Dynamic refraction stereo,” in Tenth IEEE International Conference on Computer Vision (IEEE, 2011), pp. 1573–1580.

19. P. C. Seitz, “3D measurement with active triangulation for spectacle lens optimization and individualization,” Proc. SPIE 9528, 952806 (2015).

20. F. Drouet, C. Stolz, O. Laligant, and O. Aubreton, “3D measurement of both front and back surfaces of transparent objects by polarization imaging,” Proc. SPIE 9205, 92050N (2015).

21. G. A. Atkinson and E. R. Hancock, “Recovery of surface orientation from diffuse polarization,” IEEE Trans. Image Process. 15(6), 1653–1664 (2006). [PubMed]  

22. M. Ferraton, C. Stolz, and F. Mériaudeau, “Optimization of a polarization imaging system for 3D measurements of transparent objects,” Opt. Express 17(23), 21077–21082 (2009). [PubMed]  

23. L. B. Wolff, “Polarization vision: a new sensory approach to image understanding,” Image Vis. Comput. 15(2), 81–93 (1997).

24. M. Vedel, N. Lechocinski, and S. Breugnot, “3D shape reconstruction of optical element using polarization,” Proc. SPIE 7672(1), 92–96 (2010).

References

  • View by:
  • |
  • |
  • |

  1. I. Ihrke, K. N. Kutulakos, H. P. A. Lensch, M. Magnor, and W. Heidrich, “Transparent and specular object reconstruction,” Comp. Graphics Forum 29(8), 2400–2426 (2010).
  2. M. Benezra and S. K. Nayar, “What does motion reveal about transparency?” IEEE International Conference on Computer Vision (IEEE, 2003), pp. 1025–1032.
  3. N. Alt, P. Rives, and E. Steinbach, “Reconstruction of transparent objects in unstructured scenes with a depth camera,” in IEEE International Conference on Image Processing (IEEE, 2013), pp. 4131–4135.
  4. G. Eren, O. Aubreton, F. Meriaudeau, L. A. Sanchez Secades, D. Fofi, A. T. Naskali, F. Truchetet, and A. Ercil, “Scanning from heating: 3D shape estimation of transparent objects from local surface heating,” Opt. Express 17(14), 11457–11468 (2009).
    [PubMed]
  5. X. Gong and S. Bansmer, “3-D ice shape measurements using mid-infrared laser scanning,” Opt. Express 23(4), 4908–4926 (2015).
    [PubMed]
  6. N. J. W. Morris and K. N. Kutulakos, “Reconstructing the surface of inhomogeneous transparent scenes by scatter-trace photography,” IEEE International Conference on Computer Vision (IEEE, 2007), pp. 1–8.
  7. D. Miyazaki and K. Ikeuchi, “Shape estimation of transparent objects by using polarization analyses,” IPSJ Digital Courier 29(2), 407–427 (2012).
  8. D. Miyazaki, M. Saito, Y. Sato, and K. Ikeuchi, “Determining surface orientations of transparent objects based on polarization degrees in visible and infrared wavelengths,” J. Opt. Soc. Am. A 19(4), 687–694 (2002).
    [PubMed]
  9. D. Miyazaki and K. Ikeuchi, “Inverse polarization raytracing: estimating surface shapes of transparent objects,” in Computer Vision and Pattern Recognition (IEEE, 2005), pp. 910–917.
  10. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.
  11. M. Iqbal, O. Morel, and F. Mériaudeau, “Extract information of polarization imaging from local matching stereo,” International Conference on Intelligent and Advanced Systems (IEEE, 2010), pp. 1–6.
  12. O. Morel, C. Stolz, F. Meriaudeau, and P. Gorria, “Active lighting applied to three-dimensional reconstruction of specular metallic surfaces by polarization imaging,” Appl. Opt. 45(17), 4062–4068 (2006).
    [PubMed]
  13. V. Chari and P. Sturm, “A theory of refractive photo-light-path triangulation,” IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1438–1445.
  14. K. N. Kutulakos and E. Steger, “A theory of refractive and specular 3D shape by light-path triangulation,” Int. J. Comput. Vis. 76(1), 13–29 (2007).
  15. K. Han, K. Y. K. Wong, and M. Liu, “A fixed viewpoint approach for dense reconstruction of transparent objects,” IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 4001–4008.
  16. D. E. Zongker, D. M. Werner, B. Curless, and D. H. Salesin, “Environment matting and compositing,” CiteSeer (1999).
  17. M. Yamazaki, S. Iwata, and G. Xu, “Dense 3D reconstruction of specular and transparent objects using stereo cameras and phase-shift method,” in Asian Conference on Computer Vision, (Springer-Verlag, 2007), pp. 570–579.
  18. N. J. W. Morris and K. N. Kutulakos, “Dynamic refraction stereo,” in Tenth IEEE International Conference on Computer Vision (IEEE, 2011), pp. 1573–1580.
  19. P. C. Seitz, “3D measurement with active triangulation for spectacle lens optimization and individualization,” Proc. SPIE 9528, 952806 (2015).
  20. F. Drouet, C. Stolz, O. Laligant, and O. Aubreton, “3D measurement of both front and back surfaces of transparent objects by polarization imaging,” Proc. SPIE 9205, 92050N (2015).
  21. G. A. Atkinson and E. R. Hancock, “Recovery of surface orientation from diffuse polarization,” IEEE Trans. Image Process. 15(6), 1653–1664 (2006).
    [PubMed]
  22. M. Ferraton, C. Stolz, and F. Mériaudeau, “Optimization of a polarization imaging system for 3D measurements of transparent objects,” Opt. Express 17(23), 21077–21082 (2009).
    [PubMed]
  23. L. B. Wolff, “Polarization vision: a new sensory approach to image understanding,” Image Vis. Comput. 15(2), 81–93 (1997).
  24. M. Vedel, N. Lechocinski, and S. Breugnot, “3D shape reconstruction of optical element using polarization,” Proc. SPIE 7672(1), 92–96 (2010).



Figures (16)

Fig. 1. Structure of the 3D reconstruction for transparent objects.
Fig. 2. Specular reflection for the light wave.
Fig. 3. Variation curve of the fitting intensity.
Fig. 4. Schematic of the degree of polarization.
Fig. 5. Schematic of the normal vector for the measured surface.
Fig. 6. Experimental system.
Fig. 7. Reconstructed point cloud for the spherical mirror with a radius of curvature of 161.874 mm.
Fig. 8. Analysis diagram of the ISM. (a) Cause of the imaging formation. (b) Multiple reflections in the transparent object.
Fig. 9. Separated sinusoid.
Fig. 10. Working sketches of the ISM. (a) Imaging of the light reflected from the back surface. (b), (c) Point clouds including the imaging. (d), (e) Point clouds with the imaging removed.
Fig. 11. Error distribution for different thicknesses.
Fig. 12. Comparison experiments on thickness.
Fig. 13. Error distribution for different materials.
Fig. 14. Comparison experiments on materials.
Fig. 15. Error distribution for different curvature radii.
Fig. 16. Comparison experiments on curvature radii.

Equations (16)


$$R_p = \frac{\tan^2(\theta_1 - \theta_2)}{\tan^2(\theta_1 + \theta_2)} \quad (1)$$

$$R_s = \frac{\sin^2(\theta_1 - \theta_2)}{\sin^2(\theta_1 + \theta_2)} \quad (2)$$

$$I(\theta_{pol}) = \frac{I_{max} + I_{min}}{2} + \frac{I_{max} - I_{min}}{2}\cos(2\theta_{pol} - 2\varphi) \quad (3)$$

$$\rho = \frac{I_{max} - I_{min}}{I_{max} + I_{min}} = \frac{R_s - R_p}{R_s + R_p} \quad (4)$$

$$\rho = f(\theta) = \frac{2\sin\theta\tan\theta\sqrt{n^2 - \sin^2\theta}}{n^2 - \sin^2\theta + \sin^2\theta\tan^2\theta} \quad (5)$$

$$0 < 2\theta = \alpha < 2\theta_B \quad (6)$$

$$\phi = \varphi \pm 90^\circ \quad (7)$$

$$\phi = 90^\circ + \theta_{pol} \pm 90^\circ \quad (8)$$
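As an illustrative aside (not the authors' implementation), the polarization-image model above can be sketched in code: fit the sinusoid of Eq. (3) to intensity samples taken at several polarizer angles, form the degree of polarization of Eq. (4), and invert Eq. (5) for the zenith angle on the interval below the Brewster angle implied by Eq. (6). The function names and the sample refractive index n = 1.5 are assumptions:

```python
import numpy as np

def fit_sinusoid(angles_rad, intensities):
    """Least-squares fit of I = a0 + a1*cos(2t) + a2*sin(2t), as in Eq. (3)."""
    A = np.column_stack([np.ones_like(angles_rad),
                         np.cos(2 * angles_rad),
                         np.sin(2 * angles_rad)])
    a0, a1, a2 = np.linalg.lstsq(A, intensities, rcond=None)[0]
    amp = np.hypot(a1, a2)
    # Recover I_max, I_min, and the phase phi of cos(2(t - phi)).
    return a0 + amp, a0 - amp, 0.5 * np.arctan2(a2, a1)

def degree_of_polarization(i_max, i_min):
    """Eq. (4), radiometric form."""
    return (i_max - i_min) / (i_max + i_min)

def zenith_from_dop(rho, n=1.5):
    """Invert rho = f(theta) of Eq. (5) by bisection.

    f is monotonically increasing from 0 to 1 on (0, theta_B),
    theta_B = arctan(n) being the Brewster angle (cf. Eq. (6)).
    """
    def f(theta):
        s, t = np.sin(theta), np.tan(theta)
        return 2 * s * t * np.sqrt(n**2 - s**2) / (n**2 - s**2 + s**2 * t**2)

    lo, hi = 1e-6, np.arctan(n) - 1e-6
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Restricting the search to angles below the Brewster angle makes the inversion unique; on the full range the same ρ corresponds to two zenith angles, which is exactly the ambiguity the iterative schemes discussed in the introduction must resolve.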
$$\mathbf{n} = \begin{bmatrix} c \\ d \\ 1 \end{bmatrix} = \begin{bmatrix} \tan\theta\cos\phi \\ \tan\theta\sin\phi \\ 1 \end{bmatrix} \quad (9)$$

$$\begin{bmatrix} c_{world} \\ d_{world} \\ e_{world} \end{bmatrix} = R(\varepsilon)\begin{bmatrix} c \\ d \\ 1 \end{bmatrix} \quad (10)$$

$$\mathbf{R}_1 - \mathbf{R}_2 = -2(\mathbf{n}\cdot\mathbf{R}_2)\,\mathbf{n} \quad (11)$$

$$\mathbf{R}_1 = \mathbf{R}_2 - 2(\mathbf{n}\cdot\mathbf{R}_2)\,\mathbf{n} \quad (12)$$

$$I_{front} = \frac{I_{fmax} + I_{fmin}}{2} + \frac{I_{fmax} - I_{fmin}}{2}\cos(2(\theta_{pol} - \varphi_1)) \quad (13)$$

$$I_{back} = \frac{I_{bmax} + I_{bmin}}{2} + \frac{I_{bmax} - I_{bmin}}{2}\cos(2(\theta_{pol} - \varphi_2)) \quad (14)$$

$$R = \frac{1}{2}(R_s + R_p) = \frac{1}{2}\left(\frac{\sin^2(\theta_1 - \theta_2)}{\sin^2(\theta_1 + \theta_2)} + \frac{\tan^2(\theta_1 - \theta_2)}{\tan^2(\theta_1 + \theta_2)}\right) \quad (15)$$

$$\frac{1}{n}\sin\theta_1 = \sin\theta_2 \quad (16)$$
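To show how the geometric side composes, the sketch below (again an illustration under assumed function names, not the paper's code) builds the surface normal from zenith and azimuth angles in the form of Eq. (9), normalizes it, and applies the mirror-reflection relation of Eq. (12):

```python
import numpy as np

def surface_normal(theta, phi):
    """Unit surface normal from zenith theta and azimuth phi (cf. Eq. (9))."""
    v = np.array([np.tan(theta) * np.cos(phi),
                  np.tan(theta) * np.sin(phi),
                  1.0])
    return v / np.linalg.norm(v)

def reflect(r_in, n_hat):
    """Mirror reflection of an incident direction about a unit normal,
    r_out = r_in - 2 (n . r_in) n, as in Eq. (12)."""
    return r_in - 2.0 * np.dot(n_hat, r_in) * n_hat
```

Because Eq. (12) is an orthogonal transformation, the reflected ray keeps the length of the incident one; this is a convenient sanity check when the normal recovered from polarization is fed into the light-path triangulation.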
