
3D surface reconstruction of transparent objects using laser scanning with a four-layers refinement process

Open Access

Abstract

Acquiring the 3D geometry of objects has been an active research topic, and the reconstruction of transparent objects poses a particular challenge. In this paper, we present a fully automatic approach for reconstructing the exterior surface of a complex transparent scene. By scanning a line laser with a galvo-mirror, images of the scene are captured from two viewing directions. Owing to light transmission inside the transparent object, the captured feature points and the calibrated laser plane produce, through direct triangulation, a large number of 3D point candidates, many of which are incorrect. Various situations of laser transmission inside the transparent object are analyzed, and the reconstructed 3D laser point candidates are classified into two types: first-reflection points and non-first-reflection points. The first-reflection points are the laser points first reflected on the front surface of the measured object. A novel four-layer refinement process is then proposed to extract the first-reflection points step by step from the 3D point candidates through optical geometric constraints: (1) Layer-1: fake points removed by a single camera; (2) Layer-2: ambiguity points removed by the dual-camera joint constraint; (3) Layer-3: missing first-reflection exterior surface points retrieved by fusion; and (4) Layer-4: severe ambiguity points removed by contour-continuity. In addition, a novel calibration model of the imaging system is proposed for reconstructing the 3D point candidates through triangulation. Compared with the traditional laser scanning method, we incorporate the viewing-angle information of a second camera and adopt the novel four-layer refinement process for the reconstruction of transparent objects. Various experiments on real objects demonstrate that the proposed method can successfully extract the first-reflection points from the candidates and recover the complex shapes of transparent and semitransparent objects.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) profilometry not only serves as a guide in manufacturing inspection [1], mobile robot navigation [2], and automated manipulation [3], but also plays an indispensable role in other fields [4–6]. Optical surface reconstruction is a crucial technique because it requires no physical contact with the measured objects [7,8]. Optical three-dimensional profilometry can be classified into two types [9], namely, passive methods [10–13] and active methods [14–17]. However, for both passive and active methods, 3D reconstruction of transparent objects remains a challenge. For passive methods, the light transmittance of transparent objects disturbs conventional texture matching. For active methods, two factors cause the reconstruction problem on transparent objects. First, the weak light reflected from a transparent surface leads to poor reconstruction results. Second, the complex light transport inside transparent objects makes the matching step of binocular reconstruction difficult.

In recent years, researchers have proposed several methods to address the surface reconstruction of transparent objects. According to the characteristics of light transport, these methods fall into three standard categories [18]: reflection-based, refraction-based, and intrusive methods.

1.1 Reflection-based methods

Among the reflection-based methods, Liu et al. [19] proposed a 3D reconstruction method that combines a frequency-based reconstruction method with a frequency-based matting method. The projection scheme of the frequency-based method is similar to structured light: patterns are projected onto the target, and each pixel of the captured images is analyzed along the time axis. However, the frequency analysis is time-consuming, requiring approximately 33 minutes to capture and process all 1,350 images. A scatter-trace photography method was proposed by Morris et al. [20] to reconstruct inhomogeneous transparent scenes; the scene is captured from one or more viewpoints while a proximal light source is moved. Eren et al. [21] proposed Scanning From Heating (SFH), which determines the surface shape of transparent objects through laser surface heating and thermal imaging. Gong et al. [22] adopted a general approach based on mid-infrared (MIR) laser scanning to measure 3D ice shapes, because MIR radiation penetrates ice to within 10 micrometers. Landmann et al. [23] further presented a sequential thermal fringe projection approach, based on scanning from heating, to achieve high-resolution reconstruction of transparent objects. However, the high laser power (up to 40 W) and long irradiation period may damage the measured object, and the low resolution of the MWIR camera limits the density of the point clouds. By analyzing images with different polarization angles, Xu et al. [24] proposed polarized light measurements (PLM), which combine the properties of polarized light and light-path triangulation to extract radiometric and geometric cues simultaneously. In our previous work, He et al. [25] proposed a laser tracking frame to frame (LTFtF) method, which recovers the transparent surface by picking the laser reflected by the front surface out of the scattered laser reflected by other surfaces.

1.2 Refraction-based methods

Among the refraction-based methods, Qian et al. [26] introduced a position-normal consistency constraint to acquire both surfaces of a given transparent object. Building on this constraint, Wu et al. [27] and Lyu et al. [28] recovered finer details of transparent objects by optimizing an objective based on three constraints: surface and refraction normal consistency, surface projection and silhouette consistency, and surface smoothness. However, complete reconstruction with all details through optimization has a prerequisite: an initial rough model must exist before the optimization, obtained by other methods such as space carving, and a computationally expensive optimization is then applied to refine it. Other researchers have addressed dynamic transparent shape reconstruction. Ding et al. [29] robustly captured fast-evolving fluid wavefronts from multi-perspective views. Ji et al. [30] chose a novel computational imaging solution that uses a light field probe (LF-Probe) to observe dynamic transparent objects; for example, 3D gas flow is reconstructed by establishing reliable ray-ray correspondences through the LF-Probe. Inspired by Ding's work, Qian et al. [31] obtained more information on dynamic fluid surfaces with a novel global optimization-based approach that recovers both depths and normals. Further, Qian et al. [32] took advantage of multiple viewpoints under the normal consistency constraint, making great progress in the reconstruction of water surfaces and underwater scenes, under the assumption that light refracts only once at the water surface.

1.3 Intrusive methods

The intrusive methods, as the name implies, place the measured object in a special environment to change its optical propagation characteristics, for example by diffuse coating [33], heating [34], or immersing the object in special liquids [35]. However, these approaches are difficult to implement and may damage the objects to be reconstructed.

1.4 Proposed method

In this work, a reflection-based approach is presented for reconstructing the exterior surface of transparent objects using laser scanning. Compared with the surface illuminance obtained by other structured-light methods, such as phase-shifting [36–40] and Gray-code methods [41,42], the illuminance intensity of a single line projected by the laser is higher, which avoids the problem of insufficient reflected-light intensity when reconstructing transparent or semi-transparent objects. Moreover, full-surface illumination easily causes intense cross-reflections inside transparent objects. As a result, laser line scanning is better suited to creating high-illuminance texture features. The imaging system is composed of a line laser scanned by a galvo-mirror and two cameras, and is calibrated by a proposed model. Various situations of laser light transport inside the transparent object are analyzed, and the captured laser point candidates are classified into two types: first-reflection points and non-first-reflection points. The first-reflection points are the laser points first reflected on the front surface of the measured object. A novel four-layer refinement process is then proposed to extract the first-reflection points step by step from the large number of 3D point candidates through optical geometric constraints.

Compared with the traditional laser scanning method, we incorporate the viewing-angle information of a second camera and adopt a novel refinement process for the reconstruction of transparent objects. The proposed method therefore achieves higher accuracy and more reliable 3D points. Compared with our previous work [25], the proposed method is less restricted and fully automatic, as validated in the designed experiments. Various experiments on real objects demonstrate that the proposed method can successfully extract the first-reflection points from the candidates and recover the complex shapes of transparent and semitransparent objects.

The remainder of this paper is organized as follows. Section 2 presents the composition of the entire system and its calibration. Section 3 analyzes various situations of laser transmission inside the transparent object and details the novel four-layer refinement process. In Section 4, various experiments, including the reconstruction of standard and complex objects, are conducted to validate the performance of the proposed method. Section 5 presents the contributions and limitations of the proposed method. Section 6 concludes the work.

2. Imaging system and calibration

Traditional laser scanning systems usually involve mechanical translation platforms, which make the system large and complicated. In recent years, galvanometric laser scanners have solved this problem by replacing the mechanical platform with a galvo-mirror. Most calibration approaches for such galvo-laser scanning systems [43–45] focus on one camera and one galvo-laser. Since we need the viewing-angle information of a second camera, a novel approach that simultaneously calibrates two cameras and the galvo-laser is also proposed for reconstructing the 3D point candidates through triangulation. The proposed reconstruction system is shown in Fig. 1. It is composed of four modules: an image acquisition module, a structured-light generation module, a control module, and a computing module. In the structured-light generation module, a laser source emits a line laser onto a galvanometer with single-axis rotation capability, which reflects the laser onto the object to be reconstructed, forming the designed feature points. The laser scans across the measured surface as the galvo-mirror rotates through a sequence of set angles. The image acquisition module collects the image pairs and transfers them to the computing unit. The control module synchronizes the structured-light generation and image acquisition through pulse modulation. Finally, the proposed method is applied to the image pairs, and the 3D point cloud is obtained.


Fig. 1. Schematic diagram of the proposed 3D imaging system.


Figure 2 shows the laser scanning system with dual cameras. Based on this system, the coordinate system of the galvo-mirror is established as shown in Fig. 3. First, the rotation axis of the galvo-mirror is taken as the z-axis. Second, the x-axis is parallel to the line-laser incident plane $\pi _1$ and perpendicular to the z-axis. The angle $\alpha$ is the angle between the galvo-mirror reflection plane $\pi _s$ and the y-axis. Ideally, the incident plane $\pi _1$ passes through the z-axis. To account for installation deviations, two parameters $\gamma$ and $d$ are introduced: $\gamma$ is the angle between the z-axis and the intersection line of $\pi _1$ with the YOZ plane, and $\pi _1$ intersects the y-axis at the point $(0,d,0)$. The incident plane $\pi _1$ and the reflection plane $\pi _s$ can then be expressed as Eqs. (1) and (2). According to the Householder transformation [46], the reflection matrix $H$ can be calculated from the normal of $\pi _s$, as shown in Eq. (3). The normal vector of the reflected laser plane $\pi _2$ then follows as Eq. (4).

$$\pi_1 : y - \tan(\gamma)z - d = 0 .$$
$$\pi_s : \cos(\alpha)x - \sin(\alpha)y = 0 .$$
$$H = I - 2\vec{n}_{\pi_{s}}\vec{n}_{\pi_{s}}^{T}= \left[ \begin{matrix} -\cos(2\alpha) & \sin(2\alpha) & 0\\ \sin(2\alpha) & \cos(2\alpha) & 0\\ 0 & 0 & 1\\ \end{matrix} \right] .$$
$$\vec{n}_{\pi_{2}}= H\vec{n}_{\pi_{1}} = \left[ \begin{matrix} \sin(2\alpha) \\ \cos(2\alpha) \\ -\tan(\gamma) \\ \end{matrix} \right] .$$
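The closed forms in Eqs. (3) and (4) can be verified symbolically. The following short script is a sketch using SymPy (the variable names are ours, not from the paper); it builds the Householder reflection from the mirror normal $\vec{n}_{\pi_s} = (\cos\alpha, -\sin\alpha, 0)^T$ and applies it to the incident-plane normal $\vec{n}_{\pi_1} = (0, 1, -\tan\gamma)^T$:

```python
import sympy as sp

alpha, gamma = sp.symbols('alpha gamma', real=True)

n_s = sp.Matrix([sp.cos(alpha), -sp.sin(alpha), 0])  # unit normal of mirror plane pi_s
H = sp.eye(3) - 2 * n_s * n_s.T                      # Householder reflection, Eq. (3)
n_1 = sp.Matrix([0, 1, -sp.tan(gamma)])              # normal of incident plane pi_1

# Both simplify to the closed forms of Eqs. (3) and (4):
#   H       -> [[-cos(2a), sin(2a), 0], [sin(2a), cos(2a), 0], [0, 0, 1]]
#   H * n_1 -> [sin(2a), cos(2a), -tan(gamma)]
print(sp.trigsimp(H))
print(sp.trigsimp(H * n_1))
```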


Fig. 2. Laser Scanning System with Dual-cameras.



Fig. 3. Calibration Model of Galvo-mirror.


Moreover, the reflected laser plane $\pi _2$ passes through the point $(d\tan (\alpha ),d,0)$, so $\pi _2$ can be written as Eq. (5). Let the rotation vector $\vec {r}$ and translation vector $\vec {t}$ describe the transformation from the camera coordinate system to the galvo-mirror coordinate system, as in Eq. (6). The rotation matrix is then obtained by Rodrigues' formula [47] as Eq. (7), where $\theta = \sqrt {r_1^2 + r_2^2 + r_3^2}$.

The angle $\alpha$ is controlled by the input current $I$ as shown in Eq. (8), where $k$ is the linear angular gain per unit current and $\alpha _0$ is the initial bias angle. The transformation of a point $(x_c,y_c,z_c)$ in the camera coordinate system to the point $(x_s,y_s,z_s)$ in the galvo-mirror coordinate system is given by Eq. (9). Substituting Eq. (9) into Eq. (5), the

$$\pi_2 : \sin(2\alpha)x_s + \cos(2\alpha)y_s - \tan(\gamma)z_s - d = 0 .$$
$$\begin{array}{r} \vec{r} = (r_1,r_2,r_3) , \\ \vec{t} = (t_1,t_2,t_3) . \end{array}$$
$$R = \cos(\theta)I + \frac{(1-\cos(\theta))\vec{r}\vec{r}^T}{\theta^2} +\sin(\theta) \left[ \begin{matrix} 0 & -r_3/\theta & r_2/\theta \\ r_3/\theta & 0 & -r_1/\theta\\ -r_2/\theta & r_1/\theta & 0\\ \end{matrix} \right].$$
reflected laser plane $\pi _2$ in the camera coordinate system is obtained as Eq. (10). The galvo-mirror coordinate system is not fully constrained: it can slide along the rotation axis of the galvo-mirror. It is fixed by setting $t_3=0$.

In total, 9 independent unknown parameters ($k$, $\alpha _0$, $\gamma$, $d$, $r_1$, $r_2$, $r_3$, $t_1$, $t_2$) describe the galvo-mirror model without positional assumptions, and the assembly error is accounted for in the mathematical model. These parameters are estimated by minimizing the objective function in Eq. (11), where $D(P_{ij},\pi _2^j)$ is the distance from the sample point $P_{i,j}(x_{ij},y_{ij},z_{ij})$ to the $j$th estimated reflected laser plane $\pi _2^j$, and $X$ denotes the 9 parameters to be optimized. The Levenberg-Marquardt algorithm [48] is adopted to obtain the optimized parameters.

Here, the approach for acquiring the sample points $P_{i,j}(x_{ij},y_{ij},z_{ij})$ corresponding to the $j$th estimated reflected laser plane $\pi _2^j$ is given. As shown in Fig. 2, the two cameras are calibrated according to the pinhole model with distortion coefficients by Zhang's method [49]. When the

$$\alpha(I) = kI +\alpha_0 .$$
$$\left[ \begin{matrix} x_s \\ y_s \\ z_s \\ 1 \\ \end{matrix} \right] = \left[ \begin{matrix} \cos(\theta)+\frac{r_1^2(1-\cos(\theta))}{\theta^2} & \frac{r_1r_2(1-\cos(\theta))}{\theta^2}-\frac{r_3\sin(\theta)}{\theta} & \frac{r_1r_3(1-\cos(\theta))}{\theta^2}+\frac{r_2\sin(\theta)}{\theta} & t_1\\ \frac{r_1r_2(1-\cos(\theta))}{\theta^2}+\frac{r_3\sin(\theta)}{\theta} & \cos(\theta)+\frac{r_2^2(1-\cos(\theta))}{\theta^2} & \frac{r_2r_3(1-\cos(\theta))}{\theta^2}-\frac{r_1\sin(\theta)}{\theta} & t_2\\ \frac{r_1r_3(1-\cos(\theta))}{\theta^2}-\frac{r_2\sin(\theta)}{\theta} & \frac{r_2r_3(1-\cos(\theta))}{\theta^2}+\frac{r_1\sin(\theta)}{\theta} & \cos(\theta)+\frac{r_3^2(1-\cos(\theta))}{\theta^2} & t_3 \\ 0 & 0 & 0 & 1 \\ \end{matrix} \right] \left[ \begin{matrix} x_c \\ y_c \\ z_c \\ 1 \\ \end{matrix} \right] .$$
$$\left[ \begin{matrix} \sin(2\alpha) \\ \cos(2\alpha) \\ -\tan(\gamma) \\ -d \\ \end{matrix} \right]^T \left[ \begin{matrix} \cos(\theta)+\frac{r_1^2(1-\cos(\theta))}{\theta^2} & \frac{r_1r_2(1-\cos(\theta))}{\theta^2}-\frac{r_3\sin(\theta)}{\theta} & \frac{r_1r_3(1-\cos(\theta))}{\theta^2}+\frac{r_2\sin(\theta)}{\theta} & t_1\\ \frac{r_1r_2(1-\cos(\theta))}{\theta^2}+\frac{r_3\sin(\theta)}{\theta} & \cos(\theta)+\frac{r_2^2(1-\cos(\theta))}{\theta^2} & \frac{r_2r_3(1-\cos(\theta))}{\theta^2}-\frac{r_1\sin(\theta)}{\theta} & t_2\\ \frac{r_1r_3(1-\cos(\theta))}{\theta^2}-\frac{r_2\sin(\theta)}{\theta} & \frac{r_2r_3(1-\cos(\theta))}{\theta^2}+\frac{r_1\sin(\theta)}{\theta} & \cos(\theta)+\frac{r_3^2(1-\cos(\theta))}{\theta^2} & t_3 \\ 0 & 0 & 0 & 1 \\ \end{matrix} \right] \left[ \begin{matrix} x_c \\ y_c \\ z_c \\ 1 \\ \end{matrix} \right] = 0.$$
$$E(X) = \frac{\sum^n_{i=1}\sum^m_{j=1}D(P_{ij},\pi_2^j)}{mn}.$$
system scans across a planar target in multiple orientations, a large number of laser stripe points $P_{i,j}$ corresponding to the $j$th estimated reflected laser plane $\pi _2^j$ can be reconstructed by binocular triangulation with the dual cameras.
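As an illustration (not the authors' implementation), the 9 parameters can be fitted with SciPy's Levenberg-Marquardt solver applied to the least-squares analogue of Eq. (11). The containers `points` (per-angle stripe points in the camera frame) and `currents` (the corresponding galvo drive currents) are hypothetical names, and the sign convention follows Eqs. (4), (5), (8), and (9):

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(r):
    """Rotation matrix from the axis-angle vector r (Eq. (7))."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.cos(theta) * np.eye(3) + (1 - np.cos(theta)) * np.outer(k, k) \
        + np.sin(theta) * K

def residuals(X, points, currents):
    """Signed point-to-plane distances entering Eq. (11).

    X = (k, alpha0, gamma, d, r1, r2, r3, t1, t2); t3 is fixed to 0.
    points[j]  : (N_j, 3) stripe points for the j-th scan angle, camera frame
    currents[j]: galvo drive current I_j for the j-th scan angle
    """
    k_gain, a0, gamma, d, r1, r2, r3, t1, t2 = X
    R = rodrigues(np.array([r1, r2, r3]))
    t = np.array([t1, t2, 0.0])
    res = []
    for P, I in zip(points, currents):
        a = k_gain * I + a0                                        # Eq. (8)
        n = np.array([np.sin(2*a), np.cos(2*a), -np.tan(gamma)])   # Eq. (4)
        Ps = np.asarray(P) @ R.T + t                               # camera -> galvo frame, Eq. (9)
        res.append((Ps @ n - d) / np.linalg.norm(n))               # distance to plane of Eq. (5)
    return np.concatenate(res)

# fit = least_squares(residuals, x0, args=(points, currents), method='lm')
```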

To sum up, the calibration procedure of the system including the galvo-mirror and dual-cameras is described in Fig. 4 as follows:

  • 1. The checkerboard is placed at different poses and captured by the dual cameras without laser scanning.
  • 2. The dual cameras are calibrated from the captured images, including the intrinsic parameters, extrinsic parameters, and distortion coefficients.
  • 3. The planar target is placed at different orientations and captured by the dual cameras with laser scanning.
  • 4. The laser stripe feature points are extracted and reconstructed by binocular triangulation with the dual cameras.
  • 5. The 9 independent parameters are estimated according to Eq. (11).


Fig. 4. Flowchart of the laser scanning system calibration procedure.


3. Method for 3D reconstruction

An overview of the proposed method is shown in Fig. 5. We propose a four-layer refinement process for reconstructing the exterior surface of a transparent object with unknown interior. The four progressive layers extract the exterior surface of transparent objects through optical geometric constraints.


Fig. 5. Overview of the proposed method.


3.1 Layer-1: fake points removed by single camera

To remove the fake points with a single camera, the optical path is analyzed first, as shown in Fig. 6(a). When the point $g$ on the galvanometer reflects the laser onto the point $p$ on the exterior surface of the measured object, part of the light is reflected directly into the camera along ray $\vec {l}$ by diffuse reflection. The remaining light is refracted into the transparent object, reflected by the back surface at the point $p'$, and finally captured by the camera along ray $\vec {l'}$. From the calibration results, the incident laser plane $\pi _2$ is known. By extracting the laser feature points in the camera image, the rays $\vec {l}$ and $\vec {l'}$ can be determined, and the point candidates $p$ and $p^*$ can be calculated by triangulation as shown in Eq. (12).

$$\left\{ \begin{aligned} &\pi_2: (p-p_0)\cdot\vec{n}=0 \\ &p = d\,\vec{l} + l_0, \quad d \in \mathbb{R} \end{aligned} \right. \Rightarrow p .$$
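Equation (12) is a standard ray-plane intersection. A minimal sketch (our own helper, with hypothetical argument names) reads:

```python
import numpy as np

def ray_plane_intersect(l0, l_dir, p0, n):
    """Solve Eq. (12): intersect the camera ray p = l0 + d*l_dir
    with the laser plane (p - p0) . n = 0."""
    denom = float(np.dot(n, l_dir))
    if abs(denom) < 1e-9:        # ray (nearly) parallel to the laser plane
        return None
    d = float(np.dot(n, np.asarray(p0) - np.asarray(l0))) / denom
    return np.asarray(l0) + d * np.asarray(l_dir)
```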


Fig. 6. Optical path analysis. (a) Layer-1: fake points are removed by single camera. (b) Two situations with ambiguity points. (c) Layer-2: ambiguity points are removed by dual-camera joint constraint. (d) Layer-3: retrieve the missing first-reflection exterior point $p$ by fusion. (e) The situation with severe ambiguity points. (f) Layer-4: the severe ambiguity points are removed by the contour-continuity.


Once the point candidates $p$ and $p^*$ are acquired, the fake point $p^*$ can be removed by the restriction in Eq. (13). As shown in Fig. 5, the points $TP_{li}$ (True Points) are reserved by the first-layer refinement.

$$p^* = \mathop{\max}_{p,p^*}(\overline{gp},\overline{gp^*}) .$$
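In code, Layer-1 simply keeps the candidate nearest to the galvo reflection point $g$ and discards the rest. The sketch below (hypothetical names) assumes the candidates are the triangulated intersections of one laser ray:

```python
import numpy as np

def layer1_keep_nearest(g, candidates):
    """Eq. (13): among the triangulated candidates of one laser ray,
    the point farthest from g is a fake point; keep the nearest one."""
    g = np.asarray(g)
    dists = [np.linalg.norm(np.asarray(p) - g) for p in candidates]
    i_keep = int(np.argmin(dists))
    kept = candidates[i_keep]
    fakes = [p for i, p in enumerate(candidates) if i != i_keep]
    return kept, fakes
```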

However, there are two situations in which ambiguity points cannot be removed correctly. As shown in Fig. 6(b), due to the refraction and reflection of laser light inside transparent objects, the non-first-reflection points can include the laser point $p'$ reflected from the rear surface and permanent spots $p^s$ on the exterior surface. The laser point $p'$ is produced by mirror reflection on the rear surface of the measured object. The permanent spot $p^s$ on the exterior surface is created by complex cross-reflections inside the transparent object; it is especially prone to appear at locations of multiple cross-reflections, which are determined by the shape of the object itself. When the laser passes such a location, a permanent spot forms and remains stationary as the laser moves, shown as the green dotted line in the figure. In these two situations, the first-reflection exterior point $p$ is removed and the ambiguity point $p^*$ is incorrectly reserved by the restriction in Eq. (13). To solve this ambiguity problem, dual cameras and two further refinement layers are adopted to remove the ambiguity points $p^*$ and retrieve the missing first-reflection exterior points $p$.

3.2 Layer-2: ambiguity points removed by dual-camera joint constraint

To guarantee the reliability of the reconstructed points, the ambiguity points $p^{*}$ are removed by the second refinement layer. As shown in Fig. 6(c), the images of the right camera are taken into account to provide a second viewing angle. For the right camera, an ambiguity point $p^{*}$ and the right camera center $C_r$ form another ray $\vec {l^r}$. In reality, the right camera receives no light intensity along this optical path (yellow solid line), so the ambiguity point $p^{*}$ cannot be classified as a laser point candidate by the right camera. Based on this principle, the second refinement layer is proposed. First, the reserved true points $TP_{li}$ are reprojected onto the right camera plane as shown below:

$$tp_{ri} = reproject(TP_{li},Cr) .$$

Through this reprojection, the 2D coordinates of the points $tp_{ri}$ in the right camera plane are obtained. The ambiguity points $p^{*}$ are then removed by checking whether the $tp_{ri}$ coincide with laser feature points. As shown in Eq. (15), $CTP_{li}$ (Confident True Points) are obtained by removing the ambiguity points $p^{*}$ through reprojection and re-judgement on the right camera plane.

$$CTP_{li} = \mathop{Compare}_{TP_{li}}{(R_i(tp_{ri}),thr)} .$$
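A sketch of this re-judgement step, Eqs. (14)-(15), is given below. It assumes a right-camera intrinsic matrix `K_r`, a left-to-right pose `(R_lr, t_lr)`, and a binary mask `laser_mask_r` of detected laser pixels in the right image; all names are ours, and lens distortion is ignored for brevity:

```python
import numpy as np

def layer2_filter(TP_l, K_r, R_lr, t_lr, laser_mask_r, thr=1):
    """Layer-2: keep a 3D point only if its reprojection lands on
    (or within thr pixels of) a detected laser pixel in the right image."""
    h, w = laser_mask_r.shape
    kept = []
    for P in TP_l:
        Pr = R_lr @ np.asarray(P) + t_lr      # left-camera frame -> right-camera frame
        if Pr[2] <= 0:                        # behind the right camera
            continue
        u = K_r @ (Pr / Pr[2])                # pinhole projection
        px, py = int(round(u[0])), int(round(u[1]))
        if not (0 <= px < w and 0 <= py < h):
            continue
        # re-judgement: a laser feature pixel must exist in the window
        y0, y1 = max(py - thr, 0), min(py + thr + 1, h)
        x0, x1 = max(px - thr, 0), min(px + thr + 1, w)
        if laser_mask_r[y0:y1, x0:x1].any():
            kept.append(np.asarray(P))
    return np.array(kept)
```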

3.3 Layer-3: retrieve the missing first-reflection point by fusion

In the first refinement layer, the ambiguity situations cause the first-reflection exterior point $p$ to be removed and the ambiguity point $p^*$ to be reserved incorrectly, as shown in Fig. 6(b). The second refinement layer removes the ambiguity points $p^*$ through reprojection and re-judgement on the right camera plane. In the third refinement layer, the missing first-reflection exterior points $p$ are retrieved by fusing the result from the right camera view.

As shown in Fig. 6(d), the points $p$ removed by the left camera in the first refinement layer can be retrieved by the right camera. In the right camera view, the first-reflection exterior point $p$ is retained by the restriction in Eq. (13) and also passes the second-layer refinement. The point $p$ removed by the left camera can then be retrieved by fusing the dual-camera results as shown in Eq. (16). In the fusion process, the left and right cameras share some common points $CTP_{li}$ and $CTP_{ri}$, so directly merging the reconstruction results of the common points would cause data redundancy. To overcome this, the decision map fusion [42] from our previous work is adopted, ensuring that only the points $p$ filtered out by the left camera are supplemented by the right camera.

$$CTP = fuse(CTP_{li},CTP_{ri}) .$$
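As a simple distance-based stand-in for the decision-map fusion of [42] (which operates on the image grid), the sketch below merges the two confident point sets while dropping right-camera points that duplicate a left-camera point within a tolerance `tol` (an assumed parameter, in the units of the point cloud):

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse(ctp_l, ctp_r, tol=0.2):
    """Eq. (16): union of left/right confident true points; right-camera
    points within tol of an existing left-camera point are treated as
    duplicates and dropped to avoid redundancy."""
    ctp_l, ctp_r = np.asarray(ctp_l), np.asarray(ctp_r)
    if len(ctp_l) == 0:
        return ctp_r
    if len(ctp_r) == 0:
        return ctp_l
    dist, _ = cKDTree(ctp_l).query(ctp_r)
    return np.vstack([ctp_l, ctp_r[dist > tol]])
```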

However, there is one situation with severe ambiguity points, as shown in Fig. 6(e). Here, the ambiguity point $p^*$ and the right camera center form another ray $\vec {l^r}$ (yellow solid line), and, coincidentally, the right camera does receive intensity from the point $p'$ along this line. The ambiguity point $p^*$ therefore cannot be removed using the right camera information. It is worth noting that this situation rarely happens, and the phenomenon disappears automatically as the laser moves. These severe ambiguity points form discrete external virtual contours, which can be removed by the fourth refinement layer.

3.4 Layer-4: severe ambiguity points removed by contour-continuity

As analyzed in Section 3.3, the severe ambiguity points form discrete external virtual contours. According to contour-continuity, these discrete external virtual contours can be removed by Eq. (17), as shown in Fig. 6(f).

$$Result = filterOutlier(CTP,min_{pts},radius) .$$

The parameter $radius$ is the neighborhood search radius, and $min_{pts}$ is the minimum number of points required within that radius.
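Equation (17) is a standard radius outlier removal. A sketch using Open3D follows; the parameter values are illustrative, not taken from the paper:

```python
import numpy as np
import open3d as o3d

def filter_outlier(ctp, min_pts=8, radius=1.0):
    """Eq. (17): drop points with fewer than min_pts neighbours within
    radius, removing the discrete external virtual contours."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(ctp, dtype=float))
    kept, _ = pcd.remove_radius_outlier(nb_points=min_pts, radius=radius)
    return np.asarray(kept.points)
```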

3.5 Algorithm summary

Overall, the four-layer refinement method proceeds in the following steps (a compact driver sketch is given after this list):

  • 1. Acquire the image pairs from the image acquisition module and stereo-rectify the left and right images according to epipolar geometry;
  • 2. Calculate all the feature point candidates by triangulation according to the calibration parameters;
  • 3. Layer-1: on each epipolar line, when more than one reflection point is obtained by the left camera, the candidates include a first point and a second point, where the first point is closer than the second point to the laser reflection point on the galvanometer mirror; remove the second point;
  • 4. Layer-2: when the first point is not obtained by the right camera, remove the first point;
  • 5. Layer-3: when the second point obtained by the left camera is the first point obtained by the right camera, retrieve the second point;
  • 6. Layer-4: remove the discrete external virtual contours.
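Tying the layers together, a hypothetical driver built on the helper sketches above might look as follows; the `reproject_ok_*` callables stand in for the Layer-2 re-judgement of Eq. (15), and all names are ours:

```python
def four_layer_refinement(cands_left, cands_right, g,
                          reproject_ok_l, reproject_ok_r):
    """Sketch of the full pipeline (steps 3-6 above).

    cands_left / cands_right : per-epipolar-line lists of triangulated 3D
        candidates from the left / right camera.
    g : laser reflection point on the galvo-mirror.
    reproject_ok_l / reproject_ok_r : Layer-2 checks in the other view.
    """
    ctp_l, ctp_r = [], []
    for cl in cands_left:                 # Layers 1-2, left camera
        if cl:
            p, _ = layer1_keep_nearest(g, cl)
            if reproject_ok_r(p):
                ctp_l.append(p)
    for cr in cands_right:                # Layers 1-2, right camera
        if cr:
            p, _ = layer1_keep_nearest(g, cr)
            if reproject_ok_l(p):
                ctp_r.append(p)
    ctp = fuse(ctp_l, ctp_r)              # Layer-3, Eq. (16)
    return filter_outlier(ctp)            # Layer-4, Eq. (17)
```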

4. Experimental evaluation

To validate the performance of the proposed method, various experiments were carried out on the system shown in Fig. 7. The laser light generation module is composed of a line laser and a galvo-mirror with single-axis rotation capability. To guarantee sufficient intensity of the reflected light, the power of the line laser is set to 1200 mW at a wavelength of 635 nm. The rotation accuracy and the maximum swing angle of the galvo-mirror are 12 $\mu rad$ and $40^{\circ }$, respectively. The line laser scans across the measured surface as the galvo-mirror rotates through a sequence of set angles. Simultaneously, the acquisition module, comprising two cameras and two bandpass filters, is synchronized to capture the image pairs and transfer them to the processing unit. The resolution of the cameras (MER-301-125U3M) is $2048 \times 1536$, and the filters are used to enhance the signal-to-noise ratio. The distance between the two cameras is 380 mm. The system recovers 3D points from the scanning images at 50 fps, a limit imposed by the acquisition speed of the cameras.


Fig. 7. (a) Front view of the system. (b) Back view of the system.


4.1 Standard parts reconstruction

In this experiment, a manufactured standard glass ladder and a glass convex ball are reconstructed separately, as shown in Fig. 8(a) and (c), at a working distance of about 400 mm. The reconstruction results of the traditional laser scanning method, laser scanning with the LTFtF method [25], and the proposed method are compared.


Fig. 8. (a) Photograph of the glass ladder. (b) The size parameters of the glass ladder. (c) Photograph of the glass convex ball. (d) The size parameters of the convex ball.


For the standard glass ladder, the reconstructed point clouds of the two stair planes are extracted and fitted as $Plane1$ and $Plane2$, as shown in Fig. 9. In the lateral view of the reconstruction result in Fig. 9(a), since the traditional scanning method does not distinguish the light reflected from the front and rear surfaces, all feature point candidates are reconstructed directly by triangulation, producing many incorrect points. The lateral view of Fig. 9(b) shows that $Plane1$ is reconstructed well, but the rear surface of $Plane2$ is incorrectly reconstructed by the LTFtF method. The LTFtF method [25] needs a searching threshold $\Delta pix_{thr}$ satisfying $\Delta pix_{scan} < \Delta pix_{thr} < \Delta pix$; at the position of $Plane2$, however, the displacement $\Delta pix$ is smaller than $\Delta pix_{scan}$, so no valid $\Delta pix_{thr}$ exists. With the proposed method, Fig. 9(c) shows that the front surfaces of $Plane1$ and $Plane2$ are both well reconstructed.


Fig. 9. Front and lateral view of reconstruction results of the glass ladder by different methods, respectively. (a) Traditional laser scanning method. (b) LTFtF method. (c) Proposed method.


The standard deviation, normal deviation, and ladder depth of the planes are analyzed in Tables 1 and 2. The standard deviations of $Plane1$ and $Plane2$ reconstructed by the traditional method reach several millimeters because of the incorrect reconstruction points. As analyzed above, the rear surface of $Plane2$ is incorrectly reconstructed by the LTFtF method: the standard deviation of $Plane1$ is only 0.0926 mm, but that of $Plane2$ is 1.5834 mm. The proposed method achieves the highest accuracy of the three: the standard deviations of $Plane1$ and $Plane2$ are around 0.07 mm, and the ladder depth deviation is only 0.7%.


Table 1. Glass Ladder Reconstruction Data (Unit: mm)


Table 2. Glass Ladder Reconstruction Analysis Data

For the standard glass convex ball, the reconstructed point clouds of the plane and sphere are extracted and fitted as $Plane1$ and $Sphere1$, as shown in Fig. 10. The standard deviation, the diameter deviation of $Sphere1$, and the distance from the $Sphere1$ centre to $Plane1$ are analyzed in Tables 3 and 4. As with the glass ladder, the traditional laser scanning method cannot distinguish the light reflected from the front and rear surfaces of the glass convex ball, causing poor reconstruction results. With the LTFtF method, the surface of $Sphere1$ is well reconstructed, with 0.106 mm standard deviation and $0.67\%$ diameter deviation, but the surface of $Plane1$ is incorrectly reconstructed, with 1.549 mm standard deviation, for the same reason as in the glass ladder experiment. With the proposed method, the standard deviations of $Sphere1$ and $Plane1$ are both less than 0.1 mm, and the deviations of the $Sphere1$ diameter and of the distance from the $Sphere1$ centre to $Plane1$ are $0.48\%$ and $1.05\%$, respectively.


Fig. 10. Two lateral views of reconstruction results of the glass convex ball by different methods, respectively. (a) Traditional laser scanning method. (b) LTFtF method. (c) Proposed method.



Table 3. Glass Convex Ball Reconstruction Data (Unit: mm)


Table 4. Glass Convex Ball Reconstruction Analysis Data

4.2 Complex parts reconstruction

Further experiments are carried out to validate the performance of the proposed method on a glass cup, stacking plastic bottles, a plastic funnel, and a crystal lotus, as shown in Fig. 11.


Fig. 11. Complex Objects. (a) Photograph of the glass cup. (b) Photograph of the stacking plastic bottles. (c) Photograph of the plastic funnel. (d) Photograph of the crystal lotus.


Figures 12(a), (d), (g), and (j) display the reconstruction results of the traditional laser scanning method. The correct reconstruction results are submerged in a large number of incorrect points, because all captured bright points are reconstructed directly by triangulation without distinguishing whether they are reflected by the front or rear surfaces. In the reconstruction results of the proposed method, the non-first-reflection points are filtered out and the desired first-reflection points are extracted.


Fig. 12. Reconstruction results by traditional laser scanning method, front view and lateral view of reconstruction results by proposed method, respectively. (a)-(c) Glass cup. (d)-(f) Stacking plastic bottles. (g)-(i) Plastic funnel. (j)-(l) Crystal lotus.


To further estimate the accuracy of the proposed method on complex transparent objects, the plastic funnel is coated, as shown in Fig. 13(c). The coated plastic funnel is then reconstructed by detailed laser scanning to serve as ground truth, as shown in Fig. 13(d). The deviation map between the reconstruction by the proposed method and the ground truth is shown in Fig. 13(e).


Fig. 13. (a) Photograph of the plastic funnel. (b) Reconstruction result by proposed method. (c) Photograph of the coated plastic funnel. (d) Ground truth of reconstruction. (e) Deviation error map.


To validate the effect of each layer, an ablation study is carried out on the plastic funnel, as shown in Fig. 14. Comparing Fig. 14(a) and (b), the fake points behind the front-surface points are removed by Layer-1, but some ambiguity points remain after this step. As shown in Fig. 14(c), most of the ambiguity points are removed by the dual-camera joint constraint of Layer-2; however, in the zoomed-in black box, the ambiguity situations cause some first-reflection exterior points to be removed incorrectly. In Fig. 14(d), these incorrectly removed first-reflection exterior points are retrieved by fusing the result from the right camera view in Layer-3. Finally, some severe ambiguity points are removed by contour-continuity in Layer-4.


Fig. 14. Ablation study. (a) All feature point candidates reconstructed by left camera. (b) Reserved points after Layer-1 process. (c) Reserved points after Layer-2 process. (d) Fusion points after Layer-3 process. (e) Reserved points after Layer-4 process.


5. Contributions and limitations

5.1 Contributions

In this paper, a fully automatic approach for reconstructing the exterior surface of a complex transparent scene is presented. By scanning a line laser with a galvo-mirror, images of the scene are captured from two viewing directions. A novel calibration model of the imaging system is proposed for reconstructing the 3D point candidates through triangulation. A novel four-layer refinement process is then proposed to extract the first-reflection points step by step from the large number of 3D points through optical geometric constraints. The approach offers several advantages over other reflection-based methods, such as the frequency-based method [19], Scanning From Heating (SFH) [21,23], and polarized light measurements (PLM) [24]. The frequency-based method requires approximately 33 minutes to capture and process all 1,350 images in a computationally expensive analysis along the time axis, whereas our system recovers 3D points from the scanning images at 50 fps, a rate that could be increased further by improving the acquisition speed of the cameras. For the SFH method, the high laser power and long irradiation period may damage the measured object, and the low resolution of the MWIR camera limits the density of the point clouds. Compared with the PLM method, no polarizer is required for the cameras, and the azimuth-angle ambiguity of polarization analysis need not be considered.

The hardware of the traditional laser scanning method usually includes only one camera and one line laser. In this work, we incorporate the viewing-angle information of a second camera to resolve reconstruction outliers and ambiguity issues. A calibration method is also developed to calibrate the two cameras and the laser plane. Simultaneously, various situations of laser transmission inside the transparent object are analyzed, and the captured feature point candidates are classified into two types: first-reflection points and non-first-reflection points. A novel four-layer refinement process is proposed to extract the first-reflection points step by step from the large number of 3D points through optical geometric constraints.

In our previous work [25], the LTFtF method, based on the consistency of laser motion, was proposed to extract the laser reflected by the front surface from the reflected-laser candidates. Compared with the LTFtF method, the proposed method has two major advantages. First, the LTFtF method requires a laser line to be selected manually as the standard line, which is not fully automatic, whereas the proposed method extracts the first-reflection points through optical geometric constraints without any manual intervention. Second, the LTFtF method needs a searching threshold $\Delta pix_{thr}$ satisfying $\Delta pix_{scan} < \Delta pix_{thr} < \Delta pix$; in some situations the displacement $\Delta pix$ is smaller than $\Delta pix_{scan}$, so no valid $\Delta pix_{thr}$ exists. The proposed method is not subject to this requirement, as validated in the standard parts reconstruction experiment in Section 4.1.

5.2 Limitations

5.2.1 Double-sided surface reconstruction

With the proposed method, only the exterior surface points of transparent objects can be reconstructed in a single scanning pass. Although the first-reflection and non-first-reflection points can be separated, the non-first-reflection points cannot be reconstructed: because the angle of incidence at the surface and the refractive index of the object are unknown, the refracted light path inside the object cannot be traced. Future work will attempt to track the refracted light path so as to reconstruct both the front and back surfaces in a single laser scanning loop.

5.2.2 Cross-reflection problem

The proposed method fails to reconstruct areas with severe cross-reflections, which readily produce severe ambiguity points, as analyzed in Section 3.4. Severe cross-reflections also leave the intensity reflected by first-reflection points too low for them to be extracted as feature point candidates. As shown in Fig. 15, mirror-like cross-reflections occur in the left concave ball, so the received intensity of the first-reflection points is insufficient for feature extraction, and the reconstruction by the proposed method is unsuccessful.


Fig. 15. Failure reconstruction of a glass double concave ball. (a) Photograph of the glass double concave ball. (b) Front view of reconstruction result by proposed method. (c) Lateral view of reconstruction result by proposed method.


5.2.3 Occlusion problem

Another limitation of the proposed method is occlusion. According to the proposed algorithm, only points that are scanned by the laser and captured by both the left and right cameras can be classified as first-reflection points and reserved. As shown in Fig. 15, some concave areas are not properly reconstructed because they are blocked by adjacent surfaces, so one of the cameras cannot capture the surface or the laser cannot scan across it. For the glass ladder in Fig. 9, the inclined part obscured by the left front surface is not captured by the left camera, which causes the missing reconstruction area on the inclined part.

5.2.4 Relative position of cameras

In this imaging system, the distance between the two cameras is 380 mm, the relative angle between them is about 50 degrees, and the galvo-mirror is placed midway between them. For a fixed measurement distance and a fixed common field of view, the larger the camera baseline, the larger the relative angle between the two cameras. According to the triangulation method, a larger distance between camera and galvo-mirror yields higher accuracy. However, for a transparent mirror-reflective material, the system requires a small relative angle between the two cameras so that both can receive enough reflected laser intensity for the proposed algorithm. Therefore, a larger camera baseline gives better precision for a semi-transparent object, whereas a transparent mirror-reflective material demands a small relative angle. In practical applications, a trade-off between these two requirements should be considered.

6. Conclusions

In this work, a fully automatic approach and an imaging system are presented for reconstructing the exterior surface of a complex transparent scene. By scanning a line laser with a galvo-mirror, images of the scene are captured from two viewing directions. Compared with the traditional laser scanning method, we incorporate the viewing-angle information of a second camera to resolve reconstruction outliers and ambiguity issues, and a calibration method is developed to calibrate the two cameras and the laser plane. Due to light transmission inside the transparent object, the captured laser feature points and the calibrated laser plane produce, through direct triangulation, a large number of 3D point candidates, many of which are incorrect. Various situations of laser transmission inside the transparent object are analyzed, and the reconstructed 3D point candidates are classified into two types: first-reflection points and non-first-reflection points. A novel four-layer refinement process is then proposed to extract the first-reflection points step by step from the large number of 3D point candidates through optical geometric constraints: (1) fake points removed by a single camera; (2) ambiguity points removed by the dual-camera joint constraint; (3) missing first-reflection exterior surface points retrieved by fusion; and (4) severe ambiguity points removed by contour-continuity. Compared with the traditional laser scanning method and our previous LTFtF method [25], the proposed method achieves higher accuracy and obtains more reliable 3D points for the reconstruction of transparent and semi-transparent objects. Various experiments, including the reconstruction of standard and complex objects, validate the precision and efficiency of the proposed 3D transparent-object surface reconstruction. The proposed method can be applied in complex scenarios such as industrial inspection and transparent-object grasping and localization.

Funding

National Key Research and Development Program of China (2018YFB1309300); University Grants Committee (T42-409/18-R); VC Fund of CUHK T Stone Robotics Institute (4930745); ITC via Hong Kong Centre for Logistics Robotics; Shenzhen-Hong Kong Collaborative Zone Project.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Rao, G. Wang, X. Yang, J. Xu, and K. Chen, “Normal direction measurement and optimization with a dense three-dimensional point cloud in robotic drilling,” IEEE/ASME Trans. Mechatron. 23(3), 986–996 (2018). [CrossRef]  

2. J. R. Rosell-Polo, E. Gregorio, J. Gené, J. Llorens, X. Torrent, J. Arnó, and A. Escolà, “Kinect v2 sensor-based mobile terrestrial laser scanner for agricultural outdoor applications,” IEEE/ASME Trans. Mechatron. 22(6), 2420–2427 (2017). [CrossRef]  

3. Y. Li, Y. Wang, Y. Yue, D. Xu, M. Case, S. Chang, E. Grinspun, and P. K. Allen, “Model-driven feedforward prediction for manipulation of deformable objects,” IEEE Trans. Automat. Sci. Eng. 15(4), 1621–1638 (2018). [CrossRef]  

4. F. Chen, G. M. Brown, and M. Song, "Overview of three-dimensional shape measurement using optical methods," Opt. Eng. 39(1), 10–21 (2000). [CrossRef]

5. F. Blais, “Review of 20 years of range sensor development,” in Videometrics VII, vol. 5013 (International Society for Optics and Photonics, 2003), pp. 62–76.

6. J. Salvi, J. Pages, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognition 37(4), 827–849 (2004). [CrossRef]  

7. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020). [CrossRef]  

8. Z. Zhang, C. Chang, X. Liu, Z. Li, Y. Shi, N. Gao, and Z. Meng, “Phase measuring deflectometry for obtaining 3d shape of specular surface: a review of the state-of-the-art,” Opt. Eng. 60(2), 020903 (2021). [CrossRef]  

9. D. J. Mirota, M. Ishii, and G. D. Hager, “Vision-based navigation in image-guided interventions,” Annu. Rev. Biomed. Eng. 13(1), 297–319 (2011). [CrossRef]  

10. T. Okatani and K. Deguchi, “Shape reconstruction from an endoscope image by shape from shading technique for a point light source at the projection center,” Comput. vision image understanding 66(2), 119–131 (1997). [CrossRef]  

11. N. Lazaros, G. C. Sirakoulis, and A. Gasteratos, “Review of stereo vision algorithms: from software to hardware,” Int. J. Optomechatronics 2(4), 435–462 (2008). [CrossRef]  

12. M. Hu, G. Penney, M. Figl, P. Edwards, F. Bello, R. Casula, D. Rueckert, and D. Hawkes, “Reconstruction of a 3d surface from video that is robust to missing data and outliers: Application to minimally invasive surgery using stereo and mono endoscopes,” Med. Image Anal. 16(3), 597–611 (2012). [CrossRef]  

13. S. Röhl, S. Bodenstedt, S. Suwelack, H. Kenngott, B. P. Müller-Stich, R. Dillmann, and S. Speidel, “Dense gpu-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration,” Med. Phys. 39(3), 1632–1645 (2012). [CrossRef]  

14. M. Hansard, S. Lee, O. Choi, and R. P. Horaud, Time-of-flight cameras: principles, methods and applications (Springer Science & Business Media, 2012).

15. S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

16. S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using gobo projection,” Opt. Lasers Eng. 87, 90–96 (2016). [CrossRef]  

17. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

18. I. Ihrke, K. N. Kutulakos, H. P. Lensch, M. Magnor, and W. Heidrich, “State of the art in transparent and specular object reconstruction,” in EUROGRAPHICS 2008 STAR–STATE OF THE ART REPORT, (Citeseer, 2008).

19. D. Liu, X. Chen, and Y.-H. Yang, “Frequency-based 3d reconstruction of transparent and specular objects,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 660–667.

20. N. J. Morris and K. N. Kutulakos, “Reconstructing the surface of inhomogeneous transparent scenes by scatter-trace photography,” in 2007 IEEE 11th International Conference on Computer Vision, (IEEE, 2007), pp. 1–8.

21. G. Eren, O. Aubreton, F. Meriaudeau, L. S. Secades, D. Fofi, A. T. Naskali, F. Truchetet, and A. Ercil, “Scanning from heating: 3d shape estimation of transparent objects from local surface heating,” Opt. Express 17(14), 11457–11468 (2009). [CrossRef]  

22. X. Gong and S. Bansmer, “3-d ice shape measurements using mid-infrared laser scanning,” Opt. Express 23(4), 4908–4926 (2015). [CrossRef]  

23. M. Landmann, H. Speck, P. Dietrich, S. Heist, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-resolution sequential thermal fringe projection technique for fast and accurate 3d shape measurement of transparent objects,” Appl. Opt. 60(8), 2362–2371 (2021). [CrossRef]  

24. X. Xu, Y. Qiao, and B. Qiu, “Reconstructing the surface of transparent objects by polarized light measurements,” Opt. Express 25(21), 26296–26309 (2017). [CrossRef]  

25. K. He, C. Sui, T. Huang, R. Dai, C. Lyu, and Y.-H. Liu, “3d surface reconstruction of transparent objects using laser scanning with ltftf method,” Opt. Lasers Eng. 148, 106774 (2022). [CrossRef]  

26. Y. Qian, M. Gong, and Y. Hong Yang, “3d reconstruction of transparent objects with position-normal consistency,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), pp. 4369–4377.

27. B. Wu, Y. Zhou, Y. Qian, M. Gong, and H. Huang, “Full 3d reconstruction of transparent objects,” arXiv preprint arXiv:1805.03482 (2018).

28. J. Lyu, B. Wu, D. Lischinski, D. Cohen-Or, and H. Huang, "Differentiable refraction-tracing for mesh reconstruction of transparent objects," arXiv preprint (2020).

29. Y. Ding, F. Li, Y. Ji, and J. Yu, “Dynamic fluid surface acquisition using a camera array,” in 2011 International Conference on Computer Vision, (IEEE, 2011), pp. 2478–2485.

30. Y. Ji, J. Ye, and J. Yu, “Reconstructing gas flows using light-path approximation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2013), pp. 2507–2514.

31. Y. Qian, M. Gong, and Y.-H. Yang, “Stereo-based 3d reconstruction of dynamic fluid surfaces by global optimization,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 1269–1278.

32. Y. Qian, Y. Zheng, M. Gong, and Y.-H. Yang, “Simultaneous 3d reconstruction for water surface and underwater scene,” in Proceedings of the European Conference on Computer Vision (ECCV), (2018), pp. 754–770.

33. M. Goesele, H. P. Lensch, J. Lang, C. Fuchs, and H.-P. Seidel, “Disco: acquisition of translucent objects,” in ACM SIGGRAPH 2004 Papers, (2004), pp. 835–844.

34. A. Brahm, C. Rößler, P. Dietrich, S. Heist, P. Kühmstedt, and G. Notni, “Non-destructive 3d shape measurement of transparent and black objects with thermal fringes,” in Dimensional Optical Metrology and Inspection for Practical Applications V, vol. 9868 (International Society for Optics and Photonics, 2016), p. 98680C.

35. K. Han, K.-Y. K. Wong, and M. Liu, “A fixed viewpoint approach for dense reconstruction of transparent objects,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2015), pp. 4001–4008.

36. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

37. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

38. P. S. Huang and S. Zhang, “Fast three-step phase-shifting algorithm,” Appl. Opt. 45(21), 5086–5091 (2006). [CrossRef]  

39. C. Sui, K. He, C. Lyu, Z. Wang, and Y.-H. Liu, “Active stereo 3-d surface reconstruction using multistep matching,” IEEE Trans. Automat. Sci. Eng. 17(4), 2130–2144 (2020). [CrossRef]  

40. Z. Zhang, “Review of single-shot 3d shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). [CrossRef]  

41. J. Pan, P. S. Huang, S. Zhang, and F.-P. Chiang, “Color n-ary gray code for 3-d shape measurement,” in 12th international conference on experimental mechanics, vol. 29 (2004).

42. K. He, C. Sui, C. Lyu, Z. Wang, and Y. Liu, “3d reconstruction of objects with occlusion and surface reflection using a dual monocular structured light system,” Appl. Opt. 59(29), 9259–9271 (2020). [CrossRef]  

43. S. Yang, L. Yang, G. Zhang, T. Wang, and X. Yang, “Modeling and calibration of the galvanometric laser scanning three-dimensional measurement system,” Nanomanuf. Metrol. 1(3), 180–192 (2018). [CrossRef]  

44. A. Manakov, H.-P. Seidel, and I. Ihrke, “A mathematical model and calibration procedure for galvanometric laser scanning systems,” Vision, Modeling, and Visualization (2011).

45. C. Yu, X. Chen, and J. Xi, “Modeling and calibration of a novel one-mirror galvanometric laser scanner,” Sensors 17(12), 164 (2017). [CrossRef]  

46. A. S. Householder, “Unitary triangularization of a nonsymmetric matrix,” J. ACM 5(4), 339–342 (1958). [CrossRef]  

47. O. Rodrigues, “Des lois géométriques qui régissent les déplacements d’un système solide dans l’espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire,” Journal de mathématiques pures et appliquées 5, 380–440 (1840).

48. J. J. Moré, “The levenberg-marquardt algorithm: implementation and theory,” in Numerical analysis, (Springer, 1978), pp. 105–116.

49. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  


