Abstract

Many vision measurement systems, especially in outdoor engineering, use glass ports to protect sensors against environmental influences. The refraction caused by glass ports leads to measurement errors in the traditional single viewpoint model. Most existing methods deal only with refraction that occurs once, and they require the glass ports to be perpendicular to the cameras or the orientations of the glass ports to be obtained with auxiliary equipment. This paper proposes a corrected 3D reconstruction model based on refraction geometry that can be used for any number of glass ports with any orientations. The orientation of each glass port is obtained using only refracted and unrefracted images of the same scene, without any auxiliary equipment. A series of validation experiments is performed, and an existing image rectification method is used for comparison. The proposed method is also employed in a train wheelset profile measurement application, which proves that the method is effective in actual applications.

© 2017 Optical Society of America

1. Introduction

Contactless measurement using vision systems has been employed in various areas. Vision measurement offers the advantages of automation, flexibility, and high precision and is thus appealing to engineers. The main component of a vision system is a 3D reconstruction model based on ray paths. The single viewpoint (SVP) model is the most popular reconstruction model when rays travel through a homogeneous medium. However, ray paths may traverse different media; for example, a camera may be installed in a sealed housing with a glass port, or a measured object may be submerged in water while cameras observe it in air. According to Snell's law, a ray refracts at the interface of two different media, thus causing the SVP model to degenerate [1].

The methods of refraction correction can be divided into two categories: image rectification methods and geometrical correction methods. Image rectification methods [2, 3] first remove the image distortion caused by refraction and then perform a 3D reconstruction of the undistorted image using the SVP model. Haile et al. [2] used an image registration technique to correct a refracted image so that it matches an unrefracted image. In that study, several control points were selected on a reference image and a refracted image of the same scene, and polynomials of order two were fitted to match the control points. A local weighted mean transformation was then used to reconstruct the unrefracted image. However, feature points must be re-extracted from the unrefracted image because of the difficulty of directly determining corrected positions from feature points on the refracted image. Samper et al. [3] used the same polynomials to correct refracted image points directly into unrefracted image points. However, the compensations for refracted image points involved only distances and not directions. Moreover, the depth from a target to a camera affected the polynomials; thus, control points must be near the measured object. By contrast, geometrical correction methods track refractive ray paths [4–6] and use a modified non-SVP model [1, 7–10]. These methods are mainly used in underwater applications. In the literature, the refraction effect of the glass port of a camera housing or a water tank is often ignored because of its minimal thickness and its refractive index being close to that of water [1, 5, 7, 10]. The stereo vision reconstruction method proposed by Du et al. [4] shows that the line joining a real object point and the pseudo object point reconstructed with the SVP model is perpendicular to the refractive interface. Therefore, a depth compensation toward the refractive interface is applicable.
However, the stereo vision reconstruction method is only suitable for stereo cameras in housings with one glass port or those installed outside water tanks. The method proposed by Kang et al. [5] can separate two cameras into independent housings, but it requires the camera lens to be nearly perpendicular to the glass port [1]. Wang et al. [7] considered the small deviation of the normal of a glass interface from the optical axis and pointed out the departure from the SVP model in this case. The authors utilized a known pattern projected on a known surface to calibrate the non-SVP model, which included the intrinsic parameters of the camera, the normal of the refractive interface, and the distance between the optical center and the refractive interface. However, the deviation of the normal of the interface from the optical axis must be minimal because the initial value of the normal was also set to the optical axis. Ke et al. [8] considered the influence of glass thickness and obtained the position and orientation of the interface using a planar target placed extremely close to the glass surface. In [9], the interface of the water was obtained using a floating planar target. Chang et al. [10] used an inertial measurement unit to obtain the orientation and position of the camera relative to the interface. In non-SVP models, the orientation and position of the interface, the refractive index, and the thickness of the glass are significant parameters, and some of them, if not all, are optimized simultaneously with the parameters of the camera using the bundle adjustment process [8]. To remove refraction, some vision systems use correction lenses or dome ports [11–13], which make the ray of each point perpendicular to the interface. In this case, the SVP model can still be used. However, the projection center of the lens should coincide exactly with the optical center of the dome port [13]; such exactness is difficult to achieve, and installation errors could still lead to unexpected refraction.

In summary, image rectification methods do not consider refractive indexes, the position and orientation of refractive interfaces, or even the shape of these interfaces. Their requirements mainly include unrefracted images and refracted images of the same scene for establishing transformation functions. However, the transformation is related to the depth of scenes [1, 3], and thus, targets on which control points lie should be near measured objects. By contrast, geometrical correction methods require a large number of parameters, including the position and orientation of refractive interfaces, refractive indexes, and shape of interfaces. In addition to flat glass ports, interface shapes such as cylindrical surfaces have been studied [6]. Morris et al. [14] proposed a stereo method to reconstruct the 3D position and surface normal of points on an unknown, arbitrarily shaped refractive surface. Most non-SVP models consider that rays refract only once. In practice, vision systems with multiple glass ports are common.

Underwater measurement usually involves three (air, glass, and water) or more media, and its ray paths are more complex than those of measurements involving only glass ports or optical filters in air. Glass ports and optical filters are nonetheless widely used to protect cameras and reduce ambient noise in engineering, especially in outdoor environments. Train wheelset profile measurement systems [15, 16] are typical applications that use optical filters and glass ports for cameras. In the present study, we analyze a ray path refracted by a flat glass port at any position and orientation. A simple method to determine the normal of a refractive interface is proposed on the basis of refractive distortion. Moreover, a modified 3D reconstruction model, which can be used for any number of glass ports with any orientation, is proposed. Validation experiments involving a single camera vision with one glass port, another single camera vision with two glass ports, and a structured light vision were performed. A comparison experiment with Haile's method [2] was also conducted. This paper is organized as follows. Section 2 describes the geometrical ray path in 2D space. Section 3 describes the geometrical ray path in 3D space and provides the corrected 3D reconstruction model. Section 4 describes the validation experiments and the comparison experiment. Section 5 describes a train wheelset profile measurement application. Section 6 presents the conclusion.

2. Geometry of refraction

In a typical setup, the glass port that protects a camera is perpendicular to the lens. In this case, the refraction and distortion of images are symmetric around the optical axis [17]. However, glass ports can easily tilt because of installation errors or other mechanical structures. According to Snell's law, an incident ray, the normal of the refractive interface, and the emergent ray all lie in one plane. Figure 1 shows the refractive ray paths of two cases in 2D space without loss of generality.


Fig. 1 Refractive ray paths of two cases. (a) The glass port is perpendicular. (b) The glass port is tilted. α is the incident angle, β is the refractive angle, γ is the emergent angle, h is the thickness of the glass port, P is the real objective point, and P' is the pseudo objective point.

In this study, we only consider a flat glass port with two parallel surfaces. According to Snell's law, we have

$$n_{air}\sin\alpha = n_{glass}\sin\beta = n_{air}\sin\gamma, \tag{1}$$
where $n_{air}$ and $n_{glass}$ are the refractive indexes of air and glass, respectively. We know that $\alpha$ is equal to $\gamma$. According to the imaging principle, only the ray passing through the optical center can form an image point on an image plane. Therefore, this ray path is tracked from the image plane to the objective point. The ray path can be divided into three segments: the incident ray from the image plane to the left interface, the refractive ray in the glass medium from the left interface to the right, and the emergent ray from the right interface to the objective point in the air.

As shown in Fig. 1(a), the 2D coordinate system is established as follows: the origin is the optical center, the x-axis is the optical axis, and the y-axis obeys the right-hand basis. The equation of the incident ray is

$$y = \tan\alpha \, x. \tag{2}$$
The intersection of the left interface and the optical axis is set as $(x_0, 0)$. The equation of the refractive ray is
$$y = \tan\beta \, x + (\tan\alpha - \tan\beta)\,x_0. \tag{3}$$
The equation of the emergent ray is
$$y = \tan\gamma\,x + (\tan\beta - \tan\alpha)\,h = \tan\alpha\,x + (\tan\beta - \tan\alpha)\,h. \tag{4}$$
Substituting Eq. (1) into Eq. (4) results in the following modification:
$$y = \tan\alpha\,(x - h) + \frac{\sin\alpha}{\sqrt{(n_{glass}/n_{air})^2 - \sin^2\alpha}}\,h. \tag{5}$$
The equation of the emergent ray is independent of $x_0$. In other words, the position of the glass port has no effect on the emergent ray path. For an image point on the image plane, the incident ray can be determined using the SVP model. The emergent ray can then be calculated using Eq. (5), provided that the refractive index and thickness of the port are known.
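
To make Eq. (5) concrete, the following Python sketch (the helper name is ours, under the stated assumption of a flat port with parallel surfaces) returns the slope and intercept of the emergent ray behind a perpendicular port; with $n_{glass}/n_{air} = 1$ the intercept vanishes and the ray is unrefracted.

```python
import math

def emergent_ray_perpendicular(alpha, n_ratio, h):
    """Emergent ray y = slope*x + intercept behind a perpendicular glass port.

    alpha   : angle (rad) between the incident ray and the optical axis
    n_ratio : n_glass / n_air
    h       : glass thickness (same unit as x and y)

    Implements Eq. (5): y = tan(a)*(x - h) + h*sin(a)/sqrt(n_ratio^2 - sin(a)^2).
    """
    slope = math.tan(alpha)  # the emergent ray is parallel to the incident ray
    intercept = -slope * h + h * math.sin(alpha) / math.sqrt(
        n_ratio ** 2 - math.sin(alpha) ** 2)
    return slope, intercept
```

For n_ratio > 1 the intercept is negative for a positive incident angle, i.e., the emergent ray is laterally shifted toward the optical axis relative to the unrefracted ray.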

The 2D coordinate system in Fig. 1(b) is established in the same way. The angle between the interface and the optical axis is set as $\theta$. Thus, the angle between the incident ray and the optical axis is $\theta + \alpha - \pi/2$. Then, the equation of the incident ray is

$$y = \tan(\theta + \alpha - \pi/2)\,x. \tag{6}$$
The equation of the refractive ray is
$$y = \tan(\beta + \theta - \pi/2)\,x + \bigl(\tan(\alpha + \theta - \pi/2) - \tan(\beta + \theta - \pi/2)\bigr)\frac{\tan\theta\,x_0}{\tan\theta - \tan(\alpha + \theta - \pi/2)}. \tag{7}$$
The equation of the emergent ray is
$$y = \tan(\theta + \alpha - \pi/2)\,x + \frac{\tan(\beta + \theta - \pi/2) - \tan(\alpha + \theta - \pi/2)}{\cos\theta\,\bigl(\tan\theta - \tan(\beta + \theta - \pi/2)\bigr)}\,h. \tag{8}$$
The equation of the emergent ray segment is likewise independent of the position of the port. By substituting $\theta = \pi/2$ into Eqs. (7) and (8), the equations become identical to Eqs. (3) and (4), respectively, thereby confirming the correctness and generality of the equations. Substituting Eq. (1) into Eq. (8) yields
$$y = \tan(\theta + \alpha - \pi/2)\,x + \left(\frac{\tan\theta + \cot(\alpha + \theta)}{\tan\theta + \dfrac{\cos\theta\sqrt{(n_{glass}/n_{air})^2 - \sin^2\alpha} - \sin\alpha\sin\theta}{\sin\alpha\cos\theta + \sin\theta\sqrt{(n_{glass}/n_{air})^2 - \sin^2\alpha}}} - 1\right)\frac{h}{\cos\theta}. \tag{9}$$
However, for an image point, the only angle that can be determined is the angle between the incident ray and the optical axis, $\theta + \alpha - \pi/2$. Thus, let $\varphi = \theta + \alpha - \pi/2$. Equation (9) can then be rewritten as

$$y = \tan\varphi\,x + \left(\frac{\tan\theta - \tan\varphi}{\tan\theta + \dfrac{\cos\theta\sqrt{(n_{glass}/n_{air})^2 - \cos^2(\theta - \varphi)} - \cos(\theta - \varphi)\sin\theta}{\cos(\theta - \varphi)\cos\theta + \sin\theta\sqrt{(n_{glass}/n_{air})^2 - \cos^2(\theta - \varphi)}}} - 1\right)\frac{h}{\cos\theta}. \tag{10}$$

Note that the ray may be located above or below the x-axis; thus, the sign of the angle can be negative. We establish a sign rule such that Eq. (10) can be adapted for both situations. The sign rule is as follows:

  • (1) The absolute value of the angle is always the acute angle value.
  • (2) Sign of θ: A positive sign indicates that the acute angle rotates in a clockwise direction from the interface to the optical axis; a negative sign indicates the inverse.
  • (3) Sign of φ: A positive sign indicates that the acute angle rotates in a clockwise direction from the ray to the optical axis; a negative sign indicates the inverse.
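
Equation (10) can be sketched in Python as follows (the helper name is ours; angles follow the sign rule above). As a sanity check, letting θ approach π/2 should recover the perpendicular-port intercept of Eq. (5):

```python
import math

def emergent_intercept_tilted(phi, theta, n_ratio, h):
    """Intercept b of the emergent ray y = tan(phi)*x + b for a tilted port.

    phi     : angle (rad) between the incident ray and the optical axis
    theta   : angle (rad) between the interface and the optical axis
    n_ratio : n_glass / n_air
    h       : glass thickness

    Implements Eq. (10); for theta -> pi/2 it reduces to the perpendicular case.
    """
    root = math.sqrt(n_ratio ** 2 - math.cos(theta - phi) ** 2)
    # inner fraction of Eq. (10)
    inner = (math.cos(theta) * root - math.cos(theta - phi) * math.sin(theta)) / (
        math.cos(theta - phi) * math.cos(theta) + math.sin(theta) * root)
    ratio = (math.tan(theta) - math.tan(phi)) / (math.tan(theta) + inner)
    return (ratio - 1.0) * h / math.cos(theta)
```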

3. 3D reconstruction model

Refraction by glass can be corrected with the parameters $\varphi$, $\theta$, $n_{glass}/n_{air}$, and $h$. The incident ray, which provides the parameter $\varphi$, is obtained with the SVP model. The emergent ray can be obtained using Eq. (10). However, Eq. (10) is derived in the simplified 2D coordinate system, whereas the reconstruction is actually performed in 3D space. According to Snell's law, the incident ray, the normal of the interface, and the emergent ray all lie in one plane. Therefore, this plane can easily be found from the incident ray and the normal of the interface. However, the plane may not contain the optical axis; in other words, the case differs from those shown in Fig. 1. The general case in 3D space is shown in Fig. 2. The 3D measurement coordinate system is established with the origin at the optical center; the x-axis is parallel to the x-axis of the image plane, the y-axis is parallel to the y-axis of the image plane, and the z-axis is the optical axis. The normal of the interface is set as $N = (n_x, n_y, 1)$, and the incident ray is set as $IR = (I_x, I_y, 1)$. The plane containing $N$ and $IR$ is exactly the 2D space in which refraction occurs. Thus, the 2D coordinate system can be established with the origin at the optical center and the x-axis along the normal of the interface passing through the optical center. The case then reduces to the 2D case shown in Fig. 1(a) with the incident angle $\alpha = \langle N, IR \rangle$.


Fig. 2 Refraction in general 3D space.


The emergent ray has the same direction as the incident ray and can thus be determined by its intersection with the interface normal that passes through the optical center. The coordinate of this intersection in the 2D coordinate system can be obtained from Eq. (5) as

$$\left(\Bigl(1 - \frac{\cos\alpha}{\sqrt{(n_{glass}/n_{air})^2 - \sin^2\alpha}}\Bigr)h,\; 0\right). \tag{11}$$
The 3D coordinate $(x_i, y_i, z_i)$ can then be determined as
$$(x_i, y_i, z_i) = \Bigl(1 - \frac{\cos\alpha}{\sqrt{(n_{glass}/n_{air})^2 - \sin^2\alpha}}\Bigr)h\,\frac{N}{\|N\|}. \tag{12}$$
Therefore, the emergent ray path is

$$\frac{x - x_i}{I_x} = \frac{y - y_i}{I_y} = \frac{z - z_i}{1}. \tag{13}$$
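
The 3D correction above can be sketched as follows (Python with NumPy; function and variable names are ours). Given the incident ray direction and the interface normal in the camera frame, Eq. (12) gives a point on the emergent ray, and the ray keeps the direction of IR:

```python
import numpy as np

def emergent_ray_3d(IR, N, n_ratio, h):
    """Point and direction of the emergent ray after one flat glass port.

    IR      : incident ray direction (Ix, Iy, 1) in the camera frame
    N       : interface normal (nx, ny, 1)
    n_ratio : n_glass / n_air
    h       : glass thickness

    The emergent ray is parallel to IR; Eq. (12) gives its intersection with
    the interface normal through the optical center.
    """
    IR = np.asarray(IR, dtype=float)
    N = np.asarray(N, dtype=float)
    cos_a = IR.dot(N) / (np.linalg.norm(IR) * np.linalg.norm(N))
    sin2_a = 1.0 - cos_a ** 2
    scale = (1.0 - cos_a / np.sqrt(n_ratio ** 2 - sin2_a)) * h
    point = scale * N / np.linalg.norm(N)   # (x_i, y_i, z_i)
    return point, IR
```

With n_ratio = 1 the offset vanishes and the plain SVP ray is recovered.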

3.1 Determining the incident ray

For the SVP model, let the intrinsic parameters of the camera be

$$A = \begin{bmatrix} f/dX & 0 & u_0 \\ 0 & f/dY & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{14}$$
where $f$ is the focal length of the camera; $dX$ and $dY$ are the physical sizes in millimeters of one pixel in the x-axis and y-axis directions of the image plane, respectively; and $(u_0, v_0)$ is the principal point of the image. The intrinsic parameters can be obtained using Zhang's method [18], a flexible calibration approach that uses a planar target. Let the 3D coordinate of the objective point in the 3D space be $(x_c, y_c, z_c)$. In this case, we obtain
$$s\begin{bmatrix} u_u \\ v_u \\ 1 \end{bmatrix} = A\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}, \tag{15}$$
where $(u_u, v_u)$ is the coordinate of the image point in pixels and $s$ is the scale coefficient. However, because of lens distortion, the actual image coordinate is given by
$$\begin{bmatrix} u_d \\ v_d \end{bmatrix} = \bigl(1 + k_1(u_u^2 + v_u^2) + k_2(u_u^2 + v_u^2)^2\bigr)\begin{bmatrix} u_u \\ v_u \end{bmatrix}, \tag{16}$$
where $k_1$ and $k_2$ are the coefficients of the radial distortion, which contributes most of the lens distortion. $(u_u, v_u)$ can be solved from Eq. (16) using a nonlinear optimization method, such as the Levenberg–Marquardt method. Then, the incident ray $IR$ can be calculated from Eq. (15) as

$$IR = \left(\frac{u_u - u_0}{f/dX},\; \frac{v_u - v_0}{f/dY},\; 1\right). \tag{17}$$
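
The steps of this subsection can be sketched as below (Python; the function and parameter names are ours). As a simplification, the radial model of Eq. (16) is inverted here with a fixed-point iteration instead of the Levenberg–Marquardt method, which is adequate for mild distortion; the undistorted pixel then yields the incident ray of Eq. (17):

```python
def incident_ray(u_d, v_d, f_dx, f_dy, u0, v0, k1, k2, iters=20):
    """Incident ray direction from a distorted pixel (u_d, v_d).

    f_dx, f_dy : f/dX and f/dY (focal length expressed in pixels)
    u0, v0     : principal point
    k1, k2     : radial distortion coefficients of Eq. (16)

    Inverts Eq. (16) by fixed-point iteration (valid for small k1, k2),
    then applies Eq. (17).
    """
    u_u, v_u = u_d, v_d            # initial guess: no distortion
    for _ in range(iters):
        r2 = u_u ** 2 + v_u ** 2
        d = 1.0 + k1 * r2 + k2 * r2 ** 2
        u_u, v_u = u_d / d, v_d / d
    return ((u_u - u0) / f_dx, (v_u - v0) / f_dy, 1.0)
```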

3.2 Determining the normal of the interface

Generally, glass ports are designed to be perpendicular to the lens. However, because of installation errors, these ports tend to tilt relative to the optical axis. Although the angle between a given port and the lens can be measured using mechanical methods or obtained from the design drawing, the orientation of the optical axis itself cannot be determined. In this study, we propose a flexible method to determine the normal on the basis of refractive distortion. The proposed method utilizes several pairs of images captured with and without the glass port. For one target scene, two images are captured: one refracted by the glass port and one not. The feature points in the two images are extracted, and the pixel position difference between corresponding points is called the refractive distortion. Using several pairs of corresponding points, we can determine the normal of the interface.

As shown in Fig. 1(a), the pseudo objective point P' is projected onto the same position as the image point of P when the glass port is absent. Given that P' is farther from the optical axis than P, we can readily deduce that the refracted image is more expanded than the unrefracted image [2]. Meanwhile, the ray that is parallel to the normal and passes through the optical center is not refracted, and the image point of this ray is the expansion center of the refracted image. Figure 3 shows the refracted and unrefracted image points, as well as the expansion center. The normal, the refracted emergent ray, and the unrefracted emergent ray lie on one plane, which can be determined by the corresponding image points and the expansion center. The intersection line of two or more such planes is the normal that we seek. The line joining a pair of corresponding image points is also the intersection line of this plane and the image plane. Ideally, if several pairs of corresponding image points are obtained, then their joint lines should intersect at one point, that is, the expansion center. However, the refractive distortion is always small. Therefore, these joint lines are highly sensitive to noise, which leads to numerous intersections. Fortunately, the best estimate of the expansion center can be found as the point with the shortest total distance to all the joint lines. In this case, a large number of corresponding image points distributed over a wide range around the expansion center is necessary to improve accuracy.


Fig. 3 Expansion effect of refraction.


For each pair of corresponding image points, let the coordinates of the unrefracted point and the refracted point be $(u_{og}, v_{og})$ and $(u_{wg}, v_{wg})$, respectively. Therefore, the joint line is

$$(v_{wg} - v_{og})\,x + (u_{og} - u_{wg})\,y + (u_{wg} - u_{og})\,v_{og} - (v_{wg} - v_{og})\,u_{og} = 0. \tag{18}$$

We suppose that K joint lines are obtained. The expansion center is the point that is nearest to all the joint lines. The objective function is established as follows, and the linear least squares method can be used to solve the function:

$$\arg\min_{(u_e, v_e)} \sum_{i=1}^{K} \left(\frac{(v_{wg}^i - v_{og}^i)\,u_e + (u_{og}^i - u_{wg}^i)\,v_e + (u_{wg}^i - u_{og}^i)\,v_{og}^i - (v_{wg}^i - v_{og}^i)\,u_{og}^i}{\sqrt{(v_{wg}^i - v_{og}^i)^2 + (u_{og}^i - u_{wg}^i)^2}}\right)^2, \tag{19}$$
where $(u_e, v_e)$ is the expansion center, and $K$ is the total number of joint lines. The normal of the interface $N$ can be determined as

$$N = \left(\frac{u_e - u_0}{f/dX},\; \frac{v_e - v_0}{f/dY},\; 1\right). \tag{20}$$
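
The least-squares step of this subsection can be sketched in Python (NumPy; the function name is ours). Each pair of corresponding points defines a joint line a·x + b·y + c = 0, and the expansion center minimizes the summed squared point-to-line distances:

```python
import numpy as np

def expansion_center(pts_unrefracted, pts_refracted):
    """Least-squares expansion center (u_e, v_e) from K point pairs.

    pts_unrefracted, pts_refracted : (K, 2) arrays of pixel coordinates
    (u_og, v_og) and (u_wg, v_wg).

    Builds the joint line of each pair (Eq. (18)) and minimizes the sum of
    squared point-to-line distances (Eq. (19)) via linear least squares.
    """
    po = np.asarray(pts_unrefracted, dtype=float)
    pw = np.asarray(pts_refracted, dtype=float)
    a = pw[:, 1] - po[:, 1]                     # v_wg - v_og
    b = po[:, 0] - pw[:, 0]                     # u_og - u_wg
    c = (pw[:, 0] - po[:, 0]) * po[:, 1] - (pw[:, 1] - po[:, 1]) * po[:, 0]
    norm = np.hypot(a, b)                       # normalize to true distances
    A = np.column_stack([a / norm, b / norm])
    sol, *_ = np.linalg.lstsq(A, -c / norm, rcond=None)
    return sol[0], sol[1]
```

With noise-free synthetic data in which both point sets radiate from a common center, the recovered center matches exactly; with real data, the residual of the least-squares fit indicates how consistent the joint lines are.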

3.3 Solution for multiple glass ports

Certain applications, especially those in outdoor engineering, involve more than one glass port. For example, a camera is installed in a sealed housing with a glass port for protection against rain, snow, and other environmental influences, and the camera also needs an optical glass port or filter to eliminate ambient light or other noise. If the glass ports are parallel to each other and share the same refractive index, then they can be treated as one glass port whose thickness is the sum of the thicknesses of all the glass ports. However, optical glass ports or filters for cameras are generally perpendicular to the lenses, whereas glass ports on housings may not be. Fortunately, in this situation, we can still determine the emergent ray path using the proposed method. We suppose that $M$ glass ports are used; the normal of each port ($N_m$ for the $m$th port) can be determined using the expansion center method. The incident ray of the $m$th port is the emergent ray of the $(m-1)$th port. According to Eq. (1), the incident ray and emergent ray of each port share the same direction as $IR$ given by Eq. (17). Let us count the ports outward from the camera. We suppose that the intersection of the $(m-1)$th emergent ray and the normal of the $(m-1)$th port is $(x_i^{m-1}, y_i^{m-1}, z_i^{m-1})$. According to Eq. (12), the intersection of the $m$th emergent ray and the normal of the $m$th port is

$$(x_i^m, y_i^m, z_i^m) = \Bigl(1 - \frac{\cos\langle IR, N_m\rangle}{\sqrt{(n_{glass}^m/n_{air}^m)^2 - \sin^2\langle IR, N_m\rangle}}\Bigr)h_m\,\frac{N_m}{\|N_m\|} + (x_i^{m-1}, y_i^{m-1}, z_i^{m-1}). \tag{21}$$
The emergent ray path is
$$\frac{x - x_i^m}{I_x} = \frac{y - y_i^m}{I_y} = \frac{z - z_i^m}{1}. \tag{22}$$
For the first port, the intersection point $(x_i^0, y_i^0, z_i^0)$ should be the optical center, which is (0, 0, 0). The final emergent ray path can be determined by computing the intersections one by one.
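
The port-by-port chaining can be sketched as follows (Python with NumPy; the function and parameter names are ours):

```python
import numpy as np

def emergent_ray_multi(IR, ports):
    """Final emergent ray after M flat glass ports.

    IR    : incident ray direction (Ix, Iy, 1) from the SVP model
    ports : list of (N, n_ratio, h) tuples ordered outward from the camera,
            where N is the port normal, n_ratio = n_glass/n_air, and h is the
            port thickness

    Chains the per-port offsets; the direction stays IR because the two
    surfaces of every port are parallel (Eq. (1)).
    """
    IR = np.asarray(IR, dtype=float)
    p = np.zeros(3)                 # intersection point, starting at the optical center
    for N, n_ratio, h in ports:
        N = np.asarray(N, dtype=float)
        cos_a = IR.dot(N) / (np.linalg.norm(IR) * np.linalg.norm(N))
        sin2_a = 1.0 - cos_a ** 2
        p = p + (1.0 - cos_a / np.sqrt(n_ratio ** 2 - sin2_a)) * h * N / np.linalg.norm(N)
    return p, IR
```

Two parallel ports of thickness h with the same refractive index yield the same ray as a single port of thickness 2h, matching the remark above about merging parallel ports.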

4. Experiments

Three types of validation experiments and one comparison experiment were performed. The validation experiments involved a single camera vision with one glass port, another single camera vision with two glass ports, and a structured light vision. The experimental platform, shown in Fig. 4, consisted of two industrial AVT GC1380H cameras with a resolution of 1380 × 1024 pixels, two Schneider 23 mm lenses, and one 808 nm wavelength line-laser projector. All the experiments were performed on this platform. For the first two experiments, a glass port with a thickness of 3 mm and a refractive index of 1.6 was used. The glass port was installed either on a specific mechanism on the camera, which kept it perpendicular to the lens, or on a 2D mobile platform, which could tilt the port at any angle. For the structured light vision experiment, a narrow-band filter with a thickness of 3 mm and a refractive index of 1.5 was used. The filter was installed only on the mechanism. The glass port or filter could easily be removed using the 2D mobile platform or the mechanism without changing its orientation; thus, the refracted and unrefracted images could be captured repeatedly. The comparison experiment compared the proposed method with an image rectification method.


Fig. 4 Experimental platform.


4.1 Experiment with a single camera with one glass port

The left camera in Fig. 4 was used to perform the single camera vision experiment. A 2D planar target of a 10 × 10 checkerboard with an interval of 10 mm was used for calibration and validation. The experiment was performed twice: first with the glass port perpendicular to the lens and then with the port tilted at about 15°. For each trial, the 2D target was repositioned 15 times, and for each placement, the refracted and unrefracted images were both captured. The refracted images are shown in Fig. 5 (the unrefracted images are similar and are thus omitted).


Fig. 5 Refracted images.


Although the refractive distortion is radially symmetric around the principal point when the interface is perpendicular to the optical axis [17], its effect is not considered to be the same as that of the lens radial distortion [19]. Therefore, the calibration with the glass port cannot compensate for the effect of refraction. To show the result of the calibration with the glass port, we also calibrated the camera parameters using the refracted images and performed a 3D reconstruction using the refracted parameters. The normal of the glass port was obtained using all the images. Figure 6 shows the joint lines of all the corresponding image points and expansion center. Table 1 shows the results of the camera parameters and the normal of the glass port.


Fig. 6 Determination of expansion center. (a) The glass port is perpendicular. (b) The glass port is tilted.


Table 1. Parameters and tilt angles of camera.

Another refracted image for 3D reconstruction and measurement was then captured, as shown in Fig. 7. An unrefracted image of the same target scene was also taken for reference value.


Fig. 7 Image for 3D reconstruction and measurement. (a) The glass port is perpendicular. (b) The glass port is tilted. The four sides of the target are indicated by the red lines (L, R, T, and B), and the two diagonals are indicated by the green lines (LS and RS).


The unrefracted image was used to obtain the target position (the extrinsic parameters of the target relative to the camera) as the reference value. Then, the refracted image was used to perform the 3D reconstruction using three methods, namely, the SVP model with the unrefracted parameters, the SVP model with the refracted parameters, and the proposed method with the unrefracted parameters. The results of the perpendicular port and tilted port are shown in Figs. 8 and 9, respectively, and the root mean square (RMS) errors are calculated.


Fig. 8 3D reconstruction errors of a perpendicular glass port. (a) For the SVP model with unrefracted parameters, the RMS error is 0.12 mm. (b) For the SVP model with refracted parameters, the RMS error is 0.22 mm. (c) For the proposed method with unrefracted parameters, the RMS error is 0.01 mm.


Fig. 9 3D reconstruction errors of a tilted glass port. (a) For the SVP model with unrefracted parameters, the RMS error is 0.22 mm. (b) For the SVP model with refracted parameters, the RMS error is 0.88 mm. (c) For the proposed method with unrefracted parameters, the RMS error is 0.02 mm.


The lengths of the four sides and two diagonals of the target (red and green lines in Fig. 7, respectively) were also measured to show the measurement errors of the three methods. The errors are reported in Table 2.


Table 2. Measurement errors of sides and diagonals of target in experiment with one glass port.

The proposed method effectively eliminated the effect of refraction regardless of whether the glass port was perpendicular or tilted. The error of the SVP model for the tilted glass port is greater than that for the perpendicular glass port. This error increases as the image point moves farther away from the expansion center. The error of the SVP model with the refracted parameters is slightly greater than that of the SVP model with the unrefracted parameters when the glass port is perpendicular. This error increases considerably when the glass port is tilted because the image distortion caused by refraction cannot simply be treated as lens radial distortion in that case. The refracted parameters can cause a significant error if the angle between the normal and the optical axis is large. Although the glass port is perpendicular to the lens, it cannot be ascertained that the normal is aligned with the optical axis. In our experiment, the tilt angle was 3.08° when the glass port was nominally perpendicular; this result explains why the SVP model with the refracted parameters still produced a greater error than the SVP model with the unrefracted parameters. In the measurement of the lengths, the proposed method achieved the lowest measurement error. We must note that the RMS error in the measurement of the SVP model with the refracted parameters was 0.19 mm, which was considerably smaller than the RMS error of 0.88 mm for the 3D reconstruction when the glass port was tilted. This result was due to the expansion center lying outside the image on the left, in which case the 3D reconstruction error of each point had approximately the same direction. A large proportion of the error was balanced out when the lengths were calculated through 3D position subtraction.

4.2 Experiment with a single camera with two glass ports

Two glass ports, one installed on the mechanism and one on the 2D mobile platform, were used in this experiment. Similar to the setup of the previous experiment, the glass port on the mechanism was perpendicular to the lens, and the glass port on the 2D mobile platform was tilted at approximately 15°. Fifteen pairs of unrefracted and refracted images were captured for each glass port. The unrefracted parameters, refracted parameters, and expansion centers of each glass port were obtained in the same way as in the previous experiment. The tilt angles of the glass ports were 6.3° and 15.5°. Another refracted image with the two glass ports was captured for 3D reconstruction and measurement, and an unrefracted image of the same target scene was captured for the reference value. The 3D reconstruction errors and measurement errors are shown in Fig. 10 and Table 3, respectively. The proposed method shows favorable behavior in correcting the refraction caused by the two glass ports.


Fig. 10 3D reconstruction errors of two glass ports. (a) For the SVP model with unrefracted parameters, the RMS error is 0.29 mm. (b) For the SVP model with refracted parameters, the RMS error is 0.84 mm. (c) For the proposed method with unrefracted parameters, the RMS error is 0.05 mm.


Table 3. Measurement errors of sides and diagonals of target in experiment with two glass ports.

4.3 Experiment with structured light vision

In many applications using structured light vision, filters are indispensable for eliminating ambient light. However, similar to glass ports, filters also cause refraction. In this experiment, we replaced the glass port with a narrow-band filter. The left camera and an 808 nm wavelength line-laser were used to perform the experiment. The filter was installed on the mechanism perpendicular to the lens. The refractive index of the filter was 1.5, and its thickness was 3 mm. We used another 2D planar target, a 7 × 7 dot matrix with an interval of 12 mm, to calibrate the laser plane equation using Sun's method [20]. First, 10 unrefracted images were captured for calibration. Second, another image with the filter was captured for 3D reconstruction; the same scene without the filter was captured for the reference value. The images for calibration and reconstruction are shown in Fig. 11. The 3D reconstruction result of the unrefracted image was considered the reference value, and the 3D results obtained with the SVP model and the proposed method were compared with it (Fig. 12). The proposed method eliminates the refraction effectively, thereby showing good performance in the structured light vision experiment with a filter.


Fig. 11 Images for calibration and 3D reconstruction. (a) 10 images for calibration. (b) Image for 3D reconstruction.


Fig. 12 Results of structured light vision experiment.


4.4 Comparison experiment

The proposed method performed well in the validation experiments. However, it should also be compared with an existing method. Given that the proposed method is a geometrical correction method, we chose the elastic image registration method proposed by Haile et al. [2] for the comparison experiment. We used the same data as in the experiment with the single camera and a perpendicular glass port. The control points used in the image registration method were the feature points of all 15 refracted and unrefracted image pairs. The polynomials were determined using these control points. Then, the corrected image was reconstructed from the image shown in Fig. 7(a) using the local weighted mean transformation method [2]. The corrected image closely resembled the unrefracted image, which was captured for the reference value. The feature points on the corrected image were then extracted and reconstructed using the SVP model with the unrefracted parameters. Figure 13 shows the results of the image registration method and the proposed method. The two results are so similar that they are almost indistinguishable. In the proposed method, the feature points on the refracted image can be used directly in the 3D reconstruction model. In the image registration method, however, the corrected image must be reconstructed first before the feature points can be extracted from it. Reconstructing a new image that includes numerous undesired areas is time consuming.

 

Fig. 13 3D reconstruction errors. (a) In the proposed method, the RMS error is 0.01 mm. (b) In the image registration method, the RMS error is 0.01 mm.


5. Application in a train wheelset profile measurement system

In a train wheelset profile measurement system [15, 16], multiple structured light sensors are employed to construct profiles of wheelsets. We established a wheelset profile measurement system on the basis of our previous work [21]. Two structured light sensors were installed on the inside and outside of a wheel, as shown in Fig. 14. The two laser planes were aligned to one plane by adjusting the mobile holders. Two cameras observed the laser stripe from the inside and outside of the wheel simultaneously; their fields of view shared a small common region on the tread and the top of the flange. Two 3D profiles were reconstructed with the structured light vision models of the two sensors and unified into one coordinate system using the extrinsic parameters of the two cameras. The two profiles should thus constitute a complete profile of a cross section of the wheel. However, they deviated slightly from each other, as shown in Fig. 15(a), because of the refraction caused by the optical filters and the glass ports. With the proposed method, the refraction effect was eliminated and the two profiles matched each other well, as shown in Fig. 15(b). Results for two essential dimensions of the wheel profile are given in Table 4, which confirms the accuracy of the proposed method. We have installed two wheelset profile measurement systems in the railway detection stations of Lixian and Mulin in China, and the systems are running with high accuracy and stability.
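The unification step above is a standard rigid transform from each sensor's camera frame into the common frame. A minimal sketch, assuming numpy and with toy extrinsic parameters (the actual R, t come from the stereo calibration of the two cameras):

```python
import numpy as np

def to_common_frame(points, R, t):
    """Map (N, 3) points from one sensor's camera frame into the common
    coordinate system using extrinsic rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t

# Toy extrinsics: a 90-degree rotation about z plus a 100 mm baseline offset.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([100., 0., 0.])

inside_profile = np.array([[10., 0., 500.],
                           [10., 5., 500.]])   # hypothetical profile points (mm)
print(to_common_frame(inside_profile, R, t))
```

Once both profiles are expressed in the common frame, any residual mismatch in the overlapping region on the tread and flange directly exposes uncorrected refraction, which is how the deviation in Fig. 15(a) shows itself.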

 

Fig. 14 A wheelset profile measurement system with glass ports and optical filters.


 

Fig. 15 Reconstruction result of wheelset profiles. (a) Profiles constructed by SVP method. (b) Profiles constructed by the proposed method.

Table 4. Results of flange height and flange thickness of the wheel.

6. Conclusion

The refraction caused by a glass port was geometrically analyzed in 2D and 3D spaces, and a flexible method for correcting 3D reconstruction errors was proposed on the basis of this analysis. The proposed method can be used for any number of glass ports with any orientations. A flexible method for determining the orientation of a refractive interface was also proposed using refracted and unrefracted images; it can be applied conveniently without any auxiliary equipment. Because the refractive indices and thicknesses of glass ports are usually easy to obtain, the normal of the interface is the critical parameter for the proposed method. The expansion center technique used in the proposed method is sensitive to noise because the refractive distortion is always small and the uncertainty of the expansion center is large (Fig. 6); a clear target image is therefore essential. In addition, the normal of the interface can be optimized further within our 3D reconstruction model, using the value obtained with the expansion center method as an initial guess; this optimization will be studied in future work. To validate the proposed method in vision systems, we performed three types of experiments covering the most common configurations. In the single-camera experiment, the perpendicular glass port resulted in an error of approximately 0.1 mm. The error caused by the tilted glass port was larger, and the SVP model with the refracted parameters produced a serious error because lens radial distortion cannot effectively describe the refractive distortion in this situation. The proposed method reduced the error to about 0.02 mm in the single-camera experiment. In the experiment with two glass ports, the proposed method yielded an error of 0.046 mm. In the structured light vision experiment, the error obtained with the proposed method was considerably smaller than that obtained with the SVP model.
All the results show that the proposed method performs excellently in these vision systems, and we believe that if the expansion center is optimized further, the method could achieve even better results, especially in situations with multiple glass ports. In the comparison experiment, the proposed method achieved the same precision as Haile's method [2]. The corrected image reconstructed with Haile's method closely resembled the unrefracted image, which we captured without the glass port; this level of precision independently confirms the effectiveness of our method in eliminating refraction. Besides being time consuming, image rectification methods require control points on both unrefracted and refracted images, whereas the proposed method requires such control points only to determine the normals of the refractive interfaces. If control points cannot be obtained, for example in systems with optical filters that cannot capture ordinary features on a filtered image, other methods [4, 8–10] can be used to obtain an initial value of the normal, which can then be optimized further in the 3D reconstruction model; image rectification methods cannot work in this situation. The application to the train wheelset profile measurement system also proves the effectiveness of the proposed method in outdoor engineering.
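The expansion-center estimate discussed above can be posed as a small linear least-squares problem: each matched refracted/unrefracted point pair defines a line in the image, and the center is the point minimizing the summed squared distances to all such lines. The following is an illustrative sketch assuming numpy, with a synthetic radial expansion standing in for real refractive distortion:

```python
import numpy as np

def expansion_center(refracted, unrefracted):
    """Least-squares intersection of the lines through matched point pairs.

    refracted, unrefracted: (N, 2) arrays of matched image points.
    Each pair defines a line a*u + b*v + c = 0; the returned center
    minimizes the sum of squared point-line distances over all pairs.
    """
    d = refracted - unrefracted            # line direction for each pair
    a, b = d[:, 1], -d[:, 0]               # unit normal of each line
    norm = np.hypot(a, b)
    a, b = a / norm, b / norm
    c = -(a * unrefracted[:, 0] + b * unrefracted[:, 1])
    A = np.column_stack([a, b])
    center, *_ = np.linalg.lstsq(A, -c, rcond=None)
    return center

# Synthetic check: points pushed radially away from a known center (512, 384).
true_center = np.array([512., 384.])
pts = np.array([[100., 50.], [900., 120.], [300., 700.], [800., 650.]])
refracted = true_center + 1.05 * (pts - true_center)   # 5% radial expansion
print(np.allclose(expansion_center(refracted, pts), true_center))   # True
```

With real data the displacements are far smaller than this 5% toy expansion and are corrupted by feature-localization noise, which is exactly why the fitted center carries the large uncertainty noted above.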

Funding

National Key Scientific Instrument and Equipment Development Projects of China (2012YQ140032), National Natural Science Foundation of China (NSFC) (51575033) and Central University in Beijing Major Achievements Transformation Project (On-line Dynamic Detection System for Train Pantograph and Catenary).

References and links

1. T. Treibitz, Y. Schechner, C. Kunz, and H. Singh, “Flat refractive geometry,” IEEE Trans. Pattern Anal. Mach. Intell. 34(1), 51–65 (2012). [CrossRef]   [PubMed]  

2. M. A. Haile and P. G. Ifju, “Application of elastic image registration and refraction correction for non-contact underwater strain measurement,” Strain 48(2), 136–142 (2012). [CrossRef]  

3. D. Samper, J. Santolaria, A. C. Majarena, and J. J. Aguilar, “Correction of the refraction phenomenon in photogrammetric measurement systems,” Metrol. Meas. Syst. 20(4), 601–612 (2013). [CrossRef]  

4. H. Du, M. G. Li, and J. Meng, “Study on the reconstruction method of stereo vision in glass flume,” Adv. Eng. Softw. 94, 14–19 (2016). [CrossRef]  

5. L. Kang, L. Wu, and Y. Yang, “Two-view underwater structure and motion for cameras under flat refractive interfaces,” in 12th European Conference on Computer Vision (Springer, 2012), pp. 303–316. [CrossRef]  

6. B. K. Seo, J. Park, and J. I. Park, “3D trajectory reconstruction under refraction at a cylindrical surface,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2015), pp. 2660–2664. [CrossRef]  

7. Y. Wang, S. Negahdaripour, and M. D. Aykin, “Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging,” Appl. Opt. 55(24), 6564–6575 (2016). [CrossRef]   [PubMed]  

8. X. Ke, M. A. Sutton, S. M. Lessner, and M. Yost, “Robust stereo vision and calibration methodology for accurate three-dimensional digital image correlation measurements on submerged objects,” J. Strain Anal. Eng. 43(8), 689–704 (2008). [CrossRef]  

9. R. Ferreira, J. P. Costeira, and J. A. Santos, “Stereo reconstruction of a submerged scene,” in 2nd Iberian Conference on Pattern Recognition and Image Analysis (Springer, 2005), pp. 102–109.

10. Y. J. Chang and T. H. Chen, “Multi-view 3D reconstruction for scenes under the refractive plane with known vertical direction,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2011), pp. 351–358. [CrossRef]  

11. E. J. Moore, “Underwater photogrammetry,” Photogramm. Rec. 8(48), 748–763 (1976). [CrossRef]  

12. H. R. Suiter, “Correction of underwater pincushion distortion by a compensating camera lens,” Proc. SPIE 8357, 83571R (2012). [CrossRef]  

13. M. Shortis, “Calibration techniques for accurate measurements by underwater camera systems,” Sensors (Basel) 15(12), 30810–30826 (2015). [CrossRef]   [PubMed]  

14. N. J. W. Morris and K. N. Kutulakos, “Dynamic refraction stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 33(8), 1518–1531 (2011). [CrossRef]   [PubMed]  

15. MERMEC Group, “Profile and diameter,” http://www.mermecgroup.com/inspect/train-monitoring/87/wheel-parameters.php.

16. KLD Labs, Inc., “Wheel profile measurement,” http://www.kldlabs.com/index.php?s=wheel+profile+measurement.

17. R. Li, C. Tao, and W. Zou, “An underwater digital photogrammetric system for fishery geomatics,” in XVIIIth ISPRS Congress Technical Commission V: Close Range Techniques and Machine Vision (ISPRS, 1996), pp. 319–323.

18. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

19. L. Kang, L. Wu, and Y. H. Yang, “Experimental study of the influence of refraction on underwater three-dimensional reconstruction using the SVP camera model,” Appl. Opt. 51(31), 7591–7603 (2012). [CrossRef]   [PubMed]  

20. J. H. Sun, G. J. Zhang, and Q. Z. Liu, “Universal method for calibrating structured-light vision sensor on the spot,” J. Mech. Eng. 45(03), 174–177 (2009). [CrossRef]  

21. Z. Gong, J. Sun, and G. Zhang, “Dynamic measurement for the diameter of a train wheel based on structured-light vision,” Sensors (Basel) 16(4), 564 (2016). [CrossRef]   [PubMed]  



