Abstract

We propose a novel method for robust, non-contact, six degrees of freedom (6-DOF) motion sensing of an arbitrary rigid body using multi-view laser Doppler measurements. The proposed method reconstructs the 6-DOF motion from fragmentary velocities on the surface of the target. Unlike conventional contactless motion sensing methods, it is robust against featureless objects and environments. By discussing the formulation of motion reconstruction from fragmentary velocities, we show that at least three viewpoints are essential for 6-DOF motion reconstruction. Further, we show that the condition number of the measurement matrix can serve as a measure of system accuracy, and numerical simulation is performed to find an appropriate system configuration. The proposed method was implemented using a laser Doppler velocimeter, a galvanometer scanner, and several mirrors. We introduce the methods for calibration, coordinate system selection, and the calculation pipeline, all of which contribute to the accuracy of the proposed system. For evaluation, the proposed system is compared with an off-line chessboard-tracking scheme using a 500 fps camera. Experiments measuring six different motion patterns demonstrate the robustness of the proposed method against different kinds of motion. We also conduct evaluations at different distances and velocities. The mean value error is less than 1.3 deg/s in rotation and 3.2 mm/s in translation, and is robust against changes in distance and velocity. In terms of speed, the throughput of the proposed method is approximately 250 Hz and the latency is approximately 20 ms.

© 2017 Optical Society of America

1. Introduction

Sensing and digitizing the physical world has always been a critical topic, and the six degrees of freedom (6-DOF) rigid body motion information is one of the most important aspects. Recently, with the emergence of more and more intelligent systems, motion information of rigid bodies has shown its value as the fundamental element of a wide range of tasks. With regard to specific application scenarios, motion sensing methods can vary greatly.

For applications in common environments such as security monitoring, robot navigation, and human-machine interaction, motion is mainly estimated from changes in the appearance of 2-D images or 3-D point clouds across frames. For methods using 2-D images, correspondences between invariant features [1,2], patches [3,4], or pixels [5,6] in different frames are found, which enables motion to be estimated by solving a perspective-n-point problem [7] or optimizing the motion for minimized reprojection error [8,9]. Point-cloud-based methods, on the other hand, tend to optimize the motion for minimum error when merging overlapping 3-D point clouds in two different frames [10–12]. This typically leads to the matching of several salient structures in the point clouds by separately weighting the point pairs or trimming the outliers [13,14]. These methods can be effective in a wide range of scenarios, but their robustness depends on whether the motion causes changes in the observations of the object. When the target has barely any texture or structure, or the ambient illumination is so harsh that changes cannot be seen, these methods are likely to fail.

To improve the universality, some methods try to directly measure the motion of the object by introducing the Doppler effect. Heide et al. [15,16] modulated the adaptive infrared illumination for time-of-flight (TOF) cameras so that the velocity can be measured via the frequency shift of reflected light. Thus, the velocities are densely mapped to specific positions in the scene and a 3-D flow can be calculated by the combination of the measured velocities and 2-D optical flow. Although the 3-D flow is promising for the description of 6-DOF motion, the robustness of 2-D optical flow computation still depends on the ambient light and appearance of the object. On the other hand, quantitative evaluations on the accuracy of these methods are not discussed in detail, while the poor noise performance and low resolution of commercially available TOF chips are mentioned [15].

In the context of high-accuracy machining, several methods measure multi-DOF geometric motion or motion errors via laser interferometry [17–19], laser collimation [20], or time-of-flight measurements [21]. Mirrors or reflectors are attached to the target, e.g., a linear or rotary stage, and ultra-high accuracy with translational errors on the order of micrometers and angular errors on the order of arcseconds can be achieved. However, such systems focus on specific applications, where reflectors are intrusively attached to the target and the motion of the target is to some extent limited, so the principles cannot be extended to non-cooperative objects with unknown motion.

The laser Doppler velocimeter (LDV), as an interferometric device, is capable of non-intrusive, high-accuracy, but low-dimensional velocity measurement of non-cooperative targets. These properties of the LDV open up the possibility of high-precision motion measurement for any diffuse object, offering a promising solution to common multi-DOF motion sensing problems. The principles and configurations of the LDV are well explored in the literature [22–24], and various arrangements of the devices have been introduced for specific tasks such as measuring fluids [22,25], the transversal motion of a surface [26], or the rotation of a motor [27]. However, the principles of using LDVs in common multi-DOF motion sensing scenarios have barely been explored.

In our previous work [28], we first explored the principles of using an LDV for common motion sensing scenarios and demonstrated several applications of such a system, including 3-D shape integration and a user interface. The proposed system calculates motion via the distance and velocity acquired simultaneously by a laser range finder (LRF) and an LDV. By time-division measurement of several points on the target, the method can reconstruct the motion with high throughput at approximately 410 Hz and, for certain kinds of motion, error bounded below 2%. However, in this method the accuracy is sensitive to different motion components, and it may decline when there are motion components in some specific DOFs.

In this paper, we propose a method for robust, 6-DOF, contactless motion sensing of an arbitrary rigid body using multi-view LDV measurements. The proposed system is partially based on our previous work [28], but in this paper we fully explore the essential conditions for LDV-based 6-DOF motion sensing and overcome the shortcomings of the previous system with a largely optimized multi-view configuration that requires no LRF. With the basic assumption that the available LDVs provide one-dimensional velocity measurement with adequate precision in the direction of the laser beam, the paper begins with a discussion of the basic motion sensing principle, and then proves that at least three viewpoints are essential for LDV-based 6-DOF motion reconstruction. We note that the proposed method amounts to a typical linear system; hence, errors are strongly related to the condition number of its measurement matrix. Thus, to find the best system configuration, numerical simulation of the condition number was performed. Based on this discussion, the simplest form of the proposed system was built, and some implementation techniques are introduced. Finally, the system was validated by comparison with the marker-tracking scheme of a high-speed camera. The accuracy and robustness of the proposed system were verified by demonstrating the sensed velocity as well as the calculated change of position for six different motion patterns. We also evaluated the accuracy of sensing constant velocity at different distances and velocities. The experiments show that the mean value error is less than 1.3 deg/s in rotation and less than 3.2 mm/s in translation. Finally, the throughput of the proposed method is approximately 250 Hz and the latency is approximately 20 ms.

2. Principle

2.1. Formulation of LDV-based 6-DOF motion sensing

As prior information, the LDV used in the proposed system adopts the heterodyne principle [29], which is commonly used in today's commercial LDV devices for measurement on solid surfaces. When the laser hits the object, only the velocity component along the direction of the laser beam is measured.

We discuss the motion of a rigid body in an orthogonal, right-handed coordinate system with axes x, y, and z, as illustrated in Fig. 1. The motion can be represented by six DOFs consisting of the angular velocity ω = [ωx, ωy, ωz] around the origin and the linear velocity v = [vx, vy, vz]. ωx, ωy, and ωz are the components of the angular velocity around the x, y, and z axes, and vx, vy, and vz are the components of the translational velocity along the x, y, and z axes. Assume that we use N non-overlapping laser beams from the LDV for measurement, namely Li, i = 1, 2, . . ., N, and that the unit direction vector of Li is li. The laser beam Li hits the object at point pi. The velocity of pi, namely vi, consists of two parts, caused by the rotational and translational velocity of the object, respectively, as formulated in Eq. (1).

$$\mathbf{v}_i = \boldsymbol{\omega} \times \mathbf{p}_i + \mathbf{v} \tag{1}$$

 

Fig. 1 The 6-DOF motion of a rigid body measured by a laser from the LDV. The rigid body is moving with rotational velocity ω and translational velocity v, and is hit by a laser beam Li from the LDV at point pi. Points on Li can be denoted by oi + sli, where oi is any point on Li, li is the direction vector, and s is a scalar. For ease of description, we refer to oi as “viewpoint” and li as “direction” for each laser beam Li. The velocity of pi, namely vi, consists of two parts introduced by the rotation velocity and translation velocity of the object. The measured velocity by LDV is the projection of vi in the laser beam’s direction li, namely vi = vi · li.


Let vi be the measured velocity from Li. Given that the LDV measures only the velocity component in the direction of the laser beam, vi equals the length of the projection of vi in li:

$$v_i = (\boldsymbol{\omega} \times \mathbf{p}_i + \mathbf{v}) \cdot \mathbf{l}_i = (\mathbf{p}_i \times \mathbf{l}_i) \cdot \boldsymbol{\omega} + \mathbf{l}_i \cdot \mathbf{v} \tag{2}$$
as depicted in Fig. 1.

Equation (2) formulates what the LDV measures when the laser beam hits an object. At first sight, this suggests that the measured velocity is related to the position where the laser beam hits the target. This was taken as the basic formulation in previous work [28], and an LRF was used for position measurement. However, we can rewrite Eq. (2) as

$$v_i = \big((\mathbf{p}_i - \mathbf{o}_i) \times \mathbf{l}_i + \mathbf{o}_i \times \mathbf{l}_i\big) \cdot \boldsymbol{\omega} + \mathbf{l}_i \cdot \mathbf{v} \tag{3}$$
where oi denotes any point other than pi on laser beam Li. Knowing that pi − oi is parallel to li, we have
$$v_i = (\mathbf{o}_i \times \mathbf{l}_i) \cdot \boldsymbol{\omega} + \mathbf{l}_i \cdot \mathbf{v} \tag{4}$$

Equation (4) shows that the velocity measured via the laser Doppler effect is not actually related to where the laser beam hits the object. In other words, the shape and position of the target object do not matter. Hence, once the laser beam is calibrated, i.e., oi and li are obtained, measuring vi via the LDV yields an equation in which only ω and v are unknown.

Thus, combining the measurement from N non-overlapping laser beams, an equation set can be formulated with matrix representation as

$$\begin{pmatrix} (\mathbf{o}_1 \times \mathbf{l}_1)^T & \mathbf{l}_1^T \\ (\mathbf{o}_2 \times \mathbf{l}_2)^T & \mathbf{l}_2^T \\ \vdots & \vdots \\ (\mathbf{o}_N \times \mathbf{l}_N)^T & \mathbf{l}_N^T \end{pmatrix} \begin{pmatrix} \boldsymbol{\omega} \\ \mathbf{v} \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_N \end{pmatrix} \tag{5}$$

Let the leftmost matrix, namely the measurement matrix, be denoted by A = [r1T r2T . . . rNT]T, where ri = [(oi × li)T liT], i = 1, 2, . . ., N, is the ith row of A. A represents the arrangement of the laser beams from the LDV in the system configuration. The rightmost vector is denoted by b, consisting of the measured velocities from the LDV. Assuming that X = (ωT vT)T, the 6-DOF motion reconstruction can be formulated by

$$\mathbf{X} = A^{+}\mathbf{b} \tag{6}$$
where A+ = (ATA)−1AT is the pseudo-inverse of A. In particular, when N = 6, A+ = A−1 is the inverse of A.
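To make Eqs. (5) and (6) concrete, the reconstruction can be sketched in a few lines of NumPy. The viewpoint positions, beam directions, and motion values below are illustrative assumptions, not the calibrated parameters of the actual system:

```python
import numpy as np

def measurement_row(o, l):
    """Row r_i = [(o_i x l_i)^T  l_i^T] of the measurement matrix A, Eq. (5)."""
    return np.concatenate([np.cross(o, l), l])

# Illustrative arrangement (assumed): three non-collinear viewpoints with two
# beams each, aimed at a target near z = 1500 mm.
rng = np.random.default_rng(0)
viewpoints = [np.array([0.0, 0.0, 0.0]),
              np.array([650.0, 0.0, 0.0]),
              np.array([0.0, 650.0, 0.0])]
beams = []
for o in viewpoints:
    for _ in range(2):
        hit = np.array([0.0, 0.0, 1500.0]) + rng.uniform(-100, 100, size=3)
        l = hit - o
        beams.append((o, l / np.linalg.norm(l)))

A = np.array([measurement_row(o, l) for o, l in beams])

# Synthetic ground-truth motion X = (omega, v) and noise-free LDV readings b.
X_true = np.array([0.02, -0.01, 0.03, 5.0, -2.0, 1.0])
b = A @ X_true

# 6-DOF reconstruction via the pseudo-inverse, Eq. (6).
X_hat = np.linalg.pinv(A) @ b
print(X_hat)  # matches X_true up to floating-point error
```

With real measurements, b contains noise and X_hat is the least-squares estimate; the quality of the recovery then depends on the condition number of A.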

2.2. Modeling for system configuration

Equation (6) provides a universal least-squares solution for LDV-based velocity sensing. A system configuration that solves such a problem is described by the corresponding measurement matrix A, which can be further abstracted into an arrangement of several non-overlapping laser beams in the space, i.e., the corresponding N groups of oi and li. In this subsection, we discuss the system configurations and their influence on the solution of Eq. (6).

2.2.1. Conditions for solution existence

The problem of whether 6-DOF motion can be robustly reconstructed is equivalent to whether a stable solution for Eq. (6) exists, which apparently relies on whether rank(A) = 6. The most direct conclusion is that N ≥ 6, but this is not sufficient. With regard to the specific system configurations, some rows in A may be linearly correlated, causing the reduction of rank(A).

For common linear systems, we would need to determine all the entries of matrix A in order to analyze its rank; fortunately, the rows of A are well defined in this problem. Hence, we can analyze the dimension of the row space of A, given that rank(A) = dim(span{r1, r2, . . ., rN}). Intuitively, the reduction of dim(span{r1, r2, . . ., rN}) occurs when oi and li are shared among different rows. While li can be adjusted easily with the use of controllable reflectors such as galvanometer scanners, oi is highly related to the placement of the system components. Here, we discuss the sharing of viewpoints in an enumerative way. It is worth noting again that oi can be any point on Li without changing ri. Thus, to simplify the following discussion, the oi of two laser beams can be regarded as shared as long as the two laser beams intersect.

First, let us consider the simplest condition where there are a total of N(1) rows in matrix A(1) sharing the same viewpoint o1. The jth row of A(1) can be written as

$$\mathbf{r}_j^{(1)} = \big[(\mathbf{o}_1 \times \mathbf{l}_j)^T \;\; \mathbf{l}_j^T\big] \tag{7}$$
where j = 1, 2, . . ., N(1).

Noting that lj is a unit vector in the 3-D space, each lj can be represented by the linear combination of any other three linearly independent 3-D vectors. Obviously, the same property exists for any rj(1), which is expressed as Eq. (7). Hence, we have

$$\dim\big(\operatorname{span}\{\mathbf{r}_j^{(1)}\}\big) \le 3 \tag{8}$$
This corresponds to the system configuration in which all the laser beams are generated by reflecting a laser from one LDV with only one scanner, as in our previous work [28]. In this case, the reduction of rank means that the problem is ill-conditioned. The ill-conditioned problem can be solved using regularization techniques, but only three of the six DOFs can be correctly reconstructed; the motion in the other three DOFs is constrained to nearly zero, so large errors may occur.

Following the above discussion, in order to increase dim(span{rj}), we add N(2) rows with a different viewpoint, namely o1 + q1. Then, we can write the kth row of matrix A(2) as

$$\mathbf{r}_k^{(2)} = \big[((\mathbf{o}_1+\mathbf{q}_1) \times \mathbf{l}_k)^T \;\; \mathbf{l}_k^T\big] = \big[(\mathbf{o}_1 \times \mathbf{l}_k)^T \;\; \mathbf{l}_k^T\big] + \big[(\mathbf{q}_1 \times \mathbf{l}_k)^T \;\; \mathbf{0}^T\big] \tag{9}$$
where k = N(1) + 1, N(1) + 2, . . ., N(1) + N(2).

rk(2) is the sum of two terms. The first term belongs to span{rj(1)}, and the second term is constrained to a two-dimensional plane, since q1 × lk is always orthogonal to q1. Thus, the additional viewpoint o1 + q1 can add at most two more dimensions to the row space, i.e.,

$$\dim\big(\operatorname{span}\{\mathbf{r}_j^{(1)}, \mathbf{r}_k^{(2)}\}\big) \le 5 \tag{10}$$

Naturally, the last dimension can be provided by an additional viewpoint. For validation, we add N(3) rows with viewpoint o1 + q2 to the matrix. In this situation, the mth additional row can be represented by

$$\mathbf{r}_m^{(3)} = \big[((\mathbf{o}_1+\mathbf{q}_2) \times \mathbf{l}_m)^T \;\; \mathbf{l}_m^T\big] = \big[(\mathbf{o}_1 \times \mathbf{l}_m)^T \;\; \mathbf{l}_m^T\big] + \big[(\mathbf{q}_2 \times \mathbf{l}_m)^T \;\; \mathbf{0}^T\big] \tag{11}$$
where the first term belongs to span{rj(1)}, and the second term is constrained to the two-dimensional plane orthogonal to q2. Whether dim(span{rj(1), rk(2), rm(3)}) = 6 is decided by whether q2 has a component orthogonal to q1. From the perspective of system configuration, this means that the three viewpoints should not be collinear.

In summary, for 6-DOF motion reconstruction represented by Eq. (6), rank(A) needs to be six.

This means that the system configuration should satisfy the following conditions:

  1. At least six non-overlapping laser beams are used for measurement.
  2. There are at least three different viewpoints.
  3. All the viewpoints should not be collinear.
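These three conditions can be checked numerically. The sketch below, with assumed viewpoint offsets and random beam directions, reproduces the rank bounds derived above:

```python
import numpy as np

rng = np.random.default_rng(1)

def stack_rows(viewpoints, beams_per_vp=4):
    """Stack rows [(o x l)^T  l^T] for random beam directions per viewpoint."""
    rows = []
    for o in viewpoints:
        for _ in range(beams_per_vp):
            l = rng.normal(size=3)
            l /= np.linalg.norm(l)
            rows.append(np.concatenate([np.cross(o, l), l]))
    return np.array(rows)

o1 = np.array([0.0, 0.0, 0.0])        # first viewpoint (at the origin)
q1 = np.array([650.0, 0.0, 0.0])      # offset to the second viewpoint
q2 = np.array([0.0, 650.0, 0.0])      # offset to the third (non-collinear)

r_one  = np.linalg.matrix_rank(stack_rows([o1]))                      # Eq. (8)
r_two  = np.linalg.matrix_rank(stack_rows([o1, o1 + q1]))             # Eq. (10)
r_coll = np.linalg.matrix_rank(stack_rows([o1, o1 + q1, o1 + 2*q1]))  # collinear
r_full = np.linalg.matrix_rank(stack_rows([o1, o1 + q1, o1 + q2]))    # full rank
print(r_one, r_two, r_coll, r_full)  # 3 5 5 6
```

One viewpoint yields at most rank 3, a second adds two dimensions, a collinear third adds nothing, and a non-collinear third completes the rank.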

2.2.2. Accuracy and system configuration

Following the discussion about rank(A), here we further discuss the relationship between system configuration and measurement error. As Eq. (6) denotes a typical linear system, we introduce the condition number of A, defined by κA = ‖A‖2 · ‖A+‖2, as the measure of stability of the corresponding system configuration. The overall error of the system can then be bounded by Eq. (12) [30,31]:

$$\frac{\|\tilde{\mathbf{X}} - \mathbf{X}\|_2}{\|\mathbf{X}\|_2} \le \kappa_A \, \frac{\|\tilde{\mathbf{b}} - \mathbf{b}\|_2}{\|\mathbf{b}\|_2} \tag{12}$$
where X̃ is the estimated X, and b̃ denotes b with measurement error. The condition number κA denotes the sensitivity of the solution to the measurement error, and it is independent of round-off and computational errors [32]. To improve the accuracy of the proposed system, κA must be minimized during the system design stage.
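The bound of Eq. (12) is easy to verify numerically. The following sketch uses an arbitrary random full-rank matrix as a stand-in for a calibrated measurement matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a calibrated measurement matrix: any full-rank 6 x 6 works here.
A = rng.normal(size=(6, 6))
A_pinv = np.linalg.pinv(A)
kappa_A = np.linalg.norm(A, 2) * np.linalg.norm(A_pinv, 2)  # = np.linalg.cond(A, 2)

X = rng.normal(size=6)                 # true motion
b = A @ X                              # exact readings
db = 1e-6 * rng.normal(size=6)         # measurement error on b
X_tilde = A_pinv @ (b + db)            # perturbed reconstruction

lhs = np.linalg.norm(X_tilde - X) / np.linalg.norm(X)
rhs = kappa_A * np.linalg.norm(db) / np.linalg.norm(b)
print(lhs <= rhs)  # the relative error respects the bound of Eq. (12)
```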

We conducted a numerical simulation of different system configurations to minimize κA. The setup for this simulation is shown in Fig. 2(a). For a minimum realization of the proposed method, we adopted three viewpoints and six non-overlapping laser beams (two laser beams for each viewpoint) for measurement. As three viewpoints are always coplanar, they are fixed on the plane α (z = 0). The target is assumed to be at a distance of D = 1500 mm from the system, which is in the middle of the expected working range of the proposed system. The details of the working range will be discussed in Sec. 3.1. During the simulation, the first viewpoint is fixed at the origin [0 0 0]. The other two viewpoints are placed on the x and y axes, respectively, at a distance d from the first one. The color of the laser beams denotes the viewpoint they pass through; specifically, red for o1, green for o2, and blue for o3. We assume that the laser beams always hit the object at the fixed points illustrated in Fig. 2(a). Thus, d influences both oi and li for each laser beam.

 

Fig. 2 Numerical simulation of different system configurations. (a) Illustration of the constraints on the system configuration during simulation. The three viewpoints o1, o2, and o3 are on the gray plane α (z = 0). In the simulation, the points where the laser beams hit the target are assumed to be fixed at certain points on the surface of the target. The color of a laser beam represents the viewpoint that it passes through. (b) Illustration of κA when d changes.


The simulation results are illustrated in Fig. 2(b). The condition number is largely reduced as the distance between viewpoints increases; however, increasing the distance beyond 600 mm brings diminishing returns. As we also want the system to be compact, there is a tradeoff between system size and accuracy. In this work, we let d ≈ 650 mm, corresponding to κA ≈ 550 in the simulation.

The distribution of the points in the scanning pattern also influences the condition number. However, there is not much room to use a large-scale pattern, because the lasers have to hit the object, which might have a relatively small surface. In our system design and simulation, the largest pattern size at 1500 mm is limited to a circle with a diameter of 200 mm. Under such a constraint, changing the scanning pattern matters little to the condition number compared with the separation of the viewpoints. Another numerical simulation was performed to verify this. In this simulation, the distance d = 600 mm was fixed, the scanning pattern was randomly generated 10000 times, and the corresponding condition numbers were evaluated. As a result, over 90% of the randomly generated patterns correspond to condition numbers smaller than 1000, and around 80% to condition numbers less than 500. Since even a randomly generated pattern has a fairly large chance of good performance, careful design of the points in the scanning pattern is not critical.

On the other hand, increasing the number of points in the scanning pattern can also reduce the condition number, since more fragmentary velocities on the surface of the object are used for calculation and the problem is better conditioned. Nevertheless, more points in the scanning pattern also increase the delay of the system, and six points, the minimum required number, already result in a tolerable condition number. Thus, in the following discussion and experiments, we only consider the six-point scanning pattern.

2.2.3. Influence of coordinate system selection

It is worthwhile to point out that the motion of a rigid body has different representations in different coordinate systems, but these representations are equivalent, i.e., they correspond to the same physical motion. Nevertheless, if the coordinate system is changed, the measurement matrix also changes, which influences the accuracy of the proposed method. This means that we can change the coordinate system to increase accuracy without losing information about the physical motion.

Assuming that the origin of the new coordinate system is placed at point c, the coordinates of the viewpoints change but the direction vectors are not affected. Thus, the adjusted matrix A(a)(c) will be

$$A^{(a)}(\mathbf{c}) = \begin{pmatrix} ((\mathbf{o}_1-\mathbf{c}) \times \mathbf{l}_1)^T & \mathbf{l}_1^T \\ ((\mathbf{o}_2-\mathbf{c}) \times \mathbf{l}_2)^T & \mathbf{l}_2^T \\ \vdots & \vdots \\ ((\mathbf{o}_6-\mathbf{c}) \times \mathbf{l}_6)^T & \mathbf{l}_6^T \end{pmatrix} \tag{13}$$

Here, we still use the condition number as the measure of stability; κA(a)(c) should be minimized to stabilize the calculation. Hence, the optimized center c* is defined by

$$\mathbf{c}^{*} = \arg\min_{\mathbf{c}} \, \kappa_{A^{(a)}(\mathbf{c})} \tag{14}$$

Further discussion on the solution of Eq. (14) is presented in Sec. 3.4.

3. Implementation

3.1. System configuration

The schematic of the proposed 6-DOF motion sensing system is illustrated in Fig. 3. The laser beam emitted by the LDV is reflected by the mirror embedded in the galvanometer scanner. The galvanometer scanner changes the direction of the laser beam over time, stopping at different positions on the scanning pattern, namely the six points shown in Fig. 3. The LDV measures the velocity when the galvanometer scanner stops at the ith point on the scanning pattern; the active laser beam at this time is Li, so the measured velocity is vi. Although only one set of LDV and galvanometer scanner is used, different laser beams are reflected by different groups of mirrors (MR1,2,3,4) to change the equivalent viewpoints of the laser beams. Specifically, laser beams No. 2 and No. 3 (denoted by green lines) are reflected by MR2 and MR4, and the corresponding equivalent viewpoint is o3. Laser beams No. 4 and No. 5 (denoted by blue lines) are reflected by MR1 and MR3, and the corresponding equivalent viewpoint is o2. Laser beams No. 1 and No. 6 (denoted by red lines) are not reflected by mirrors, so the equivalent viewpoint o1 is on the galvanometer scanner. When all the laser beams hit the target object, the velocities at several points on the surface of the object can be measured by the LDV using a time-division strategy. Then, the 6-DOF motion can be reconstructed using the principles discussed in Sec. 2.

 

Fig. 3 Schematic of the proposed 6-DOF motion sensing system with a single LDV.


The proposed method was implemented with a long-range LDV (Polytec OFV-505, OFV-5000 with decoder VD-09), a 2-D galvanometer scanner (GSI 6220H, silver-coated), and four silver-coated mirrors (Edmund Optics), as illustrated in Fig. 4. To verify the feasibility of the proposed 6-DOF motion sensing method, an additional CMOS camera (XIMEA, 648 × 480, 500 fps) was set up together with the proposed system for comparison.

 

Fig. 4 System implementation. The corresponding distances between viewpoints, i.e. ‖o2o1‖ and ‖o3o1‖ are 685 mm and 651 mm, respectively. The angle between the two laser beams from one viewpoint is 4.31 deg and κA = 520.


3.2. Configuration of LDV

While using an LDV device is fairly straightforward, it is important to configure the LDV appropriately in order to reduce the noise level in its output. The noise of LDVs has been investigated in the open literature [23]. The dominant noise in the LDV is speckle noise, caused by undesired interference when the measurement is conducted on a rough surface. Work on the optical configuration has been done to reduce speckle noise [24], and it has been proven that speckle noise can be largely relieved by a low-pass filter [23] and by trimming out the undesired peaks [33].

Besides speckle noise, imperfect focus of the lenses and a large incident angle may also cause noise stemming from a weak Doppler signal, but these effects are less important than speckle noise [33]. In our experimental observation, the noise caused by imperfect focus only becomes obvious when the mismatch in focus is too large (e.g., larger than 500 mm), and a large incident angle only matters when the target is near-specular.

In the context of 6-DOF motion sensing, the change of velocity is mostly in the low-frequency region. Thus, in this work, a low-pass filter at 5 kHz together with a tracking filter is applied to deal with noise. While the low-pass filter limits the bandwidth of the measured velocity, the tracking filter bridges brief dropouts with large acceleration caused by speckle noise. Note that the noise of the LDV may in some cases have a profound effect on the performance of a velocity sensing system, especially when oscillations at high frequencies are measured. However, under the conditions applied both for simulation and experimental verification, we found the effects of speckle noise, photon noise, and laser noise to be of minor importance.

3.3. Geometrical calibration

The system must be parameterized into the measurement matrix A for calculation. We used a pre-calibrated monochrome camera and a chessboard to independently calibrate the six laser beams on the scanning pattern in Fig. 3.

The scheme is simple. The camera is initially fixed relative to the system. The chessboard is placed where the camera can see it and the laser beams can hit it. The center of the laser spot (up, vp) in the camera image is assumed to be the intersection between the laser beam and the chessboard. We use the world coordinate system with the origin at the optical center of the camera. The rotation matrix R and translation vector T of the chessboard can be acquired by the camera with intrinsic matrix Kc [34]. The coordinates of the intersection point pw in the world coordinate system can be denoted by

$$\mathbf{p}_w = \frac{\mathbf{n}^T \mathbf{T}}{\mathbf{n}^T \big(K_c^{-1} \mathbf{p}_i\big)} K_c^{-1} \mathbf{p}_i \tag{15}$$
where pi = [up vp 1]T is the coordinate of the intersection point in the intrinsic coordinate system of the camera, and n = R · [0 0 1]T is the normal vector of the chessboard plane.

Then, the chessboard is moved so that other points on the laser beam can be calculated. Assume that we have K points, namely pw,j, j = 1, 2, . . ., K, and that the line equation of laser beam Li is denoted by p = oi + sli. With all the K points, we can build an equation set consisting of equations of the form oi + sj li = pw,j, i.e., 3K equations with 6 + K unknown parameters. Note that in this formulation, the position of oi and the length of li remain unconstrained. Hence, we add two constraints, namely oi = [xo,i yo,i 0] and li = [xl,i yl,i 1], to the equation set and solve it via a linear least-squares method. Finally, li is normalized to guarantee that it is a unit vector.
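One way to carry out this fit: under the two constraints, the scalar sj equals the z-coordinate of each point, so the joint system decouples into two scalar least-squares problems. The sketch below uses synthetic stand-ins for the chessboard intersection points:

```python
import numpy as np

def fit_beam(points):
    """Fit o = [xo, yo, 0], l ~ [xl, yl, 1] to K measured points on one beam.
    With the constraints o_z = 0 and l_z = 1, the scalar s_j equals z_j, so the
    joint system decouples into two scalar least-squares fits:
        x_j = xo + z_j * xl,   y_j = yo + z_j * yl."""
    P = np.asarray(points, dtype=float)
    Z = np.column_stack([np.ones(len(P)), P[:, 2]])       # regressors [1, z_j]
    (xo, xl), *_ = np.linalg.lstsq(Z, P[:, 0], rcond=None)
    (yo, yl), *_ = np.linalg.lstsq(Z, P[:, 1], rcond=None)
    o = np.array([xo, yo, 0.0])
    l = np.array([xl, yl, 1.0])
    return o, l / np.linalg.norm(l)                       # normalized direction

# Synthetic check: noisy points on a known line (hypothetical calibration data).
rng = np.random.default_rng(5)
o_true = np.array([10.0, -5.0, 0.0])
l_true = np.array([0.1, 0.2, 1.0])
s = np.linspace(800.0, 1600.0, 8)                         # 8 chessboard poses
pts = o_true + s[:, None] * l_true + rng.normal(0.0, 0.05, (8, 3))
o_est, l_est = fit_beam(pts)
print(o_est, l_est)  # close to o_true and the normalized l_true
```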

3.4. Adjustment of the coordinate system

We discussed that the coordinate system influences the calculation accuracy of the proposed system in Sec 2.2.3. However, after calibration, the parameters of the system are in the camera’s coordinate system, which is not optimized for calculation. In this subsection, we describe the method to find the optimized coordinate system for calculation by solving Eq. (14).

First, to analyze the distribution of κA(a)(c) in different coordinate systems, we performed a numerical simulation. One thousand different coordinate systems were evaluated. Their axes were parallel, but their origins were randomly generated in a cube with size 1000 × 1000 × 3000 mm3. The measurement matrices were built from the practical calibration parameters of the system acquired by the method described in Sec. 3.3 and were adjusted by Eq. (13). Then, κA(a)(c) was calculated for each coordinate system.

The results of the simulation are illustrated in Fig. 5. The color of the dots denotes the condition number of the measurement matrix in the corresponding coordinate system. It can be seen that κA(a)(c) ranges from 500 to 50000 when the origin of the coordinate system is changed, and the origins converge to a certain region when we attempt to minimize κA(a)(c). Based on this observation, Eq. (14) is likely to be a convex problem. Hence, c* is solved by a steepest descent method, in which the initial c is the one corresponding to the smallest κA(a)(c) in the simulation, and the iteration terminates when the change in κA(a)(c) is less than 10. Finally, c* = [70 −70 840] and κA(a)(c*) = 520. This new coordinate system was used in the experiments.
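The sampling stage of this procedure can be sketched as follows. The beam parameters are synthetic stand-ins for the calibrated ones, and the random search only seeds (rather than replaces) the steepest descent refinement described above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the calibrated beam parameters (o_i, l_i), in mm.
viewpoints = [np.array([0.0, 0.0, 0.0]),
              np.array([685.0, 0.0, 0.0]),
              np.array([0.0, 651.0, 0.0])]
beams = []
for o in viewpoints:
    for _ in range(2):
        l = np.array([0.0, 0.0, 1500.0]) + rng.uniform(-100, 100, size=3) - o
        beams.append((o, l / np.linalg.norm(l)))

def kappa(c):
    """Condition number of the adjusted measurement matrix A(a)(c), Eq. (13)."""
    A = np.array([np.concatenate([np.cross(o - c, l), l]) for o, l in beams])
    return np.linalg.cond(A)

# Randomly sampled candidate origins in a 1000 x 1000 x 3000 mm^3 box; the best
# sample would then seed the steepest descent refinement.
candidates = np.vstack([np.zeros(3),
                        rng.uniform([-500, -500, 0], [500, 500, 3000], (1000, 3))])
c_star = min(candidates, key=kappa)
print(kappa(np.zeros(3)), kappa(c_star))  # original origin vs. sampled optimum
```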

 

Fig. 5 Simulation of κA(a)(c) when the origin of the coordinate system is changed. The origins are randomly generated in a cube with size 1000 × 1000 × 3000 mm3. The corresponding condition number is illustrated by the color.


3.5. Calculation pipeline

The proposed system uses a time-division strategy for measurement. While the galvanometer scanner scans continuously, the velocities are sampled at different times, by Li, i = 1, 2, . . ., 6, respectively. To boost the throughput and accuracy of the proposed system, a pipeline strategy was adopted in the system.

At each sampling time, the motion is calculated from one measured velocity and five interpolated velocities. Figure 6 illustrates an example of this strategy. Assume that we are trying to calculate the motion of the object at time tk. At this time, we only know v6,k measured by L6; the velocities along the other laser beams are unknown. Thus, we wait until tk+5 and calculate v′i,k, i = 1, 2, . . ., 5 by linear interpolation of two samples on the same laser beam. For instance, v′5,k = v5,k−1 + (v5,k+5 − v5,k−1)(tk − tk−1)/(tk+5 − tk−1). Then, the 6-DOF motion can be reconstructed by Eq. (6). This strategy gives well-approximated velocities for each time, which contributes to the accuracy of motion reconstruction without reducing the throughput. However, a latency of five time intervals is needed to perform the interpolation. With the throughput at 250 Hz in the proposed system, this latency can be as large as 20 ms. We believe that this will not be a problem in most common applications. However, both throughput and latency can be further improved by using alternative system configurations if a higher speed is desired, which will be discussed in Sec. 4.5.
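The interpolation step can be sketched as follows; the timing and velocity values are made-up numbers for illustration:

```python
def interpolate_velocity(t_prev, v_prev, t_next, v_next, t_k):
    """Linearly interpolate a beam's velocity at time t_k from the two samples
    on the same laser beam that bracket it."""
    return v_prev + (v_next - v_prev) * (t_k - t_prev) / (t_next - t_prev)

# Made-up numbers: the scanner visits beams 1..6 cyclically, dwelling dt on each.
dt = 1.0 / 1500.0          # assumed per-beam dwell time
# Beam 5 was sampled at t = 5*dt (v = 1.0) and will be sampled again at
# t = 11*dt (v = 2.2); beam 6's fresh sample arrives at t_k = 6*dt.
v5_k = interpolate_velocity(t_prev=5 * dt, v_prev=1.0,
                            t_next=11 * dt, v_next=2.2, t_k=6 * dt)
print(v5_k)  # 1.2 (one sixth of the way from 1.0 to 2.2)
```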

 

Fig. 6 An example of the calculation pipeline.


4. Experiments

4.1. Experimental setup

In the experiment, the target object is a white plastic board with a chessboard pattern attached. While the proposed system recorded the motion of the target, the camera captured images simultaneously, and the motion of the object was calculated by offline analysis of these sequential chessboard images. It should be noted that the chessboard is only needed by the camera in order to provide a reference motion for evaluation; it is not necessary for the proposed method. The comparisons between the camera and the proposed system are presented in the following subsections.

4.2. Experimental results for six motion patterns

As depicted in Fig. 7, the target was placed 1500 mm away from the measurement system. Six motion patterns were used to evaluate the accuracy of the proposed method in different DOFs. Specifically, each motion pattern has a dominant motion component, namely ωx, vx, ωy, vy, ωz, and vz in motion patterns 1, 2, . . ., 6, respectively.

 

Fig. 7 Experiment with six different motion patterns. See Visualization 1 for details. Note that the target is a chessboard in the practical experiment for the camera, but the proposed method does not need the chessboard; hence, a white board is used in the video to demonstrate this property.


For each motion pattern, the target was moved back and forth mainly in one DOF. To mimic practical motions, which are often mixtures of multiple DOFs, the target was moved manually in the experiments.

The results are shown in Fig. 8, which illustrates 10 s of continuous motion sensing, including the velocity results (top) and the pose/position results (bottom). As the proposed system only measures velocity and the camera only measures position and pose, the position for the proposed system and the velocity for the camera were calculated by temporal integration and differentiation, respectively. Figure 8 shows that the integrated positions and poses from the proposed system and the camera are in good agreement. Over the 10-s sensing process, the drift was at most 6.76 degrees in rotation and at most 37.36 mm in translation. For velocity sensing, the proposed system clearly outperformed the camera: the velocities acquired by the camera fluctuated strongly because slight errors in position and pose were divided by a small time interval, a problem that does not exist in the proposed system. As illustrated, the velocities from the proposed system were much smoother and more accurate. As a reference, a static object was also measured using the proposed method. The results are presented in Table 1.
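Why differentiating camera poses amplifies noise, while integrating LDV velocities does not, can be seen in a small numerical sketch (the noise levels below are assumed purely for illustration and are not the measured values of either system):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1 / 500                                  # 500 fps camera interval, s
t = np.arange(0, 1, dt)
true_vel = 50 * np.sin(2 * np.pi * t)         # ground-truth velocity, mm/s
true_pos = np.cumsum(true_vel) * dt           # ground-truth position, mm

# Camera path: small position jitter, then finite differencing.
# The jitter is divided by the tiny dt, so the velocity error explodes.
cam_pos = true_pos + rng.normal(0.0, 0.05, t.size)   # 0.05 mm pose error
cam_vel = np.diff(cam_pos) / dt

# LDV path: velocity is measured directly; integration only drifts slowly.
ldv_vel = true_vel + rng.normal(0.0, 1.0, t.size)    # 1 mm/s sensor noise
ldv_pos = np.cumsum(ldv_vel) * dt

cam_err = np.std(cam_vel - true_vel[:-1])   # tens of mm/s
ldv_err = np.std(ldv_vel - true_vel)        # about 1 mm/s
```

With these assumed noise levels, the differentiated camera velocity error is more than an order of magnitude larger than the direct velocity error, mirroring the behavior seen in Fig. 8.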

 

Fig. 8 Results for six motion patterns.



Table 1. Results of 10-s motion measurements

These drifts can be attributed to two factors. The first is error in velocity measurement and calibration, which causes static drift even when the object is not moving. The second is the change in acceleration between two velocity samples on one laser beam: since linear interpolation was used to calculate the velocities, the acceleration was assumed to be constant between the two samples, which may not hold. There are ways to further improve the accuracy of the proposed method. For example, if six LDVs are used to simultaneously measure the velocity from different directions, the change in acceleration will not influence the accuracy of velocity measurement.
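The second factor can be quantified with a short sketch: linearly interpolating a sinusoidal velocity across the revisit gap of one beam (assumed here to be 24 ms, i.e., six scanner steps at roughly 250 Hz) is nearly exact when the acceleration changes slowly, but degrades quickly when it changes fast:

```python
import numpy as np

def midgap_interp_error(freq_hz, gap=0.024, amp=100.0):
    # Error of linearly interpolating a sinusoidal velocity
    # (amplitude `amp` mm/s, frequency `freq_hz`) halfway between
    # two samples of the same beam separated by `gap` seconds.
    v = lambda s: amp * np.sin(2 * np.pi * freq_hz * s)
    t0, t1 = 0.0, gap
    tm = gap / 2
    v_lin = v(t0) + (v(t1) - v(t0)) * (tm - t0) / (t1 - t0)
    return abs(v_lin - v(tm))

err_slow = midgap_interp_error(2.0)    # gentle acceleration change
err_fast = midgap_interp_error(20.0)   # rapid acceleration change
```

For a 2 Hz motion the mid-gap error stays below 1 mm/s, while for a 20 Hz motion it grows to tens of mm/s, which is why revisiting each beam faster (e.g., with six LDVs) removes this error source.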

4.3. Velocity range performance

In this experiment, we evaluated the repeatability of motion measurement for different velocities. The target was initially placed 1500 mm away from the proposed system. DC motors were used to fix the translational and rotational velocities of the target as illustrated in Fig. 9, and the system measured the motion 500 times at each velocity. Specifically, in this experiment the rotational velocity was about the z-axis and the translational velocity was along the z-axis. The true value was calculated by averaging the change in rotation and translation of the chessboard from the camera.

 

Fig. 9 Experimental setup for providing constant rotational and translational velocities.


The mean value and standard deviation of the measured velocities are illustrated in Fig. 10. The rotational and translational speeds are represented by the norm of the corresponding speed vectors, i.e. ‖ω‖ and ‖v‖. The differences between the mean values from the proposed system and the true values were less than 1.3 deg/s in rotation and 3.2 mm/s in translation. The maximum standard deviations in the results were approximately 4.7 deg/s in rotation and 6.7 mm/s in translation. When the velocity increased, the standard deviation of the results in rotation also increased slightly. However, there was no obvious relationship between the standard deviation and translational velocity.

 

Fig. 10 Experiment results over different velocities.


4.4. Sensitivity to different measurement distances

According to the formulation in Eq. (5), the distance between the system and the target does not affect the accuracy of the proposed system. Nevertheless, when the target is far from the system, the laser beams may fail to hit the object; hence, the configuration of the system needs to be adjusted, which can influence its accuracy. In this experiment, we evaluated the performance of the proposed method in measuring a fixed velocity at different distances. The target rotated at a fixed velocity about the z-axis, driven by a DC motor using the same experimental setup as in Fig. 9(a), and was placed at different distances. Because the translational motion changed with distance, we evaluated only the rotational velocity measurements. The distance was varied from 1000 to 1500 mm for one system configuration, and from 1750 to 2250 mm for the other, in which the angles of mirrors MR2 and MR4 were changed. Five hundred motion measurement samples were taken for analysis at each distance.
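How a configuration change affects accuracy can be examined through the condition number κ(A) of the measurement matrix. The sketch below assumes the standard rigid-body relation v_i = d_i · (ω × p_i + v) for beam direction d_i and hit point p_i; the paper's Eq. (5) may parametrize A differently, and the example configurations are hypothetical:

```python
import numpy as np

def measurement_matrix(dirs, pts):
    # Row i: v_i = d_i . (omega x p_i + v) = [p_i x d_i, d_i] . [omega; v]
    return np.array([np.concatenate([np.cross(p, d), d])
                     for d, p in zip(dirs, pts)])

def kappa(dirs, pts):
    return np.linalg.cond(measurement_matrix(dirs, pts))

# A spread-out configuration: beams along all three axes, offset hit points.
dirs_good = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
pts_good = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0],
                     [0, -1, 0], [0, 0, -1], [-1, 0, 0]], float)

# A degenerate configuration: all beams parallel -> rank-deficient matrix.
dirs_bad = np.tile([0.0, 0.0, 1.0], (6, 1))
pts_bad = pts_good
```

The spread-out configuration yields κ(A) close to 1, whereas parallel beams make A rank deficient and κ(A) effectively infinite, consistent with the requirement of at least three distinct viewpoints for 6-DOF reconstruction.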

The results illustrated in Fig. 11 validate the expectation that distance has almost no influence on the accuracy of the proposed system as long as the system configuration is unchanged. The standard deviations were nearly the same for all distances, and there were only small differences in the mean values (less than 1.7 deg/s).

 

Fig. 11 Experiment over different distances when the target was rotating at a constant velocity. The rotational velocity is represented by the norm of the rotational speed ‖ω‖. The experiment was performed under two different system configurations, namely config1 (κA = 520) and config2 (κA = 1100). The directions of MR2 and MR4 were changed between these two configurations to enable all laser beams to hit the same target when it was farther from the system.


As a change in the shape or position of the target only influences the distances between the viewpoints and the target, this result also indicates that the proposed method is robust against targets with different shapes.

4.5. Throughput and latency

During the above experiments, the throughput of the proposed system was approximately 250 Hz and the latency was approximately 20 ms. The proposed method is not computationally expensive; the throughput was limited mainly by the scanning speed of the galvanometer scanner. Thus, the throughput can be increased by using a smaller scanning pattern, although this affects the arrangement of the laser beams and may reduce accuracy. However, there are other ways to boost throughput without affecting accuracy. For instance, multiple LDVs can be used to simultaneously measure the velocities in different directions, eliminating the time for switching between laser beams. If six or more LDVs are used, the 6-DOF motion of the object can be acquired in one shot at each sampling time with the proposed method. In such a case, the throughput and latency would be limited only by the throughput of the LDVs and the computational time.

5. Conclusion

A robust, contactless, 6-DOF motion sensing method based on multi-view laser Doppler measurement was presented. Benefiting from the use of the laser Doppler effect for velocity sensing, the proposed method is robust to non-cooperative objects and environments, which makes it a universal solution for 6-DOF motion sensing. The principles of LDV-based motion sensing and the system configuration were analyzed in detail. The 6-DOF motion sensing was modeled as a linear system, and the necessity of three viewpoints for 6-DOF motion reconstruction was proven by rank analysis. The specific system configuration was optimized by minimizing the condition number of the measurement matrix, which strongly contributed to the accuracy of the proposed system. The proposed method was implemented with a simple system consisting of an LDV, a galvanometer scanner, and four mirrors, and the techniques used in the system configuration were introduced. The experimental results show that the proposed system achieves accurate sensing of arbitrary 6-DOF motion, is robust to non-cooperative targets, and adapts to different distances and velocities with almost no loss of accuracy.


[Crossref]

Scaramuzza, D.

C. Forster, M. Pizzoli, and D. Scaramuzza, “SVO: Fast semi-direct monocular visual odometry,” in 2014 IEEE International Conference on Robotics and Automation (IEEE, 2014), pp. 15–22.
[Crossref]

Schöps, T.

J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-Scale Direct Monocular SLAM,” in Computer Vision – ECCV 2014 (Springer, 2014), pp. 834–849.

Schunck, B. G.

B. K. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence 17, 185–203 (1981).
[Crossref]

Shrestha, S.

S. Shrestha, F. Heide, W. Heidrich, and G. Wetzstein, “Computational Imaging with Multi-camera Time-of-flight Systems,” ACM Trans. Graphic 35, 33 (2016).
[Crossref]

Šmíd, R.

J. Vass, R. Šmíd, R. Randall, P. Sovka, C. Cristalli, and B. Torcianti, “Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics,” Mechanical Sys. Signal Process. 22, 647–671 (2008).
[Crossref]

Sommargren, G. E.

Sovka, P.

J. Vass, R. Šmíd, R. Randall, P. Sovka, C. Cristalli, and B. Torcianti, “Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics,” Mechanical Sys. Signal Process. 22, 647–671 (2008).
[Crossref]

Stepanov, D.

D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, “The Trimmed Iterative Closest Point algorithm,” in Object Recognition Supported by User Interaction for Service Robots, vol. 3 (IEEE, 2002), vol. 3, pp. 545–548.
[Crossref]

Strecha, C.

M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, and P. Fua, “BRIEF: Computing a Local Binary Descriptor Very Fast,” IEEE Trans. PAMI. 34, 1281–1298 (2012).
[Crossref]

Sun, J.

Svirko, D.

D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, “The Trimmed Iterative Closest Point algorithm,” in Object Recognition Supported by User Interaction for Service Robots, vol. 3 (IEEE, 2002), vol. 3, pp. 545–548.
[Crossref]

Torcianti, B.

J. Vass, R. Šmíd, R. Randall, P. Sovka, C. Cristalli, and B. Torcianti, “Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics,” Mechanical Sys. Signal Process. 22, 647–671 (2008).
[Crossref]

Triggs, B.

B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle Adjustment — A Modern Synthesis,” in Proceedings of International Workshop on Vision Algorithms (Springer, 1999), pp. 298–372.

Truax, B. E.

Trzcinski, T.

M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, and P. Fua, “BRIEF: Computing a Local Binary Descriptor Very Fast,” IEEE Trans. PAMI. 34, 1281–1298 (2012).
[Crossref]

Vass, J.

J. Vass, R. Šmíd, R. Randall, P. Sovka, C. Cristalli, and B. Torcianti, “Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics,” Mechanical Sys. Signal Process. 22, 647–671 (2008).
[Crossref]

Wang, X.

M. Ye, X. Wang, R. Yang, L. Ren, and M. Pollefeys, “Accurate 3D pose estimation from a single depth image,” in IEEE International Conference on Computer Vision (IEEE, 2011), pp. 731–738.

Watanabe, Y.

L. Miyashita, R. Yonezawa, Y. Watanabe, and M. Ishikawa, “3D Motion Sensing of Any Object Without Prior Knowledge,” ACM Trans. Graphic. 34, 218 (2015).
[Crossref]

Wetzstein, G.

S. Shrestha, F. Heide, W. Heidrich, and G. Wetzstein, “Computational Imaging with Multi-camera Time-of-flight Systems,” ACM Trans. Graphic 35, 33 (2016).
[Crossref]

F. Heide, G. Wetzstein, M. Hullin, and W. Heidrich, “Doppler Time-of-flight Imaging,” in Proceedings of ACM SIGGRAPH 2015 Emerging Technologies (ACM, 2015), pp. 9.

Wright, M. H.

P. E. Gill, W. Murray, M. H. Wright, and et al., Numerical Linear Algebra and Optimization, vol. 1 (Addison-Wesley, 1991).

Yan, L.

Yang, R.

M. Ye, X. Wang, R. Yang, L. Ren, and M. Pollefeys, “Accurate 3D pose estimation from a single depth image,” in IEEE International Conference on Computer Vision (IEEE, 2011), pp. 731–738.

Ye, M.

M. Ye, X. Wang, R. Yang, L. Ren, and M. Pollefeys, “Accurate 3D pose estimation from a single depth image,” in IEEE International Conference on Computer Vision (IEEE, 2011), pp. 731–738.

Yeh, Y.

Y. Yeh and H. Cummins, “Localized fluid flow measurements with an He–Ne laser spectrometer,” Appl. Phys. Lett. 4, 176–178 (1964).
[Crossref]

Yonezawa, R.

L. Miyashita, R. Yonezawa, Y. Watanabe, and M. Ishikawa, “3D Motion Sensing of Any Object Without Prior Knowledge,” ACM Trans. Graphic. 34, 218 (2015).
[Crossref]

Yusheng, Z.

Zhang, E.

Zhang, S.

Zhang, Z.

A. H. Khawaja, Q. Huang, J. Li, and Z. Zhang, “Estimation of current and sag in overhead power transmission lines with optimized magnetic field sensor array placement,” IEEE Trans. Magnetics 53, 1–10 (2017).
[Crossref]

Zisserman, A.

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).

ACM Trans. Graphic (1)

S. Shrestha, F. Heide, W. Heidrich, and G. Wetzstein, “Computational Imaging with Multi-camera Time-of-flight Systems,” ACM Trans. Graphic 35, 33 (2016).
[Crossref]

ACM Trans. Graphic. (1)

L. Miyashita, R. Yonezawa, Y. Watanabe, and M. Ishikawa, “3D Motion Sensing of Any Object Without Prior Knowledge,” ACM Trans. Graphic. 34, 218 (2015).
[Crossref]

Appl. Opt. (2)

Appl. Phys. Lett. (1)

Y. Yeh and H. Cummins, “Localized fluid flow measurements with an He–Ne laser spectrometer,” Appl. Phys. Lett. 4, 176–178 (1964).
[Crossref]

Artificial Intelligence (1)

B. K. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence 17, 185–203 (1981).
[Crossref]

IEEE Trans. Magnetics (1)

A. H. Khawaja, Q. Huang, J. Li, and Z. Zhang, “Estimation of current and sag in overhead power transmission lines with optimized magnetic field sensor array placement,” IEEE Trans. Magnetics 53, 1–10 (2017).
[Crossref]

IEEE Trans. PAMI. (2)

M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, and P. Fua, “BRIEF: Computing a Local Binary Descriptor Very Fast,” IEEE Trans. PAMI. 34, 1281–1298 (2012).
[Crossref]

P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Trans. PAMI. 14, 239–256 (1992).
[Crossref]

International Journal of Computer Vision (1)

V. Lepetit, F. Moreno-Noguer, and P. Fua, “EPnP: An Accurate O(n) Solution to the PnP Problem,” International Journal of Computer Vision 81, 155 (2008).
[Crossref]

J. Acoustical Soc. Am. (1)

F. Eberhardt and F. Andrews, “Laser heterodyne system for measurement and analysis of vibration,” J. Acoustical Soc. Am. 48, 603–609 (1970).
[Crossref]

J. Fluid Mechanics (1)

W. K. George and J. L. Lumley, “The laser-Doppler velocimeter and its application to the measurement of turbulence,” J. Fluid Mechanics 60, 321–362 (1973).
[Crossref]

J. Vibration Acoustics (1)

S. Rothberg and N. A. Halliwell, “Vibration measurements on rotating machinery using laser Doppler velocimetry,” J. Vibration Acoustics 116, 326–331 (1994).
[Crossref]

Mechanical Sys. Signal Process. (1)

J. Vass, R. Šmíd, R. Randall, P. Sovka, C. Cristalli, and B. Torcianti, “Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics,” Mechanical Sys. Signal Process. 22, 647–671 (2008).
[Crossref]

Opt. Express (6)

Other (14)

S. May, D. Droeschel, S. Fuchs, D. Holz, and A. Nüchter, “Robust 3D-mapping with time-of-flight cameras,” in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2009), pp. 1673–1678.
[Crossref]

M. Ye, X. Wang, R. Yang, L. Ren, and M. Pollefeys, “Accurate 3D pose estimation from a single depth image,” in IEEE International Conference on Computer Vision (IEEE, 2011), pp. 731–738.

S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in Proceedings of Third International Conference on 3-D Digital Imaging and Modeling (IEEE, 2001), pp. 145–152.
[Crossref]

D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, “The Trimmed Iterative Closest Point algorithm,” in Object Recognition Supported by User Interaction for Service Robots, vol. 3 (IEEE, 2002), vol. 3, pp. 545–548.
[Crossref]

F. Heide, G. Wetzstein, M. Hullin, and W. Heidrich, “Doppler Time-of-flight Imaging,” in Proceedings of ACM SIGGRAPH 2015 Emerging Technologies (ACM, 2015), pp. 9.

B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, “Bundle Adjustment — A Modern Synthesis,” in Proceedings of International Workshop on Vision Algorithms (Springer, 1999), pp. 298–372.

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).

C. Forster, M. Pizzoli, and D. Scaramuzza, “SVO: Fast semi-direct monocular visual odometry,” in 2014 IEEE International Conference on Robotics and Automation (IEEE, 2014), pp. 15–22.
[Crossref]

J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-Scale Direct Monocular SLAM,” in Computer Vision – ECCV 2014 (Springer, 2014), pp. 834–849.

G. Adiv, “Determining three-dimensional motion and structure from optical flow generated by several moving objects,” IEEE Trans. PAMI. pp. 384–401 (1985).
[Crossref]

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in 2011 International Conference on Computer Vision (IEEE, 2011), pp. 2564–2571.
[Crossref]

N. J. Higham, Accuracy and Stability of Numerical Algorithms (SIAM, 2002).
[Crossref]

D. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice Hall, 2011).

P. E. Gill, W. Murray, M. H. Wright, and et al., Numerical Linear Algebra and Optimization, vol. 1 (Addison-Wesley, 1991).

Supplementary Material (1)

Visualization 1: An intuitive demonstration of the experiment in Sec. 4.2.



Figures (11)

Fig. 1
Fig. 1 The 6-DOF motion of a rigid body measured by a laser from the LDV. The rigid body moves with rotational velocity ω and translational velocity v, and is hit by a laser beam Li from the LDV at point pi. Points on Li can be denoted by oi + s li, where oi is any point on Li, li is the direction vector, and s is a scalar. For brevity, we refer to oi as the "viewpoint" and li as the "direction" of each laser beam Li. The velocity of pi, namely vi, consists of two components, one induced by the rotational velocity and one by the translational velocity of the object. The velocity measured by the LDV is the projection of vi onto the laser beam direction li, namely the scalar vi = vi · li.
Fig. 2
Fig. 2 Numerical simulation of different system configurations. (a) Illustration of the constraints on the system configuration during simulation. The three viewpoints o1, o2, and o3 lie on the gray plane α (z = 0). In the simulation, the points where the laser beams hit the target are assumed to be fixed at certain points on the surface of the target. The color of each laser beam indicates the viewpoint it passes through. (b) Illustration of κA as d changes.
Fig. 3
Fig. 3 Schematic of the proposed 6-DOF motion sensing system with a single LDV.
Fig. 4
Fig. 4 System implementation. The distances between viewpoints, i.e., ‖o2 − o1‖ and ‖o3 − o1‖, are 685 mm and 651 mm, respectively. The angle between the two laser beams from one viewpoint is 4.31 deg, and κA = 520.
Fig. 5
Fig. 5 Simulation of the condition number of A(a)(c) when the origin of the coordinate system is changed. The origins are randomly generated in a cube of size 1000 × 1000 × 3000. The corresponding condition number is indicated by the color.
Fig. 6
Fig. 6 An example of the calculation pipeline.
Fig. 7
Fig. 7 Experiment with six different motion patterns. See Visualization 1 for details. Note that in the practical experiment the target is a chessboard for the camera-based reference, but the proposed method does not need the chessboard; hence, a white board is used in the video to demonstrate this property.
Fig. 8
Fig. 8 Results for six motion patterns.
Fig. 9
Fig. 9 Experimental setup for providing constant rotational and translational velocities.
Fig. 10
Fig. 10 Experimental results over different velocities.
Fig. 11
Fig. 11 Experiment over different distances while the target was rotating at a constant velocity. The rotational velocity is represented by the norm of the rotational speed ‖ω‖. The experiment was performed under two different system configurations, namely config1 (κA = 520) and config2 (κA = 1100). The directions of MR2 and MR4 were changed between these two configurations so that all laser beams could hit the same target when it was farther from the system.
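The origin-dependent conditioning explored in Fig. 5 can be sketched numerically. The following Python/NumPy snippet is only illustrative: it assumes a hypothetical six-beam configuration (the viewpoint spacing loosely follows Fig. 4, but the direction vectors are invented), samples candidate coordinate-system origins c in a 1000 × 1000 × 3000 box, and keeps the one that minimizes the condition number of the shifted measurement matrix, mirroring the selection c* = arg min κ:

```python
import numpy as np

def unit(d):
    """Normalize a direction vector."""
    d = np.asarray(d, dtype=float)
    return d / np.linalg.norm(d)

def shifted_matrix(viewpoints, directions, c):
    """A(a)(c): rows [((o_i - c) x l_i)^T, l_i^T] with the origin moved to c."""
    return np.array([np.concatenate([np.cross(o - c, l), l])
                     for o, l in zip(viewpoints, directions)])

# Hypothetical 6-beam layout: two beams from each of three viewpoints.
viewpoints = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                       [685.0, 0.0, 0.0], [685.0, 0.0, 0.0],
                       [0.0, 651.0, 0.0], [0.0, 651.0, 0.0]])
directions = np.array([unit(d) for d in
                       [[0.000, 0.000, 1.0], [0.075, 0.000, 1.0],
                        [-0.300, 0.050, 1.0], [-0.225, -0.050, 1.0],
                        [0.050, -0.300, 1.0], [-0.050, -0.225, 1.0]]])

# Sample candidate origins in a 1000 x 1000 x 3000 box, then keep the
# origin with the smallest condition number of the shifted matrix.
rng = np.random.default_rng(0)
candidates = rng.uniform([-500.0, -500.0, -1500.0],
                         [500.0, 500.0, 1500.0], size=(2000, 3))
kappas = np.array([np.linalg.cond(shifted_matrix(viewpoints, directions, c))
                   for c in candidates])
c_star = candidates[np.argmin(kappas)]
print(c_star, kappas.min())
```

A denser sampling (or a proper optimizer) would refine c*; the sketch only conveys why the choice of coordinate origin changes the conditioning, and hence the noise sensitivity, of the same physical configuration.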

Tables (1)


Table 1 Results of 10-s motion measurements

Equations (15)


$v_i = \omega \times p_i + v$

$v_i = (\omega \times p_i + v)^\mathsf{T} l_i = (p_i \times l_i)^\mathsf{T} \omega + l_i^\mathsf{T} v$

$v_i = \left( (p_i - o_i) \times l_i + o_i \times l_i \right)^\mathsf{T} \omega + l_i^\mathsf{T} v$

$v_i = (o_i \times l_i)^\mathsf{T} \omega + l_i^\mathsf{T} v$

$\begin{pmatrix} (o_1 \times l_1)^\mathsf{T} & l_1^\mathsf{T} \\ (o_2 \times l_2)^\mathsf{T} & l_2^\mathsf{T} \\ \vdots & \vdots \\ (o_N \times l_N)^\mathsf{T} & l_N^\mathsf{T} \end{pmatrix} \begin{pmatrix} \omega \\ v \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_N \end{pmatrix}$

$X = A^{+} b$

$r_j^{(1)} = \begin{bmatrix} (o_1 \times l_j)^\mathsf{T} & l_j^\mathsf{T} \end{bmatrix}$

$\dim\left(\operatorname{span}\{ r_j^{(1)} \}\right) \leq 3$

$r_k^{(2)} = \begin{bmatrix} ((o_1 + q_1) \times l_k)^\mathsf{T} & l_k^\mathsf{T} \end{bmatrix} = \begin{bmatrix} (o_1 \times l_k)^\mathsf{T} & l_k^\mathsf{T} \end{bmatrix} + \begin{bmatrix} (q_1 \times l_k)^\mathsf{T} & 0 \end{bmatrix}$

$\dim\left(\operatorname{span}\{ r_j^{(1)}, r_k^{(2)} \}\right) \leq 5$

$r_m^{(3)} = \begin{bmatrix} ((o_1 + q_2) \times l_m)^\mathsf{T} & l_m^\mathsf{T} \end{bmatrix} = \begin{bmatrix} (o_1 \times l_m)^\mathsf{T} & l_m^\mathsf{T} \end{bmatrix} + \begin{bmatrix} (q_2 \times l_m)^\mathsf{T} & 0 \end{bmatrix}$

$\dfrac{\| \tilde{X} - X \|_2}{\| X \|_2} \leq \kappa_A \cdot \dfrac{\| \tilde{b} - b \|_2}{\| b \|_2}$

$A^{(a)}(c) = \begin{pmatrix} ((o_1 - c) \times l_1)^\mathsf{T} & l_1^\mathsf{T} \\ ((o_2 - c) \times l_2)^\mathsf{T} & l_2^\mathsf{T} \\ \vdots & \vdots \\ ((o_6 - c) \times l_6)^\mathsf{T} & l_6^\mathsf{T} \end{pmatrix}$

$c^{*} = \arg\min_{c} \kappa_{A^{(a)}(c)}$

$p_w = \dfrac{n^\mathsf{T} n}{n^\mathsf{T} \left( K_c^{-1} p_i \right)} K_c^{-1} p_i$
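The stacked linear system above lends itself to a direct numerical sketch. The following illustrative Python/NumPy snippet builds the measurement matrix A for a hypothetical six-beam configuration (the viewpoint spacing loosely follows Fig. 4, but the direction vectors are invented), simulates noiseless Doppler readings, and recovers X = (ω; v) with the pseudoinverse X = A⁺b:

```python
import numpy as np

def unit(d):
    """Normalize a direction vector."""
    d = np.asarray(d, dtype=float)
    return d / np.linalg.norm(d)

def measurement_matrix(viewpoints, directions):
    """Row i is [(o_i x l_i)^T, l_i^T], matching the stacked system A (ω; v) = b."""
    return np.array([np.concatenate([np.cross(o, l), l])
                     for o, l in zip(viewpoints, directions)])

# Hypothetical 6-beam layout: two beams from each of three viewpoints.
viewpoints = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                       [685.0, 0.0, 0.0], [685.0, 0.0, 0.0],
                       [0.0, 651.0, 0.0], [0.0, 651.0, 0.0]])
directions = np.array([unit(d) for d in
                       [[0.000, 0.000, 1.0], [0.075, 0.000, 1.0],
                        [-0.300, 0.050, 1.0], [-0.225, -0.050, 1.0],
                        [0.050, -0.300, 1.0], [-0.050, -0.225, 1.0]]])

A = measurement_matrix(viewpoints, directions)

# Ground-truth motion X = (ω; v) and the corresponding noiseless readings.
X_true = np.array([0.10, -0.20, 0.05, 3.0, 1.0, -2.0])
b = A @ X_true                     # each b_i = (o_i x l_i)·ω + l_i·v

# Recover the motion with the pseudoinverse, X = A⁺ b.
X_hat = np.linalg.pinv(A) @ b

print(np.allclose(X_hat, X_true))  # exact recovery in the noiseless case
print(np.linalg.cond(A))           # κ_A for this configuration
```

With six well-spread beams from three non-collinear viewpoints the 6 × 6 matrix has full rank, so the noiseless recovery is exact; the printed condition number κ_A indicates how strongly real measurement noise on b would be amplified into the estimated motion, which is why the paper uses it as a configuration-quality measure.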
