Abstract

In this paper, a new method for measuring the rotation angle of a rotor, named the ‘visual encoder,’ is proposed. The method is based on the principles of the vision-based method and the optical encoder, and is realized using a high-speed vision system. The visual encoder offers advantageous features such as non-contact operation, high resolution, and robustness against free motion and fluctuation of the rotation axis. A high resolution method to further increase the measurement resolution is also suggested. The accuracy and robustness of the visual encoder were confirmed through experimental verification, and operation was possible at 6,000 rpm even under fluctuation of the rotation axis.

© 2016 Optical Society of America

1. Introduction

The measurement of the rotation angle of a rotor plays an essential role in the control of many mechanical systems, and has led to the development of various types of rotary encoders, as summarized by Dimmler et al. [1]. Among conventional encoders, the optical rotary encoder has been broadly used for servo motors because of its advantages over other types, such as easy signal processing and high resolution. However, the optical encoder has a structural limitation: the photosensors must be mounted close to the disc on the rotor, and only a small tolerance is allowed in their relative positions. This is why most encoders are embedded in servo actuators, and why the sensing area is restricted to the inside volume of the encoder.

If the photosensor could be separated from the actuator and the working distance increased, remote sensing of the rotation angle would become possible, leading to more flexible applications. For example, a robot arm with multiple links could be controlled without attaching a conventional encoder at each joint, reducing the weight and wiring cost of the robot itself and improving its response time and usability. As another possible application, the rotation speed of the wheel of a fast-moving automobile, or of the fan of a drone-like flight vehicle in the air, could be measured at high resolution in world coordinates. Such measurements can help manufacturers and users with failure detection as well as with automatic vehicle control.

Recently, computer vision has been used for visual tracking of objects [2], with techniques such as SIFT [3] and SURF [4]. Since the vision-based method is applicable to objects moving relatively far from the observer, it is well suited to the applications mentioned above. Many studies on rotation measurement of a rotor using vision-based methods have also been reported [5–10]. Kwon et al. suggested a high-resolution angle-sensing mechanism using a gradient color track and an RGB sensor [5]. Suzuki et al. reported a measurement method for small rotation angles at very high resolution, with an accuracy of 0.4 arcsec [6]. Li et al. measured the rotation angle of a rotor using a calibration pattern with a spot array [7]. Lee et al. developed a real-time optical sensor for three-DOF motion [8]. Kadowaki et al. measured the rotation angle of a flying golf ball using a one-line scanner [9]. Watanabe et al. reported an angle-position measurement method based on high-speed image processing, and showed that the rotation angle of a sphere-shaped marker can be detected at up to 1,200 revolutions per minute (rpm) even under free motion of the rotation axis [10]. Each of these studies has its own strength: cost-effectiveness and easy deployment [5], high resolution [6, 7], the ability to measure three-DOF motion [8], fast motion of the target object [9], and detection of high rotation speed with robustness [10], respectively. However, because of the image processing cost, it has been difficult to achieve high resolution, high-speed rotation measurement, and robustness against fluctuation of the rotation axis simultaneously.

In this paper, we introduce an alternative method, named the ‘visual encoder,’ that can detect the rotation angle of a rotor in more dynamic environments, using high-speed vision and an RGB color pattern aligned on the rotor. The main concept of the visual encoder is derived from our previous works [11, 12], but is generalized and verified by various experiments here. The visual encoder exhibits many of the advantageous features of the aforementioned methods: non-contact, high-resolution, high-speed detection, and measurement that is robust to free motion and fluctuation of the rotation axis. No single feature is new; however, their combination opens new possibilities, since it brings the expandability and flexibility of the vision-based method to the conventional encoder system, and vice versa.

This paper consists of three main parts: the principle of the visual encoder, a method to improve the measurement resolution, and experimental verification of the visual encoder’s performance. In the experiments, the maximum speed measurable by the visual encoder and the measurement resolution of the suggested method are verified, and the robustness of the visual encoder against fluctuation of the rotation axis is demonstrated under dynamic conditions.

2. Principle of visual encoder

The main concept of the visual encoder is a combination of the conventional disc pattern of the optical encoder and the vision-based method, achieved by a high-speed vision system. As shown in Fig. 1, the visual encoder consists of a switching RGB pattern on the rotor and a high-speed RGB camera, which recognizes the sequentially changing color in the RGB pattern during rotation of the disc. The visual encoder basically adopts the switching-pattern design of the conventional optical encoder, but uses a different method to recognize the rotation. To recognize the rotation of the pattern on a disc, the read-out position on the disc must first be unique in the coordinates with respect to the photosensor. In a conventional optical encoder this is not a problem, because the relative position between the center of the rotating disc and the photosensor is always fixed. In the visual encoder, however, the unique measurement point is determined by image processing, and the high-speed vision system helps improve the reliability of this determination. In addition, the disc pattern used in the optical encoder is designed for a photosensor located near the disc. Therefore, for the visual encoder, the pattern itself must be modified so that it can be recognized by the camera.

 

Fig. 1 Concept of visual encoder. (a) The shape of marker is based on the disc of the optical encoder but R-G-B colors replace the repeating transparent-opaque pattern. (b) Rotation angle of rotor is measured by the vision-based method. Tracking of the image center of the marker keeps the position of the color detecting point unique.


2.1. Pattern design by modifying the optical disc

There are several types of pattern for detecting the rotation angle, as shown in Fig. 2. The simplest pattern used on an optical disc is the 1-phase incremental type, where pairs of transparent and opaque slits are arranged along the circumference of the disc. Depending on whether the light ray can pass through the current slit, the signal from the photosensor switches between two states: a high or low voltage level. Since the number of slits on the disc is known, the rotation angle can be calculated by counting the state changes. However, the direction of rotation cannot be detected, since the state changes regardless of the direction. Therefore, to detect the direction, a 2-phase incremental pattern must be used.

 

Fig. 2 Pattern design of the disc. (a) The pattern used on an incremental optical encoder. The 2-phase type is generally used to detect the rotation angle and the direction of rotation. (b) The color-gradient type. A reflection model of light is needed to measure the rotation angle accurately. (c) The switching RGB type. The three-state structure allows detection of the direction of rotation as well as the rotation angle.


The visual encoder requires a color pattern instead of the optical pattern, and exploits three basic colors: red, green, and blue (RGB). The colors are arranged in turn along the circumference of the disc, and the reflected light is captured by the RGB camera. The switching RGB color pattern has one phase and three states, and has the advantage that it can detect not only the rotation angle but also the direction of rotation at the same time. The rotation angle is calculated by counting the state changes, in the same way as in the optical encoder, and the direction of rotation by checking the order in which the colors change. Since the switching RGB pattern uses three basic colors to represent the state of the rotation, instead of the intensity of the light as in the optical encoder or the color-gradient model [5], no reflection model of the light intensity is required for the angle measurement. Furthermore, since all three basic color components are always captured by the camera, and only a comparison to determine the dominant color among the three is required, the switching RGB color pattern is robust against changes in the surrounding brightness of the lighting condition. In fact, the RGB color space can be converted into the HSV color space, where the color is represented by hue as a kind of normalized color, which further reduces the influence of variations in the brightness of the light.
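As a minimal sketch of this dominant-color comparison (our illustration, not the authors' implementation), a pixel's RGB value can be converted to hue and classified into the nearest basic color:

```python
import colorsys

def dominant_color(r, g, b):
    """Classify an 8-bit RGB pixel as 'R', 'G', or 'B' by its hue.

    Hue is largely insensitive to brightness, so the classification
    survives changes in the surrounding lighting condition.
    """
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    deg = h * 360.0  # red ~ 0/360, green ~ 120, blue ~ 240 degrees
    if deg < 60.0 or deg >= 300.0:
        return 'R'
    return 'G' if deg < 180.0 else 'B'
```

Because only the hue matters, a darker reading of the same patch (e.g. (120, 10, 10) instead of (250, 10, 10)) yields the same label.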

2.2. Angle measurement by vision-based method

The measurement of the rotation angle of a rotor using the visual encoder is based on the vision-based method and consists of two main parts: position matching of the camera sensor and the color disc to determine a detecting point, and extraction of the color at the detecting point to calculate the rotation angle. These two parts are explained in Sections 2.2.1 and 2.2.2, respectively, and the whole process is presented in Fig. 3.

 

Fig. 3 Flow from image capture to calculation of the rotation angle. (a) Determination process to keep the position of the detecting point unique. A pixel shifted from the centroid of the object area in the image plane is inspected to detect its dominant color. An ROI is set to reduce the calculation cost and thereby increase the sampling rate. (b) Extraction process to acquire the rotation variables. Ncnt and Nstate count the full turns and sub-turns of the rotor, respectively. The high resolution method can be used to enhance the measurement resolution of the rotation angle, with Nsub as the variable for that process. The rotation angle N is calculated from Ncnt and Nstate (and Nsub in the high resolution case). (c) Example of (a). (d) Example of (b).


2.2.1. Position matching of camera sensor and the color disc

Considering the geometry of the rotating disc with the color pattern attached along its circumference, the intersection point of the rotation axis in three-dimensional space with the surface plane of the disc pattern is projected onto the centroid of the disc in the image plane. Since the centroid is determined uniquely, it serves as the basis of the coordinates when tracking the rotation. The simplest way to obtain the position of the centroid in the image is to perform object extraction by background subtraction, binarization, and calculation of the image moments from the binary image. To simplify the object extraction, a black background can be used.

Our new disc pattern has a distinctive feature in that it consists of three basic colors with unique hues, so the disc can be easily recognized by the camera and extracted from the black background. The hues of red, green, and blue are around 0 (or 360), 120, and 240 degrees, respectively, and the saturations of these basic colors are very high in the HSV color space. The pixel intensity of the white area at the center of the disc can also be used as an alternative basis to reduce the calculation cost. After capturing the image, the binary image is easily acquired by thresholding. Then the centroid C(xc, yc) is calculated from the binary image as in Eq. (1).

$$C(x_c, y_c) = \left( \frac{M_{10}}{M_{00}}, \frac{M_{01}}{M_{00}} \right) \tag{1}$$
where Mpq is the image moment of order (p + q), with p and q non-negative integers. When I(x, y) is the intensity at pixel (x, y) in the image coordinates, Mpq is calculated by the following equation.
$$M_{pq} = \sum_{x} \sum_{y} x^p y^q I(x, y) \tag{2}$$
In the binary image, since I(x, y) takes the value zero or one, the result of Eq. (1) is the centroid.

The detecting point D(xd, yd) for the color detection is determined by line-scanning from the centroid. As the base coordinates for detecting rotation, we choose a Cartesian coordinate system translated from the image coordinates so that its origin lies at the centroid C(xc, yc). Line-scanning is conducted from C(xc, yc) along the X direction of the image coordinates to find the two color boundaries on the scanned line. The center point of the two boundary points is a reasonable choice for the detecting point (as in Fig. 1). The rotation angle of the rotor is then measured by detecting and tracing the color at the detecting point D(xd, yd).
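The centroid computation of Eqs. (1)–(2) and the line scan can be sketched as follows (our illustration; it assumes an already binarized image and a boolean mask marking the color-ring pixels, both hypothetical inputs):

```python
import numpy as np

def centroid(binary):
    """Centroid C(xc, yc) of a binary image from the raw moments
    M00, M10, M01, as in Eqs. (1)-(2)."""
    ys, xs = np.nonzero(binary)             # coordinates of object pixels
    m00 = xs.size                           # M00: object area
    return xs.sum() / m00, ys.sum() / m00   # (M10/M00, M01/M00)

def detecting_point(ring_mask, xc, yc):
    """Line-scan along +X from the centroid; the detecting point is the
    midpoint of the two color-ring boundaries found on the scanned line."""
    row = ring_mask[int(round(yc)), int(round(xc)):]
    hits = np.nonzero(row)[0]         # ring pixels on the scan line
    inner, outer = hits[0], hits[-1]  # the two color boundaries
    return xc + (inner + outer) / 2.0, yc
```

In practice the binarization step would threshold on saturation or on the white center area, as described above.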

2.2.2. Measurement of rotation angle using visual encoder

The mechanism for measuring the rotation angle with the visual encoder is quite simple. Let Ncnt and Nstate be the counters that indicate the number of revolutions the disc has rotated and the number of color blocks that have passed, respectively, and let M be the number of RGB color blocks arranged along the circumference of the disc. When the color at the detecting point changes, the rotation direction is determined by the intended order of the RGB pattern. For example, if we define the forward direction of rotation as the sequence R → G → B, then the rotation is in the reverse direction when the color at the detecting point changes as B → G → R. Therefore, according to the rotation direction, Nstate is counted up or down by one. When Nstate reaches M, the disc has completed one revolution, so Ncnt is increased or decreased by one depending on the rotation direction, and Nstate is reset to zero. The rotation angle of the disc is calculated as in Eq. (3).

$$N\,[\mathrm{rad}] = 2\pi \left( N_{cnt} + \frac{N_{state}}{M} \right) \tag{3}$$
Note that the measurement resolution is determined by M in the second term on the right-hand side. A larger M gives a higher measurement resolution; the actual resolution equals 1/M [rev]. Theoretically, M can be chosen according to the user’s purpose, as far as the resolution of the color printer allows. In practice, however, M is limited to a certain value by the sampling rate of the camera. In principle, skipping a color block without detecting it is not allowed, lest the direction of rotation be misrecognized. In other words, the detecting point must read consecutive colors between successive frames within one sampling period, i.e., one frame period in the case of camera vision. Therefore, the frame rate determines the maximum rotation speed of the rotor, Vmax, that the visual encoder can measure, as given by the following equation.
$$V_{max}\,[\mathrm{rpm}] = \frac{Q\,[\mathrm{frame/sec}]}{M\,[\mathrm{frame/rev}]} \times 60\,[\mathrm{sec/min}] \tag{4}$$
where Q is the frame rate of the camera, and M, the number of color blocks, is the minimum requirement for skip-less detection. Since Vmax is proportional to Q, a higher Q allows a lower M for a given Vmax. This is why a high-speed camera is used in the visual encoder system to achieve high resolution. For instance, if a high-speed camera offers a frame rate of 1,000 fps and the desired Vmax equals 1,000 rpm, the resolution (1/M) is 1/60 [rev]. It is obvious that a higher Q contributes to a higher resolution, as shown in Fig. 4. The M–Vmax–Q map helps users select the parameters when designing a visual encoder.

 

Fig. 4 The relation between M and Vmax as Q varies. M is the number of color blocks on the disc, and Vmax is the maximum rotation speed of the rotor that the visual encoder can measure. Q is the frame rate of the camera (fps) used in the visual encoder. A higher Q helps to achieve a higher resolution in the measurement of the rotation angle.


3. High resolution method of visual encoder

In the aforementioned example (Q = 1,000 [fps] and Vmax = 1,000 [rpm]), the resolution was 1/60 revolution (6 degrees). There is a trade-off between maximum speed and measurement resolution when the frame rate of the camera is fixed, and a resolution of 1/60 revolution is too low for many applications. For this reason, a compensation method to improve the resolution is required. Here we suggest a high resolution method (HR) that searches for color boundaries in both directions, forward and backward. First, at the current detecting point, an imaginary curve is drawn that approximately overlaps the past trajectory of that position on the disc and its estimated future trajectory. If the rotation axis is almost perpendicular to the image plane of the camera, the imaginary curve is an arc. Suppose that the detecting point D(xd, yd) draws an arc whose center is located at the centroid C(xc, yc), and that the searching point is located at position S(xs, ys), as shown in Fig. 3(d). Then the relation of the three points is described as follows:

$$\begin{bmatrix} x_s \\ y_s \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_d - x_c \\ y_d - y_c \end{bmatrix} + \begin{bmatrix} x_c \\ y_c \end{bmatrix} \tag{5}$$
where θ (−2π/M < θ < 2π/M) is the angle ∠SCD.

Changing θ from zero toward the upper limit, the color at S(xs, ys) is inspected until it differs from the color at D(xd, yd). Let this position be SU(xsu, ysu) and this angle be θU. The identical process is carried out toward the lower limit of θ to determine SL(xsl, ysl) and θL. With θU and θL, a modified version of Eq. (3) is obtained as Eq. (6).

$$N\,[\mathrm{rad}] = 2\pi \left( N_{cnt} + \frac{N_{state}}{M} + \frac{N_{dir}}{M} \cdot \frac{\theta_U}{\theta_U + \theta_L} \right) \tag{6}$$
where Ndir is a variable indicating the direction of rotation, as follows.
$$N_{dir} = \begin{cases} 1 & \text{if } N_i \geq N_{i-1} \\ -1 & \text{if } N_i < N_{i-1} \end{cases} \tag{7}$$
Ni indicates the rotation angle at discrete time i, and Ni−1 the angle at time i − 1. As noted in Eq. (6), the resolution is improved by the factor θU/(θU + θL). Since this factor depends on the resolution of the camera image and the size of the color blocks in the image plane, it is reasonable to verify the performance in actual experiments rather than by theoretical evaluation.
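The geometry of Eqs. (5)–(7) can be sketched as follows (our illustration, not the authors' code; the scan over θ and the color lookup at each searching point are left to the caller):

```python
import math

def searching_point(xd, yd, xc, yc, theta):
    """S(xs, ys): the detecting point D rotated by theta about the
    centroid C, following the rotation of Eq. (5)."""
    dx, dy = xd - xc, yd - yc
    xs = math.cos(theta) * dx - math.sin(theta) * dy + xc
    ys = math.sin(theta) * dx + math.cos(theta) * dy + yc
    return xs, ys

def hr_angle(n_cnt, n_state, m, theta_u, theta_l, n_dir):
    """Rotation angle N [rad] with the sub-block correction of Eq. (6).

    theta_u / (theta_u + theta_l) interpolates the position of the
    detecting point within the current color block; n_dir is +/-1
    according to the rotation direction, Eq. (7)."""
    return 2.0 * math.pi * (n_cnt + n_state / m
                            + n_dir * (theta_u / (theta_u + theta_l)) / m)
```

For example, a detecting point exactly in the middle of a block (θU = θL) adds half a block, i.e. 1/(2M) of a revolution, to the counter-based angle.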

4. Experimental results

4.1. Experimental setup

Two main experiments were conducted to verify the performance of the visual encoder. In the first experiment, the performance of the visual encoder was verified under the condition that the rotation axis of the rotor is fixed, the same condition under which conventional encoders are used. In the second experiment, the robustness of the visual encoder was tested by measuring the rotation angle under fluctuation of the rotation axis, a condition that conventional encoders cannot handle.

For the experiments, we set up a testbed consisting of the visual encoder and a reference rotary system, as shown in Fig. 5. A high-speed camera, a rotary disc, and a PC for image processing formed the visual encoder system. We used a high-speed RGB camera (Mikrotron EoSens MC1363) with a Camera Link interface and acquired images at 1,000 fps with a resolution of 400 × 600 pixels. The image capture via the high-speed camera and the image processing on the PC were finished within 1 millisecond, and the calculated rotation angle was shared with a real-time controller through 100 Mbps Ethernet. The PC was equipped with an Intel Xeon E5-2609 processor with a base frequency of 2.4 GHz and ran Windows 7. The printed color pattern was attached to the rotary disc and placed in front of the camera. The number of color blocks used in the experiments was six (M = 6) in both cases. Ring-type LED lamps were placed around the camera lens to supply sufficient light to the measurement system.

 

Fig. 5 Testbed of the visual encoder with a real-time controller. The whole system works at 1 kHz, as both the sampling rate and the control frequency. (a) Visual encoder with a high-speed camera at 1,000 fps. (b) Reference system with an optical encoder.


A conventional servo actuator, consisting of an electric motor and an optical encoder, was used to read the reference rotation angle of the rotor. The rotary system was controlled by a motor driver connected to a real-time controller (dSPACE). The real-time controller operated at a system frequency of 1 kHz and received the rotation angle measured by the visual encoder through the 100 Mbps Ethernet. The receiving frequency was also 1 kHz, and all processes were synchronized by the real-time controller at the system frequency.

Measurement stability is important for the visual encoder, because it is a sensor based on an incremental counter. Once a count is missed for any reason, the measurement error due to this counting failure persists from that moment as an accumulated error, unless it is corrected using another reference, such as the Z-phase of an optical encoder. The faster the rotor rotates, the higher the probability of failure, because consecutive colors in the visual encoder become blended due to motion blur. However, the use of a high-speed camera reduces both the motion blur and the counting failures.

4.2. Accuracy verification at various speeds

To verify the stability and accuracy of the visual encoder measurement, we conducted two kinds of experiments: in the first, the angle of the rotor was measured at various constant rotation speeds; in the second, the rotor was accelerated to its maximum speed.

4.2.1. Rotation at various constant speeds

At various speeds, the forward and backward rotation of the rotor continued for 10 seconds, and the rotation angles measured by the visual encoder and the optical encoder were compared. The rotation speed was increased from 1,000 rpm to 8,000 rpm in 1,000 rpm increments. At constant speeds, the visual encoder was able to measure the rotation angle correctly, within the designed resolution, at speeds of up to 7,000 rpm, as shown in Fig. 6. Each dashed line represents the true rotation angle as a reference, and the rotation angle measured by the visual encoder, shown as a continuous line, overlapped it entirely in each case. The number of color blocks was six (M = 6), so the resolution equals π/3 [rad]. As a reference, two red dashed lines indicate the measurement tolerance. The error stayed inside these two lines at all speeds except 8,000 rpm, where the measurement failed.

 

Fig. 6 Measurement of rotation angle at constant speed of the rotor. (upper) Each dashed line indicates the reference angle measured by the optical encoder, and each continuous line by the visual encoder, at each rotation speed [rpm]. (lower) Errors between the two measurements. Two red dashed lines indicate the designed resolution. As long as the errors stay inside two lines, the measurement using the visual encoder is stable.


4.2.2. Rotation with speed acceleration

The rotor was accelerated up to 10,000 rpm, the maximum speed that the actuator can generate, and the rotation angle was measured simultaneously by both the visual encoder and the optical encoder. The result shown in Fig. 7 helps to pinpoint the speed limitation more precisely. When the reference speed of the rotor was increased toward 10,000 rpm, the measurement failed at a certain speed. Although the theoretical limit was 10,000 rpm, as calculated from Eq. (4) with M = 6 and Q = 1,000, the experimental limit turned out to be about 8,000 rpm. This degradation is considered to be due to motion blur, which mixes the colors and makes it difficult to determine the current color at the detecting point. A higher frame rate with a shorter exposure time could help to raise the maximum speed.

 

Fig. 7 Acceleration test to verify the maximum detectable speed. (upper) The rotation angle measured by the optical encoder (blue) as ground truth, and by the visual encoder (red). (middle) Reference rotor was rotated at constant acceleration. (lower) As the rotation speed is increased, the error can exceed the tolerance at a certain moment (here, at 12.8 sec). This results in the measurement failure (at 7740 rpm).


4.3. Verification of robustness and high resolution method

While measuring the rotation angle, sinusoidal oscillations were added to the rotation axis in the X and Y directions of the image coordinates. The trajectory of the rotation axis under these oscillations drew a circle, as shown in Fig. 8. Fluctuation was added to the circular motion to make the shape of the trajectory more complicated. To achieve this motion, we used a robotic finger with three joints and attached the visual encoder to the tip of the robotic finger. The amplitude and frequency of the oscillation were 5 mm and 1 Hz, respectively. Two rotation speeds of the rotor, 1,000 rpm and 6,000 rpm, were measured by the visual encoder under the aforementioned conditions. The measurement result at 6,000 rpm is shown in Fig. 9. The visual encoder worked correctly even under the motion and fluctuation of the rotation axis, since all the errors stayed within the measurement tolerance.

 

Fig. 8 The motion of the rotation axis in fluctuation. When sinusoidal oscillation is introduced in X and Y direction, the trajectory of the rotation axis draws a circle. The amplitude and the frequency were 5 mm and 1 Hz, respectively. The fluctuation in the motion was intentionally generated, to test the robustness of the visual encoder.


 

Fig. 9 Rotation angle and the measurement error during the motion shown in Fig. 8. (upper) The rotation angle of the rotor measured by the visual encoder, at 6,000 rpm. Magnified graph shows that the high resolution method improved the measurement accuracy. (lower) The measurement error was substantially reduced by the high resolution method.


The high resolution method (HR) was also verified; the result is shown as a red line in Fig. 9. The magnified view in the upper graph shows that HR reduces the quantization error present in the normal measurement method (blue line). A comparison of the two measurement results, with HR and with the normal measurement, is shown in Table 1. The measurement error was significantly decreased by the high resolution method, and the measurement resolution was enhanced by more than a factor of 10. The actual sequential images used in the image processing for HR are shown in Fig. 10.


Table 1. Performance of High Resolution Method (HR)

 

Fig. 10 Sequential images of color pattern on the disc in image processing. Images were taken at the rotation speed of (a) 1,000 rpm and (b) 6,000 rpm, respectively. A color-filled dot indicates the detecting point, and two white dots represent SU and SL in section 3.


5. Conclusion

In this paper, we proposed the ‘visual encoder’ as a new method for measuring the rotation angle of a rotor, together with a high resolution method, and demonstrated its performance through experimental verification.

The principle of the visual encoder is based on the conventional optical encoder and the vision-based method, and it inherits the beneficial features of both: non-contact, high resolution, and robustness against motion and fluctuation of the rotation axis. The visual encoder is not a simple combination of two existing methods; it opens new possibilities in various research fields, such as robotic control based on visual servoing and remote sensing of high-speed rotors. In our previous works [11, 12], although a different type of pattern and a rudimentary sensing mechanism were used, a new robotic manipulation via a flexible thread was made possible by the principle of the visual encoder.

The performance of the visual encoder can be adjusted according to the user’s intention by designing the color pattern of the disc. Under our experimental conditions, the visual encoder was able to measure the rotation angle in real time at up to about 8,000 rpm when the rotation axis was fixed, and at up to 6,000 rpm even under fluctuation of the rotation axis. By applying the high resolution method, the resolution was further improved by more than a factor of 10 in comparison with the normal measurement.

References and links

1. M. Dimmler and C. Dayer, “Optical encoders for small drives,” IEEE/ASME Trans. Mechatron. 1(3), 278–283 (1996). [CrossRef]  

2. A. Collet, M. Martinez, and S. S. Srinivasa, “The MOPED framework: object recognition and pose estimation for manipulation,” Int. J. Rob. Res. 30(10), 1284–1306 (2011). [CrossRef]  

3. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision 60(2), 91–110 (2004). [CrossRef]  

4. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-up robust features (SURF),” Comput. Vision Image Understanding 110(3), 346–359 (2008). [CrossRef]  

5. Y. Kwon and W. Kim, “Development of a new high-resolution angle-sensing mechanism using RGB sensor,” IEEE/ASME Trans. Mechatron. 19(5), 1707–1715 (2014). [CrossRef]  

6. T. Suzuki, T. Endo, O. Sasaki, and J. E. Greivenkamp, “Two-dimensional small-rotation-angle measurement using an imaging method,” Opt. Eng. 45(4), 043604 (2006). [CrossRef]  

7. W. Li, J. Jin, X. Li, and B. Li., “Method of rotation angle measurement in machine vision based on calibration pattern with spot array,” Appl. Opt. 49(6), 1001–1006 (2010). [CrossRef]   [PubMed]  

8. K. Lee and D. Zhou, “A real-time optical sensor for simultaneous measurement of three-DOF motions,” IEEE/ASME Trans. Mechatron. 9(3), 499–507 (2004). [CrossRef]  

9. T. Kadowaki, K. Kobayashi, and K. Watanabe, “Rotation angle measurement of high-speed flying object,” in Proceedings of SICE-ICASE International Joint Conference (2006), 5256–5259.

10. Y. Watanabe, T. Komuro, S. Kagami, and M. Ishikawa, “Multi-target tracking using a vision chip and its application to real-time visual measurement,” J. Adv. Comput. Intelli. Intelli. Inform. 17(2), 121–129 (2005).

11. H. Kim, Y. Yamakawa, T. Senoo, and M. Ishikawa, “Manipulation model of thread-rotor object by a robotic hand for high-speed visual feedback control,” in Proceedings of IEEE/ASME International Conference on Advanced Intelligent Mechatronics (2014), pp. 924–930.

12. H. Kim, Y. Yamakawa, T. Senoo, and M. Ishikawa, “Robotic manipulation of rotating object via twisted thread using high-speed visual sensing and feedback,” in Proceedings of IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (IEEE, 2015), pp. 265–270.




Figures (10)

Fig. 1 Concept of the visual encoder. (a) The shape of the marker is based on the disc of an optical encoder, but R-G-B colors replace the repeating transparent-opaque pattern. (b) The rotation angle of the rotor is measured by the vision-based method. Tracking the image center of the marker keeps the position of the color detecting point unique.
Fig. 2 Pattern design of the disc. (a) The pattern used on an incremental optical encoder; a 2-phase type is generally used to detect both the rotation angle and the direction of rotation. (b) The color-gradient type; a reflection model of light is needed to measure the rotation angle accurately. (c) The switching RGB type; the 3-state structure allows detection of the direction of rotation as well as the rotation angle.
Fig. 3 Flow from image capture to calculation of the rotation angle. (a) Determination process to keep the position of the detecting point unique. A pixel shifted from the centroid of the object area in the image plane is inspected to detect the dominant color at that pixel. An ROI is set to reduce the calculation cost and thereby increase the sampling rate. (b) Extraction process to acquire the rotation variables. N_cnt and N_state indicate the number of full turns and sub-turns of the rotor, respectively. The high resolution method can be used to enhance the measurement resolution of the rotation angle, and N_sub is the variable for that process. The rotation angle N is calculated from N_cnt and N_state (and N_sub in the high resolution case). (c) Example of (a). (d) Example of (b).
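The counting of turns and sub-turns described in (b) can be illustrated with a small sketch. This is our own hedged reconstruction, not the paper's code: we assume the detected dominant color arrives as an integer 0 (R), 1 (G), 2 (B), and that M color blocks make up one revolution.

```python
def step(n_cnt, n_state, prev_color, new_color, m_blocks):
    """Advance (N_cnt, N_state) given one detected color transition.

    Colors repeat 0 -> 1 -> 2 -> 0 around the disc, so a forward step
    is (prev + 1) mod 3; any other change is treated as a backward
    step (the 3-state pattern makes the two directions distinguishable).
    """
    if new_color == prev_color:
        return n_cnt, n_state              # no transition this frame
    if new_color == (prev_color + 1) % 3:
        n_state += 1                       # forward sub-turn
    else:
        n_state -= 1                       # backward sub-turn
    n_cnt += n_state // m_blocks           # carry sub-turns into full turns
    n_state %= m_blocks
    return n_cnt, n_state

# Crossing the last block forward completes one full turn:
print(step(0, 9, 2, 0, 10))  # → (1, 0)
```

The floor division and modulo also handle backward rotation: stepping back from sub-turn 0 decrements N_cnt and wraps N_state to M − 1.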
Fig. 4 The relation between M and V_max as Q varies. M is the number of color blocks on the disc, and V_max is the maximum rotation speed of the rotor that the visual encoder can measure. Q represents the frame rate (fps) of the camera used in the visual encoder; a higher Q allows a higher resolution in measurement of the rotation angle.
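The arithmetic behind this relation is simple enough to check directly; the following is a minimal sketch (the function name and the chosen M are ours, not the paper's):

```python
def v_max_rpm(q_fps: float, m_blocks: int) -> float:
    """Maximum measurable rotation speed V_max [rpm].

    The camera must observe at most one color-block transition per
    frame, so it can follow at most q_fps transitions per second,
    and one revolution consists of m_blocks transitions:
        V_max = (Q / M) [rev/sec] * 60 [sec/min]
    """
    return q_fps / m_blocks * 60.0

# With a 1,000 fps camera and, for illustration, M = 10 color blocks,
# the limit works out to 6,000 rpm.
print(v_max_rpm(1000, 10))  # → 6000.0
```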
Fig. 5 Testbed of the visual encoder with a real-time controller. Every part of this system works at 1 kHz, which serves as both the sampling rate and the control frequency. (a) Visual encoder with a high-speed camera of 1,000 fps. (b) Reference system with an optical encoder.
Fig. 6 Measurement of the rotation angle at constant rotor speeds. (upper) Each dashed line indicates the reference angle measured by the optical encoder, and each solid line the angle measured by the visual encoder, at each rotation speed [rpm]. (lower) Errors between the two measurements. The two red dashed lines indicate the designed resolution; as long as the errors stay inside these lines, the measurement by the visual encoder is stable.
Fig. 7 Acceleration test to verify the maximum detectable speed. (upper) The rotation angle measured by the optical encoder (blue) as ground truth, and by the visual encoder (red). (middle) The reference rotor was rotated at constant acceleration. (lower) As the rotation speed increases, the error can exceed the tolerance at a certain moment (here, at 12.8 sec), which results in measurement failure (at 7,740 rpm).
Fig. 8 Motion of the rotation axis under fluctuation. When sinusoidal oscillations are introduced in the X and Y directions, the trajectory of the rotation axis draws a circle; the amplitude and frequency were 5 mm and 1 Hz, respectively. The fluctuation was generated intentionally to test the robustness of the visual encoder.
Fig. 9 Rotation angle and measurement error during the motion shown in Fig. 8. (upper) The rotation angle of the rotor measured by the visual encoder at 6,000 rpm; the magnified graph shows that the high resolution method improved the measurement accuracy. (lower) The measurement error was substantially reduced by the high resolution method.
Fig. 10 Sequential images of the color pattern on the disc during image processing. Images were taken at rotation speeds of (a) 1,000 rpm and (b) 6,000 rpm, respectively. A color-filled dot indicates the detecting point, and the two white dots represent S_U and S_L in Section 3.

Tables (1)

Table 1 Performance of High Resolution Method (HR)

Equations (7)

$$ C(x_c, y_c) = \left( \frac{M_{10}}{M_{00}}, \frac{M_{01}}{M_{00}} \right) $$

$$ M_{pq} = \sum_{x} \sum_{y} x^{p} y^{q}\, I(x, y) $$

$$ N\,[\mathrm{rad}] = 2\pi \left\{ N_\mathrm{cnt} + \left( N_\mathrm{state} / M \right) \right\} $$

$$ V_\mathrm{max}\,[\mathrm{rpm}] = \frac{Q\,[\mathrm{frame/sec}]}{M\,[\mathrm{frame/rev}]} \times 60\,[\mathrm{sec/min}] $$

$$ \begin{bmatrix} x_s \\ y_s \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_d - x_c \\ y_d - y_c \end{bmatrix} + \begin{bmatrix} x_c \\ y_c \end{bmatrix} $$

$$ N\,[\mathrm{rad}] = 2\pi \left\{ N_\mathrm{cnt} + \left( N_\mathrm{state} / M \right) + N_\mathrm{dir} \left( \theta_U / \left( \theta_U + \theta_L \right) \right) / M \right\} $$

$$ N_\mathrm{dir} = \begin{cases} \phantom{-}1 & \text{if } N_i \ge N_{i-1} \\ -1 & \text{if } N_i < N_{i-1} \end{cases} $$
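The centroid and basic angle equations can be sketched in a few lines of code. This is a hedged illustration under our own assumptions (a grayscale or binary mask of the marker region; names are ours, not the paper's):

```python
import numpy as np

def centroid(img):
    """Centroid C(x_c, y_c) from raw image moments.

    M_pq = sum_x sum_y x^p y^q I(x, y), and
    C = (M_10 / M_00, M_01 / M_00).
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    m10 = (xs * img).sum()
    m01 = (ys * img).sum()
    return float(m10 / m00), float(m01 / m00)

def rotation_angle(n_cnt, n_state, m_blocks):
    """Rotation angle N [rad] = 2*pi*(N_cnt + N_state / M)."""
    return 2.0 * np.pi * (n_cnt + n_state / m_blocks)

# A single bright pixel at (x, y) = (3, 2) gives centroid (3.0, 2.0):
mask = np.zeros((5, 5))
mask[2, 3] = 1.0
print(centroid(mask))  # → (3.0, 2.0)
```

The detecting point is then obtained by rotating an offset from this centroid, as in the fifth equation, which keeps its position on the color pattern unique even when the rotation axis fluctuates.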
