Abstract

We present a catadioptric beacon localization system that can provide mobile network nodes with omnidirectional situational awareness of neighboring nodes. In this system, a receiver composed of a hyperboloidal mirror and camera is used to estimate the azimuth, elevation, and range of an LED beacon. We provide a general framework for understanding the propagation of error in the angle-of-arrival estimation and then present an experimental realization of such a system. The situational awareness provided by the proposed system can enable the alignment of communication nodes in an optical wireless network, which may be particularly useful in RF-denied environments.

© 2016 Optical Society of America

1. Introduction

Although wireless communication has traditionally been dominated by radio frequency (RF) technology, increasingly there is interest in using optical wireless (OW) technology to serve as an adjunct to RF in certain applications [1]. Among the advantages of OW communication over existing wireless RF communication are its access to a wide, unregulated spectrum and its relative immunity to eavesdropping and jamming [2]. OW systems may be especially useful in “RF-denied” environments, in which RF communication is either prohibited or undesirable. This may include settings such as hospital rooms that contain electromagnetically-sensitive equipment or tactical military environments.

Interest in OW systems has been especially keen in light of the recent maturation of light-emitting diode (LED) technology. There are predictions that LEDs will become the dominant lighting source [3], used in applications from interior lighting to traffic lights. This emerging ubiquity of LED technology presents an opportunity to utilize LEDs as a means for implementing OW communication systems, which may include indoor local area networks [4], smart transportation systems [5], and communication between mobile platforms such as vehicles or robots [6].

A challenge presented by OW communication is that it often requires a higher concentration of power at the receiver than RF links because of the fundamental differences in the detection mechanisms of the two technologies [1]. Typically, optical receivers are orders of magnitude less sensitive than their RF counterparts [2]. Establishing optical links beyond very short ranges thus often requires the transmitted energy to be directed towards the receiver so that the concentration of received power is sufficient [1]. The consequent alignment demands present a major challenge for OW links, especially with mobile nodes [7–9]. Even in stationary systems in which transceivers are nominally fixed atop buildings, building sway can present a challenge [10, 11].

Creating and maintaining optical links between mobile nodes thus requires individual nodes to have constantly updated awareness of the locations of neighboring nodes. This could be achieved, for example, with nodes that share GPS information via RF links [12–17]. However, sharing of locations via RF links may not be feasible in RF-denied settings, or the nodes may not have precise self-localization information in a common frame of reference. This drawback motivated our previous work [18], in which we explored the use of LED-based communication links with wide beams and relaxed alignment constraints as an alternative means of addressing the alignment challenges of OW links.

Expanding upon this interest in using all-optical means to address the alignment demands of OW systems, we study the application of an imaging optical system [19, 20] to provide optical wireless nodes with location information of neighboring nodes. In this system, a curved mirror and a camera constitute an imaging receiver used to estimate the angle-of-arrival of light-emitting sources (beacons) placed on neighboring nodes. Such a system maintains a 360° field of view in azimuth without the need for mechanical scanning [21–23]. Equipped with such a system, a given node can estimate the angular bearings of nearby cooperative nodes, enabling the alignment of OW links. The ranges to nearby nodes can also be estimated with such a system; this information could be used, for example, to estimate the data rate achievable in an optical link [18]. Applications may include the alignment of OW links between robots [24, 25], vehicles [6], and other platforms. In addition to the alignment of point-to-point optical links, such a device could be used in LED-based indoor positioning systems [26, 27].

In this work, we describe the geometry and operating principles of this beacon localization system and expand upon the work in [20] to develop a general analytical model for the propagation of Gaussian error in the system and its effect on angle-of-arrival estimation. While the types of errors present in any given implementation may vary considerably, this analytical model may serve as a useful first-order approximation for system modeling and performance prediction. We then present an experimental realization of a catadioptric system and measure the error in its estimates of the angles-of-arrival and range of an LED beacon.

2. System geometry

We propose the use of a rotationally symmetric curved mirror and a camera to provide OW nodes with the omnidirectional awareness necessary for localization of nearby nodes. Systems that combine refractive and reflective components are known as catadioptric systems, and their use to provide expanded fields of view is analyzed in [28]. While a wide field of view can be provided by many types of curved mirrors, we focus specifically on hyperboloidal mirrors. Such mirrors can provide geometrically correct perspective images from a single viewpoint [28], and they have been used in previous research to provide mobile robots with knowledge of obstacles, rolling, and swaying by optical flow analysis [19, 29–32]. Within a Cartesian coordinate system (x, y, z), the surface of a hyperboloidal mirror is described by

\frac{x^2 + y^2}{a^2} - \frac{z^2}{b^2} = -1, \qquad z > 0. \qquad (1)

The parameters a and b parameterize the shape of the mirror. One of the foci of the hyperboloid, denoted Fm, lies on the z axis at (0, 0, c), where c = √(a² + b²). The point Fc at (0, 0, −c) is the opposite focal point of the “other half” of the hyperboloid, which is not manifested as a mirror surface. A schematic of this geometry is shown in Fig. 1, which defines the azimuth ϕ and elevation θ.

Fig. 1 The hyperboloidal-mirror-camera system: (a) side view, defining the elevation angle θ, and (b) top view, defining the azimuth angle ϕ. The beacon is located at the point S, while the foci Fm and Fc are located at (0,0,c) and (0,0,−c), respectively.

In this system, a ray originating from a source at point S directed towards Fm is reflected by the mirror and directed towards Fc, intersecting the image plane at (x, y). To find the value of ϕ that corresponds to the source that appears on the image plane at (x, y), we use the relation [30]

\phi = \begin{cases} \tan^{-1}(y/x), & x \geq 0 \\ \pi + \tan^{-1}(y/x), & x < 0. \end{cases} \qquad (2)

The image plane coordinates of the source can also be used to calculate the elevation angle θ, defined in Fig. 1. In particular,

\theta = \tan^{-1}\!\left[\frac{(b^2 + c^2)\sin\gamma_c - 2bc}{(b^2 - c^2)\cos\gamma_c}\right], \qquad (3)
where
\gamma_c = \tan^{-1}\!\left(\frac{f}{\sqrt{x^2 + y^2}}\right) \qquad (4)
and the focal length of the camera lens is denoted by f. In Fig. 2, we plot the dependence of the elevation angle θ on the radius r ≡ √(x² + y²), assuming a = 23.4125 mm, b = 28.095 mm, and f = 8 mm.
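
For concreteness, the mapping of Eqs. (2)–(4) can be evaluated directly; the following minimal Python sketch is our own illustration (the function name, SI units, and use of NumPy are assumptions, not part of the original system):

```python
import numpy as np

# Mirror and lens parameters consistent with Fig. 2 and Section 4
a = 23.4125e-3  # m, mirror shape parameter
b = 28.095e-3   # m, mirror shape parameter
f = 8e-3        # m, camera focal length
c = np.sqrt(a**2 + b**2)  # focal distance of the hyperboloid

def angles_of_arrival(x, y):
    """Map an image-plane point (x, y), in meters, to (phi, theta).

    phi follows Eq. (2) (arctan2 resolves the x < 0 branch), and theta
    follows Eqs. (3) and (4) via the intermediate angle gamma_c.
    Assumes the point is off-axis, i.e., sqrt(x^2 + y^2) > 0.
    """
    phi = np.arctan2(y, x)                   # Eq. (2)
    gamma_c = np.arctan2(f, np.hypot(x, y))  # Eq. (4)
    num = (b**2 + c**2) * np.sin(gamma_c) - 2 * b * c
    den = (b**2 - c**2) * np.cos(gamma_c)
    theta = np.arctan(num / den)             # Eq. (3)
    return phi, theta
```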

Fig. 2 Elevation angle θ as a function of r, as given by Eq. (3). In this figure, we assume a = 23.4125 mm, b = 28.095 mm, and f = 8 mm, where these values were chosen as being consistent with the experimental system that we present in Section 4. An elevation angle of 0° corresponds to a horizontal vector pointing radially outward from Fm (see Fig. 1), while an elevation angle of 90° corresponds to a vector pointed towards Fc from Fm. The lower limit of observable elevation angle using this particular mirror is θ = −16°.

3. Propagation of Gaussian error in angle estimation

Given the geometry of this catadioptric system, knowledge of the location (x, y) of a feature of interest (e.g., the beacon of a neighboring node) in the image plane can be used to calculate its angular bearing (θ, ϕ). In practice, the methods of estimating x and y are quite varied and the appropriate model for the noise in this estimation depends strongly on the estimation algorithm and the properties of the particular hardware implementation. We construct an analytical model for the case of Gaussian noise in the estimation of x and y coordinates, as this noise model is commonly used in computer vision research and may serve as a first-order approximation for other forms of noise [33]. In our model, we assume that measurements of x and y follow independent Gaussian distribution functions fX(x) and fY(y), respectively:

f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x - \mu_x)^2/(2\sigma^2)} \qquad (5)
and
f_Y(y) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(y - \mu_y)^2/(2\sigma^2)}. \qquad (6)

Here, (x, y) = (μx, μy) is defined as the location of the beacon image, while σ is a measure of the noise in the measurement of the beacon image location. To describe the noise in the estimation of ϕ and θ that results from noise in the measurements of x and y, we define random variables Φ and Θ and corresponding probability distributions fΦ(ϕ) and fΘ(θ). In our model, we assume that fXY(x, y) = fX(x) fY(y).

Eqs. (2) and (3) show that ϕ can be expressed as a function of w ≡ y/x, and θ can be expressed as a function of r. We define random variables W and R with corresponding probability distributions fW(w) and fR(r), respectively. We can use the general relation between functions of random variables [34] to relate W to Φ and R to Θ:

f_\Phi(\phi) = f_W(w)\,\frac{dw}{d\phi} \qquad (7)
and
f_\Theta(\theta) = f_R(r)\,\frac{dr}{d\theta}. \qquad (8)

Given Eqs. (5) and (6), the probability density function fW(w) is given by [35]:

f_W(w) = \frac{b(w)\,c(w)}{a^3(w)} \frac{1}{\sqrt{2\pi}\,\sigma^2}\left[2\Phi\!\left(\frac{b(w)}{a(w)}\right) - 1\right] + \frac{1}{a^2(w)\,\pi\sigma^2}\exp\!\left[-\frac{1}{2}\left(\frac{\mu_x^2 + \mu_y^2}{\sigma^2}\right)\right], \qquad (9)
where
a(w) = \frac{\sqrt{w^2 + 1}}{\sigma}, \qquad (10)
b(w) = \frac{\mu_y w + \mu_x}{\sigma^2}, \qquad (11)
c(w) = \exp\left\{\frac{1}{2}\left[\frac{b^2(w)}{a^2(w)} - \left(\frac{\mu_x^2 + \mu_y^2}{\sigma^2}\right)\right]\right\}, \qquad (12)
\Phi(w) = \int_{-\infty}^{w}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}u^2\right)du. \qquad (13)

As ϕ depends exclusively on w, fW(w) can be used to solve for fΦ(ϕ):

f_\Phi(\phi) = f_W(w)\left|\frac{dw}{d\phi}\right|. \qquad (14)

Following Eq. (2), w = tan(ϕ) and dw/dϕ = sec²(ϕ). Thus, we can compute fΦ(ϕ) given values of μx, μy, and σ.
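
A minimal numerical sketch of this computation, assuming SciPy's standard normal CDF for the Φ of Eq. (13) (the function name f_phi and its interface are our own illustration):

```python
import numpy as np
from scipy.stats import norm

def f_phi(phi, mu_x, mu_y, sigma):
    """Density of the azimuth estimate, via Eqs. (9)-(14).

    w = tan(phi) is the ratio Y/X of independent Gaussians with means
    (mu_y, mu_x) and common standard deviation sigma; f_W is the ratio
    density of Eq. (9), and |dw/dphi| = sec^2(phi) gives f_Phi.
    """
    w = np.tan(phi)
    aw = np.sqrt(w**2 + 1) / sigma                          # Eq. (10)
    bw = (mu_y * w + mu_x) / sigma**2                       # Eq. (11)
    cw = np.exp(0.5 * (bw**2 / aw**2
                       - (mu_x**2 + mu_y**2) / sigma**2))   # Eq. (12)
    f_w = (bw * cw / (aw**3 * np.sqrt(2 * np.pi) * sigma**2)
           * (2 * norm.cdf(bw / aw) - 1)
           + np.exp(-0.5 * (mu_x**2 + mu_y**2) / sigma**2)
           / (aw**2 * np.pi * sigma**2))                    # Eq. (9)
    return f_w / np.cos(phi)**2                             # Eq. (14)
```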

The estimation of the elevation angle θ can be analyzed similarly. Eqs. (3) and (4) show that the dependence of the elevation angle on x and y can be expressed as a dependence on a single variable, the image-plane radius r, with a corresponding random variable R. Given our model for X and Y, the distribution for R is given by [36]:

f_R(r) = \frac{r}{\sigma^2}\exp\!\left(-\frac{\mu_x^2 + \mu_y^2 + r^2}{2\sigma^2}\right) I_0\!\left(\frac{r\sqrt{\mu_x^2 + \mu_y^2}}{\sigma^2}\right), \qquad (15)
where I0 is the 0th-order modified Bessel function of the first kind.

Thus fR(r) can be used to solve for the distribution of the elevation angle fΘ(θ) using the relation

f_\Theta(\theta) = f_R(r)\left|\frac{dr}{d\theta}\right|, \qquad (16)
which can be reexpressed as
f_\Theta(\theta) = f_R(r)\left|\frac{d\theta}{dr}\right|^{-1}. \qquad (17)

It follows that differentiation of Eq. (3) with respect to r and Eq. (15) can be used to solve for fΘ(θ). While the performance of experimental systems will be impacted by various error sources and depends strongly on the angle-of-arrival estimation algorithm, this analytical understanding of the propagation of Gaussian error may serve as a useful first-order approximation for modeling the cumulative error effects in general.
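
As a sketch of how Eqs. (15)–(17) might be evaluated numerically (our own illustration: the function names, grid limits, and the use of a numerical derivative for dθ/dr in place of the closed-form differentiation of Eq. (3) are all assumptions):

```python
import numpy as np
from scipy.special import i0e

def f_r(r, mu_x, mu_y, sigma):
    """Rician density of the image-plane radius R, Eq. (15).

    Uses the exponentially scaled Bessel function i0e(z) = exp(-z)I0(z)
    so that the exp * I0 product stays numerically stable.
    """
    nu = np.hypot(mu_x, mu_y)
    return (r / sigma**2) * np.exp(-(r - nu)**2 / (2 * sigma**2)) \
        * i0e(r * nu / sigma**2)

def f_theta_grid(mu_x, mu_y, sigma, a, b, f, r_max=5e-3, n=4001):
    """Tabulate f_Theta(theta) via Eq. (17), f_R(r) |dtheta/dr|^(-1).

    theta(r) follows Eqs. (3) and (4); dtheta/dr is taken numerically.
    Returns (theta, f_theta) sampled over a grid of image-plane radii.
    """
    c = np.sqrt(a**2 + b**2)
    r = np.linspace(r_max / n, r_max, n)
    gamma_c = np.arctan2(f, r)                                # Eq. (4)
    theta = np.arctan(((b**2 + c**2) * np.sin(gamma_c) - 2 * b * c)
                      / ((b**2 - c**2) * np.cos(gamma_c)))    # Eq. (3)
    dtheta_dr = np.gradient(theta, r)
    return theta, f_r(r, mu_x, mu_y, sigma) / np.abs(dtheta_dr)
```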

4. Experimental implementation

To study the use of this catadioptric system for localization of beacons, we constructed an experimental system, shown in Fig. 3, with a commercially available hyperboloidal mirror [37] and camera [38]. The camera (Prosilica GC1600H) has 1620×1220 resolution, and its lens has an 8 mm focal length. The base of the mirror is 6 cm in diameter. As shown in Fig. 3, the distance from the base of the mirror to the top of the camera is approximately 16 cm. The relatively compact size of the system allows for mounting onto mobile platforms such as robots, and such catadioptric systems have been studied for robot navigation in [19, 29–32]. To calibrate and align the system, we mounted it onto a gimbal capable of precisely rotating in azimuth and elevation. This camera-mirror system was used to receive signals from a red LED beacon (Luxeon Rebel - Endor Star) [39]. The camera sensor is fitted with a Bayer filter for color image processing. The filter pattern is such that 1/4 of the pixels are dedicated to detecting blue light, 1/4 to detecting red light, and 1/2 to detecting green light. Each pixel reports an 8-bit intensity value. Thus, the system observes the beacon using only the 1/4 of the total pixels that are designed to detect red light. Despite this reduction in resolution, color-specific detection could be one of many ways to identify multiple beacons simultaneously. All experiments performed using this system were conducted in an indoor hallway approximately 70 m in length, with the beacon pointed directly at the mirror and at an elevation angle of 0° relative to the mirror.

Fig. 3 Experimental system, mounted onto a gimbal.

While there are many methods for isolating a beacon against the background, in our experiments we implement a simple on-off modulation to drive the LED beacon and subtract consecutive “on” and “off” frames to create a difference image [40]. The difference image is mostly dark, except for the pixels illuminated by the beacon. The LED is driven with a 350 mA current during “on” frames, and no current is applied during the “off” frames.
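
A minimal sketch of this differencing step, assuming 8-bit frames already extracted from the red channel of the Bayer pattern (the function name and interface are our own; the threshold of 10 anticipates Section 4.1):

```python
import numpy as np

def difference_image(frame_on, frame_off, threshold=10):
    """Form a difference image from consecutive "on" and "off" frames.

    Subtracting in a wider signed type avoids uint8 wraparound; pixels
    below the intensity threshold are zeroed to suppress dark noise.
    """
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)  # keep the beacon's positive signal
    diff[diff < threshold] = 0
    return diff
```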

An example of a difference image is shown in Fig. 4. In the figure, the spot corresponds to a beacon at approximately 40° azimuth and 0° elevation. The inset shows a mesh plot of the pixels illuminated by the beacon. In our setup, the images captured by the camera and the modulation of the LED were synchronized via coaxial cable, enabling controlled experimental study of angle estimation accuracy. We use this synchronized algorithm to study the behavior of the system in static scenarios, in which neither the beacon nor the catadioptric system is moving. However, in general, asynchronous techniques could be implemented [41]. In moving scenarios, the differencing processes used to isolate a beacon suffer interference from motion of objects in the field of view, as well as motion of the receiver itself. Some methods for addressing these challenges are median filtering and the use of colored beacons and colored filters to isolate the beacon from the environment. These techniques are discussed in detail in references such as [41]. Other works that have utilized LED beacons for extraction of location information include [27, 40, 42].

Fig. 4 Example of an experimental difference image, where the beacon image region is enclosed in a white square and the inset is a mesh plot of the beacon image region. The radius r is used to calculate the elevation angle θ, which is approximately 0° in this example. The azimuth angle ϕ is approximately 40°.

4.1. Dark pixels

The system shows frame-to-frame variations even when observing unchanging scenes. These variations arise from small fluctuations in the pixel responses of the individual frames, which do not cancel in the frame differencing. The resulting noise can be observed by examining the “dark pixels” of the difference images, away from the pixels illuminated by the beacon (see Fig. 4). Figure 5 shows a typical histogram of the pixel intensities of the dark portions of a difference image. The mean of the pixel intensities in the histogram is μI ≈ 1.52 and the sample standard deviation is σI ≈ 1.56. To reduce the effect of this noise on the angle estimation, we ignore any pixels below a threshold. Each pixel reports an intensity from 0 to 255, and for the data presented in this paper, the imposed threshold is a pixel intensity of 10.

4.2. Angles-of-arrival estimation

A reasonable first step in estimating the angle of arrival is the estimation of the location (x, y) of the beacon in the image plane. There are many approaches to estimating (x, y) from the information in a difference image; we take the centroid as our location estimate. This approach has been utilized frequently as a method of estimating the location of an object in an image [43–48] and has the practical appeal of computational simplicity. For an M-by-N-pixel window of interest, each of the pixels has an x-coordinate xij and an intensity aij. Here, x̂ (the estimate of the beacon image’s x-coordinate) is defined as

\hat{x} = \frac{\sum_{j=1}^{M}\sum_{i=1}^{N} a_{ij}\, x_{ij}}{\sum_{j=1}^{M}\sum_{i=1}^{N} a_{ij}} \qquad (18)
and ŷ is defined similarly. Here, M and N define a minimum bounding rectangle that encloses the illuminated region. Using the coordinates of the centroid in the image plane, we then use Eqs. (2) and (3) to calculate the estimates of the angles of arrival. The algorithm can be summarized as (1) capturing a frame with the LED beacon on, (2) capturing a frame with the LED beacon off, (3) image frame subtraction, (4) applying a threshold, (5) calculating the centroid for pixels within the window of interest, and (6) transforming the centroid into an angle of arrival estimate.
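
A sketch of step (5), the centroid computation, assuming the thresholded difference image is stored as a NumPy array (the function name and window convention are our own; converting the pixel-coordinate centroid to the (x, y) of Eqs. (2) and (3) additionally requires the pixel pitch and the location of the optical axis in the image, both obtained by calibration):

```python
import numpy as np

def centroid(diff, window):
    """Intensity-weighted centroid of the beacon image, per Eq. (18).

    diff:   thresholded difference image
    window: (row0, row1, col0, col1) minimum bounding rectangle of the
            illuminated region
    Returns the centroid (column, row) in pixel coordinates.
    """
    r0, r1, c0, c1 = window
    patch = diff[r0:r1, c0:c1].astype(float)
    rows, cols = np.mgrid[r0:r1, c0:c1]  # coordinate of each pixel
    total = patch.sum()
    return (patch * cols).sum() / total, (patch * rows).sum() / total
```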

Due to sources of noise in the system, we observe small variations in the estimates of x and y, even when the receiver (mirror and camera) and beacon are fixed in orientation and position. This small variation may be due to a variety of physical effects, including instability in the LED brightness, sensor noise in the camera, etc. We measured these small variations as a function of range from a set of 100 difference images at each range. Here, we define range as the distance between the source at S and the focal point Fm. In general, the variation in centroid estimation (both x and y) is non-Gaussian. For each subset of 100 measurements taken at each range, we define sample standard deviations in the centroid estimation in x and y as σ̂x and σ̂y, respectively. This variation in centroid estimation results in variation in angle estimates, as the location in the image plane is related to the angles-of-arrival via Eqs. (2) and (3). We define the resulting sample standard deviations in the estimation of ϕ and θ as σ̂ϕ and σ̂θ, respectively. We plot σ̂ϕ and σ̂θ as a function of range in Fig. 6. As the signal becomes weaker with range, the variation in the estimates of the angle of arrival generally increases.

Fig. 5 Histogram of the dark-pixel intensities of the difference image in Fig. 4. Dark pixels are those not illuminated by the beacon.

To examine the fidelity of the Gaussian error model developed in Section 3 in modeling the error characterized by σ̂x and σ̂y, we define σ ≡ √(σ̂x² + σ̂y²). If we take the variation in centroid estimation to be a circular Gaussian with variance σ², we can compute the distributions of the angular estimates fΘ(θ) and fΦ(ϕ) using the model developed in Section 3. The variation in these distributions can be characterized by their standard deviations σ̃θ and σ̃ϕ, which are plotted as a function of range in Fig. 6. Although the variation in centroid estimates in the image plane is typically non-Gaussian, we observe that modeling this noise as a circular Gaussian yields reasonable results as a first-order approximation of the consequent error in angular estimation.

Fig. 6 Sample standard deviations in angle-of-arrival estimates of azimuth ϕ and elevation θ as a function of range. Here, σ̂ϕ and σ̂θ are sample standard deviations that result directly from the estimation algorithm; σ̃ϕ and σ̃θ are standard deviations in angle estimates that result from taking the distribution of measurements in the image plane to be a circular Gaussian with variance σ² ≡ σ̂x² + σ̂y².

4.3. Range estimation

Given our particular implementation, a simple and straightforward method for range estimation utilizes the observed signal strength. We define the signal strength as the sum of the reported pixel values within the centroiding window of the difference image. In general, the signal strength monotonically decreases with range, and this one-to-one mapping from signal strength to range allows for the possibility of using signal strength observations to create range estimates. The exact dependence of signal strength on range is a function of many parameters, including the elevation angle and system hardware parameters (e.g., camera sensitivity, camera exposure time, beacon brightness, etc.). However, if all these parameters are known, then the dependence of signal strength on range can be specified, and range can be estimated from signal strength observations. Such signal-strength-based techniques could also be used to estimate the ranges to multiple individual nodes; in such a scenario, the signal strength would be estimated for each beacon separately, as opposed to observing only the aggregate signal power.
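
As one possible illustration (not the implementation used here), a calibrated signal-strength-versus-range curve could be inverted by interpolation; the sketch below assumes the monotonic decrease described above and interpolates in the logarithm of the signal strength, motivated by the semilog presentation in Fig. 7:

```python
import numpy as np

def estimate_range(v_obs, cal_ranges, cal_means):
    """Invert an empirical signal-strength-vs-range calibration curve.

    cal_ranges, cal_means: calibration pairs (r_i, v_i), with v_i
    monotonically decreasing in r_i.
    """
    v = np.log(np.asarray(cal_means))[::-1]  # np.interp needs ascending x
    r = np.asarray(cal_ranges)[::-1]
    return np.interp(np.log(v_obs), v, r)
```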

The precision of such a range estimation method is limited by the repeatability of the signal strength observations, which is dictated by factors such as pixel noise and the stability of the LED output. To characterize signal strength as a proxy for range, we recorded observations of signal strength at nine different ranges ri, in increments of 7.6 m. At the ith range, the receiver (mirror and camera) and beacon were fixed in orientation and position, and 100 observations of signal strength were taken. The ith subset of measurements yields a mean v̄i and sample standard deviation Δvi. These data are shown in Fig. 7, in which the top inset plots the mean signal strength v̄i against range ri. The sensitivity of signal strength to range, which we define as the steepness of the curve underlying the data points, generally decreases with range. The bottom portion of the figure describes the repeatability of the measurements: the ratio of sample standard deviation to mean signal strength (Δvi/v̄i) grows from less than 1% at short ranges to about 5.5% at the longest range examined (67 m).

Fig. 7 The top inset is a semilog plot of the mean signal strength v̄i as a function of range. The bottom inset plots Δvi/v̄i as a function of range.

In a calibrated system, the dependence of signal strength on range is known empirically, and thus range can be estimated from subsequent measurements of signal strength. At any particular range, the precision of this estimation is a function of the variability (Δvi) in the observations of signal strength and the sensitivity of the signal strength to range. To estimate the precision achievable using this estimation method, we approximate the sensitivity of the signal strength at range ri as

s_i = \frac{1}{2}\left(\left|\frac{\bar{v}_i - \bar{v}_{i-1}}{r_i - r_{i-1}}\right| + \left|\frac{\bar{v}_i - \bar{v}_{i+1}}{r_i - r_{i+1}}\right|\right). \qquad (19)

This is an empirical approximation of the steepness of the curve underlying the points sampled at ranges ri. Combined with the variability estimated by Δvi, we construct an estimate of the precision in range estimation given by Δri ≡ Δvi/si. The values of Δri evaluated using our system are shown in Table 1 for the middle seven of the nine ranges studied; the sensitivity si is undefined for the first (i = 1) and last (i = 9) ranges.
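
A short sketch of this computation on the calibration data (the function name and array interface are our own illustration):

```python
import numpy as np

def range_precision(r, v_mean, v_std):
    """Evaluate s_i of Eq. (19) and Delta r_i = Delta v_i / s_i.

    r, v_mean, v_std: calibration ranges r_i, mean signal strengths
    v_i, and sample standard deviations Delta v_i.  Returns Delta r_i
    for the interior points only; s_i is undefined at the endpoints.
    """
    r, v, dv = map(np.asarray, (r, v_mean, v_std))
    s = 0.5 * (np.abs((v[1:-1] - v[:-2]) / (r[1:-1] - r[:-2]))
               + np.abs((v[1:-1] - v[2:]) / (r[1:-1] - r[2:])))
    return dv[1:-1] / s
```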

Table 1. Uncertainty in range estimations.

The table shows that this simple method for ranging can yield sub-meter precision except at the longest range, where the signal is weakest. At long ranges, the relative flatness (small si) of the curve increases the uncertainty Δri of the range estimation beyond one meter. Ranging precision on the order of one meter would be useful in many applications, including optical wireless communications; for instance, a transmitter could use this information to determine the minimum transmission power needed to achieve a desired data rate. In general, the ranging precision achievable with this catadioptric system depends on the particular hardware parameters of the system, and the estimation could be improved with more sophisticated algorithms. For example, multi-frame integration could enhance the SNR and thereby improve the range estimation.

5. Conclusion

We have presented an all-optical means of providing nodes in a network with situational awareness of neighboring nodes, a capability that could be especially useful for OW systems operating in RF-denied environments. In this system, a receiver composed of a hyperboloidal mirror and camera is used to estimate the azimuth, elevation, and range of an LED beacon. We developed a general framework for understanding the propagation of Gaussian error in angle-of-arrival estimation and then presented an experimental realization of such a system. For this experimental system, we used a computationally simple algorithm for estimating angles-of-arrival and range and assessed the error and repeatability of such measurements. We believe that systems such as these can provide OW nodes with situational awareness of multiple neighboring nodes and support operation of OW systems in RF-denied environments. While the experiments discussed here rely on frame synchronization and explore static scenarios, we consider the exploration of dynamic application scenarios to be an interesting topic for future study.

Acknowledgments

Research at the University of Maryland was supported by U.S. Army Research Office (ARO) under grant number W911NF-13-1-0003.

References and links

1. D.K. Borah, A.C. Boucouvalas, C.C. Davis, S. Hranilovic, and K. Yiannopoulos, “A review of communication-oriented optical wireless systems,” EURASIP Journal on Wireless Communications and Networking 2012(1), 1–28 (2012). [CrossRef]  

2. M. Wolf and D. Kress, “Short-range wireless infrared transmission: the link budget compared to RF,” IEEE Wireless Communications 10(2), 8–14 (2003). [CrossRef]  

3. S. Pimputkar, J.S. Speck, S.P. DenBaars, and S. Nakamura, “Prospects for LED lighting,” Nature Photonics 3(4), 180–182 (2009). [CrossRef]  

4. T. Komine and M. Nakagawa, “Fundamental analysis for visible-light communication system using LED lights,” IEEE Transactions on Consumer Electronics 50(1), 100–107 (2004). [CrossRef]  

5. N. Kumar, D. Terra, N. Lourenço, L.N. Alves, and R.L. Aguiar, “Visible light communication for intelligent transportation in road safety applications,” in Proceedings of IEEE Wireless Communications and Mobile Computing Conference (IEEE, 2011), pp. 1513–1518.

6. K.D. Langer and J. Grubor, “Recent developments in optical wireless communications using infrared and visible light,” in Proceedings of IEEE International Conference on Transparent Optical Networks, (IEEE, 2007) pp. 146–151.

7. S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE, 2008), pp. 82–86.

8. H. Henniger and O. Wilfert, “An introduction to free-space optical communications,” Radioengineering 19(2), 203–212 (2010).

9. S. Das, H. Henniger, B. Epple, C.I. Moore, W. Rabinovich, R. Sova, and D. Young, “Requirements and challenges for tactical free-space lasercomm,” in Proceedings of Military Communications Conference (IEEE, 2008), pp. 1–10.

10. S. Bloom, E. Korevaar, J. Schuster, and H. Willebrand, “Understanding the performance of free-space optics,” Journal of Optical Networking 2(6), 178–200 (2003).

11. H.A. Willebrand and B.S. Ghuman, “Fiber optics without fiber,” IEEE Spectrum 38(8), 40–45 (2001). [CrossRef]  

12. J. Rzasa, M.C. Ertem, and C.C. Davis, “Pointing, acquisition, and tracking considerations for mobile directional wireless communications systems,” Proc. SPIE 8874, 88740 (2013). [CrossRef]  

13. B. Epple, “Using a GPS-aided inertial system for coarse-pointing of free-space optical communication terminals,” Proc. SPIE 6304, 630418 (2006). [CrossRef]  

14. S. Milner, J. Llorca, and C.C. Davis, “Autonomous reconfiguration and control in directional mobile ad hoc networks,” IEEE Circuits and Systems Magazine 9(2), 10–26 (2009). [CrossRef]  

15. G. Lu, Y. Lu, T.P. Deng, and H. Liu, “Automatic alignment of optical-beam-based GPS for free-space laser communication system,” Proc. SPIE 5160, 432–438 (2004). [CrossRef]  

16. W.L. Saw, H.H. Refai, and J.J. Sluss Jr., “Free space optical alignment system using GPS,” Proc. SPIE 5712, 101–109 (2005). [CrossRef]  

17. T.H. Ho, S. Trisno, I. Smolyaninov, S.D. Milner, and C.C. Davis, “Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks,” Proc. SPIE 5237, 147–158 (2004). [CrossRef]  

18. T.C. Shen, R.J. Drost, C.C. Davis, and B.M. Sadler, “Design of dual-link (wide- and narrow-beam) LED communication systems,” Optics Express 22(9), 11107–11118 (2014). [CrossRef]   [PubMed]  

19. K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional imaging with hyperboloidal projection,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 1993), pp. 1029–1034.

20. T.C. Shen, R.J. Drost, J. Rzasa, B.M. Sadler, and C.C. Davis, “Panoramic alignment system for optical wireless communication systems,” Proc. SPIE 9354, 93540M (2015). [CrossRef]  

21. T.J. Ho, S.D. Milner, and C.C. Davis, “Fully optical real-time pointing, acquisition, and tracking system for free space optical link,” Proc. SPIE 5712, 81–92 (2005). [CrossRef]  

22. H. Ishiguro, M. Yamamoto, and S. Tsuji, “Omni-directional stereo for making global map,” in Proceedings of IEEE Third International Conference on Computer Vision (IEEE, 1990), pp. 540–547.

23. K.B. Sarachik, “Characterising an indoor environment with a mobile robot and uncalibrated stereo,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1989), pp. 984–989.

24. M. Doniec, C. Detweiler, I. Vasilescu, and D. Rus, “Using optical communication for remote underwater robot operation,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 2010), pp. 4017–4022.

25. I.C. Rust and H.H. Asada, “A dual-use visible light approach to integrated communication and localization of underwater robots with application to non-destructive nuclear reactor inspection,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2012), pp. 2445–2450.

26. S. Lee and S. Jung, “Location awareness using angle-of-arrival based circular-PD-array for visible light communication,” in Proceedings of IEEE Asia-Pacific Conference on Communications (IEEE, 2012), pp. 480–485.

27. D. Zheng, K. Cui, B. Bai, G. Chen, and J.A. Farrell, “Indoor localization based on LEDs,” in Proceedings of International Conference on Control Applications (IEEE, 2011), pp. 573–578.

28. S. Baker and S.K. Nayar, “A theory of single-viewpoint catadioptric image formation,” International Journal of Computer Vision 35(2), 175–196 (1999). [CrossRef]  

29. M. Fiala and A. Basu, “Robot navigation using panoramic tracking,” Pattern Recognition 37(11), 2195–2215 (2004). [CrossRef]  

30. Y. Yagi, W. Nishii, K. Yamazawa, and M. Yachida, “Rolling motion estimation for mobile robot by using omnidirectional image sensor hyperomnivision,” in Proceedings of the IEEE International Conference on Pattern Recognition (IEEE, 1996), pp. 946–950.

31. K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor hyperomni vision,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1995), pp. 1062–1067.

32. J. Kim and Y. Suga, “An omnidirectional vision-based moving obstacle detection in mobile robot,” International Journal of Control Automation and Systems 5(6), 663–673 (2007).

33. L. Matthies and S.A. Shafer, “Error modeling in stereo navigation,” IEEE Journal of Robotics and Automation 3(3), 239–248 (1987). [CrossRef]  

34. R.J. Muirhead, Aspects of Multivariate Statistical Theory (John Wiley & Sons, 2009).

35. D.V. Hinkley, “On the ratio of two correlated normal random variables,” Biometrika 56(3), 635–639 (1969). [CrossRef]  

36. P. Dharmawansa, N. Rajatheva, and C. Tellambura, “Envelope and phase distribution of two correlated gaussian variables,” IEEE Transactions on Communications 57(4), 915–921 (2009). [CrossRef]  

37. http://www.neovision.cz/prods/panoramic/h3s.html.

38. https://www.alliedvision.com/en/products/cameras/detail/Prosilica%20GC/1600H.html.

39. http://www.ledsupply.com/leds/luxeon-rebel-color-leds.

40. Y.Y. Chen, K.M. Lan, H.I. Pai, J.H. Chuang, and C.Y. Yuan, “Robust light objects recognition based on computer vision,” in IEEE International Symposium on Pervasive Systems, Algorithms, and Networks (IEEE, 2009), pp. 508–514.

41. G.K.H. Pang and H.H.S. Liu, “LED location beacon system based on processing of digital images,” IEEE Transactions on Intelligent Transportation Systems 2(3), 135–150 (2001). [CrossRef]  

42. D. Zheng, G. Chen, and J.A. Farrell, “Navigation using linear photo detector arrays,” in Proceedings of IEEE International Conference on Control Applications (IEEE, 2013), pp. 533–538.

43. M.P. Wernet and A. Pline, “Particle displacement tracking technique and Cramer-Rao lower bound error in centroid estimates from CCD imagery,” Experiments in Fluids 15(4), 295–307 (1993).

44. N. Bobroff, “Position measurement with a resolution and noise-limited instrument,” Review of Scientific Instruments 57(6), 1152–1157 (1986). [CrossRef]  

45. J.S. Morgan, D.C. Slater, J.G. Timothy, and E.B. Jenkins, “Centroid position measurements and subpixel sensitivity variations with the MAMA detector,” Applied Optics 28(6), 1178–1192 (1989). [CrossRef]   [PubMed]  

46. R.H. Stanton, J.W. Alexander, E.W. Dennison, T.A. Glavich, and L.F. Hovland, “Optical tracking using charge-coupled devices,” Optical Engineering 26(9), 269930 (1987). [CrossRef]  

47. B.F. Alexander and K.C. Ng, “Elimination of systematic error in subpixel accuracy centroid,” Optical Engineering 30(9), 1320–1331 (1991). [CrossRef]  

48. S. Lee, “Pointing accuracy improvement using model-based noise reduction method,” Proc. SPIE 4635, 65–71 (2002). [CrossRef]  

[Crossref]

S. Lee and S. Jung, “Location awareness using angle-of-arrival based circular-PD-array for visible light communication,” in Proceedings of IEEE Asia-Pacific Conference on Communications (IEEE, 2012), pp. 480–485.

Leitgeb, E.

S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE2008), pp.82–86.

Liu, H.

G. Lu, Y. Lu, T.P. Deng, and H. Liu, “Automatic alignment of optical-beam-based GPS for free-space laser communication system,” Proc. SPIE 5160, 432–438 (2004).
[Crossref]

Liu, H.H.S

G.K.H. Pang and H.H.S Liu, “LED location beacon system based on processing of digital images,” IEEE Transactions on Intelligent Transportation Systems 2(3), 135–150 (2001).
[Crossref]

Llorca, J.

S. Milner, J. Llorca, and C.C. Davis, “Autonomous reconfiguration and control in directional mobile ad hoc networks,” IEEE Circuits and Systems Magazine 9(2), 10–26 (2009).
[Crossref]

Lourenço, N.

N. Kumar, D. Terra, N. Lourenço, L.N. Alves, and R.L. Aguiar, “Visible light communication for intelligent transportation in road safety applications,” in Proceedings of IEEE Wireless Communications and Mobile Computing Conference (IEEE, 2011), pp. 1513–1518.

Lu, G.

G. Lu, Y. Lu, T.P. Deng, and H. Liu, “Automatic alignment of optical-beam-based GPS for free-space laser communication system,” Proc. SPIE 5160, 432–438 (2004).
[Crossref]

Lu, Y.

G. Lu, Y. Lu, T.P. Deng, and H. Liu, “Automatic alignment of optical-beam-based GPS for free-space laser communication system,” Proc. SPIE 5160, 432–438 (2004).
[Crossref]

Matthies, L.

L. Matthies and S.A. Shafer, “Error modeling in stereo navigation,” IEEE Journal of Robotics and Automation 3(3), 239–248 (1987).
[Crossref]

Milner, S.

S. Milner, J. Llorca, and C.C. Davis, “Autonomous reconfiguration and control in directional mobile ad hoc networks,” IEEE Circuits and Systems Magazine 9(2), 10–26 (2009).
[Crossref]

Milner, S.D.

T.J. Ho, S.D. Milner, and C.C. Davis, “Fully optical real-time pointing, acquisition, and tracking system for free space optical link,” Proc. SPIE 5712, 81–92 (2005).
[Crossref]

T.H. Ho, S. Trisno, I. Smolyaninov, S.D. Milner, and C.C. Davis, “Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks,” Proc. SPIE 5237, 147–158 (2004).
[Crossref]

Moore, C.I.

S. Das, H. Henniger, B. Epple, C.I. Moore, W. Rabinovich, R. Sova, and D. Young, “Requirements and challenges for tactical free-space lasercomm,” in Proceedings of Military Communications Conference (IEEE, 2008), pp. 1–10.

Morgan, J.S.

J.S. Morgan, D.C. Slater, J.G. Timothy, and E.B. Jenkins, “Centroid position measurements and subpixel sensitivity variations with the MAMA detector,” Applied Optics 28(6), 1178–1192 (1989).
[Crossref] [PubMed]

Muhammad, S.S.

S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE2008), pp.82–86.

Muirhead, R.J.

R.J. Muirhead, Aspects of Multivariate Statistical Theory(John Wiley & Sons, 2009).

Nakagawa, M.

T. Komine and M. Nakagawa, “Fundamental analysis for visible-light communication system using LED lights,” IEEE Transactions on Consumer Electronics 50(1), 100–107 (2004).
[Crossref]

Nakamura, S.

S. Pimputkar, J.S. Speck, S.P. DenBaars, and S. Nakamura, “Prospects for LED lighting,” Nature Photonics 3(4), 180–182 (2009).
[Crossref]

Nayar, S.K.

S. Baker and S.K. Nayar, “A theory of single-viewpoint catadioptric image formation,” International Journal of Computer Vision 35(2), 175–196 (1999).
[Crossref]

Ng, K.C.

B.F. Alexander and K.C. Ng, “Elimination of systematic error in subpixel accuracy centroid,” Optical Engineering 30(9), 1320–1331 (1991).
[Crossref]

Nishii, W.

Y. Yagi, W. Nishii, K. Yamazawa, and M. Yachida, “Rolling motion estimation for mobile robot by using omnidirectional image sensor hyperomnivision,” in Proceedings of the IEEE International Conference on Pattern Recognition (IEEE, 1996), pp. 946–950.

Pai, H.I.

Y.Y. Chen, K.M. Lan, H.I. Pai, J.H. Chuang, and C.Y. Yuan, “Robust light objects recognition based on computer vision,” in IEEE International Symposium on Pervasive Systems, Algorithms, and Networks (IEEE, 2009), pp. 508–514.

Pang, G.K.H.

G.K.H. Pang and H.H.S Liu, “LED location beacon system based on processing of digital images,” IEEE Transactions on Intelligent Transportation Systems 2(3), 135–150 (2001).
[Crossref]

Pimputkar, S.

S. Pimputkar, J.S. Speck, S.P. DenBaars, and S. Nakamura, “Prospects for LED lighting,” Nature Photonics 3(4), 180–182 (2009).
[Crossref]

Plank, T.

S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE2008), pp.82–86.

Pline, A.

M.P. Wernet and A. Pline, “Particle displacement tracking technique and Cramer-Rao lower bound error in centroid estimates from CCD imagery,” Experiments in Fluids 15(4), 295–307 (1993).

Rabinovich, W.

S. Das, H. Henniger, B. Epple, C.I. Moore, W. Rabinovich, R. Sova, and D. Young, “Requirements and challenges for tactical free-space lasercomm,” in Proceedings of Military Communications Conference (IEEE, 2008), pp. 1–10.

Rajatheva, N.

P. Dharmawansa, N. Rajatheva, and C. Tellambura, “Envelope and phase distribution of two correlated gaussian variables,” IEEE Transactions on Communications 57(4), 915–921 (2009).
[Crossref]

Refai, H.H.

W.L. Saw, H.H. Refai, and J.J. Sluss, “Free space optical alignment system using GPS,” Proc. SPIE 5712, 101–109 (2005).
[Crossref]

Rus, D.

M. Doniec, C. Detweiler, I. Vasilescu, and D. Rus, “Using optical communication for remote underwater robot operation,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 2010), pp. 4017–4022.

Rust, I.C.

I.C. Rust and H.H Asada, “A dual-use visible light approach to integrated communication and localization of underwater robots with application to non-destructive nuclear reactor inspection,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2012), pp. 2445–2450.

Rzasa, J.

T.C. Shen, R.J. Drost, J. Rzasa, B.M. Sadler, and C.C Davis, “Panoramic alignment system for optical wireless communication systems,” Proc. SPIE 9354, 93540M (2015).
[Crossref]

J. Rzasa, M.C. Ertem, and C.C. Davis, “Pointing, acquisition, and tracking considerations for mobile directional wireless communications systems,” Proc. SPIE 8874, 88740 (2013).
[Crossref]

Sadler, B.M.

T.C. Shen, R.J. Drost, J. Rzasa, B.M. Sadler, and C.C Davis, “Panoramic alignment system for optical wireless communication systems,” Proc. SPIE 9354, 93540M (2015).
[Crossref]

T.C. Shen, R.J. Drost, C.C. Davis, and B.M. Sadler, “Design of dual-link (wide- and narrow-beam) LED communication systems,” Optics Express 22(9), 11107–11118 (2014).
[Crossref] [PubMed]

Sarachik, K.B.

K.B. Sarachik, “Characterising an indoor environment with a mobile robot and uncalibrated stereo,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1989), pp. 984–989.

Saw, W.L.

W.L. Saw, H.H. Refai, and J.J. Sluss, “Free space optical alignment system using GPS,” Proc. SPIE 5712, 101–109 (2005).
[Crossref]

Schmitt, N.

S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE2008), pp.82–86.

Schuster, J.

S. Bloom, E. Korevaar, J. Schuster, and H. Willebrand, “Understanding the performance of free-space optics,” Journal of Optical Networking 2(6), 178–200 (2003).

Shafer, S.A.

L. Matthies and S.A. Shafer, “Error modeling in stereo navigation,” IEEE Journal of Robotics and Automation 3(3), 239–248 (1987).
[Crossref]

Shen, T.C.

T.C. Shen, R.J. Drost, J. Rzasa, B.M. Sadler, and C.C Davis, “Panoramic alignment system for optical wireless communication systems,” Proc. SPIE 9354, 93540M (2015).
[Crossref]

T.C. Shen, R.J. Drost, C.C. Davis, and B.M. Sadler, “Design of dual-link (wide- and narrow-beam) LED communication systems,” Optics Express 22(9), 11107–11118 (2014).
[Crossref] [PubMed]

Slater, D.C.

J.S. Morgan, D.C. Slater, J.G. Timothy, and E.B. Jenkins, “Centroid position measurements and subpixel sensitivity variations with the MAMA detector,” Applied Optics 28(6), 1178–1192 (1989).
[Crossref] [PubMed]

Sluss, J.J.

W.L. Saw, H.H. Refai, and J.J. Sluss, “Free space optical alignment system using GPS,” Proc. SPIE 5712, 101–109 (2005).
[Crossref]

Smolyaninov, I.

T.H. Ho, S. Trisno, I. Smolyaninov, S.D. Milner, and C.C. Davis, “Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks,” Proc. SPIE 5237, 147–158 (2004).
[Crossref]

Sova, R.

S. Das, H. Henniger, B. Epple, C.I. Moore, W. Rabinovich, R. Sova, and D. Young, “Requirements and challenges for tactical free-space lasercomm,” in Proceedings of Military Communications Conference (IEEE, 2008), pp. 1–10.

Speck, J.S.

S. Pimputkar, J.S. Speck, S.P. DenBaars, and S. Nakamura, “Prospects for LED lighting,” Nature Photonics 3(4), 180–182 (2009).
[Crossref]

Stanton, R.H.

R.H. Stanton, J.W. Alexander, E.W. Dennison, T.A. Glavich, and L.F. Hovland, “Optical tracking using charge-coupled devices,” Optical Engineering 26(9), 269930 (1987).
[Crossref]

Suga, Y.

J. Kim and Y. Suga, “An omnidirectional vision-based moving obstacle detection in mobile robot,” International Journal of Control Automation and Systems 5(6), 663–673 (2007).

Tellambura, C.

P. Dharmawansa, N. Rajatheva, and C. Tellambura, “Envelope and phase distribution of two correlated gaussian variables,” IEEE Transactions on Communications 57(4), 915–921 (2009).
[Crossref]

Terra, D.

N. Kumar, D. Terra, N. Lourenço, L.N. Alves, and R.L. Aguiar, “Visible light communication for intelligent transportation in road safety applications,” in Proceedings of IEEE Wireless Communications and Mobile Computing Conference (IEEE, 2011), pp. 1513–1518.

Timothy, J.G.

J.S. Morgan, D.C. Slater, J.G. Timothy, and E.B. Jenkins, “Centroid position measurements and subpixel sensitivity variations with the MAMA detector,” Applied Optics 28(6), 1178–1192 (1989).
[Crossref] [PubMed]

Tomaž, J.

S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE2008), pp.82–86.

Trisno, S.

T.H. Ho, S. Trisno, I. Smolyaninov, S.D. Milner, and C.C. Davis, “Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks,” Proc. SPIE 5237, 147–158 (2004).
[Crossref]

Tsuji, S.

H. Ishiguro, M. Yamamoto, and S. Tsuji, “Omni-directional stereo for making global map,” in Proceedings of IEEE Third International Conference on Computer Vision (IEEE, 1990), pp. 540–547.

Vasilescu, I.

M. Doniec, C. Detweiler, I. Vasilescu, and D. Rus, “Using optical communication for remote underwater robot operation,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 2010), pp. 4017–4022.

Wernet, M.P.

M.P. Wernet and A. Pline, “Particle displacement tracking technique and Cramer-Rao lower bound error in centroid estimates from CCD imagery,” Experiments in Fluids 15(4), 295–307 (1993).

Wilfert, O.

H. Henniger and O. Wilfert, “An introduction to free-space optical communications,” Radioengineering 19(2), 203–212 (2010).

Willebrand, H.

S. Bloom, E. Korevaar, J. Schuster, and H. Willebrand, “Understanding the performance of free-space optics,” Journal of Optical Networking 2(6), 178–200 (2003).

Willebrand, H.A.

H.A. Willebrand and B.S. Ghuman, “Fiber optics without fiber,” IEEE Spectrum 38(8), 40–45 (2001).
[Crossref]

Wolf, M.

M. Wolf and D. Kress, “Short-range wireless infrared transmission: the link budget compared to RF,” IEEE Wireless Communications 10(2), 8–14 (2003).
[Crossref]

Yachida, M.

K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional imaging with hyperboloidal projection,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 1993), pp. 1029–1034.

K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor hyperomni vision,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1995), pp. 1062–1067.

Y. Yagi, W. Nishii, K. Yamazawa, and M. Yachida, “Rolling motion estimation for mobile robot by using omnidirectional image sensor hyperomnivision,” in Proceedings of the IEEE International Conference on Pattern Recognition (IEEE, 1996), pp. 946–950.

Yagi, Y.

Y. Yagi, W. Nishii, K. Yamazawa, and M. Yachida, “Rolling motion estimation for mobile robot by using omnidirectional image sensor hyperomnivision,” in Proceedings of the IEEE International Conference on Pattern Recognition (IEEE, 1996), pp. 946–950.

K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor hyperomni vision,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1995), pp. 1062–1067.

K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional imaging with hyperboloidal projection,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 1993), pp. 1029–1034.

Yamamoto, M.

H. Ishiguro, M. Yamamoto, and S. Tsuji, “Omni-directional stereo for making global map,” in Proceedings of IEEE Third International Conference on Computer Vision (IEEE, 1990), pp. 540–547.

Yamazawa, K.

K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor hyperomni vision,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1995), pp. 1062–1067.

Y. Yagi, W. Nishii, K. Yamazawa, and M. Yachida, “Rolling motion estimation for mobile robot by using omnidirectional image sensor hyperomnivision,” in Proceedings of the IEEE International Conference on Pattern Recognition (IEEE, 1996), pp. 946–950.

K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional imaging with hyperboloidal projection,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 1993), pp. 1029–1034.

Yiannopoulos, K.

D.K. Borah, A.C. Boucouvalas, C.C. Davis, S. Hranilovic, and K. Yiannopoulos, “A review of communication-oriented optical wireless systems,” EURASIP Journal on Wireless Communications and Networking 2012(1), 1–28 (2012).
[Crossref]

Young, D.

S. Das, H. Henniger, B. Epple, C.I. Moore, W. Rabinovich, R. Sova, and D. Young, “Requirements and challenges for tactical free-space lasercomm,” in Proceedings of Military Communications Conference (IEEE, 2008), pp. 1–10.

Yuan, C.Y.

Y.Y. Chen, K.M. Lan, H.I. Pai, J.H. Chuang, and C.Y. Yuan, “Robust light objects recognition based on computer vision,” in IEEE International Symposium on Pervasive Systems, Algorithms, and Networks (IEEE, 2009), pp. 508–514.

Zettl, K.

S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE2008), pp.82–86.

Zheng, D.

D. Zheng, G. Chen, and J.A Farrell, “Navigation using linear photo detector arrays,” in Proceedings of IEEE International Conference on Control Applications (IEEE, 2013), pp. 533–538.

D. Zheng, K. Cui, B. Bai, G. Chen, and J.A. Farrell, “Indoor localization based on LEDs,” in Proceedings of International Conference on Control Applications (IEEE, 2011), pp. 573–578.

Applied Optics (1)

J.S. Morgan, D.C. Slater, J.G. Timothy, and E.B. Jenkins, “Centroid position measurements and subpixel sensitivity variations with the MAMA detector,” Applied Optics 28(6), 1178–1192 (1989).
[Crossref] [PubMed]

Biometrika (1)

D.V. Hinkley, “On the ratio of two correlated normal random variables,” Biometrika 56(3), 635–639 (1969).
[Crossref]

EURASIP Journal on Wireless Communications and Networking (1)

D.K. Borah, A.C. Boucouvalas, C.C. Davis, S. Hranilovic, and K. Yiannopoulos, “A review of communication-oriented optical wireless systems,” EURASIP Journal on Wireless Communications and Networking 2012(1), 1–28 (2012).
[Crossref]

Experiments in Fluids (1)

M.P. Wernet and A. Pline, “Particle displacement tracking technique and Cramer-Rao lower bound error in centroid estimates from CCD imagery,” Experiments in Fluids 15(4), 295–307 (1993).

IEEE Circuits and Systems Magazine (1)

S. Milner, J. Llorca, and C.C. Davis, “Autonomous reconfiguration and control in directional mobile ad hoc networks,” IEEE Circuits and Systems Magazine 9(2), 10–26 (2009).
[Crossref]

IEEE Journal of Robotics and Automation (1)

L. Matthies and S.A. Shafer, “Error modeling in stereo navigation,” IEEE Journal of Robotics and Automation 3(3), 239–248 (1987).
[Crossref]

IEEE Spectrum (1)

H.A. Willebrand and B.S. Ghuman, “Fiber optics without fiber,” IEEE Spectrum 38(8), 40–45 (2001).
[Crossref]

IEEE Transactions on Communications (1)

P. Dharmawansa, N. Rajatheva, and C. Tellambura, “Envelope and phase distribution of two correlated gaussian variables,” IEEE Transactions on Communications 57(4), 915–921 (2009).
[Crossref]

IEEE Transactions on Consumer Electronics (1)

T. Komine and M. Nakagawa, “Fundamental analysis for visible-light communication system using LED lights,” IEEE Transactions on Consumer Electronics 50(1), 100–107 (2004).
[Crossref]

IEEE Transactions on Intelligent Transportation Systems (1)

G.K.H. Pang and H.H.S Liu, “LED location beacon system based on processing of digital images,” IEEE Transactions on Intelligent Transportation Systems 2(3), 135–150 (2001).
[Crossref]

IEEE Wireless Communications (1)

M. Wolf and D. Kress, “Short-range wireless infrared transmission: the link budget compared to RF,” IEEE Wireless Communications 10(2), 8–14 (2003).
[Crossref]

International Journal of Computer Vision (1)

S. Baker and S.K. Nayar, “A theory of single-viewpoint catadioptric image formation,” International Journal of Computer Vision 35(2), 175–196 (1999).
[Crossref]

International Journal of Control Automation and Systems (1)

J. Kim and Y. Suga, “An omnidirectional vision-based moving obstacle detection in mobile robot,” International Journal of Control Automation and Systems 5(6), 663–673 (2007).

Journal of Optical Networking (1)

S. Bloom, E. Korevaar, J. Schuster, and H. Willebrand, “Understanding the performance of free-space optics,” Journal of Optical Networking 2(6), 178–200 (2003).

Nature Photonics (1)

S. Pimputkar, J.S. Speck, S.P. DenBaars, and S. Nakamura, “Prospects for LED lighting,” Nature Photonics 3(4), 180–182 (2009).
[Crossref]

Optical Engineering (2)

R.H. Stanton, J.W. Alexander, E.W. Dennison, T.A. Glavich, and L.F. Hovland, “Optical tracking using charge-coupled devices,” Optical Engineering 26(9), 269930 (1987).
[Crossref]

B.F. Alexander and K.C. Ng, “Elimination of systematic error in subpixel accuracy centroid,” Optical Engineering 30(9), 1320–1331 (1991).
[Crossref]

Optics Express (1)

T.C. Shen, R.J. Drost, C.C. Davis, and B.M. Sadler, “Design of dual-link (wide- and narrow-beam) LED communication systems,” Optics Express 22(9), 11107–11118 (2014).
[Crossref] [PubMed]

Pattern Recognition (1)

M. Fiala and A. Basu, “Robot navigation using panoramic tracking,” Pattern Recognition 37(11), 2195–2215 (2004).
[Crossref]

Proc. SPIE (8)

T.C. Shen, R.J. Drost, J. Rzasa, B.M. Sadler, and C.C Davis, “Panoramic alignment system for optical wireless communication systems,” Proc. SPIE 9354, 93540M (2015).
[Crossref]

T.J. Ho, S.D. Milner, and C.C. Davis, “Fully optical real-time pointing, acquisition, and tracking system for free space optical link,” Proc. SPIE 5712, 81–92 (2005).
[Crossref]

J. Rzasa, M.C. Ertem, and C.C. Davis, “Pointing, acquisition, and tracking considerations for mobile directional wireless communications systems,” Proc. SPIE 8874, 88740 (2013).
[Crossref]

B. Epple, “Using a GPS-aided inertial system for coarse-pointing of free-space optical communication terminals,” Proc. SPIE 6304, 630418 (2006).
[Crossref]

G. Lu, Y. Lu, T.P. Deng, and H. Liu, “Automatic alignment of optical-beam-based GPS for free-space laser communication system,” Proc. SPIE 5160, 432–438 (2004).
[Crossref]

W.L. Saw, H.H. Refai, and J.J. Sluss, “Free space optical alignment system using GPS,” Proc. SPIE 5712, 101–109 (2005).
[Crossref]

T.H. Ho, S. Trisno, I. Smolyaninov, S.D. Milner, and C.C. Davis, “Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks,” Proc. SPIE 5237, 147–158 (2004).
[Crossref]

S. Lee, “Pointing accuracy improvement using model-based noise reduction method,” Proc. SPIE 4635, 65–71 (2002).
[Crossref]

Radioengineering (1)

H. Henniger and O. Wilfert, “An introduction to free-space optical communications,” Radioengineering 19(2), 203–212 (2010).

Review of Scientific Instruments (1)

N. Bobroff, “Position measurement with a resolution and noise-limited instrument,” Review of Scientific Instruments 57(6), 1152–1157 (1986).
[Crossref]

Other (19)

D. Zheng, G. Chen, and J.A Farrell, “Navigation using linear photo detector arrays,” in Proceedings of IEEE International Conference on Control Applications (IEEE, 2013), pp. 533–538.

http://www.neovision.cz/prods/panoramic/h3s.html .

https://www.alliedvision.com/en/products/cameras/detail/Prosilica%20GC/1600H.html .

http://www.ledsupply.com/leds/luxeon-rebel-color-leds .

Y.Y. Chen, K.M. Lan, H.I. Pai, J.H. Chuang, and C.Y. Yuan, “Robust light objects recognition based on computer vision,” in IEEE International Symposium on Pervasive Systems, Algorithms, and Networks (IEEE, 2009), pp. 508–514.

S. Das, H. Henniger, B. Epple, C.I. Moore, W. Rabinovich, R. Sova, and D. Young, “Requirements and challenges for tactical free-space lasercomm,” in Proceedings of Military Communications Conference (IEEE, 2008), pp. 1–10.

N. Kumar, D. Terra, N. Lourenço, L.N. Alves, and R.L. Aguiar, “Visible light communication for intelligent transportation in road safety applications,” in Proceedings of IEEE Wireless Communications and Mobile Computing Conference (IEEE, 2011), pp. 1513–1518.

K.D. Langer and J. Grubor, “Recent developments in optical wireless communications using infrared and visible light,” in Proceedings of IEEE International Conference on Transparent Optical Networks, (IEEE, 2007) pp. 146–151.

S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE2008), pp.82–86.

K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional imaging with hyperboloidal projection,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 1993), pp. 1029–1034.

H. Ishiguro, M. Yamamoto, and S. Tsuji, “Omni-directional stereo for making global map,” in Proceedings of IEEE Third International Conference on Computer Vision (IEEE, 1990), pp. 540–547.

K.B. Sarachik, “Characterising an indoor environment with a mobile robot and uncalibrated stereo,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1989), pp. 984–989.

M. Doniec, C. Detweiler, I. Vasilescu, and D. Rus, “Using optical communication for remote underwater robot operation,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 2010), pp. 4017–4022.

I.C. Rust and H.H Asada, “A dual-use visible light approach to integrated communication and localization of underwater robots with application to non-destructive nuclear reactor inspection,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2012), pp. 2445–2450.

S. Lee and S. Jung, “Location awareness using angle-of-arrival based circular-PD-array for visible light communication,” in Proceedings of IEEE Asia-Pacific Conference on Communications (IEEE, 2012), pp. 480–485.

D. Zheng, K. Cui, B. Bai, G. Chen, and J.A. Farrell, “Indoor localization based on LEDs,” in Proceedings of International Conference on Control Applications (IEEE, 2011), pp. 573–578.

Y. Yagi, W. Nishii, K. Yamazawa, and M. Yachida, “Rolling motion estimation for mobile robot by using omnidirectional image sensor hyperomnivision,” in Proceedings of the IEEE International Conference on Pattern Recognition (IEEE, 1996), pp. 946–950.

K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor hyperomni vision,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1995), pp. 1062–1067.

R.J. Muirhead, Aspects of Multivariate Statistical Theory(John Wiley & Sons, 2009).



Figures (7)

Fig. 1. The hyperboloidal-mirror-camera system: (a) side view, defining the elevation angle θ, and (b) top view, defining the azimuth angle ϕ. The beacon is located at the point S, while the foci Fm and Fc are located at (0, 0, c) and (0, 0, −c), respectively.

Fig. 2. Elevation angle θ as a function of r, as given by Eq. (3). In this figure, we assume a = 23.4125 mm, b = 28.095 mm, and f = 8 mm, values chosen to be consistent with the experimental system presented in Section 4. An elevation angle of 0° corresponds to a horizontal vector pointing radially outward from Fm (see Fig. 1), while an elevation angle of 90° corresponds to a vector pointing from Fm towards Fc. The lower limit of the observable elevation angle for this particular mirror is θ = −16°.

Fig. 3. The experimental system, mounted on a gimbal.

Fig. 4. Example of an experimental difference image, where the beacon image region is enclosed in a white square and the inset is a mesh plot of the beacon image region. The radius r is used to calculate the elevation angle θ, which is approximately 0° in this example. The azimuth angle ϕ is approximately 40°.

Fig. 5. Histogram of the dark-pixel intensities of the difference image in Fig. 4. Dark pixels are those not illuminated by the beacon.

Fig. 6. Sample standard deviations in the angle-of-arrival estimates of azimuth ϕ and elevation θ as a function of range. Here, $\hat{\sigma}_\phi$ and $\hat{\sigma}_\theta$ are the sample standard deviations that result directly from the estimation algorithm, while $\tilde{\sigma}_\phi$ and $\tilde{\sigma}_\theta$ are the standard deviations in the angle estimates that result from taking the distribution of measurements in the image plane to be a circular Gaussian with variance $\sigma^2 = \hat{\sigma}_x^2 + \hat{\sigma}_y^2$.

Fig. 7. The top inset is a semilog plot of the mean signal strength $\bar{v}_i$ as a function of range; the bottom inset plots $\Delta v_i / \bar{v}_i$ as a function of range.
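
Taken together, Figs. 4 and 5 suggest the detection pipeline: form a difference image from a beacon-on and a beacon-off frame, take its brightest region as the beacon image, and use the remaining dark pixels to characterize the background noise. The following is a hedged Python sketch of that pipeline under our own assumptions (2D NumPy arrays for the two frames, a fixed square window); neither the function name nor the window size comes from the paper.

import numpy as np

def find_beacon_region(frame_on, frame_off, half_width=10):
    """Locate the beacon in a difference image (cf. Figs. 4 and 5)."""
    diff = frame_on.astype(float) - frame_off.astype(float)   # difference image
    peak = np.unravel_index(np.argmax(diff), diff.shape)      # brightest pixel
    # Mask out the beacon window; the remaining pixels are the dark pixels of Fig. 5.
    mask = np.ones(diff.shape, dtype=bool)
    r0, c0 = peak
    mask[max(r0 - half_width, 0):r0 + half_width + 1,
         max(c0 - half_width, 0):c0 + half_width + 1] = False
    return diff, peak, diff[mask]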

Tables (1)

Table 1. Uncertainty in range estimations.

Equations (19)

$$\frac{x^2 + y^2}{a^2} - \frac{z^2}{b^2} = -1, \qquad z > 0. \tag{1}$$

$$\phi = \begin{cases} \tan^{-1}(y/x), & x \geq 0 \\ \pi + \tan^{-1}(y/x), & x < 0. \end{cases} \tag{2}$$

$$\theta = \tan^{-1}\!\left[\frac{(b^2 + c^2)\sin\gamma_c - 2bc}{(b^2 - c^2)\cos\gamma_c}\right], \tag{3}$$

$$\gamma_c = \tan^{-1}\!\left(\frac{f}{\sqrt{x^2 + y^2}}\right). \tag{4}$$
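
Equations (2)–(4) define the mapping from a beacon detection at image-plane position (x, y) to its angle of arrival. The following is a minimal Python sketch of that mapping, assuming the mirror parameters a = 23.4125 mm, b = 28.095 mm, and focal length f = 8 mm quoted in the caption of Fig. 2, taking c = √(a² + b²) as the standard focal distance of the hyperboloid of Eq. (1) (consistent with the foci shown in Fig. 1), and assuming x and y are expressed in the same units as f; the variable names are ours, not the paper's.

import numpy as np

# Mirror and lens parameters from the caption of Fig. 2 (millimeters).
A = 23.4125          # hyperboloid parameter a
B = 28.095           # hyperboloid parameter b
F = 8.0              # camera focal length f
C = np.hypot(A, B)   # focal distance c = sqrt(a^2 + b^2), consistent with Fig. 1

def angle_of_arrival(x, y):
    """Map an image-plane point (x, y) to (azimuth phi, elevation theta) in radians."""
    # Eq. (2): azimuth; arctan2 reproduces the two quadrant cases up to a 2*pi wrap.
    phi = np.arctan2(y, x)
    # Eq. (4): angle of the ray entering the camera, from the image radius.
    r = np.hypot(x, y)
    gamma_c = np.arctan2(F, r)
    # Eq. (3): elevation of the ray arriving at the mirror focus F_m.
    num = (B**2 + C**2) * np.sin(gamma_c) - 2.0 * B * C
    den = (B**2 - C**2) * np.cos(gamma_c)
    theta = np.arctan(num / den)
    return phi, theta

# Example: a detection 1 mm from the image center along +x.
# phi, theta = angle_of_arrival(1.0, 0.0)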
$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu_x)^2}{2\sigma^2}} \tag{5}$$

$$f_Y(y) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y - \mu_y)^2}{2\sigma^2}}. \tag{6}$$

$$f_\Phi(\phi) = f_W(w)\,\frac{dw}{d\phi} \tag{7}$$

$$f_\Theta(\theta) = f_R(r)\,\frac{dr}{d\theta}. \tag{8}$$
$$f_W(w) = \frac{b(w)\,c(w)}{a^3(w)}\,\frac{1}{\sqrt{2\pi}\,\sigma^2}\left[2\Phi\!\left(\frac{b(w)}{a(w)}\right) - 1\right] + \frac{1}{a^2(w)\,\pi\,\sigma^2}\exp\!\left[-\frac{1}{2}\left(\frac{\mu_x^2 + \mu_y^2}{\sigma^2}\right)\right] \tag{9}$$

$$a(w) = \sqrt{\frac{w^2 + 1}{\sigma^2}} \tag{10}$$

$$b(w) = \frac{\mu_y w + \mu_x}{\sigma^2} \tag{11}$$

$$c(w) = \exp\!\left\{\frac{1}{2}\left[\frac{b^2(w)}{a^2(w)} - \left(\frac{\mu_x^2 + \mu_y^2}{\sigma^2}\right)\right]\right\} \tag{12}$$

$$\Phi(w) = \int_{-\infty}^{w} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{u^2}{2}\right) du. \tag{13}$$

$$f_\Phi(\phi) = f_W(w)\left|\frac{dw}{d\phi}\right|. \tag{14}$$
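
Equations (9)–(13) give the density of w = tan ϕ = y/x, i.e., Hinkley's density for the ratio of two normal random variables, specialized here to zero correlation and a common variance σ². Below is a direct numerical sketch of Eqs. (9)–(13); the integral at the end is only a sanity check that the reconstructed density normalizes to one.

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def ratio_pdf(w, mu_x, mu_y, sigma):
    """Density f_W(w) of the ratio w = y/x, Eqs. (9)-(13)."""
    a = np.sqrt((w**2 + 1.0) / sigma**2)            # Eq. (10)
    b = (mu_y * w + mu_x) / sigma**2                # Eq. (11)
    k = (mu_x**2 + mu_y**2) / sigma**2
    c = np.exp(0.5 * (b**2 / a**2 - k))             # Eq. (12)
    # Eq. (9); norm.cdf is the standard normal CDF Phi of Eq. (13).
    term1 = (b * c / a**3) * (2.0 * norm.cdf(b / a) - 1.0) / (np.sqrt(2.0 * np.pi) * sigma**2)
    term2 = np.exp(-0.5 * k) / (a**2 * np.pi * sigma**2)
    return term1 + term2

# Sanity check: the density should integrate to ~1.
total, _ = quad(ratio_pdf, -np.inf, np.inf, args=(3.0, 4.0, 1.0))
print(total)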
$$f_R(r) = \frac{r}{\sigma^2}\exp\!\left(-\frac{\mu_x^2 + \mu_y^2 + r^2}{2\sigma^2}\right) I_0\!\left(\frac{r\sqrt{\mu_x^2 + \mu_y^2}}{\sigma^2}\right) \tag{15}$$

$$f_\Theta(\theta) = f_R(r)\left|\frac{dr}{d\theta}\right|, \tag{16}$$

$$f_\Theta(\theta) = f_R(r)\left|\frac{d\theta}{dr}\right|^{-1}. \tag{17}$$
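
Equation (15) is a Rician density for the image radius r, with noncentrality √(μx² + μy²). The sketch below evaluates it in a numerically stable way by pairing the exponentially scaled Bessel function (scipy.special.i0e) with the Gaussian factor, which avoids overflow of I₀ at large arguments; Eq. (17) then gives f_Θ by a change of variables once r(θ) is inverted from Eq. (3).

import numpy as np
from scipy.special import i0e   # i0e(z) = exp(-z) * I_0(z)

def radius_pdf(r, mu_x, mu_y, sigma):
    """Rician density f_R(r) of the image radius, Eq. (15)."""
    nu = np.hypot(mu_x, mu_y)   # noncentrality sqrt(mu_x^2 + mu_y^2)
    z = r * nu / sigma**2
    # (r/sigma^2) exp(-(nu^2 + r^2)/(2 sigma^2)) I_0(z),
    # folded into exp(-(r - nu)^2 / (2 sigma^2)) * i0e(z) for stability.
    return (r / sigma**2) * i0e(z) * np.exp(-0.5 * ((r - nu) / sigma) ** 2)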
$$\hat{x} = \frac{\sum_{j=1}^{M}\sum_{i=1}^{N} a_{ij}\, x_{ij}}{\sum_{j=1}^{M}\sum_{i=1}^{N} a_{ij}} \tag{18}$$
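
Equation (18) is the intensity-weighted centroid over the M × N pixel window around the beacon image (written here for the x coordinate; the y coordinate is analogous). A one-line NumPy sketch, with array names of our choosing:

import numpy as np

def centroid(intensities, x_coords):
    """Intensity-weighted centroid, Eq. (18): a_ij are pixel intensities, x_ij pixel positions."""
    return np.sum(intensities * x_coords) / np.sum(intensities)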
$$s_i = \frac{1}{2}\left(\left|\frac{\bar{v}_i - \bar{v}_{i-1}}{r_i - r_{i-1}}\right| + \left|\frac{\bar{v}_i - \bar{v}_{i+1}}{r_i - r_{i+1}}\right|\right) \tag{19}$$
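
Equation (19) estimates the local sensitivity of the mean signal strength to range at sample i as the average magnitude of the two one-sided difference quotients. A direct sketch for interior samples:

import numpy as np

def slope_magnitude(v_bar, ranges, i):
    """Average one-sided slope magnitude s_i of mean signal vs. range, Eq. (19)."""
    left = abs((v_bar[i] - v_bar[i - 1]) / (ranges[i] - ranges[i - 1]))
    right = abs((v_bar[i] - v_bar[i + 1]) / (ranges[i] - ranges[i + 1]))
    return 0.5 * (left + right)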
