## Abstract

We present a catadioptric beacon localization system that can provide mobile network nodes with omnidirectional situational awareness of neighboring nodes. In this system, a receiver composed of a hyperboloidal mirror and camera is used to estimate the azimuth, elevation, and range of an LED beacon. We provide a general framework for understanding the propagation of error in the angle-of-arrival estimation and then present an experimental realization of such a system. The situational awareness provided by the proposed system can enable the alignment of communication nodes in an optical wireless network, which may be particularly useful in addressing RF-denied environments.

© 2016 Optical Society of America

## 1. Introduction

Although wireless communication has traditionally been dominated by radio frequency (RF) technology, increasingly there is interest in using optical wireless (OW) technology to serve as an adjunct to RF in certain applications [1]. Among the advantages of OW communication over existing wireless RF communication are its access to a wide, unregulated spectrum and its relative immunity to eavesdropping and jamming [2]. OW systems may be especially useful in “RF-denied” environments, in which RF communication is either prohibited or undesirable. This may include settings such as hospital rooms that contain electromagnetically-sensitive equipment or tactical military environments.

Interest in OW systems has been especially keen in light of the recent maturation of light-emitting diode (LED) technology. There are predictions that LEDs will become the dominant lighting source [3], used in applications from interior lighting to traffic lights. This emerging ubiquity of LED technology presents an opportunity to utilize LEDs as a means for implementing OW communication systems, which may include indoor local area networks [4], smart transportation systems [5], and communication between mobile platforms such as vehicles or robots [6].

A challenge presented by OW communication is that it often requires a higher concentration of power at the receiver than RF links because of the fundamental differences in the detection mechanisms of the two technologies [1]. Typically, optical receivers are orders of magnitude less sensitive than their RF counterparts [2]. Establishing optical links beyond very short ranges thus often requires the transmitted energy to be directed towards the receiver so that the concentration of received power is sufficient [1]. The consequent alignment demands present a major challenge for OW links, especially with mobile nodes [7–9]. Even in stationary systems in which transceivers are nominally fixed atop buildings, building sway can present a challenge [10, 11].

Creating and maintaining optical links between mobile nodes thus requires individual nodes to have constantly updated awareness of the locations of neighboring nodes. This could be achieved, for example, with nodes that share GPS information via RF links [12–17]. However, sharing of locations via RF links may not be feasible in RF-denied settings, or the nodes may not have precise self-localization information in a common frame of reference. This drawback motivated our previous work [18], in which we explored the use of LED-based communication links with wide beams and relaxed alignment constraints as an alternative means of addressing the alignment challenges of OW links.

Expanding upon this interest in using all-optical means to address the alignment demands of OW systems, we study the application of an imaging optical system [19, 20] to provide optical wireless nodes with location information of neighboring nodes. In this system, a curved mirror and a camera constitute an imaging receiver used to estimate the angle-of-arrival of light-emitting sources (beacons) placed on neighboring nodes. Such a system maintains a 360° field of view in azimuth without the need for mechanical scanning [21–23]. Equipped with such a system, a given node can estimate the angular bearings of nearby cooperative nodes, enabling the alignment of OW links. The ranges to nearby nodes can also be estimated with such a system. This information could be used, for example, to estimate the data rate achievable in an optical link [18]. Applications may include the use of such a system to align OW links between robots [24, 25], vehicles [6], and other platforms. And in addition to utilization in the alignment of point-to-point optical links, such a device could be used in LED-based indoor positioning systems [26, 27].

In this work, we describe the geometry and operating principles of this beacon localization system and expand upon the work in [20] to develop a general analytical model for propagation of Gaussian error in the system and the effect on angle-of-arrival estimation. While the type of errors present in any given implementation may vary considerably, this analytical model may serve as a useful first-order approximation for system modeling and performance prediction. We then present an experimental realization of a catadioptric system and measure the estimation error performance of the angles-of-arrival and range of an LED beacon.

## 2. System geometry

We propose the use of a rotationally symmetric curved mirror and a camera to act as a means of providing OW links with the omnidirectional awareness necessary for localization of nearby nodes. Systems that combine the use of refractive and reflective components are known as catadioptric systems, and their use to provide expanded fields of view is analyzed in [28]. While a wide field of view can be provided by many types of curved mirrors, we focus specifically on hyperboloidal mirrors. Such mirrors can provide geometrically correct perspective images from a single viewpoint [28], and have been used in previous research to provide mobile robots with knowledge of obstacles, rolling, and swaying by optical flow analysis [19,29–32]. Within a Cartesian coordinate system (*x*, *y*, *z*), the surface of a hyperboloid mirror is described by

$\frac{x^{2}+y^{2}}{a^{2}}-\frac{z^{2}}{b^{2}}=-1.$

The parameters *a* and *b* parameterize the shape of the mirror. One of the foci of the hyperboloid, denoted *F*_{m}, lies on the *z* axis at (0,0,*c*), where $c=\sqrt{{a}^{2}+{b}^{2}}$. The point *F*_{c} at (0,0,−*c*) is the opposite focal point of the “other half” of the hyperboloid, which is not manifested as a mirror surface. A schematic of this geometry is shown in Fig. 1, which defines azimuth *ϕ* and elevation *θ*.

In this system, a ray originating from a source at point *S* directed towards *F*_{m} is reflected by the mirror and directed towards *F*_{c}, intersecting the image plane at (*x*, *y*). To find the value of *ϕ* that corresponds to the source that appears on the image plane at (*x*, *y*), we use the relation [30]

$\tan\phi = \frac{y}{x}.$

The image plane coordinates of the source can also be used to calculate the elevation angle *θ*, defined in Fig. 1. In particular, *θ* depends on the image-plane radius $r\equiv \sqrt{{x}^{2}+{y}^{2}}$, the mirror parameters *a* and *b*, and the camera focal length *f*. In Fig. 2, we plot the dependence of the elevation angle *θ* on the radius *r* assuming *a* = 23.4125 mm, *b* = 28.095 mm, and *f* = 8 mm.
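As a concrete illustration, the azimuth computation can be sketched in a few lines (a minimal sketch; the function names and the use of `atan2` for quadrant disambiguation are our own, not from the paper):

```python
import math

# Mirror parameters used in the paper's example plot
a = 23.4125e-3                 # m
b = 28.095e-3                  # m
c = math.sqrt(a**2 + b**2)     # focus F_m lies at (0, 0, c)

def azimuth(x, y):
    """Azimuth (radians) of a source from its image-plane coordinates.

    The geometry gives tan(phi) = y/x; atan2 resolves the quadrant, which
    is what preserves the full 360-degree field of view in azimuth.
    """
    return math.atan2(y, x)

def image_radius(x, y):
    """Radius r in the image plane; together with f, it determines elevation."""
    return math.hypot(x, y)
```

Using `atan2` rather than `atan(y/x)` avoids the division-by-zero and quadrant ambiguities that arise for sources behind or beside the receiver.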

## 3. Propagation of Gaussian error in angle estimation

Given the geometry of this catadioptric system, knowledge of the location (*x, y*) of a feature of interest (e.g., the beacon of a neighboring node) in the image plane can be used to calculate its angular bearing (*θ*, *ϕ*). In practice, the methods of estimating *x* and *y* are quite varied, and the appropriate model for the noise in this estimation depends strongly on the estimation algorithm and the properties of the particular hardware implementation. We construct an analytical model for the case of Gaussian noise in the estimation of the *x* and *y* coordinates, as this noise model is commonly used in computer vision research and may serve as a first-order approximation for other forms of noise [33]. In our model, we assume that measurements of *x* and *y* follow independent Gaussian distribution functions *f*_{X}(*x*) and *f*_{Y}(*y*), respectively:

$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left[-\frac{(x-\mu_x)^{2}}{2\sigma^{2}}\right], \qquad f_Y(y) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left[-\frac{(y-\mu_y)^{2}}{2\sigma^{2}}\right].$

Here, (*x, y*) = (*μ*_{x}, *μ*_{y}) is defined as the location of the beacon image, while *σ* is a measure of the noise in the measurement of the beacon image location. To describe the noise in the estimation of *ϕ* and *θ* that results from noise in the measurements of *x* and *y*, we define random variables Φ and Θ and corresponding probability distributions *f*_{Φ}(*ϕ*) and *f*_{Θ}(*θ*). In our model, we assume that *f*_{XY}(*x, y*) = *f*_{X}(*x*) *f*_{Y}(*y*).

Eqs. (2) and (3) show that *ϕ* can be expressed as a function of *w ≡ y/x*, and *θ* can be expressed as a function of *r*. We define random variables *W* and *R* with corresponding probability distributions *f*_{W}(*w*) and *f*_{R}(*r*), respectively. We can use the general relation between two random variables [34] to relate *W* and *R* to Θ and Φ.

Given Eqs. (5) and (6), the probability density function *f*_{W}(*w*) is given by the ratio distribution derived in [35]. As *ϕ* depends exclusively on *w*, *f*_{W}(*w*) can be used to solve for *f*_{Φ}(*ϕ*). Following Eq. (2), *w* = tan(*ϕ*) and d*w/*d*ϕ* = sec^{2}(*ϕ*). Thus, we can compute *f*_{Φ}(*ϕ*) given values of *μ*_{x}, *μ*_{y}, and *σ*.
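The propagation of circular Gaussian centroid noise into azimuth error can also be checked numerically. The following sketch (with assumed values for *μ*_{x}, *μ*_{y}, and *σ*; not the paper's data) draws Gaussian samples of the centroid and compares the spread of the resulting azimuth estimates against the small-noise approximation *σ*_{ϕ} ≈ *σ*/*ρ*, where *ρ* is the radial distance of the beacon image:

```python
import math
import random
import statistics

random.seed(0)
mu_x, mu_y, sigma = 80.0, 60.0, 0.5   # assumed image-plane values (pixels)
rho = math.hypot(mu_x, mu_y)          # radial distance of the beacon image

# Sample centroid estimates (x, y) ~ N(mu_x, sigma^2) x N(mu_y, sigma^2)
# and map each through phi = atan2(y, x) to an azimuth estimate.
phis = [math.atan2(random.gauss(mu_y, sigma), random.gauss(mu_x, sigma))
        for _ in range(100_000)]
sigma_phi = statistics.stdev(phis)

# For sigma << rho, only the noise component tangential to the radius
# perturbs phi, so sigma_phi should be close to sigma / rho.
```

This kind of Monte Carlo check is a useful sanity test of the closed-form model before fitting it to measured data.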

The estimation of the elevation angle *θ* can be analyzed similarly. Eqs. (3) and (4) show that the dependence of the elevation angle on *x* and *y* can be expressed as a dependence on *r*, the radius in the image plane, with a corresponding random variable *R*. Given our model for *X* and *Y*, the distribution for *R* is given by [36]:

$f_R(r) = \frac{r}{\sigma^{2}}\exp\left(-\frac{r^{2}+\mu_x^{2}+\mu_y^{2}}{2\sigma^{2}}\right)I_0\left(\frac{r\sqrt{\mu_x^{2}+\mu_y^{2}}}{\sigma^{2}}\right),$

where *I*_{0} is the zeroth-order modified Bessel function of the first kind.

Thus *f*_{R}(*r*) can be used to solve for the distribution of the elevation angle *f*_{Θ}(*θ*) using the relation

$f_{\Theta}(\theta) = f_R(r)\left|\frac{\mathrm{d}r}{\mathrm{d}\theta}\right|.$

It follows that differentiation of Eq. (3) with respect to *r*, together with this relation, can be used to solve for *f*_{Θ}(*θ*). While the performance of experimental systems will be impacted by various error sources and depends strongly on the angle-of-arrival estimation algorithm, this analytical understanding of the propagation of Gaussian error may serve as a useful first-order approximation for modeling cumulative error effects in general.
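Similarly, the Rician behavior of the radius *R* can be verified with a quick simulation (illustrative parameter values of our own choosing), using the identity E[*R*²] = *μ*_{x}² + *μ*_{y}² + 2*σ*² for the Rice distribution:

```python
import math
import random
import statistics

random.seed(2)
mu_x, mu_y, sigma = 3.0, 4.0, 1.0

# R = sqrt(X^2 + Y^2) for independent Gaussian X and Y follows a Rice
# distribution; its second moment is mu_x^2 + mu_y^2 + 2*sigma^2 = 27.
rs = [math.hypot(random.gauss(mu_x, sigma), random.gauss(mu_y, sigma))
      for _ in range(200_000)]
mean_r2 = statistics.fmean(r * r for r in rs)
```

With 200,000 samples the empirical second moment agrees with the analytical value to well within sampling error.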

## 4. Experimental implementation

To study the use of this catadioptric system for localization of beacons, we constructed an experimental system, shown in Fig. 3, with a commercially available hyperboloidal mirror [37] and camera [38]. The camera (Prosilica GC1600H) has 1620×1220 resolution, and its lens has an 8 mm focal length. The base of the mirror is 6 cm in diameter. As shown in Fig. 3, the distance from the base of the mirror to the top of the camera is approximately 16 cm. The relatively compact size of the system allows for mounting onto mobile platforms such as robots, and such catadioptric systems have been studied for robot navigation in [19, 29–32]. To calibrate and align the system, we mounted it onto a gimbal capable of precise rotation in azimuth and elevation. This camera-mirror system was used to receive signals from a red LED beacon (Luxeon Rebel - Endor Star) [39]. The camera sensor is fitted with a Bayer filter for color image processing. The filter pattern is such that 1/4 of the pixels are dedicated to detecting blue light, 1/4 to detecting red light, and 1/2 to detecting green light. Each pixel reports an 8-bit intensity value. Thus the system observes the beacon using only the 1/4 of the total pixels that are designed to detect red light. Despite this reduction in resolution, color-specific detection could be one of many ways to identify multiple beacons simultaneously. All experiments using this system were performed in an indoor hallway approximately 70 m in length, with the beacon pointed directly at the mirror. The beacon was at an elevation angle of 0° relative to the mirror.

While there are many methods for isolating a beacon against the background, in our experiments we implement a simple on-off modulation to drive the LED beacon and subtract consecutive “on” and “off” frames to create a difference image [40]. The difference image is mostly dark, except for the pixels illuminated by the beacon. The LED is driven with a 350 mA current during “on” frames, and no current is applied during the “off” frames.

An example of a difference image is shown in Fig. 4. In the figure, the spot corresponds to a beacon at approximately 40° azimuth and 0° elevation. The inset shows a mesh plot of the pixels illuminated by the beacon. In our setup, the images captured by the camera and the modulation of the LED were synchronized via coaxial cable, enabling controlled experimental study of angle estimation accuracy. We use this synchronized algorithm to study the behavior of the system in static scenarios, in which neither the beacon nor the catadioptric system is moving. However, in general, asynchronous techniques could be implemented [41]. In moving scenarios, the differencing processes used to isolate a beacon suffer interference from motion of objects in the field of view, as well as motion of the receiver itself. Some methods for addressing these challenges are median filtering and the use of colored beacons and colored filters to isolate the beacon from the environment. These techniques are discussed in detail in references such as [41]. Other works that have utilized LED beacons for extraction of location information include [27, 40, 42].
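The differencing step can be sketched as follows, using synthetic frames (the scene, beacon position, and noise levels here are invented for illustration; the 10-count threshold is the one described in Section 4.1):

```python
import random

random.seed(1)
W = H = 64
THRESHOLD = 10  # pixel-intensity threshold from Section 4.1

# Synthetic static scene plus small frame-to-frame sensor noise
scene = [[random.randint(0, 19) for _ in range(W)] for _ in range(H)]
off_frame = [[scene[i][j] + random.randint(0, 2) for j in range(W)] for i in range(H)]
on_frame = [[scene[i][j] + random.randint(0, 2) for j in range(W)] for i in range(H)]

# A 3x3 beacon spot appears only in the "on" frame
for i in range(30, 33):
    for j in range(40, 43):
        on_frame[i][j] += 120

# Difference image: dark except where the beacon illuminates pixels;
# the threshold suppresses residual dark-pixel noise.
diff = [[on_frame[i][j] - off_frame[i][j] for j in range(W)] for i in range(H)]
lit = [(i, j) for i in range(H) for j in range(W) if diff[i][j] >= THRESHOLD]
```

In this sketch only the nine beacon pixels survive the threshold; the static background cancels in the subtraction, leaving only sensor noise below the threshold.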

#### 4.1. Dark pixels

The system shows frame-to-frame variations even when observing unchanging scenes. These variations arise due to small response variations before frame differencing. The resulting noise can be observed by examining the “dark pixels” of the difference images, away from the pixels illuminated by the beacon (see Fig. 4). Figure 5 shows a typical histogram of the pixel intensities of the dark portions of a difference image. The mean of the pixel intensities in the histogram is *μ*_{I} ≈ 1.52 and the sample standard deviation is *σ*_{I} ≈ 1.56. To reduce the effect of this type of noise on the angle estimation, we ignore any pixels below a threshold. The camera detector yields an intensity value (0 to 255) per pixel, and in the data presented in this paper, the threshold imposed is a pixel intensity of 10.

#### 4.2. Angles-of-arrival estimation

A reasonable first step in estimating the angle of arrival is the estimation of the location (*x, y*) of the beacon in the image plane. There are many approaches to estimating (*x, y*) from the information in a difference image; we take the centroid as our location estimate. This approach has been utilized frequently as a method of estimating the location of an object in an image [43–48] and has the practical appeal of computational simplicity. For an *M*-by-*N*-pixel window of interest, each of the pixels has an x-coordinate *x*_{ij} and an intensity *a*_{ij}. Here, $\widehat{x}$ (the estimate of the beacon image’s *x*-coordinate) is defined as

$\widehat{x} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}a_{ij}x_{ij}}{\sum_{i=1}^{M}\sum_{j=1}^{N}a_{ij}},$

and *ŷ* is defined similarly. Here, *M* and *N* define a minimum bounding rectangle that encloses the illuminated region. Using the coordinates of the centroid in the image plane, we then use Eqs. (2) and (3) to calculate the estimates of the angles of arrival. The algorithm can be summarized as (1) capturing a frame with the LED beacon on, (2) capturing a frame with the LED beacon off, (3) subtracting the frames, (4) applying a threshold, (5) calculating the centroid of the pixels within the window of interest, and (6) transforming the centroid into an angle-of-arrival estimate.
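A minimal implementation of the intensity-weighted centroid (step 5) might look like this (a hypothetical helper of our own, not the paper's code):

```python
def centroid(pixels):
    """Intensity-weighted centroid of a thresholded window of interest.

    pixels: iterable of (x, y, intensity) triples for the pixels that
    survived thresholding. Returns the location estimate (x_hat, y_hat).
    """
    pixels = list(pixels)
    total = sum(a for _, _, a in pixels)
    x_hat = sum(a * x for x, _, a in pixels) / total
    y_hat = sum(a * y for _, y, a in pixels) / total
    return x_hat, y_hat
```

For a symmetric spot the centroid falls at its center; brighter pixels pull the estimate toward them, which is what gives the method its sub-pixel resolution.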

Due to sources of noise in the system, we observe small variations in the estimates of *x* and *y*, even when the receiver (mirror and camera) and beacon are fixed in orientation and position. This variation may be due to a variety of physical effects, including instability in the LED brightness and sensor noise in the camera. We measured these variations as a function of range from a set of 100 difference images at each range. Here, we define range as the distance between the source at *S* and the focal point *F*_{m}. In general, the variation in centroid estimation (in both *x* and *y*) is non-Gaussian. For each subset of 100 measurements taken at each range, we define sample standard deviations in the centroid estimation in *x* and *y* as ${\widehat{\mathrm{\sigma}}}_{\mathrm{x}}$ and ${\widehat{\mathrm{\sigma}}}_{\mathrm{y}}$, respectively. This variation in centroid estimation results in variation in angle estimates, as the location in the image plane is related to the angles-of-arrival via Eqs. (2) and (3). We define the resulting sample standard deviations in the estimation of *ϕ* and *θ* as ${\widehat{\mathrm{\sigma}}}_{\varphi}$ and ${\widehat{\mathrm{\sigma}}}_{\theta}$, respectively, and plot them as a function of range in Fig. 6. As the signal becomes weaker with range, the variation in the estimates of the angle of arrival generally increases.

To examine the fidelity of the Gaussian error model developed in Section 3 in modeling the error characterized by ${\widehat{\mathrm{\sigma}}}_{\mathrm{x}}$ and ${\widehat{\mathrm{\sigma}}}_{\mathrm{y}}$, we define $\mathrm{\sigma}\equiv \sqrt{{\widehat{\mathrm{\sigma}}}_{\mathrm{x}}^{2}+{\widehat{\mathrm{\sigma}}}_{\mathrm{y}}^{2}}$. If we take the variation in centroid estimation to be a circular Gaussian with variance *σ*^{2}, we can define distributions of the angular estimates *f*_{Θ}(*θ*) and *f*_{Φ}(*ϕ*) using the model developed in Section 3. The variation in these distributions can be characterized by their standard deviations ${\widehat{\mathrm{\sigma}}}^{\prime}{}_{\theta}$ and ${\widehat{\mathrm{\sigma}}}^{\prime}{}_{\varphi}$, which are plotted as a function of range in Fig. 6. Although the variation in centroid estimates in the image plane is typically non-Gaussian, we observe that modeling this noise as a circular Gaussian yields reasonable results as a first-order approximation of the consequent error in angular estimation.

#### 4.3. Range estimation

Given our particular implementation, a simple and straightforward method for range estimation utilizes the observed signal strength. We define the signal strength as the sum of the reported pixel values of the pixels within the centroiding window of the difference image. In general, the signal strength monotonically decreases with range, and this one-to-one mapping from signal strength to range allows for the possibility of using signal strength observations to create range estimates. The exact dependence of signal strength on range is a function of many parameters, including elevation angle and system hardware parameters (e.g., camera sensitivity and camera exposure time, beacon brightness, etc.). However, if all these parameters are known, then the dependence of signal strength on range can be specified, and range can be estimated using signal strength observations. Such signal-strength-based techniques could also be used to estimate range to individual nodes; in such an application scenario, signal strength would be estimated for each beacon, as opposed to observing only the aggregate signal power.
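As a sketch of how a calibrated signal-strength-to-range mapping could be inverted (the calibration curve v(r) = 5000/r² below is invented for illustration; any monotonically decreasing curve works the same way):

```python
import bisect

# Hypothetical calibration: mean signal strength recorded at known ranges
cal_r = [7.6 * (i + 1) for i in range(9)]      # ranges, metres
cal_v = [5000.0 / r**2 for r in cal_r]         # assumed signal strengths

def estimate_range(v):
    """Invert the monotonically decreasing calibration by interpolation."""
    vs = list(reversed(cal_v))   # ascending signal strengths for bisect
    rs = list(reversed(cal_r))   # corresponding ranges
    k = min(max(bisect.bisect_left(vs, v), 1), len(vs) - 1)
    # Linear interpolation between the bracketing calibration points
    t = (v - vs[k - 1]) / (vs[k] - vs[k - 1])
    return rs[k - 1] + t * (rs[k] - rs[k - 1])
```

Because the mapping is one-to-one, a single signal-strength observation determines a unique range estimate between the calibrated points.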

The precision of such a range estimation method is limited by the repeatability of signal strength observations (which is dictated by factors such as pixel noise and the stability of the beacon output). To assess the utility of signal strength as a proxy for range, we recorded observations of signal strength at nine different ranges *r*_{i}, in increments of 7.6 m. At the *i*th range, the receiver (mirror and camera) and beacon were fixed in orientation and position, and 100 observations of signal strength were taken. The *i*th subset of measurements yields a mean ${\overline{v}}_{i}$ and sample standard deviation Δ*v*_{i}. These data are shown in Fig. 7, in which the top inset plots the mean signal strength ${\overline{v}}_{i}$ against range *r*_{i}. The sensitivity of signal strength to range, which we define as the steepness of the curve underlying the data points, generally decreases with range. The bottom portion of the figure describes the repeatability of the measurements. The ratio of sample standard deviation to measured mean signal strength $(\Delta {v}_{i}/{\overline{v}}_{i})$ grows from less than 1% at short ranges to about 5.5% at the longest range examined (67 m).

In a calibrated system, the dependence of signal strength on range is known empirically, and thus range can be estimated using subsequent measurements of signal strength. At any particular range, the precision of estimation is a function of the variability (Δ*v*_{i}) in the observations of signal strength and the sensitivity of range to signal strength. To estimate the precision achievable using this estimation method, we estimate the sensitivity of signal strength at a range *r*_{i} as

$s_i = \left|\frac{{\overline{v}}_{i+1}-{\overline{v}}_{i-1}}{r_{i+1}-r_{i-1}}\right|.$

This is an empirical approximation of the steepness of the curve underlying the points sampled at ranges *r*_{i}. Combined with the stability estimated by Δ*v*_{i}, we construct an estimate of the precision in range estimation given by Δ*r*_{i} ≡ *s*_{i}^{−1} Δ*v*_{i}. The values of Δ*r*_{i} evaluated using our system are shown in Table 1, for the middle seven of the nine ranges studied. The sensitivity *s*_{i} is undefined for the first (*i* = 1) and last (*i* = 9) ranges studied.
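The central-difference sensitivity and the resulting precision estimate can be sketched as follows (the signal-strength and repeatability values are invented for illustration, not the measurements of Table 1):

```python
# Hypothetical calibration data: nine ranges in 7.6 m increments
ranges = [7.6 * (i + 1) for i in range(9)]      # r_i, metres
v_bar = [5000.0 / r**2 for r in ranges]         # assumed mean signal strengths
dv = [0.01 * v for v in v_bar]                  # assumed repeatability (1%)

# s_i: central-difference estimate of |dv/dr| at r_i; undefined at the
# endpoints, matching the seven middle entries reported in Table 1.
s = {}
dr = {}
for i in range(1, 8):
    s[i] = abs((v_bar[i + 1] - v_bar[i - 1]) / (ranges[i + 1] - ranges[i - 1]))
    dr[i] = dv[i] / s[i]    # precision estimate: a flatter curve gives larger dr
```

Even with constant fractional repeatability, the flattening of the curve at long range degrades the achievable range precision, as the paper observes.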

The table shows that this simple method for ranging can yield sub-meter precision except at the longest range, where the signal is weakest. At long ranges, the relative flatness (small *s*_{i}) of the curve increases the uncertainty Δ*r*_{i} of the range estimation beyond one meter. Ranging precision on the order of one meter would be useful in many applications, including optical wireless communications. For instance, a transmitter could use this information to determine the minimum transmission power required to achieve a desired data rate. In general, the ranging precision achievable with this catadioptric system depends on the particular hardware parameters of the system, and the estimation could be improved with more sophisticated algorithms. For example, multi-frame integration could enhance SNR and potentially improve range estimation.

## 5. Conclusion

We have presented an all-optical means of providing nodes in a network with situational awareness of neighboring nodes, a capability that could be especially useful for OW systems operating in RF-denied environments. In this system, a receiver composed of a hyperboloidal mirror and camera is used to estimate the azimuth, elevation, and range of an LED beacon. We developed a general framework for understanding the propagation of Gaussian error in angle-of-arrival estimation and then presented an experimental realization of such a system. For this experimental system, we used a computationally simple algorithm for estimating angles-of-arrival and range and assessed the error and repeatability of such measurements. We believe that systems such as these can provide OW nodes with situational awareness of multiple neighboring nodes and support operation of OW systems in RF-denied environments. While the experiments discussed here rely on frame synchronization and explore static scenarios, we consider the exploration of dynamic application scenarios to be an interesting topic for future study.

## Acknowledgments

Research at the University of Maryland was supported by U.S. Army Research Office (ARO) under grant number W911NF-13-1-0003.

## References and links

**1. **D.K. Borah, A.C. Boucouvalas, C.C. Davis, S. Hranilovic, and K. Yiannopoulos, “A review of communication-oriented optical wireless systems,” EURASIP Journal on Wireless Communications and Networking **2012**(1), 1–28 (2012). [CrossRef]

**2. **M. Wolf and D. Kress, “Short-range wireless infrared transmission: the link budget compared to RF,” IEEE Wireless Communications **10**(2), 8–14 (2003). [CrossRef]

**3. **S. Pimputkar, J.S. Speck, S.P. DenBaars, and S. Nakamura, “Prospects for LED lighting,” Nature Photonics **3**(4), 180–182 (2009). [CrossRef]

**4. **T. Komine and M. Nakagawa, “Fundamental analysis for visible-light communication system using LED lights,” IEEE Transactions on Consumer Electronics **50**(1), 100–107 (2004). [CrossRef]

**5. **N. Kumar, D. Terra, N. Lourenço, L.N. Alves, and R.L. Aguiar, “Visible light communication for intelligent transportation in road safety applications,” in Proceedings of IEEE Wireless Communications and Mobile Computing Conference (IEEE, 2011), pp. 1513–1518.

**6. **K.D. Langer and J. Grubor, “Recent developments in optical wireless communications using infrared and visible light,” in Proceedings of IEEE International Conference on Transparent Optical Networks (IEEE, 2007), pp. 146–151.

**7. **S.S. Muhammad, T. Plank, E. Leitgeb, A. Friedl, K. Zettl, J. Tomaž, and N. Schmitt, “Challenges in establishing free space optical communications between flying vehicles,” in Proceedings of IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (IEEE, 2008), pp. 82–86.

**8. **H. Henniger and O. Wilfert, “An introduction to free-space optical communications,” Radioengineering **19**(2), 203–212 (2010).

**9. **S. Das, H. Henniger, B. Epple, C.I. Moore, W. Rabinovich, R. Sova, and D. Young, “Requirements and challenges for tactical free-space lasercomm,” in Proceedings of Military Communications Conference (IEEE, 2008), pp. 1–10.

**10. **S. Bloom, E. Korevaar, J. Schuster, and H. Willebrand, “Understanding the performance of free-space optics,” Journal of Optical Networking **2**(6), 178–200 (2003).

**11. **H.A. Willebrand and B.S. Ghuman, “Fiber optics without fiber,” IEEE Spectrum **38**(8), 40–45 (2001). [CrossRef]

**12. **J. Rzasa, M.C. Ertem, and C.C. Davis, “Pointing, acquisition, and tracking considerations for mobile directional wireless communications systems,” Proc. SPIE **8874**, 88740 (2013). [CrossRef]

**13. **B. Epple, “Using a GPS-aided inertial system for coarse-pointing of free-space optical communication terminals,” Proc. SPIE **6304**, 630418 (2006). [CrossRef]

**14. **S. Milner, J. Llorca, and C.C. Davis, “Autonomous reconfiguration and control in directional mobile ad hoc networks,” IEEE Circuits and Systems Magazine **9**(2), 10–26 (2009). [CrossRef]

**15. **G. Lu, Y. Lu, T.P. Deng, and H. Liu, “Automatic alignment of optical-beam-based GPS for free-space laser communication system,” Proc. SPIE **5160**, 432–438 (2004). [CrossRef]

**16. **W.L. Saw, H.H. Refai, and J.J. Sluss Jr., “Free space optical alignment system using GPS,” Proc. SPIE **5712**, 101–109 (2005). [CrossRef]

**17. **T.H. Ho, S. Trisno, I. Smolyaninov, S.D. Milner, and C.C. Davis, “Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks,” Proc. SPIE **5237**, 147–158 (2004). [CrossRef]

**18. **T.C. Shen, R.J. Drost, C.C. Davis, and B.M. Sadler, “Design of dual-link (wide- and narrow-beam) LED communication systems,” Optics Express **22**(9), 11107–11118 (2014). [CrossRef] [PubMed]

**19. **K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional imaging with hyperboloidal projection,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 1993), pp. 1029–1034.

**20. **T.C. Shen, R.J. Drost, J. Rzasa, B.M. Sadler, and C.C Davis, “Panoramic alignment system for optical wireless communication systems,” Proc. SPIE **9354**, 93540M (2015). [CrossRef]

**21. **T.J. Ho, S.D. Milner, and C.C. Davis, “Fully optical real-time pointing, acquisition, and tracking system for free space optical link,” Proc. SPIE **5712**, 81–92 (2005). [CrossRef]

**22. **H. Ishiguro, M. Yamamoto, and S. Tsuji, “Omni-directional stereo for making global map,” in Proceedings of IEEE Third International Conference on Computer Vision (IEEE, 1990), pp. 540–547.

**23. **K.B. Sarachik, “Characterising an indoor environment with a mobile robot and uncalibrated stereo,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1989), pp. 984–989.

**24. **M. Doniec, C. Detweiler, I. Vasilescu, and D. Rus, “Using optical communication for remote underwater robot operation,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems (IEEE, 2010), pp. 4017–4022.

**25. **I.C. Rust and H.H Asada, “A dual-use visible light approach to integrated communication and localization of underwater robots with application to non-destructive nuclear reactor inspection,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2012), pp. 2445–2450.

**26. **S. Lee and S. Jung, “Location awareness using angle-of-arrival based circular-PD-array for visible light communication,” in Proceedings of IEEE Asia-Pacific Conference on Communications (IEEE, 2012), pp. 480–485.

**27. **D. Zheng, K. Cui, B. Bai, G. Chen, and J.A. Farrell, “Indoor localization based on LEDs,” in Proceedings of International Conference on Control Applications (IEEE, 2011), pp. 573–578.

**28. **S. Baker and S.K. Nayar, “A theory of single-viewpoint catadioptric image formation,” International Journal of Computer Vision **35**(2), 175–196 (1999). [CrossRef]

**29. **M. Fiala and A. Basu, “Robot navigation using panoramic tracking,” Pattern Recognition **37**(11), 2195–2215 (2004). [CrossRef]

**30. **Y. Yagi, W. Nishii, K. Yamazawa, and M. Yachida, “Rolling motion estimation for mobile robot by using omnidirectional image sensor hyperomnivision,” in Proceedings of the IEEE International Conference on Pattern Recognition (IEEE, 1996), pp. 946–950.

**31. **K. Yamazawa, Y. Yagi, and M. Yachida, “Obstacle detection with omnidirectional image sensor hyperomni vision,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 1995), pp. 1062–1067.

**32. **J. Kim and Y. Suga, “An omnidirectional vision-based moving obstacle detection in mobile robot,” International Journal of Control Automation and Systems **5**(6), 663–673 (2007).

**33. **L. Matthies and S.A. Shafer, “Error modeling in stereo navigation,” IEEE Journal of Robotics and Automation **3**(3), 239–248 (1987). [CrossRef]

**34. **R.J. Muirhead, *Aspects of Multivariate Statistical Theory* (John Wiley & Sons, 2009).

**35. **D.V. Hinkley, “On the ratio of two correlated normal random variables,” Biometrika **56**(3), 635–639 (1969). [CrossRef]

**36. **P. Dharmawansa, N. Rajatheva, and C. Tellambura, “Envelope and phase distribution of two correlated gaussian variables,” IEEE Transactions on Communications **57**(4), 915–921 (2009). [CrossRef]

**37. **http://www.neovision.cz/prods/panoramic/h3s.html.

**38. **https://www.alliedvision.com/en/products/cameras/detail/Prosilica%20GC/1600H.html.

**39. **http://www.ledsupply.com/leds/luxeon-rebel-color-leds.

**40. **Y.Y. Chen, K.M. Lan, H.I. Pai, J.H. Chuang, and C.Y. Yuan, “Robust light objects recognition based on computer vision,” in IEEE International Symposium on Pervasive Systems, Algorithms, and Networks (IEEE, 2009), pp. 508–514.

**41. **G.K.H. Pang and H.H.S Liu, “LED location beacon system based on processing of digital images,” IEEE Transactions on Intelligent Transportation Systems **2**(3), 135–150 (2001). [CrossRef]

**42. **D. Zheng, G. Chen, and J.A Farrell, “Navigation using linear photo detector arrays,” in Proceedings of IEEE International Conference on Control Applications (IEEE, 2013), pp. 533–538.

**43. **M.P. Wernet and A. Pline, “Particle displacement tracking technique and Cramer-Rao lower bound error in centroid estimates from CCD imagery,” Experiments in Fluids **15**(4), 295–307 (1993).

**44. **N. Bobroff, “Position measurement with a resolution and noise-limited instrument,” Review of Scientific Instruments **57**(6), 1152–1157 (1986). [CrossRef]

**45. **J.S. Morgan, D.C. Slater, J.G. Timothy, and E.B. Jenkins, “Centroid position measurements and subpixel sensitivity variations with the MAMA detector,” Applied Optics **28**(6), 1178–1192 (1989). [CrossRef] [PubMed]

**46. **R.H. Stanton, J.W. Alexander, E.W. Dennison, T.A. Glavich, and L.F. Hovland, “Optical tracking using charge-coupled devices,” Optical Engineering **26**(9), 269930 (1987). [CrossRef]

**47. **B.F. Alexander and K.C. Ng, “Elimination of systematic error in subpixel accuracy centroid,” Optical Engineering **30**(9), 1320–1331 (1991). [CrossRef]

**48. **S. Lee, “Pointing accuracy improvement using model-based noise reduction method,” Proc. SPIE **4635**, 65–71 (2002). [CrossRef]