Abstract

Indoor robotic localization is one of the most active areas in robotics research. Visible light positioning (VLP) is a promising indoor localization method, as it provides high positioning accuracy and leverages the existing lighting infrastructure. However, accurate positioning performance is mostly demonstrated by VLP systems using multiple LEDs, and such a strict requirement on the number of LEDs makes VLP systems prone to failure in real environments. In this paper, we propose a single-LED VLP system based on an image sensor aided by angle-sensor estimation, which efficiently relaxes the requirement on the minimum number of simultaneously captured LEDs from several to one. To improve the robustness and accuracy of positioning while the robot pose changes continuously, two visual-inertial message synchronization methods are proposed and used to obtain well-matched positioning data packets. Various single-LED VLP schemes based on different sensor selections and message synchronization methods are listed and compared in a real environment. Real-world experiments verify the effectiveness of the proposed single-LED VLP system based on an odometer and image sensor, as well as its robustness under LED shortage, handover situations, and background non-signal light interference. The experimental results show that the proposed system provides an average accuracy of 2.47 cm with an average computation time of about 0.184 s on a low-cost embedded platform.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Robot localization has become a research hotspot driven by growing consumer and business demands, since localization is a fundamental prerequisite for autonomous behavior in mobile robots [1]. Work on robot navigation is clearly important, as mobile robot platforms depend heavily on navigation capabilities, and environmental navigation in turn provides the basis for carrying out tasks. A precise, real-time indoor positioning system is therefore critical for location-based services and robotics; without it, research on flexible, multiscenario mobile robots with autonomous navigation would be of little practical value. Previous research on robot positioning systems has mainly relied on high-cost external sensors such as RGB-D cameras and laser range finders, together with complex algorithms that require considerable computation [1] to achieve high accuracy. Different from the conventional robot indoor positioning technologies mentioned above, visible light positioning (VLP) is an indoor positioning technology based on visible light communication (VLC) [2,3]. By comparison, VLP offers strong real-time performance and applicability across many scenarios thanks to its high transmission rate, low hardware cost, and immunity to electromagnetic interference, which gives it great commercial prospects.

At present, VLP broadly falls into two categories: photodiode-based (PD-based) [4–11] and image sensor-based (IS-based) [12–21]. PD-based VLP systems use a photodetector as the receiver to measure the received signal strength (RSS) [4,6,7], time of arrival (TOA), or time difference of arrival (TDOA) [8], all of which have been studied extensively. The TOA and TDOA approaches require rigorous clock synchronization, which is very hard to realize in a commercial system [9], and the performance of RSS positioning is limited by power fluctuations of the light source. Although global optimization algorithms have been applied to PD-based VLP systems to reduce computational complexity and improve positioning accuracy [5,10,11], these methods are still limited by the multiple-LED requirement. Moreover, previous work on PD-based VLP shows that such systems need complex receiver circuits and are easily affected by the environment, such as ambient and reflected light, so they are not robust. By contrast, IS-based VLP systems are free of these problems and are more convenient, since they only need a CMOS camera, with which most mobile robots can easily be equipped.

However, there exist two main problems in IS-based VLP systems: (1) Most systems fail to achieve satisfactory performance in terms of positioning accuracy, real-time ability, and robustness, which are three essential factors in indoor positioning system. (2) Few VLP systems consider the case where the number of required LEDs is limited because of some scenarios that suffer from LED shortages.

In [13], focusing on real-time capability, the authors proposed a lightweight image-processing algorithm for a VLP system using two LEDs, which achieves a positioning accuracy of 3.93 cm at moving speeds of up to 38.5 km/h. In [14,15], VLP systems based on two and three LEDs were adopted on robotic platforms and achieved centimeter-level positioning accuracy. The VLP system in [16] is based on a dual-luminaire localization algorithm, which achieves a high accuracy of 7.5 cm and reduces the computational time to 22.7 ms; however, when only one LED luminaire is captured, the position of the receiver cannot be accurately estimated. For all of these VLP systems, multiple LEDs must be captured simultaneously by the image sensor to compute the receiver position, which means the usable range of IS-based VLP is limited by the field of view (FOV) of the imaging lens and the distribution of LEDs. For example, if the system suffers from an LED shortage, such as a missing signal or a blocking obstacle, it may easily cease to be effective. Several schemes have therefore been investigated to address this bottleneck, such as beacon-attached light sources [17–20] and sensor combinations [21–24]. Note that the authors of [20] employed AprilTag fiducial markers, instead of real LEDs, in their experiments. However, an AprilTag marker is not equivalent to a point-source LED, since each marker, with four distinctive corners, provides four point-feature measurements. The single-light positioning algorithm proposed in [18] provides an accuracy of 7.39 cm with a computational time of 43.05 ms, but the LED requires a marker point on its circular margin. Reference [23] used a tightly coupled fusion method based on graph optimization for VLP, with the aim of coping with the LED shortage problem; however, the proposed method still needs to work with two or more LEDs. The work in [24] proposed a tightly coupled visual-inertial fusion method for IS-based VLP systems with the aid of a rigidly connected inertial measurement unit (IMU), but the method requires at least four LED observations to compute the initial camera position and orientation.

In this paper, we propose a single-LED VLP system based on angle sensor estimation (SLAS-VLP). The proposed SLAS-VLP system efficiently relaxes the requirement on the minimum number of simultaneously captured LEDs from several to one and keeps a good balance among accuracy, real-time performance, and coverage. On the one hand, considering the advantages of inertial sensors, such as low cost, reasonable accuracy, and no need for an external reference, we propose methods based on an IMU, a magnetometer, and an odometer, respectively, to calculate the yaw angle during robot movement; these supplement the positioning data required by the single-LED VLP algorithm and enhance its reusability and portability. On the other hand, the inertial sensors can also provide a reasonable 3D position of the robot, which compensates for positioning failures when the LED is blocked or lost and further enhances the dynamic continuity of positioning. Different from [25,26], we focus on synchronizing messages published at different frequencies rather than calibrating a temporal offset between visual and inertial measurements. More specifically, we propose two visual-inertial message synchronization methods to ensure that the data published by the image sensor and the angle sensor are synchronized and matched, without which the accuracy and dynamic continuity of positioning would often be lost. The first is the filtering method, which uses a message filter with an adaptive strategy to package and transmit positioning data published by different message sources under the same time stamp, ensuring that the positioning data match the actual motion state of the robot in continuous time. The second is the asynchronous multithreading method, which opens separate threads to subscribe to the data from different message sources. Compared with sequential subscription in a single thread, the interval between receiving the different data streams is greatly shortened and the matching between them is improved; this method achieves better real-time performance at the expense of a small loss of precision. With improved robustness under LED shortage, the proposed SLAS-VLP system has good potential for practical applications in various indoor environments that suffer from LED shortage problems.

We highlight the following contributions:

  • (1) Different visual-inertial message synchronization methods are proposed for the SLAS-VLP system under LED shortage. The proposed methods enable high-accuracy VLP calculation, since they address the problem of multisensor message synchronization and provide well-matched positioning data packages to the proposed single-LED positioning calculation, which in turn efficiently relaxes the requirement on the minimum number of simultaneously captured LEDs from several to one.
  • (2) Various SLAS-VLP schemes based on different sensor selections and message synchronization methods are proposed and compared in a real environment, providing an important reference for the selection of robotic sensor equipment.
  • (3) With a prototype VLC network composed of customized LEDs, the SLAS-VLP system is evaluated under harsh conditions (LED shortage, handover, and background light interference). Its effectiveness for VLC beaconing and accurate three-dimensional pose tracking, as well as its robustness under LED shortage, is verified by extensive experiments.

The rest of the paper is organized as follows. Section 2 describes the principle of the proposed SLAS-VLP system. In Section 3, we present the experimental evaluation. Section 4 is the conclusion.

 figure: Fig. 1.

Fig. 1. Overall architecture of the SLAS-VLP system.


2. METHODOLOGY

A. Overall Architecture of the SLAS-VLP System

Founded on the loosely coupled framework of the robot operating system (ROS), the proposed SLAS-VLP system combines vision and inertia. It can be divided into three parts: VLC with a CMOS image sensor, VLP preliminaries by lightweight image processing, and the SLAS-VLP algorithm for robust robotic localization. As shown in Fig. 1, the CMOS image sensor captures the VLC-compatible LED lamp in real time, and the LED images carry the global positioning information. Using the adopted LED region-of-interest (ROI) extraction method, the proposed system extracts the LED-ROI from the image in real time, avoiding a search over the entire image in every frame. The yaw angle of the moving robot can be obtained with the help of the IMU (gyroscope and accelerometer), the magnetometer, and the odometer, respectively. To further improve positioning accuracy, we adopt an attitude and heading reference system (AHRS) algorithm [27], in which geomagnetic data correct the heading angle calculated by the IMU to reduce drift and accumulated angle error; the corresponding diagram is given in Fig. 1. The bias drift can be compensated by simple orientation filters through integral feedback of the error in the rate of change of orientation. Through the proposed message synchronization methods, the matched image-sensor and angle-sensor data are processed and packaged into a qualified positioning data packet, which is then used by the single-LED VLP algorithm to calculate the global position of the robot. In this way, the proposed system achieves dynamic real-time positioning and maintains high positioning accuracy even in the face of an LED shortage.
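The integral-feedback correction described above can be sketched for the yaw axis alone. The following is an illustrative one-dimensional filter in the spirit of AHRS algorithms, not the authors' implementation of [27]; the gains `kp` and `ki`, the gyro bias, and the magnetometer reference are all assumed values.

```python
# Illustrative yaw-drift correction via proportional + integral feedback of
# the heading error (magnetometer yaw minus integrated gyro yaw).
# All gains and signals are assumptions for demonstration only.

def fuse_yaw(yaw, integral_err, gyro_rate, mag_yaw, dt, kp=2.0, ki=0.5):
    """One filter step: integrate the gyro rate, corrected toward the
    magnetometer heading; the integral term absorbs constant gyro bias."""
    err = mag_yaw - yaw                      # heading error vs. magnetometer
    integral_err += ki * err * dt            # integral feedback on the error
    corrected_rate = gyro_rate + kp * err + integral_err
    return yaw + corrected_rate * dt, integral_err

# Example: a stationary robot whose gyro reports a constant 0.05 rad/s bias;
# the filter converges toward the magnetometer heading of 1.0 rad.
yaw, ie = 0.0, 0.0
for _ in range(2000):                        # 20 s at dt = 0.01 s
    yaw, ie = fuse_yaw(yaw, ie, gyro_rate=0.05, mag_yaw=1.0, dt=0.01)
```

At steady state the integral term settles near the negative of the gyro bias, so the bias no longer accumulates into the yaw estimate.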

B. VLC with a CMOS Image Sensor

As shown in Fig. 1, the proposed positioning system utilizes the rolling shutter mechanism of the CMOS image sensor to receive the on-off keying (OOK) modulated light signals, so the time-varying light signals from the LEDs are perceived as spatially varying stripe patterns. More specifically, an "ON" or "OFF" light signal is transformed into a bright or dark stripe, and the encoded VLC messages can then be retrieved from such barcode-like patterns captured by the CMOS image sensor mounted vertically on the mobile robot. To do so, we first extract candidate regions of the image that may contain LEDs; each such region is referred to as an LED-ROI. For each of them, we try to decode its unique identity (ID), which is associated with the actual coordinates of the LED, and count its bright stripes as feature measurements. After that, we can obtain the absolute 3D position of the LED from the registered LED map.
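As a rough illustration of this rolling-shutter demodulation (not the authors' decoder), the per-row mean intensities of an LED-ROI can be thresholded and then sampled once per stripe period. The `rows_per_bit` parameter and the intensity values below are assumptions; in practice this ratio depends on the exposure settings and the modulation frequency.

```python
# Sketch: with a rolling shutter, each image row samples the OOK signal at a
# slightly later time, so "ON"/"OFF" periods appear as bright/dark stripes.
# Decoding reduces to thresholding row intensities and reading one bit per
# stripe period. Parameters are illustrative assumptions.

def decode_stripes(row_means, rows_per_bit=4):
    """Threshold per-row mean intensities, then sample the middle row of
    each assumed bit period to recover the OOK bit sequence."""
    thr = (max(row_means) + min(row_means)) / 2.0   # simple mid-range threshold
    binary = [1 if v > thr else 0 for v in row_means]
    return [binary[i + rows_per_bit // 2]
            for i in range(0, len(binary) - rows_per_bit + 1, rows_per_bit)]

# Synthetic LED-ROI: bit pattern 1,0,1,1 rendered as 4-row stripes.
rows = [200]*4 + [30]*4 + [200]*4 + [200]*4
bits = decode_stripes(rows)
```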

C. VLP Preliminaries by Lightweight Image Processing

As shown in Fig. 2, an efficient way to reduce the computational complexity of image processing is to extract the LED-ROI containing valid VLP information even under light interference. This LED-ROI extraction method consists of initialization and tracking. The initialization step runs only when tracking starts or the tracking target (LED-ROI) is lost; otherwise the global information of the image need not be considered during ROI extraction, and only the tracking step is used to detect and extract the LED-ROI in time. For the VLP calculation, both the centroid pixel coordinates and the real-world coordinates of the LED are necessary positioning information. We therefore adopt an efficient decoding scheme to obtain the associated world coordinates of the LED, which are carried by the vertically varying stripe widths within the LED-ROI. This decoding scheme includes a staged threshold scheme that provides good thresholding performance at long transmission distances, as well as a synchronous decoding operation that automatically synchronizes the clocks of the transmitter and receiver. The principle of the whole lightweight image-processing pipeline for extracting and decoding the LED-ROI is also shown in Fig. 2. Further experimental evaluation and comparison of the ROI extraction performance are available in our previous work, which focuses on the LED-ROI extraction algorithm [28].
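A minimal sketch of the initialization idea, assuming a simple fixed intensity threshold; the actual extraction and tracking algorithm of [28] is considerably more elaborate, and the threshold value here is an assumption.

```python
# Sketch of LED-ROI initialization: threshold the grayscale frame and take
# the bounding box of bright pixels as the candidate LED region.
# The threshold is an assumed value, not the authors' staged scheme.

def extract_led_roi(frame, thr=180):
    """Return (row_min, row_max, col_min, col_max) of above-threshold pixels,
    or None when no candidate LED region is found."""
    coords = [(r, c) for r, row in enumerate(frame)
              for c, v in enumerate(row) if v > thr]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), max(rows), min(cols), max(cols)

# 5x6 synthetic frame with a bright 2x2 blob at rows 1-2, cols 3-4.
frame = [[0] * 6 for _ in range(5)]
for r in (1, 2):
    for c in (3, 4):
        frame[r][c] = 250
roi = extract_led_roi(frame)
```

Once the ROI is found, subsequent frames would only search a window around it (the tracking step), falling back to this global search when the target is lost.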

 figure: Fig. 2.

Fig. 2. Flow diagram of ROI extraction under background light interference and LED-ID decoding.


D. Single-LED VLP Algorithm with Angle Sensor (SLAS-VLP) for Robust Robotic Localization

The proposed SLAS-VLP algorithm consists of visual-inertial message synchronization and global positioning calculation: we first obtain well-matched positioning data through the proposed synchronization methods and then transmit the qualified positioning data packet to the single-LED global positioning calculation. In general, when designing the deployment of LEDs in an actual scene, one must consider not only the positioning requirements but also the light intensity, the number of LEDs, and the aesthetics of different indoor functional areas. In contrast to IS-based or PD-based VLP schemes that require multiple LEDs to be detected simultaneously, our single-LED positioning scheme minimizes the requirements on the number and layout of LED fixtures and can achieve real-time, centimeter-level indoor location tracking continuously and robustly with sparse LEDs in an actual indoor scenario.

1. Visual-Inertial Message Synchronization

We propose two visual-inertial message synchronization methods: the filtering method and the asynchronous multithreading method. Their function is to synchronize the positioning data, which are input asynchronously by different message sources at different publishing frequencies, so as to ensure that the positioning data are matched in continuous time.

Filtering Method: A key problem of visual-inertial positioning is that the positioning data from different sensors are published at different frequencies, which is likely to cause a mismatch between the data captured for the positioning calculation and the true positioning state of the robot. To achieve accurate message synchronization, we propose a method that filters out invalid positioning data so as to obtain positioning data with the same time stamp. By comparing the time stamps carried by the image-sensor and angle-sensor data, the proposed filtering method extracts the positioning data with similar times. The matched positioning data are then packaged and transmitted to the positioning calculation, which effectively avoids interference from asynchronous data. As shown in Fig. 3, if only positioning data with exactly the same time stamp were admitted into the proposed SLAS-VLP system, positioning continuity would suffer and positioning might even stall. Therefore, we set a maximum interval between the time stamps of the image-sensor data and the angle-sensor data to judge whether a set of data to be synchronized is accepted as a candidate positioning data package or discarded directly. This maximum interval is adjusted dynamically within a certain range, trading some synchronization quality of the positioning data package for better real-time performance. When the proposed positioning system detects that the time interval between two adjacent positioning results is too large, the filtering method also admits positioning data with merely similar time stamps in order to compensate for the loss of positioning continuity. In this way, the adaptive filtering method successfully provides synchronized image-sensor and angle-sensor data for the SLAS-VLP system.
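The timestamp matching at the heart of the filtering method can be sketched as follows. The fixed `max_dt` bound stands in for the adaptive maximum interval described above, and all message payloads and time stamps are illustrative.

```python
# Sketch of the filtering method: pair each image-sensor message with the
# angle-sensor message closest in time, accepting the pair only when the
# time-stamp gap is below a maximum interval. `max_dt` is a stand-in for
# the adaptive bound; message contents are illustrative.

def match_messages(image_msgs, angle_msgs, max_dt=0.05):
    """image_msgs/angle_msgs: lists of (timestamp, payload) tuples.
    Returns matched (image_payload, angle_payload, gap) packets."""
    packets = []
    for t_img, img in image_msgs:
        # Nearest angle message by time stamp.
        t_ang, ang = min(angle_msgs, key=lambda m: abs(m[0] - t_img))
        gap = abs(t_ang - t_img)
        if gap <= max_dt:                 # discard badly mismatched pairs
            packets.append((img, ang, gap))
    return packets

imgs = [(0.10, "roi_a"), (0.50, "roi_b")]   # slow image stream
angs = [(0.09, 10.0), (0.12, 11.0), (0.30, 12.0)]  # faster angle stream
packets = match_messages(imgs, angs)
# "roi_a" pairs with the yaw stamped at 0.09 s; "roi_b" finds no angle
# message within max_dt and is discarded.
```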

 figure: Fig. 3.

Fig. 3. Schematic diagram of filtering method.


 figure: Fig. 4.

Fig. 4. Schematic diagram of asynchronous multithreading method.


2. Asynchronous Multithreading Method

Different from the filtering method, which aligns the positioning data from different message sources in a single thread, we further propose a message synchronization method based on an asynchronous multithreading mechanism to effectively enhance the continuity and real-time performance of positioning. Rather than subscribing sequentially in one thread, the proposed method opens as many threads as there are pipes subscribing to positioning data. At the same time, to remain compatible with hardware devices of varying performance, we adopt an asynchronous multithreading strategy, which avoids blocking threads and allows them to be reused by the positioning system, considerably reducing resource consumption. Moreover, since the angle-sensor data are generally published at a much higher frequency than the image-sensor data, we assume that the angle-sensor data can always keep up with the image-sensor data. In other words, the current image-sensor and angle-sensor data can be considered approximately synchronized as long as the positioning system maintains a high processing speed. With the proposed asynchronous multithreading method, multiple threads execute concurrently during the positioning calculation, which effectively increases the speed of updating and processing sensor data.

As shown in Fig. 4, the SLAS-VLP system requests two available threads from the thread pool to subscribe to the data published by the image sensor and the angle sensor, respectively. Note that these threads return to the thread pool and can be reused when there are no subscription tasks, instead of being permanently occupied as data subscription channels; in other words, the asynchronous strategy makes full use of the idle intervals between subscriptions and reduces resource consumption. When the SLAS-VLP system operates normally, the required sensors publish data continuously at different frequencies, so the two opened threads subscribe to the positioning data continuously. Because the angle sensors publish at a higher frequency, the yaw angle is updated at a higher rate. We therefore take the LED-ROI, whose update frequency is lower, as the criterion for synchronizing the positioning data of the different sensors, since the proposed SLAS-VLP system needs both kinds of data at the same time to calculate the robot's global position using a single LED. Once the positioning system receives an LED-ROI from the image sensor, it immediately captures the current yaw from the angle sensor and then packages and transmits them to the positioning calculation. According to the experimental results in Section 3, the asynchronous multithreading method effectively enhances the real-time performance of the proposed SLAS-VLP system while maintaining high positioning accuracy.
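The subscription pattern can be sketched with Python's standard thread pool, with queues standing in for ROS topics. This is an illustrative model of the mechanism, not the authors' ROS implementation: the fast angle stream merely caches its newest sample, while the slow image stream drives the packaging.

```python
# Sketch of the asynchronous multithreading method: one pooled worker per
# message source; the slower LED-ROI stream triggers packaging, pairing each
# ROI with the most recent yaw. Queues stand in for ROS topics; all names
# and values are illustrative.

import queue
from concurrent.futures import ThreadPoolExecutor

latest_yaw = {"value": None}   # newest sample from the fast angle stream
packets = []                   # synchronized (roi, yaw) positioning packets

def angle_subscriber(angle_q):
    while True:
        yaw = angle_q.get()
        if yaw is None:                # sentinel: topic closed
            return
        latest_yaw["value"] = yaw      # fast stream: just cache the sample

def image_subscriber(image_q):
    while True:
        roi = image_q.get()
        if roi is None:
            return
        # Slow stream drives packaging: grab the current yaw immediately.
        packets.append((roi, latest_yaw["value"]))

angle_q, image_q = queue.Queue(), queue.Queue()
with ThreadPoolExecutor(max_workers=2) as pool:   # reusable thread pool
    a = pool.submit(angle_subscriber, angle_q)
    for yaw in (10.0, 10.5, 11.0):                # angle data arrive faster
        angle_q.put(yaw)
    angle_q.put(None)
    a.result()                 # for a deterministic demo, drain angles first
    i = pool.submit(image_subscriber, image_q)
    image_q.put("roi_a")       # one LED-ROI arrives
    image_q.put(None)
    i.result()
```

In a live system both subscribers run concurrently; the `a.result()` barrier here only makes the demonstration deterministic.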

3. Global Positioning Calculation

First, after obtaining information about the LED in the image plane and in the real scene via the camera and lightweight image processing, the vertical distance H between the lens center of the image sensor and the fixed position of the LED can be obtained from the similar-triangle principle. The z coordinate of the image sensor, mounted vertically on the robot, then follows from the imaging formulas,

$${\mu = \frac{d}{{d^\prime}} = \frac{H}{z}}$$
$${\frac{1}{H} + \frac{1}{z} = \frac{1}{f}}$$
$${z = \left({\frac{1}{\mu} + 1} \right)f}$$
where $\mu$ is the ratio of the actual size of the object to its image size, and $f$ is the focal length of the image sensor. To further calculate the horizontal coordinates $({x,y})$ of the image sensor in the global frame, we list the transformation relations among the different coordinate systems in Fig. 5, which helps clarify how the positioning data are transformed. The origin of the image coordinate system is the intersection of the camera's optical axis with the imaging plane of the image sensor, i.e., the center point of the imaging plane. The image coordinate system is measured in millimeters, while the pixel coordinate system is measured in pixels, described by the rows and columns of the pixels.
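As a numeric illustration of the three formulas above, assume an LED of known physical diameter; the imaged diameter and focal length below are illustrative values, not measurements from the paper.

```python
# Numeric sketch of Eqs. (1)-(3): from the LED's actual size d, its imaged
# size d', and the focal length f, recover the scale ratio mu = d/d' = H/z
# and then z = (1/mu + 1) * f. All input values are illustrative.

def height_from_image(d_mm, d_img_mm, f_mm):
    """Return (mu, z) per Eqs. (1) and (3); units follow the inputs (mm)."""
    mu = d_mm / d_img_mm                  # Eq. (1): actual size / image size
    z = (1.0 / mu + 1.0) * f_mm           # Eq. (3)
    return mu, z

# Assumed example: a 180 mm LED imaged as 1.2 mm with a 16 mm focal length.
mu, z = height_from_image(d_mm=180.0, d_img_mm=1.2, f_mm=16.0)
```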
 figure: Fig. 5.

Fig. 5. Transformation among coordinate systems. (a) Relationship between $XcYcZc$ camera coordinate system, $ij$ pixel coordinate system, $mn$ image coordinate system, and $XwYwZw$ world coordinate system; (b) rotation angle $\alpha$ of image coordinate system $({Xc,Yc})$ relative to world coordinate system $({Xw,Yw})$.


 figure: Fig. 6.

Fig. 6. Single-LED positioning system model.


Therefore, the coordinates of the LED in the image coordinate system can be calculated according to the relationship between the pixel coordinate system and the image coordinate system,

$${m = \left({i - {i_0}} \right)dm}$$
$${n = \left({j - {j_0}} \right)dn}$$
where $m,n$ are coordinates in the image coordinate system, and $i,j$ are coordinates in the pixel coordinate system, obtained from the image through image processing. $dm$ and $dn$ are the unit conversion factors between the two coordinate systems along the two axes, i.e., $1\,{\rm pixel} = dm\,{\rm mm}$ horizontally and $1\,{\rm pixel} = dn\,{\rm mm}$ vertically. $({{i_0},{j_0}})$ are the coordinates of the image center in the pixel coordinate system. Thus, we can calculate the coordinates of the LED in the image coordinate system. According to the similar triangles marked 2 in Fig. 6, the horizontal coordinates $({x,y})$ of the image sensor in the camera coordinate system could be calculated directly if each axis of the image coordinate system were parallel to the corresponding axis of the world coordinate system. In practice, however, the axes of the image coordinate system are generally not aligned with those of the world coordinate system; that is, there exists a rotation angle $\alpha$ about the $Z$ axis. Therefore, considering the advantages of inertial sensors, such as low cost, reasonable accuracy, and no need for an external reference, we propose using the yaw angle measured by the inertial sensor on the robot to describe the rotation angle between the two coordinate systems. The SLAS-VLP algorithm then remains effective under different motion states of the robot, and the horizontal coordinates $({x,y})$ of the image sensor in the camera coordinate system can be calculated as follows:
$$\begin{array}{*{20}{c}}{\left[{\begin{array}{*{20}{c}}{m^\prime}\\{n^\prime}\end{array}} \right] = \left[{\begin{array}{*{20}{c}}{\cos {\alpha}}&\quad{- \sin {\alpha}}\\{\sin {\alpha}}&\quad{\cos {\alpha}}\end{array}} \right]\left[{\begin{array}{*{20}{c}}m\\n\end{array}} \right]}\end{array}$$
$$\mu = \frac{x}{{m^\prime}} = \frac{y}{{n^\prime}}$$

$({m^\prime ,n^\prime})$ are the coordinates of the LED in the image coordinate system after the rotation of formula (6), which aligns its axes with those of the world coordinate system. By associating the actual coordinates $({{x_0},{y_0}})$ of the LED registered in the map with its unique LED-ID, the horizontal coordinates $({x,y})$ of the image sensor in the camera coordinate system can be transformed into the world coordinate system,

$${\left\{{\begin{array}{l}{X = x + {x_0}}\\{Y = y + {y_0}}\\{Z = z}\end{array}} \right.}$$

Since the image sensor is fixed on the robot, its coordinates can be converted to the robot's coordinates by a fixed TF (coordinate transformation); the position of the robot is thus obtained.
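The chain from the pixel-to-image conversion through the yaw rotation and the final translation can be sketched as a single function. All numeric inputs below (pixel coordinates, pixel pitch, scale ratio, LED position; units taken as millimeters) are illustrative assumptions, not values from the paper.

```python
# End-to-end sketch of Eqs. (4)-(8): pixel coords -> image coords, rotate by
# the yaw angle alpha, scale by mu, translate by the LED's registered world
# position. All numeric values are illustrative; units are millimeters.

import math

def single_led_position(i, j, i0, j0, dm, dn, alpha, mu, led_xy, z):
    m = (i - i0) * dm                                  # Eq. (4)
    n = (j - j0) * dn                                  # Eq. (5)
    m_r = math.cos(alpha) * m - math.sin(alpha) * n    # Eq. (6): rotate by yaw
    n_r = math.sin(alpha) * m + math.cos(alpha) * n
    x, y = mu * m_r, mu * n_r                          # Eq. (7): x = mu*m', y = mu*n'
    X = x + led_xy[0]                                  # Eq. (8): translate by
    Y = y + led_xy[1]                                  #   the LED world position
    return X, Y, z

# Assumed example: LED imaged 100 px right of the image center, pixel pitch
# 0.005 mm, robot heading 90 deg, LED registered at (2000, 3000) mm.
pos = single_led_position(i=740, j=512, i0=640, j0=512,
                          dm=0.005, dn=0.005, alpha=math.pi / 2,
                          mu=150.0, led_xy=(2000.0, 3000.0), z=2500.0)
```

With a 90° yaw, the 100-pixel horizontal offset maps entirely onto the world Y axis, as Eq. (6) requires.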

3. EXPERIMENT AND ANALYSIS

A. System Setup

Experiments were carried out to verify the real-time performance, dynamic continuity, accuracy, and robustness of the proposed SLAS-VLP system. The experimental platform is shown in Fig. 7. In the yellow boxes of Fig. 7, three modulated LEDs installed on 2.73-m-high LED positioning anchors provide VLP coverage over a $6.8\,{\rm m} \times 2.7\,{\rm m}$ space, even with substantial non-signal light interference in the background, shown in the red boxes of Fig. 7. The strip light marked with a red box in Fig. 7 provides no effective positioning information for the proposed system and even hinders LED-ROI extraction; its purpose is to test the robustness of the SLAS-VLP system when VLC-enabled LEDs and other non-signal background lights appear simultaneously in the FOV of the industrial camera mounted on the robot. Through a VLC controller, each LED is assigned a unique identification frequency (UIF) correlated with its detailed position information. As shown in the dark blue boxes of Fig. 7, the two-wheeled differential-drive mobile robot TurtleBot3 Burger is used to conduct the experiments. In VLP, the LEDs work as anchors to transmit identity information, while the mobile robot terminal collects the VLP information via the CMOS image sensor through the rolling shutter effect. The LED images were captured by a MindVision UB-300 industrial camera with prior extrinsic calibration [29] and transmitted by a Raspberry Pi 3 Model B. More specific parameters are given in Table 1.

 figure: Fig. 7.

Fig. 7. Experimental platform of the SLAS-VLP system.



Table 1. Parameters of the SLAS-VLP System

We demonstrate the performance of the SLAS-VLP system on ROS, an open-source robotic framework that has been widely adopted across academia, industry, and the military worldwide. Following the description in Section 2, the proposed positioning system is divided into three function packages: a camera package with LED-ROI extraction, a yaw-angle calculation package, and a single-LED positioning package. When the VLP program starts, the camera package extracts the LED-ROI from the captured image, which may include non-signal interference, and publishes the cropped region to the recognition node (a node being the smallest processing unit in ROS). At the same time, the angle node calculates the yaw angle of the mobile robot from the inertial data subscribed from the odometer. Then, through the proposed asynchronous multithreading method, the matched LED-ROI and yaw angle with similar time stamps are packaged and transmitted in real time to the single-LED positioning calculation node. By establishing a one-to-one mapping between the characteristics of the LED light stripes and the LED-ID, the LED-ROI is transformed into a unique number sequence after image processing. Since the LED-ID is associated with the coordinates of the installed LED, sufficient LED-related information is available for the proposed SLAS-VLP algorithm once the coordinates of the recognized LED are obtained. With the help of the yaw angle, the position of the mobile robot in the world coordinate system can then be computed.

B. Dynamic Positioning

1. Accuracy Analysis of Yaw Angle from Different Inertial Sensors

To provide the yaw angle for VLP, experiments were carried out to compare the accuracy of the yaw angle calculated by different inertial sensors. As shown in Fig. 7, we remotely control the robot from a computer so that it moves and rotates freely in a sufficiently large space, while the current yaw angles calculated from the IMU, IMU $+$ magnetometer, and odometer are recorded in real time by the predesigned yaw-angle calculation package. We stop the robot for 1 min every 3 min and manually record the three yaw angles printed by the terminal running the package at that moment. Repeating this operation, a total of 20 groups of yaw angles under different robot poses are recorded, the three angles in each group corresponding to ${\alpha _{{\rm IMU}}}$, ${\alpha _{{\rm IMU}\& {\rm MAG}}}$, and ${\alpha _{{\rm Odometer}}}$. These 60 recorded values serve as data labels: each one locates the corresponding 1 min segment within the angle data automatically logged by the package while the robot was stationary, yielding a larger sample of measurements for the same robot pose. In this way, 10 angle measurements are retrieved for each recorded label; that is, the yaw angles calculated by the three methods each comprise $20 \times 10$ data points. We average the 10 measurements taken in the same pose as the yaw angle of that pose and plot three sets of yaw-angle curves based on the different sensors. As shown in Fig. 8, the three curves almost coincide.

Fig. 8. Comparison of yaw angle measured by different sensors.

According to the experimental results, the difference in yaw angle between any two groups is within 0.005° under the same motion attitude of the robot, which shows that a yaw angle with reasonable accuracy can be calculated by any of the above three inertial sensors to supplement the positioning information required by the single-lamp positioning algorithm. In other words, any sensor that can calculate the yaw angle with reasonable accuracy can supplement the positioning information for the single-lamp positioning algorithm, which reflects the scalability and flexibility of the single-lamp positioning system.

2. Real-Time Performance

The time difference between two adjacent positioning results is selected as the real-time index. This index, named positioning time, covers the acquisition of raw data, the synchronization of different sensor data, the extraction of effective information from the raw data, and the positioning calculation, so the real-time performance of positioning can be well evaluated by it. For the proposed SLAS-VLP system, image-sensor data and inertial-sensor data must be matched before they can be input into the positioning calculation; thus, the choice of synchronization method has a great impact on the real-time performance of positioning. We use this index to measure the actual effect of the two proposed synchronization methods in the SLAS-VLP system. At the same time, as shown in Fig. 8, a variety of angle sensors can provide a yaw angle with reasonable accuracy for the SLAS-VLP algorithm, but the computational complexity of the yaw angle calculation differs among sensors, which also has an important impact on the real-time performance of positioning. As shown in Fig. 9, aiming at selecting the most suitable combination of synchronization method and angle sensor for the proposed SLAS-VLP system, we carried out seven groups of real-time experiments.
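The real-time index and the "90% of positioning times within X s" statistic reported below can be computed as in this sketch (nearest-rank percentile; the function names are ours, not from the paper's code):

```python
import math

def positioning_times(result_stamps):
    """Real-time index: the time difference between each pair of
    adjacent positioning results (time stamps in seconds)."""
    return [b - a for a, b in zip(result_stamps, result_stamps[1:])]

def percentile_90(samples):
    """Nearest-rank 90th percentile: the value below which 90% of the
    recorded positioning times fall."""
    ordered = sorted(samples)
    return ordered[max(0, math.ceil(0.9 * len(ordered)) - 1)]
```

Applying `percentile_90` to the positioning times recorded along the specified path yields the CDF thresholds quoted for each method.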

Fig. 9. Architecture of real-time experiment.

Six groups of real-time experiments were obtained by combining the two message synchronization methods with the three inertial configurations proposed in Section 3: the IMU, the odometer, and the fusion of IMU and magnetometer. In the seventh group of experiments, IMU and magnetometer data with similar time stamps were first matched by the filtering method, and then the inertial data packet and the image-sensor data were packaged by the asynchronous multithreading method and transmitted to the positioning calculation module. Following the controlled-variable principle, each group of experiments makes the robot move along the same specified trajectory, with rotation, speed changes, and other motion states added on top of the linear trajectory, so that the real-time performance index is more reliable. The real-time cumulative distribution function (CDF) curves based on different angle sensors under the two synchronization methods in Fig. 10 were then drawn by recording the positioning time of the robot along the specified path in real time. As shown in Fig. 10(a), 90% of the positioning times of the proposed SLAS-VLP system using only the filtering method for message synchronization are within 0.459 s, while in Fig. 10(b), 90% of the positioning times using only the asynchronous multithreading method are within 0.207 s, which shows that message synchronization based on asynchronous multithreading consistently outperforms that based on filtering.

Fig. 10. Comparison of real-time CDF curves based on different angle sensors under the two synchronization methods.

Fig. 11. Comparison of real-time performance between the filtering method and the asynchronous multithreading method.

Also, the effect of the sensor selection for yaw angle calculation on the real-time performance of the proposed SLAS-VLP system is consistent across the two synchronization methods. Among them, the real-time cost of yaw angle calculation based on the odometer is the smallest, followed by the IMU and then the fusion of IMU and magnetometer. The "fusion of IMU and magnetometer after filtering" scheme first matches the inertial sensor data by time stamp using the filtering method and then captures and transmits the inertial data packet and the image-sensor data by the asynchronous multithreading method. As the red line in Fig. 10(b) shows, 90% of its positioning times are within 0.208 s, which is smaller than the real-time cost when using the filtering method alone but larger than when using the asynchronous multithreading method alone. This experimental result shows that the filtering method offers good real-time performance for synchronizing data with similar publishing frequencies, such as data from different inertial sensors, while the asynchronous multithreading method maintains good real-time performance even for data with very different publishing frequencies, such as data from image sensors and inertial sensors. From the discussion in the last part, over a sufficient time span the yaw angles calculated by the different sensors using the method described in this paper differ only at the second decimal place; this accuracy is reasonable and meets the accuracy requirements of VLP. Therefore, we can choose among the inertial sensors based on real-time cost, that is, the computational complexity of the yaw angle calculation. In order to choose an optimal combination of synchronization method and angle sensor more reasonably, we calculated the average positioning time and standard deviation of each group of experiments and judged the real-time performance of the various combinations both globally and locally.

As shown in Fig. 11, the average positioning time and standard deviation of the asynchronous multithreading method are less than those of the filtering method under the same sensor selection. In Fig. 12, we randomly selected 300 positioning times as samples for local analysis and drew the trend line of each polygonal line, together with the formula of the trend line. We found that, regardless of the sensor selection, the center value of fluctuation of the filtering method (the orange line) was close to twice that of the asynchronous multithreading method (the blue line), and the fluctuation range of the filtering method was larger. Since the function of our filtering method is to select image-sensor data and angle-sensor data with similar time stamps for the positioning calculation, the filtering method is sensitive to the publishing frequency of the data, which is affected by the sensor types, network quality, and other uncontrollable noise interference. Thus, a large part of the fluctuation is due to the difference in publishing frequency between the data streams to be synchronized and to the variability in the publishing frequency of the same data topic. What is more, the maximum positioning time based on the filtering method can reach 1.163 s, which means the filtering method is likely to cause positioning discontinuity, especially during long-term movement. Therefore, we carried out two experiments with the proposed angle-sensor-based single-light positioning system using the two proposed synchronization methods in the actual scenario shown in Fig. 7 and displayed the robot positioning results in Rviz, a visualization tool in ROS [30] that can display the positioning results calculated by our positioning systems in real time as the robot moves randomly in the actual environment. As shown in Fig. 13, the movement of the robot model is controlled by simultaneous localization and mapping (SLAM).
The purple points represent the positioning results, calculated by our SLAS-VLP system, of the industrial camera mounted on the robot. This setup displays the positioning results more clearly, since the camera is much smaller than the entire robot model. (We transformed the positioning points from the camera to the base of the robot in the other positioning-accuracy experiments.)

Fig. 12. Measured positioning time under each sensor selection with the two synchronization methods.

Fig. 13. Comparison of the positioning continuity performance based on the two proposed visual-inertial message synchronization methods.

In other words, we used two positioning schemes (our SLAS-VLP and SLAM) to separately reflect the actual movement of the robot in a real environment and used SLAM as the baseline to compare the positioning continuity of our SLAS-VLP system under the two proposed visual-inertial message synchronization methods. To preserve dynamic positioning continuity, we relaxed the condition of the filtering method so that it accepts positioning data with similar time stamps rather than only strictly equal ones. However, as shown in the red box of Fig. 13, where the purple dots represent the positioning results calculated by our SLAS-VLP system based on the filtering method, when we controlled the robot to go forward, as shown in picture ① inside the red box, the purple dots trace the robot's forward trajectory slightly discontinuously. The positioning then stuttered when we controlled the robot to retreat. After our SLAS-VLP system received the next positioning data package through the filtering method, it resumed calculating the robot position, with the results shown in picture ③. Thus, the positioning data from different sensors can still be mismatched because of the different publishing frequencies and start times of the image sensor and the inertial sensor during positioning, which in turn causes a lack of data to input into the positioning system, that is, positioning discontinuity. If the hardware of the image sensor were upgraded so that images are collected and published at a higher frequency, the real-time performance of the SLAS-VLP system based on the filtering method for soft synchronization should improve significantly.
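The relaxed filtering (soft synchronization) condition can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: for each image message we pick the inertial message with the nearest time stamp and accept the pair only when the stamps are similar, i.e., within a tolerance, rather than strictly equal. The tolerance value here is an assumption.

```python
def filter_match(image_msgs, inertial_msgs, tol=0.05):
    """Sketch of the filtering (soft synchronization) method: pair each
    (stamp, image) with the (stamp, inertial) message whose time stamp is
    nearest, accepting the pair only when |difference| <= tol seconds."""
    pairs = []
    for img_stamp, img in image_msgs:
        stamp, data = min(inertial_msgs, key=lambda m: abs(m[0] - img_stamp))
        if abs(stamp - img_stamp) <= tol:
            pairs.append((img, data, abs(stamp - img_stamp)))
        # otherwise the image frame is dropped, which can stall positioning
    return pairs
```

Frames dropped by the tolerance check are exactly the source of the discontinuity seen in the red box of Fig. 13.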

Instead, as shown in the blue box of Fig. 13, no matter how the motion state of the robot changes, the purple dots, i.e., the positioning results calculated by our SLAS-VLP system based on the asynchronous multithreading method, always follow the industrial camera of the robot model, which indicates that our SLAS-VLP system offers good real-time performance. It also verifies that the proposed asynchronous multithreading method maintains a high processing speed. As described in Section 2.D.1, letting sensor data with a high publishing frequency follow the image-sensor data with a low publishing frequency to achieve variable-step synchronization is an effective way to improve real-time performance. Furthermore, both the average positioning time and the upper limit of 90% of positioning times obtained by combining the asynchronous multithreading method with the odometer are minimal, which means we have selected the optimal combination of synchronization method and yaw-angle sensor.

3. SLAS-VLP System Positioning Accuracy Analysis

As described in the last section, we selected the odometer-based single-LED VLP system using the asynchronous multithreading method as the final configuration. To evaluate the positioning accuracy of the proposed SLAS-VLP system, two series of experiments were carried out. The first series tested the stationary positioning performance of the mobile robot. We randomly stopped the moving robot at 65 test locations in the coordinate-paper area of Fig. 7. Although the experimental environment contains considerable nonsignal light interference, the proposed angle-sensor-based single-light positioning system can find the best-matched LED with the VLP function, and at each location only one LED is perceived by the image sensor. The positioning error at each location was then calculated by comparing the actual spatial position with the estimated position. Each test location was measured 5 times, giving 325 stationary positioning results in total. As shown in Fig. 14, 90% of the positioning errors are within 3.66 cm, the average accuracy of the proposed positioning system is around 2.47 cm, and the maximum positioning error is 6.75 cm.
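The error statistics above (mean, maximum, and 90th-percentile error over the 325 results) follow directly from the per-point Euclidean errors, as in this small sketch (our own helper, using a nearest-rank percentile):

```python
import math

def error_statistics(estimated, actual):
    """Compute per-point 2D Euclidean positioning errors and return the
    (mean, maximum, 90th-percentile) summary reported in the paper."""
    errors = [math.hypot(ex - ax, ey - ay)
              for (ex, ey), (ax, ay) in zip(estimated, actual)]
    ordered = sorted(errors)
    p90 = ordered[max(0, math.ceil(0.9 * len(ordered)) - 1)]
    return sum(errors) / len(errors), max(errors), p90
```

Feeding the 325 estimated/ground-truth pairs into `error_statistics` yields the 2.47 cm mean, 6.75 cm maximum, and 3.66 cm 90th-percentile values quoted above.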

Fig. 14. CDF curves of positioning errors.

To further verify the positioning performance of our SLAS-VLP system, we examined some of the test points, especially the locations where the LED is at the edge of the camera's FOV. Disregarding other possible interference in the environment for the time being, we found small fluctuations in positioning accuracy as the distance between the LED and the robot changes, as shown in Fig. 15. This is because, as the robot moves away from the LED, the brightness of the LED decreases in the FOV of the image sensor mounted on the robot, and the LED sits at the edge of the FOV, which is not conducive to extracting the LED-ROI from the captured image; consequently, the diameter of the LED in the image plane obtained by image processing fluctuates. However, even when the robot moves to the boundary of the positioning range covered by the LED, the maximum positioning error does not exceed 4 cm, which means our SLAS-VLP system places no restrictions on the distribution of the LEDs and maintains high positioning accuracy no matter where the LED is in the camera's FOV. This robust positioning performance indicates an effective positioning system that can use the fewest LEDs to achieve the maximum range of dynamic positioning.

Fig. 15. Distribution of positioning errors.

Therefore, we built a VLC network composed of customized LEDs and carried out the second series of experiments, which tested our SLAS-VLP system with a moving mobile robot. The robot was controlled to move randomly at different speeds to demonstrate the real positioning performance of the SLAS-VLP system. A handover area may exist between two different LEDs, so exceptions can occur, such as an incomplete LED, or even no LED, in the captured image. To reduce the positioning uncertainty caused by these exceptions, we use the odometer to obtain the robot position with reasonable accuracy when the LED is incomplete or absent from the camera's FOV. Thus, our SLAS-VLP system remains effective in complex environments.
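The handover logic just described amounts to a simple fallback rule, sketched below under our own naming (the paper does not give this function; it is an illustration of the stated behavior): trust the single-LED VLP fix when a complete LED is visible, and otherwise dead-reckon from the last known position using the odometer displacement.

```python
def fuse_position(vlp_fix, odom_delta, last_position):
    """Handover fallback sketch: return the VLP fix when a complete LED
    is visible (vlp_fix is not None); otherwise dead-reckon from the last
    known position using the odometer displacement (dx, dy)."""
    if vlp_fix is not None:
        return vlp_fix                      # high-accuracy single-LED VLP result
    dx, dy = odom_delta
    x, y = last_position
    return (x + dx, y + dy)                 # odometer compensation
```

Once the LED returns to the FOV, `vlp_fix` becomes available again and the high-precision VLP result overrides the accumulated odometry estimate.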

To evaluate our SLAS-VLP system more comprehensively, we conducted an experiment on the dynamic trajectory of the robot in an environment containing a no-LED region. As shown in Fig. 16, we set five positioning markers to ensure that the robot moved along our specified trajectory under program control. The robot moved approximately along the specified trajectory shown by the yellow arrows, while the blue points, calculated by our SLAS-VLP, compose the predicted trajectory. Since we do not have a motion capture system to provide the ground truth of the moving robot, we chose to control the robot by program and set positioning markers to make the specified trajectory much closer to the actual trajectory. We also mapped the positioning markers onto the predicted trajectory and found that the predicted trajectory describes the specified trajectory well, even with an LED outage. When the robot moves into the no-LED region, as shown in Fig. 16, our positioning scheme estimates the global position of the robot with reasonable accuracy by odometer compensation, and it switches back to the proposed SLAS-VLP scheme for a high-precision localization solution after the LEDs return to view.

Fig. 16. Trajectory estimation of our SLAS-VLP system compared with the specified trajectory controlled by program.

In Rviz, the movement of the robot model is calculated by our odometer-based SLAS-VLP system, while we also displayed the other SLAS-VLP schemes based on different yaw estimation solutions using the various angle sensors; their positioning results are plotted as dots of different colors to describe the actual position of the robot in the indoor map and to compare the positioning performance among the different schemes. The front points, consistent with the direction of robot movement, are the latest results of the single-LED VLP algorithm; the other points are retained to better display the positioning results and the robot's motion trajectory, showing the effect of our SLAS-VLP system. A demonstration video of our proposed SLAS-VLP system is available at our website [31]. To show the dynamic performance of our SLAS-VLP more clearly, we present the comparison between the actual trajectory and the predicted trajectory by overlaying many consecutive frames, sampled from our demo video, in the same picture. The comparison is shown as follows.

The right side of Fig. 17 displays the actual trajectory of the robot in the real environment, while the left side presents the predicted trajectory calculated by our SLAS-VLP system in the indoor map corresponding to the real environment. With the position references of the coordinate paper and the LEDs, we find that the predicted trajectory in the indoor map is highly consistent with the actual trajectory. Admittedly, some operational errors remain in the manual extraction and restoration of the trajectory from the video stream. However, as shown in Fig. 17, we enlarged the predicted trajectory extracted from the demo video according to the scale relationship between the indoor map and the real environment and compared the similarity by overlaying the two trajectories; the overlay shows that the trajectory calculated by our SLAS-VLP system describes the actual trajectory of the robot well.

Fig. 17. Comparison between the predicted trajectory calculated by our SLAS-VLP and the actual trajectory captured in the demo video.

C. Discussions

In this section, we compare the performance of the proposed SLAS-VLP system with state-of-the-art works in the VLP field in Table 2. The required number of LEDs, average accuracy, time cost, and LED density of the related experimental platforms are also displayed objectively in Table 2. Compared with Refs. [4,13–15], our SLAS-VLP system relaxes the requirement on the minimum number of observable LEDs to one, which greatly increases the coverage of the effective positioning area. For example, only three LEDs are enough to ensure high-accuracy positioning over a large area of $6.8 \times 2.7 \times 2.7\;{\rm m}^3$ under background light interference. The positioning accuracy (2.47 cm) of the proposed SLAS-VLP system is clearly state of the art. In addition, unlike Refs. [13,14,18,19], where all the processing is loaded onto a computer with high computing ability, the time cost of our SLAS-VLP includes the image processing and pose estimation (runtime on both the Raspberry Pi 3B and a laptop) as well as the data transmission from the mobile robot to the laptop. More specifically, both data collection and the whole image processing of LED-ROI extraction run on the Raspberry Pi (a low-cost embedded platform), with an input image size of $2048 \times 1536$ and without any code optimization for Acorn RISC Machine (ARM) processors. Compared with the similar processing platform and the same image size in Ref. [15], the time cost of the proposed SLAS-VLP system is lower, so our SLAS-VLP is more efficient and lightweight. As the comparison with Ref. [24] shows, without adopting a fusion model such as an extended Kalman filter (EKF) or Bayesian filter to tightly fuse the visual and inertial data, our proposed system does not predict and update the positioning results until the matched visual-inertial data packet arrives through the proposed message synchronization method; this ensures high positioning accuracy but pays for it in real-time performance. In our future work, we will focus on solutions that better balance accuracy, real-time performance, and robustness in our SLAS-VLP system.

Table 2. Performance Comparison with the State-of-the-Art Schemes

Admittedly, in this paper we could not take the cumulative errors of all the mentioned angle sensors into consideration. We used an orientation filter to compensate for gyroscope bias drift through integral feedback of the error, so the direction angle estimated by the IMU and magnetometer is corrected. However, the growing cumulative error of the odometer would affect the positioning accuracy over time. In future work, we will analyze this in depth and explore an efficient method for error calibration based on multisensor fusion with VLP observations. We credit our contribution mainly to the robust SLAS-VLP scheme, which has good potential for practical applications in terms of usability in various indoor environments with LED shortage problems.

4. CONCLUSION

In this paper, we propose the SLAS-VLP system using visual-inertial message synchronization, which efficiently relaxes the assumption on the minimum number of simultaneously captured LEDs from several to one with the help of inertial angle-sensor estimation. After selecting the best sensor combination (rolling shutter camera and odometer) for the SLAS-VLP system, the single-LED VLP system using an odometer for yaw angle calculation was evaluated on the robotic platform under LED shortage and background nonsignal light interference. The experimental results show robust, accurate, and real-time positioning performance, indicating an effective positioning system that can use the fewest LEDs to achieve the maximum range of dynamic positioning in various indoor environments with LED shortage problems.

Many directions remain worthy of further investigation. For example, in this paper we additionally use the robot position calculated by the odometer to compensate for the failure of VLP when no LED information is available. In the future, we will introduce a VLP-inertial fusion method for robotic localization using a particle filter, which will apply to situations with a single LED, or even without an LED.

Funding

National College Students Innovation and Entrepreneurship Training Program (202010561155); Guangdong Provincial Training Program of Innovation and Entrepreneurship for Undergraduates (S202010561272); Guangdong Science and Technology Project (2017B010114001).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. I. J. Cox, “Blanche-an experiment in guidance and navigation of autonomous robot vehicle,” IEEE Trans. Robot. Autom. 7, 193–204 (1991). [CrossRef]  

2. Y. Muhammad, H. Siu-Wai, and N. V. Badri, “Indoor positioning system using visible light and accelerometer,” J. Lightwave Technol. 32, 3306–3316 (2014). [CrossRef]  

3. W. Zhang, M. I. Chowdhury, and K. Mohsen, “Asynchronous indoor positioning system based on visible light communications,” Opt. Eng. 53, 045105 (2014). [CrossRef]  

4. N. Huang, C. Gong, J. Luo, and Z. Xu, “Design and demonstration of robust visible light positioning based on received signal strength,” J. Lightwave Technol. 38, 5695–5707 (2020). [CrossRef]  

5. Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018). [CrossRef]  

6. W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017). [CrossRef]  

7. C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018). [CrossRef]  

8. S.-Y. Jung, S. Hann, and C.-S. Park, “TDOA-based optical wireless indoor localization using LED ceiling lamps,” IEEE Trans. Consum. Electron. 57, 1592–1597 (2011). [CrossRef]  

9. M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018). [CrossRef]  

10. Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017). [CrossRef]  

11. L. Huang, P. Wang, Z. Liu, X. Nan, L. Jiao, and L. Guo, “Indoor three-dimensional high-precision positioning system with bat algorithm based on visible light communication,” Appl. Opt. 58, 2226–2234 (2019). [CrossRef]  

12. D. Trong-Hop and Y. Myungsik, “An in-depth survey of visible light communication based positioning systems,” Sensors 16, 678 (2016). [CrossRef]  

13. P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020). [CrossRef]  

14. W. Guan, S. Wen, L. Liu, and H. Zhang, “High-precision indoor positioning algorithm based on visible light communication using complementary metal oxide semiconductor image sensor,” Opt. Eng. 58, 024101 (2019). [CrossRef]  

15. W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020). [CrossRef]  

16. J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017). [CrossRef]  

17. R. Zhang, W. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photon. J. 9, 7905209 (2017). [CrossRef]  

18. W. Guan, S. Wen, H. Zhang, and L. Liu, “A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED,” in IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE) (IEEE, 2018).

19. H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020). [CrossRef]  

20. E. Olson, “AprilTag: a robust and flexible visual fiducial system,” in Proc. ICRA (IEEE, 2011), pp. 3400–3407.

21. H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020). [CrossRef]  

22. H. Huang, B. Lin, L. Feng, and H. Lv, “Hybrid indoor localization scheme with image sensor-based visible light positioning and pedestrian dead reckoning,” Appl. Opt. 58, 3214–3221 (2019). [CrossRef]  

23. C. Qin and X. Zhan, “VLIP: tightly coupled visible-light/inertial positioning system to cope with intermittent outage,” IEEE Photon. Technol. Lett. 31, 129–132 (2018). [CrossRef]  

24. Q. Liang, J. Lin, and M. Liu, “Towards robust visible light positioning under LED shortage by visual-inertial fusion,” in International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy (2019), pp. 1–8.

25. M. Li and A. I. Mourikis, “Online temporal calibration for camera-IMU systems: theory and algorithms,” Int. J. Robot. Res. 33, 947–964 (2014). [CrossRef]  

26. Q. Tong and S. Shen, Online Temporal Calibration for Monocular Visual-Inertial Systems (IEEE, 2018).

27. S. Madgwick, “An efficient orientation filter for inertial and inertial/magnetic sensor arrays,” Report X-IO (University of Bristol, 2010), pp. 113–118.

28. H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021). [CrossRef]  

29. S. Chen and W. Guan, “High accuracy VLP based on image sensor using error calibration method,” arXiv:2010.00529 (2020).

30. http://wiki.ros.org/rviz.

31. https://www.bilibili.com/video/BV1rb4y1o79N.

  25. M. Li and A. I. Mourikis, “Online temporal calibration for camera-IMU systems: theory and algorithms,” Int. J. Robot. Res. 33, 947–964 (2014).
    [Crossref]
  26. Q. Tong and S. Shen, Online Temporal Calibration for Monocular Visual-Inertial Systems (IEEE, 2018).
  27. S. Madgwick, “An efficient orientation filter for inertial and inertial/magnetic sensor arrays,” Report X-IO (University of Bristol, 2010), pp. 113–118.
  28. H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
    [Crossref]
  29. S. Chen and W. Guan, “High accuracy VLP based on image sensor using error calibration method,” arXiv:2010.00529 (2020).
  30. http://wiki.ros.org/rviz .
  31. https://www.bilibili.com/video/BV1rb4y1o79N .

2021 (1)

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
[Crossref]

2020 (5)

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

N. Huang, C. Gong, J. Luo, and Z. Xu, “Design and demonstration of robust visible light positioning based on received signal strength,” J. Lightwave Technol. 38, 5695–5707 (2020).
[Crossref]

P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020).
[Crossref]

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020).
[Crossref]

2019 (3)

2018 (4)

C. Qin and X. Zhan, “VLIP: tightly coupled visible-light/inertial positioning system to cope with intermittent outage,” IEEE Photon. Technol. Lett. 31, 129–132 (2018).
[Crossref]

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).
[Crossref]

C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018).
[Crossref]

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

2017 (4)

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

R. Zhang, W. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photon. J. 9, 7905209 (2017).
[Crossref]

2016 (1)

D. Trong-Hop and Y. Myungsik, “An in-depth survey of visible light communication based positioning systems,” Sensors 16, 678 (2016).
[Crossref]

2014 (3)

M. Li and A. I. Mourikis, “Online temporal calibration for camera-IMU systems: theory and algorithms,” Int. J. Robot. Res. 33, 947–964 (2014).
[Crossref]

Y. Muhammad, H. Siu-Wai, and N. V. Badri, “Indoor positioning system using visible light and accelerometer,” J. Lightwave Technol. 32, 3306–3316 (2014).
[Crossref]

W. Zhang, M. I. Chowdhury, and K. Mohsen, “Asynchronous indoor positioning system based on visible light communications,” Opt. Eng. 53, 045105 (2014).
[Crossref]

2011 (1)

S.-Y. Jung, S. Hann, and C.-S. Park, “TDOA-based optical wireless indoor localization using LED ceiling lamps,” IEEE Trans. Consum. Electron. 57, 1592–1597 (2011).
[Crossref]

1991 (1)

I. J. Cox, “Blanche-an experiment in guidance and navigation of autonomous robot vehicle,” IEEE Trans. Robot. Autom. 7, 193–204 (1991).
[Crossref]

Badri, N. V.

Cai, Y.

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).
[Crossref]

C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018).
[Crossref]

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

Chen, H.

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

Chen, S.

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

S. Chen and W. Guan, “High accuracy VLP based on image sensor using error calibration method,” arXiv:2010.00529 (2020).

Chen, Y.

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

Chen, Z.

P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020).
[Crossref]

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Cheng, H.

H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020).
[Crossref]

Chowdhury, M. I.

W. Zhang, M. I. Chowdhury, and K. Mohsen, “Asynchronous indoor positioning system based on visible light communications,” Opt. Eng. 53, 045105 (2014).
[Crossref]

Cox, I. J.

I. J. Cox, “Blanche-an experiment in guidance and navigation of autonomous robot vehicle,” IEEE Trans. Robot. Autom. 7, 193–204 (1991).
[Crossref]

Fang, J.

P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020).
[Crossref]

P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020).
[Crossref]

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Fang, L.

C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018).
[Crossref]

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

Feng, L.

Gong, C.

Guan, W.

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
[Crossref]

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

W. Guan, S. Wen, L. Liu, and H. Zhang, “High-precision indoor positioning algorithm based on visible light communication using complementary metal oxide semiconductor image sensor,” Opt. Eng. 58, 024101 (2019).
[Crossref]

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).
[Crossref]

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018).
[Crossref]

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

W. Guan, S. Wen, H. Zhang, and L. Liu, “A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED,” in IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE) (IEEE, 2018).

S. Chen and W. Guan, “High accuracy VLP based on image sensor using error calibration method,” arXiv:2010.00529 (2020).

Guo, L.

Hann, S.

S.-Y. Jung, S. Hann, and C.-S. Park, “TDOA-based optical wireless indoor localization using LED ceiling lamps,” IEEE Trans. Consum. Electron. 57, 1592–1597 (2011).
[Crossref]

Hou, W.

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

Hu, X.

Huang, H.

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

H. Huang, B. Lin, L. Feng, and H. Lv, “Hybrid indoor localization scheme with image sensor-based visible light positioning and pedestrian dead reckoning,” Appl. Opt. 58, 3214–3221 (2019).
[Crossref]

Huang, L.

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
[Crossref]

L. Huang, P. Wang, Z. Liu, X. Nan, L. Jiao, and L. Guo, “Indoor three-dimensional high-precision positioning system with bat algorithm based on visible light communication,” Appl. Opt. 58, 2226–2234 (2019).
[Crossref]

Huang, N.

Ji, Y.

H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020).
[Crossref]

Jiang, Z. L.

P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020).
[Crossref]

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Jiao, L.

Jung, S.-Y.

S.-Y. Jung, S. Hann, and C.-S. Park, “TDOA-based optical wireless indoor localization using LED ceiling lamps,” IEEE Trans. Consum. Electron. 57, 1592–1597 (2011).
[Crossref]

Kemao, Q.

R. Zhang, W. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photon. J. 9, 7905209 (2017).
[Crossref]

Lei, W.

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

Li, F.

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

Li, H.

P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020).
[Crossref]

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

Li, M.

M. Li and A. I. Mourikis, “Online temporal calibration for camera-IMU systems: theory and algorithms,” Int. J. Robot. Res. 33, 947–964 (2014).
[Crossref]

Liang, F.

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Liang, Q.

Q. Liang, J. Lin, and M. Liu, “Towards robust visible light positioning under LED shortage by visual-inertial fusion,” in International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy (2019), pp. 1–8.

Lin, B.

Lin, J.

Q. Liang, J. Lin, and M. Liu, “Towards robust visible light positioning under LED shortage by visual-inertial fusion,” in International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy (2019), pp. 1–8.

Lin, P.

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

P. Lin, X. Hu, Y. Ruan, H. Li, J. Fang, Y. Zhong, H. Zheng, J. Fang, Z. L. Jiang, and Z. Chen, “Real-time visible light positioning supporting fast moving speed,” Opt. Express 28, 14503–14510 (2020).
[Crossref]

Liu, L.

W. Guan, S. Wen, L. Liu, and H. Zhang, “High-precision indoor positioning algorithm based on visible light communication using complementary metal oxide semiconductor image sensor,” Opt. Eng. 58, 024101 (2019).
[Crossref]

W. Guan, S. Wen, H. Zhang, and L. Liu, “A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED,” in IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE) (IEEE, 2018).

Liu, M.

Q. Liang, J. Lin, and M. Liu, “Towards robust visible light positioning under LED shortage by visual-inertial fusion,” in International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy (2019), pp. 1–8.

Liu, X.

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

Liu, Z.

Long, S.

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Luo, J.

Lv, H.

Madgwick, S.

S. Madgwick, “An efficient orientation filter for inertial and inertial/magnetic sensor arrays,” Report X-IO (University of Bristol, 2010), pp. 113–118.

Mohsen, K.

W. Zhang, M. I. Chowdhury, and K. Mohsen, “Asynchronous indoor positioning system based on visible light communications,” Opt. Eng. 53, 045105 (2014).
[Crossref]

Mourikis, A. I.

M. Li and A. I. Mourikis, “Online temporal calibration for camera-IMU systems: theory and algorithms,” Int. J. Robot. Res. 33, 947–964 (2014).
[Crossref]

Muhammad, Y.

Myungsik, Y.

D. Trong-Hop and Y. Myungsik, “An in-depth survey of visible light communication based positioning systems,” Sensors 16, 678 (2016).
[Crossref]

Nan, X.

Ni, J.

H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020).
[Crossref]

Olson, E.

E. Olson, “AprilTag: a robust and flexible visual fiducial system,” in Proc. ICRA (IEEE, 2011), pp. 3400–3407.

Park, C.-S.

S.-Y. Jung, S. Hann, and C.-S. Park, “TDOA-based optical wireless indoor localization using LED ceiling lamps,” IEEE Trans. Consum. Electron. 57, 1592–1597 (2011).
[Crossref]

Peng, Q.

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).
[Crossref]

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

Qin, C.

C. Qin and X. Zhan, “VLIP: tightly coupled visible-light/inertial positioning system to cope with intermittent outage,” IEEE Photon. Technol. Lett. 31, 129–132 (2018).
[Crossref]

Ruan, Y.

Shen, S.

Q. Tong and S. Shen, Online Temporal Calibration for Monocular Visual-Inertial Systems (IEEE, 2018).

Siu-Wai, H.

Song, H.

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
[Crossref]

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

Tan, Z.

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

Tong, Q.

Q. Tong and S. Shen, Online Temporal Calibration for Monocular Visual-Inertial Systems (IEEE, 2018).

Trong-Hop, D.

D. Trong-Hop and Y. Myungsik, “An in-depth survey of visible light communication based positioning systems,” Sensors 16, 678 (2016).
[Crossref]

Wang, P.

L. Huang, P. Wang, Z. Liu, X. Nan, L. Jiao, and L. Guo, “Indoor three-dimensional high-precision positioning system with bat algorithm based on visible light communication,” Appl. Opt. 58, 2226–2234 (2019).
[Crossref]

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).
[Crossref]

Wang, T.

H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020).
[Crossref]

Wei, Z.

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

Wen, S.

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
[Crossref]

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

W. Guan, S. Wen, L. Liu, and H. Zhang, “High-precision indoor positioning algorithm based on visible light communication using complementary metal oxide semiconductor image sensor,” Opt. Eng. 58, 024101 (2019).
[Crossref]

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

W. Guan, S. Wen, H. Zhang, and L. Liu, “A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED,” in IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE) (IEEE, 2018).

Wu, H.

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

Wu, Y.

C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018).
[Crossref]

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).
[Crossref]

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

Wu, Z.

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Xiao, C.

H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020).
[Crossref]

Xie, C.

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).
[Crossref]

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018).
[Crossref]

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

Xu, Y.

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

Xu, Z.

Yan, Z.

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
[Crossref]

Yang, C.

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

Yang, Z.

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Yuan, D.

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).
[Crossref]

Yuan, S.

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

Zhan, X.

C. Qin and X. Zhan, “VLIP: tightly coupled visible-light/inertial positioning system to cope with intermittent outage,” IEEE Photon. Technol. Lett. 31, 129–132 (2018).
[Crossref]

Zhang, H.

W. Guan, S. Wen, L. Liu, and H. Zhang, “High-precision indoor positioning algorithm based on visible light communication using complementary metal oxide semiconductor image sensor,” Opt. Eng. 58, 024101 (2019).
[Crossref]

W. Guan, S. Wen, H. Zhang, and L. Liu, “A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED,” in IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE) (IEEE, 2018).

Zhang, M.

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).
[Crossref]

Zhang, R.

R. Zhang, W. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photon. J. 9, 7905209 (2017).
[Crossref]

Zhang, S.

R. Zhang, W. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photon. J. 9, 7905209 (2017).
[Crossref]

Zhang, W.

W. Zhang, M. I. Chowdhury, and K. Mohsen, “Asynchronous indoor positioning system based on visible light communications,” Opt. Eng. 53, 045105 (2014).
[Crossref]

Zhang, Z.

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).
[Crossref]

Zhao, X.

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

Zheng, H.

Zhong, W.

R. Zhang, W. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photon. J. 9, 7905209 (2017).
[Crossref]

Zhong, Y.

Appl. Opt. (2)

IEEE Photon. J. (6)

H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon,” IEEE Photon. J. 12, 7906512 (2020).
[Crossref]

W. Guan, S. Chen, S. Wen, Z. Tan, H. Song, and W. Hou, “High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning,” IEEE Photon. J. 12, 7901716 (2020).
[Crossref]

J. Fang, Z. Yang, S. Long, Z. Wu, X. Zhao, F. Liang, Z. L. Jiang, and Z. Chen, “High speed indoor navigation system based on visible light and mobile phone,” IEEE Photon. J. 9, 8200711 (2017).
[Crossref]

R. Zhang, W. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photon. J. 9, 7905209 (2017).
[Crossref]

C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photon. J. 10, 7902116 (2018).
[Crossref]

Y. Cai, W. Guan, Y. Wu, C. Xie, Y. Chen, and L. Fang, “Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization,” IEEE Photon. J. 9, 7908120 (2017).
[Crossref]

IEEE Photon. Technol. Lett. (2)

C. Qin and X. Zhan, “VLIP: tightly coupled visible-light/inertial positioning system to cope with intermittent outage,” IEEE Photon. Technol. Lett. 31, 129–132 (2018).

H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A single LED visible light positioning system based on geometric features and CMOS camera,” IEEE Photon. Technol. Lett. 32, 1097–1100 (2020).

IEEE Trans. Consum. Electron. (1)

S.-Y. Jung, S. Hann, and C.-S. Park, “TDOA-based optical wireless indoor localization using LED ceiling lamps,” IEEE Trans. Consum. Electron. 57, 1592–1597 (2011).

IEEE Trans. Robot. Autom. (1)

I. J. Cox, “Blanche: an experiment in guidance and navigation of an autonomous robot vehicle,” IEEE Trans. Robot. Autom. 7, 193–204 (1991).

Int. J. Robot. Res. (1)

M. Li and A. I. Mourikis, “Online temporal calibration for camera-IMU systems: theory and algorithms,” Int. J. Robot. Res. 33, 947–964 (2014).

Opt. Commun. (1)

W. Guan, Y. Wu, S. Wen, H. Chen, C. Yang, Y. Chen, and Z. Zhang, “A novel three-dimensional indoor positioning algorithm design based on visible light communication,” Opt. Commun. 392, 282–293 (2017).

Opt. Eng. (4)

Q. Peng, W. Guan, Y. Wu, Y. Cai, C. Xie, and P. Wang, “Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication,” Opt. Eng. 57, 016101 (2018).

W. Zhang, M. I. Chowdhury, and M. Kavehrad, “Asynchronous indoor positioning system based on visible light communications,” Opt. Eng. 53, 045105 (2014).

W. Guan, S. Wen, L. Liu, and H. Zhang, “High-precision indoor positioning algorithm based on visible light communication using complementary metal oxide semiconductor image sensor,” Opt. Eng. 58, 024101 (2019).

H. Song, S. Wen, D. Yuan, L. Huang, Z. Yan, and W. Guan, “Robust LED region-of-interest tracking for visible light positioning with low complexity,” Opt. Eng. 60, 053102 (2021).

Optik (1)

M. Zhang, F. Li, W. Guan, Y. Wu, C. Xie, Q. Peng, and X. Liu, “A three-dimensional indoor positioning technique based on visible light communication using chaotic particle swarm optimization algorithm,” Optik 165, 54–73 (2018).

Sensors (1)

T.-H. Do and M. Yoo, “An in-depth survey of visible light communication based positioning systems,” Sensors 16, 678 (2016).

Other (8)

W. Guan, S. Wen, H. Zhang, and L. Liu, “A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED,” in IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE) (IEEE, 2018).

S. Chen and W. Guan, “High accuracy VLP based on image sensor using error calibration method,” arXiv:2010.00529 (2020).

http://wiki.ros.org/rviz

https://www.bilibili.com/video/BV1rb4y1o79N

Q. Liang, J. Lin, and M. Liu, “Towards robust visible light positioning under LED shortage by visual-inertial fusion,” in International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy (2019), pp. 1–8.

E. Olson, “AprilTag: a robust and flexible visual fiducial system,” in Proc. ICRA (IEEE, 2011), pp. 3400–3407.

T. Qin and S. Shen, “Online temporal calibration for monocular visual-inertial systems,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2018).

S. Madgwick, “An efficient orientation filter for inertial and inertial/magnetic sensor arrays,” Report X-IO (University of Bristol, 2010), pp. 113–118.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (17)

Fig. 1. Overall architecture of the SLAS-VLP system.
Fig. 2. Flow diagram of ROI extraction under background light interference and LED-ID decoding.
Fig. 3. Schematic diagram of filtering method.
Fig. 4. Schematic diagram of asynchronous multithreading method.
Fig. 5. Transformation among coordinate systems. (a) Relationship between $X_cY_cZ_c$ camera coordinate system, $ij$ pixel coordinate system, $mn$ image coordinate system, and $X_wY_wZ_w$ world coordinate system; (b) rotation angle $\alpha$ of image coordinate system $(X_c, Y_c)$ relative to world coordinate system $(X_w, Y_w)$.
Fig. 6. Single-LED positioning system model.
Fig. 7. Experimental platform of the SLAS-VLP system.
Fig. 8. Comparison of yaw angle measured by different sensors.
Fig. 9. Architecture of real-time experiment.
Fig. 10. Comparison of real-time CDF curves based on different angle sensors under the two synchronization methods.
Fig. 11. Comparison of real-time performance between the filtering method and the asynchronous multithreading method.
Fig. 12. Measured positioning time under each sensor selection with the two synchronization methods.
Fig. 13. Comparison of the positioning continuity performance based on the two proposed visual-inertial message synchronization methods.
Fig. 14. CDF curves of positioning errors.
Fig. 15. Distribution of positioning errors.
Fig. 16. Trajectory estimation of our SLAS-VLP system compared with the specified trajectory controlled by program.
Fig. 17. Comparison between the predicted trajectory calculated by our SLAS-VLP and the actual trajectory captured in the demo video.

Tables (2)

Table 1. Parameters of the SLAS-VLP System

Table 2. Performance Comparison with the State-of-the-Art Schemes

Equations (8)

$$\mu = \frac{d}{d'} = \frac{H}{z}$$

$$\frac{1}{H} + \frac{1}{z} = \frac{1}{f}$$

$$z = \left(\frac{1}{\mu} + 1\right) f$$

$$m = (i - i_0)\,d_m$$

$$n = (j - j_0)\,d_n$$

$$\begin{bmatrix} m' \\ n' \end{bmatrix} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} m \\ n \end{bmatrix}$$

$$\mu = \frac{x}{m'} = \frac{y}{n'}$$

$$\begin{cases} X = x + x_0 \\ Y = y + y_0 \\ Z = z \end{cases}$$
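Taken together, the eight equations above define a closed-form chain: the magnification recovered from the LED's image size fixes the scale via the thin-lens relation, pixel coordinates are mapped to the image plane, rotated by the yaw angle from the angle sensor, scaled, and translated by the known LED position. A minimal Python sketch of that chain follows; the function name and all numeric parameters (pixel pitch, LED diameter, focal length) are illustrative assumptions, not values from the paper.

```python
import math


def estimate_position(i, j, i0, j0, dm, dn, d_led, d_img, f, alpha, x0, y0):
    """Illustrative sketch of the single-LED positioning chain.

    i, j   : pixel coordinates of the detected LED image center
    i0, j0 : pixel coordinates of the principal point
    dm, dn : pixel pitch along the two image axes (m/pixel)
    d_led  : physical LED diameter d (m)
    d_img  : LED diameter d' measured on the image plane (m)
    f      : focal length (m)
    alpha  : yaw angle from the angle sensor (rad)
    x0, y0 : known world coordinates of the LED
    """
    # Magnification from the LED's known size, then the thin-lens relation:
    # mu = d/d' = H/z and 1/H + 1/z = 1/f give z = (1/mu + 1) f.
    mu = d_led / d_img
    z = (1.0 / mu + 1.0) * f

    # Pixel coordinates -> image-plane coordinates.
    m = (i - i0) * dm
    n = (j - j0) * dn

    # Rotate image coordinates by the yaw angle alpha.
    m_r = math.cos(alpha) * m + math.sin(alpha) * n
    n_r = -math.sin(alpha) * m + math.cos(alpha) * n

    # Scale to world-frame offsets and translate by the LED position.
    x = mu * m_r
    y = mu * n_r
    return x + x0, y + y0, z  # third coordinate follows the source's Z = z
```

With the LED imaged exactly at the principal point, the horizontal estimate collapses to the LED's own world coordinates, which is a quick sanity check on the geometry.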
