Abstract
Visible light positioning (VLP) can provide high positioning accuracy based on existing lighting facilities, which gives it great potential for intelligent transportation systems (ITS). However, in real scenarios, the performance of visible light positioning is limited by the outage problem caused by the sparse distribution of light emitting diodes (LEDs) and by the time cost of the positioning algorithm. In this paper, a particle filter (PF) assisted single-LED VLP (SL-VLP) and inertial fusion positioning scheme is proposed and experimentally demonstrated. It improves the robustness of VLP in sparse LED scenarios. In addition, the time cost and the positioning accuracy at different outage rates and speeds are investigated. The experimental results show that the proposed vehicle positioning scheme achieves mean positioning errors of 0.09 m, 0.11 m, 0.15 m and 0.18 m at SL-VLP outage rates of 0, 5.5%, 11% and 22%, respectively.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Vehicle positioning systems are essential for internet of vehicles (IOV) applications such as intelligent transportation, assisted parking, and autonomous navigation [1,2]. The global positioning system (GPS) is widely used and provides an easy-to-access solution for localization services. Although GPS is very popular, its performance degrades due to link blockages and multipath effects, which makes it unfeasible in urban canyons, inside tunnels or underground [3]. To address these problems, a geometry-based positioning scheme for the case of GPS interruption has been proposed [4]. In [5], an integrated positioning algorithm combining GPS and an inertial measurement unit (IMU) is developed. In [6], a received signal strength indication (RSSI) localization algorithm based on a linear modified log function model is proposed. However, these schemes are still limited by their dependence on GPS. Inertial navigation using an IMU is a positioning technique with high short-term accuracy and resistance to interference. However, it cannot provide accurate positioning over the long term due to cumulative errors [7]. In addition, vehicle positioning technologies such as light detection and ranging (LiDAR) and radio detection and ranging (Radar) have been reported for IOV; they provide high positioning accuracy but require additional devices [8]. Visible light positioning (VLP), in contrast, is an emerging technology that can be deployed without additional positioning equipment: it is based on street lights and vehicle lights, which greatly reduces the implementation cost [9]. Meanwhile, VLP can compensate for weak or absent GPS signals by providing services as vehicles receive the signals sent from LED lights.
In VLP systems, the common receivers are photodetectors (PDs) and complementary metal oxide semiconductor (CMOS) cameras. PD-based VLP struggles to provide reliable positioning in outdoor environments due to intense solar radiation and interference from other ambient light [10]. CMOS camera-based VLP, thanks to the spatial separability of CMOS cameras, can effectively separate interference such as sunlight from the LED light sources and can provide reliable localization services in outdoor environments with interference sources [11]. However, it requires image processing algorithms with high computational complexity [12,13].
In addition, traditional VLP systems face many challenges, especially the shortage of LEDs, line-of-sight (LOS) blockage, and the limited field-of-view (FOV) of the image sensor [14]. Although single-LED VLP (SL-VLP) schemes have been proposed [15,16], it is still difficult to guarantee a known LOS LED in every corner and area of a real scenario. Existing solutions for VLP outage [7,14,17,18] are mostly applicable to low-speed scenes and do not take into account the time overhead of the positioning algorithm. Moreover, for CMOS camera-based VLP, the positioning error caused by the measurement noise of the image sensor is the bottleneck of positioning accuracy.
In this paper, a PF-assisted SL-VLP and inertial fusion positioning scheme is proposed and experimentally demonstrated. At the transmitter, the LED sends a modulated signal carrying identification (ID) information. At the receiver, the vehicle is equipped with a device integrating a PD, a CMOS camera and an IMU for vehicle positioning. The CMOS camera is responsible for precise positioning based on a single LED, while the PD receives and decodes the ID information from the LEDs. This reduces the computational complexity and improves the system performance of VLP. In addition, a PF-assisted data fusion scheme is used to fuse SL-VLP and inertial positioning, improving the positioning accuracy and solving the outage problem of SL-VLP.
2. Proposed SL-VLP and inertial fusion scheme
Figure 1 shows the proposed PF-assisted SL-VLP and inertial fusion positioning scheme. At the transmitter, each LED light broadcasts its location. Since each LED is at a different geographical location, each LED has unique ID data. Firstly, the ID data of each LED light is packaged into a data packet. The packet structure consists of a 4-bit header, a multi-bit payload and a 1-bit tail. The 4-bit header and the 1-bit tail are inserted at the beginning and end of each packet and carry the information for synchronous detection of each packet. The multi-bit payload contains the position data, including the ID information of the LEDs; the exact length of the payload is determined by the number of LEDs. In addition, Manchester coding is used to avoid flickering of the LEDs. Finally, the packet is transmitted via on-off keying (OOK) modulation.
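As a sketch of the framing described above, the following Python snippet builds one Manchester-encoded OOK packet. The concrete header bit pattern and the 8-bit payload width are assumptions for illustration, since the exact bit values are not specified here.

```python
# Transmitter-side packet framing sketch (assumed bit patterns).
HEADER = [1, 0, 1, 0]   # assumed 4-bit synchronization header
TAIL = [1]              # assumed 1-bit tail

def manchester_encode(bits):
    """Manchester coding: 0 -> 01, 1 -> 10, so mean LED brightness stays constant
    regardless of the data, which avoids visible flicker."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def build_packet(led_id, payload_bits=8):
    """Frame an LED ID as header + fixed-width payload + tail, then Manchester-encode.
    The resulting chip stream drives the LED directly as OOK symbols (1 = on, 0 = off)."""
    payload = [(led_id >> i) & 1 for i in reversed(range(payload_bits))]
    return manchester_encode(HEADER + payload + TAIL)

packet = build_packet(led_id=5)
```

Because every data bit maps to exactly one on-chip and one off-chip, a 13-bit frame (4 + 8 + 1) always yields 26 chips with equal numbers of ones and zeros.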
After transmission over the free-space channel, the signals sent from the LEDs with location information are collected by the PD installed in the vehicle and processed by the microcontroller unit (MCU) to obtain the ID information of the LEDs, thereby obtaining the world coordinates of the neighboring LEDs. In addition, the IMU-based vehicle positioning scheme analyzes the inertial data of the vehicle collected by the IMU. Meanwhile, in the central processing unit (CPU), the SL-VLP measurement is calculated using photogrammetric analysis. Moreover, a PF-assisted data fusion scheme is used to fuse the results of the SL-VLP and IMU-based positioning schemes. Finally, by combining the world coordinates of the neighboring LEDs, accurate vehicle position information can be obtained. With the proposed positioning scheme, the vehicle can maintain effective positioning when the SL-VLP system is interrupted and obtain more accurate positioning results when it is not.
2.1. SL-VLP algorithm
The schematic of the 3D world coordinate system, the 3D camera coordinate system, and the 2D image plane coordinate system is shown in Fig. 2. The position of the LED in the world coordinate system is ${P_i} = {[{X_i},{Y_i},{Z_i}]^T},(i = 1,2,\ldots N)$, and ${P_i}$ is mapped through the camera lens to the position ${p_i} = {[{x_i},{y_i}]^T},(i = 1,2,\ldots N)$ in the image coordinate system, where ${P_i}$ is known once the LED is deployed.
Assuming that a calibrated pinhole camera follows the perspective projection model, the VLP model is expressed as:
$${P_{ci}} = \boldsymbol{R}({P_i} - {P_{co}})$$
where ${P_{ci}} = {[{{X_{ci}},{Y_{ci}},{Z_{ci}}} ]^T},({i = 1,2, \ldots N} )$ is the position in the camera coordinate system of the LED located at ${P_i}$, and ${P_{co}} = {[{{X_{co}},{Y_{co}},{Z_{co}}} ]^T}$ is the origin of the camera coordinate system expressed in the world coordinate system, which is the position of the positioning terminal $S = {[{{X_s},{Y_s},{Z_s}} ]^T}$. $\boldsymbol{R}$ is the rotation matrix from the 3D world coordinate system to the 3D camera coordinate system. The vertical distance from the camera plane to the ceiling is given by:
$${Z_i} - {Z_s} = \frac{{fd}}{{d^{\prime}}}$$
where ${Z_s}$ is the $Z$ coordinate of the positioning terminal, $f$ is the focal length, $d$ is the physical diameter of the LED and $d^{\prime}$ is the diameter of the LED in the image. The relationship between the coordinates ${P_{ci}} = {[{{X_{ci}},{Y_{ci}},{Z_{ci}}} ]^T}$ of the LED in the camera coordinate system and the coordinates ${p_i} = {[{{x_i},{y_i}} ]^T}$ in the image coordinate system is expressed as:
$${x_i} = f\frac{{{X_{ci}}}}{{{Z_{ci}}}},\quad {y_i} = f\frac{{{Y_{ci}}}}{{{Z_{ci}}}}$$
Substituting (2), (3) and (4) into (1), the world coordinates $S = {[{{X_s},{Y_s},{Z_s}} ]^T}$ of the positioning terminal are obtained from the world coordinates ${P_i} = {[{{X_i},{Y_i},{Z_i}} ]^T}$ of the LED and its image coordinates ${p_i} = {[{{x_i},{y_i}} ]^T}$:
$$S = {P_i} - {\boldsymbol{R}^{ - 1}}{P_{ci}} = {P_i} - {\boldsymbol{R}^{ - 1}}\frac{{{Z_{ci}}}}{f}{[{{x_i},{y_i},f} ]^T}$$
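The single-LED positioning steps above (depth from the imaged LED diameter, back-projection through the pinhole model, and rotation back into world coordinates) can be sketched as follows. The function name is an illustrative assumption, and the default identity rotation corresponds to a camera whose axes are aligned with the world axes.

```python
import numpy as np

def sl_vlp_position(p_img, P_led, f, d, d_img, R=np.eye(3)):
    """Estimate the terminal's world position from a single imaged LED.

    p_img: (x_i, y_i) LED centre on the image plane (same units as f)
    P_led: LED world coordinates [X_i, Y_i, Z_i]
    f: focal length; d: physical LED diameter; d_img: LED diameter in the image
    R: world-to-camera rotation matrix (identity = camera aligned with world axes)
    """
    Z_ci = f * d / d_img                 # depth from the imaged LED size (similar triangles)
    X_ci = p_img[0] * Z_ci / f           # back-project through the pinhole model
    Y_ci = p_img[1] * Z_ci / f
    P_ci = np.array([X_ci, Y_ci, Z_ci])  # LED position in camera coordinates
    # Invert P_ci = R (P_led - S) to recover the terminal position S.
    return np.asarray(P_led) - R.T @ P_ci
```

For example, with a 4 mm focal length, a 2 cm LED at world position [0, 0, 3] and a terminal at [1, 0.5, 0], the imaged diameter and centre offset determine the depth and lateral offsets, and the function recovers the terminal position.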
2.2. IMU-based inertial positioning scheme
To estimate the complete 3D ($x$, $y$, and $z$) pose of the vehicle over time, the system can be described as a nonlinear dynamic system:
$${s_t} = f({{s_{t - 1}}} )+ {w_t}$$
where $f({\cdot} )$ is a nonlinear state transition function for the state prediction of the inertial sensors, and ${w_t}$ is Gaussian process noise. ${s_t} = {[{{x_t}\;{{\dot{x}}_t}\;{y_t}\;{{\dot{y}}_t}} ]^T}$ is the state of the vehicle at time $t$. Considering that the vehicle moves in the plane spanned by the X and Y axes, the state ${s_t}$ contains the position of the vehicle in world coordinates (${x_t}$, ${y_t}$) and the velocities along the X and Y axis directions (${\dot{x}_t}$, ${\dot{y}_t}$). The vehicle attitude information $\left[ {\begin{array}{ccc} {{a_x}}&{{a_y}}&{{a_z}}\\ r&p&y\\ {{\omega_r}}&{{\omega_p}}&{{\omega_y}} \end{array}} \right]$ is acquired from the IMU sensor and obtained by a quaternion-to-Euler-angle transformation of the raw sensor data, where ${a_x}$, ${a_y}$ and ${a_z}$ are the linear accelerations along the X, Y, and Z axes, respectively, ${\omega _r}$, ${\omega _p}$ and ${\omega _y}$ are the angular velocities of the roll, pitch and yaw angles, respectively, and $r$, $p$ and $y$ denote the roll, pitch and yaw angles. Then, the state of the vehicle is given by:
$${s_t} = {\boldsymbol{R}_{\boldsymbol{s}}}{s_{t - 1}} + {\boldsymbol{R}_{\boldsymbol{c}}}$$
where ${\boldsymbol{R}_{\boldsymbol{s}}} = \left[ {\begin{array}{cccc} 1&0&{\mathrm{\Delta }t}&0\\ 0&1&0&{\mathrm{\Delta }t}\\ 0&0&1&0\\ 0&0&0&1 \end{array}} \right]$ and ${\boldsymbol{R}_{\boldsymbol{c}}} = \left[ {\begin{array}{c} {0.5{a_x}\mathrm{\Delta }{t^2}}\\ {0.5{a_y}\mathrm{\Delta }{t^2}}\\ {{a_x}\mathrm{\Delta }t}\\ {{a_y}\mathrm{\Delta }t} \end{array}} \right]$, and $\mathrm{\Delta }t$ is the time to transfer the state ${s_{t - 1}}$ to the state ${s_t}$.

2.3. PF-assisted SL-VLP and inertial fusion positioning scheme
The flowchart of the PF-assisted SL-VLP and inertial fusion positioning scheme is shown in Fig. 3. By using a particle filter based on sampling importance resampling (SIR), the positioning error caused by hardware noise and angle deviation is reduced. With $\mathrm{\Delta }t$ the adjacent sampling time interval, the vehicle state at sampling time $k$ is expressed as ${s_k} = {[{{x_k}\;{{\dot{x}}_k}\;{y_k}\;{{\dot{y}}_k}} ]^T}$, where $({{x_k},{y_k}} )$ is the position of the vehicle at time $k$ and $({{{\dot{x}}_k},{{\dot{y}}_k}} )$ are the velocities of the vehicle along the x-axis and y-axis at time $k$. The state of the vehicle is given by:
$${s_k} = \boldsymbol{\phi }{s_{k - 1}} + \boldsymbol{\varphi }{a_k} + {\omega _k}$$
where $\boldsymbol{\phi } = \left[ {\begin{array}{cccc} 1&{\mathrm{\Delta }t}&0&0\\ 0&1&0&0\\ 0&0&1&{\mathrm{\Delta }t}\\ 0&0&0&1 \end{array}} \right]$ and $\boldsymbol{\varphi } = \left[ {\begin{array}{cc} {0.5\mathrm{\Delta }{t^2}}&0\\ {\mathrm{\Delta }t}&0\\ 0&{0.5\mathrm{\Delta }{t^2}}\\ 0&{\mathrm{\Delta }t} \end{array}} \right]$, ${a_k}$ is the acceleration of the vehicle along the x-axis and y-axis $({{a_x},{a_y}} )$, and ${\omega _k}$ is the process noise. The observation function based on the SL-VLP vehicle positioning scheme is given by:
$${z_k} = \sqrt {{{({{x_k} - {x_s}} )}^2} + {{({{y_k} - {y_s}} )}^2}} + {v_k}$$
where ${z_k}$ is the distance between the vehicle and the target LED obtained by the SL-VLP vehicle positioning scheme, $({{x_s}, {y_s}} )$ is the position of the LED light in world coordinates, and ${v_k}$ is the measurement noise of the vehicle at time $k$. The total number of particles is $N$, and ${w_k}$ is the weight of particle ${s_k}$. After obtaining the observation ${z_k}$ at sampling time $k$, the particles are predicted from the particle set $\{{s_{k - 1}^i,w_{k - 1}^i} \}_{i = 1}^N$ at sampling time $k - 1$. The prediction process is as follows. Firstly, the particle set $\{{s_k^i} \}_{i = 1}^N$ predicted at sampling time $k$ is obtained from the prior probability density:
$$s_k^i\sim p({{s_k}|s_{k - 1}^i} ),\quad i = 1,2, \ldots ,N$$
Secondly, for $i = 1,2, \ldots ,N$, the weight of each particle is calculated according to the likelihood function:
$$w_k^i = w_{k - 1}^i\,p({{z_k}|s_k^i} )$$
Then, the weight of each particle is normalized:
$$w_k^i = \frac{{w_k^i}}{{\sum\nolimits_{j = 1}^N {w_k^j} }}$$
Finally, a resampling algorithm is applied to the particle set $\{{s_k^i,w_k^i} \}_{i = 1}^N$ to generate a new particle set $\{{\tilde{s}_k^i,\tilde{w}_k^i} \}_{i = 1}^N$ according to the weights, and the weight of each particle is reset to $1/N$. The vehicle state at sampling time $k$ is then used to estimate the vehicle position:
$${\hat{s}_k} = \frac{1}{N}\sum\limits_{i = 1}^N {\tilde{s}_k^i} $$
Therefore, PF can be used to mitigate the Gaussian noise and non-Gaussian noise from SL-VLP, and improve the localization accuracy of SL-VLP.
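A minimal sketch of one SIR iteration described above, combining the inertial state transition with the SL-VLP range observation, is shown below. The noise levels `sigma_a` and `sigma_z` and the function name are assumed values for illustration, not parameters from the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, a_k, z_k, led_xy, dt, sigma_a=0.5, sigma_z=0.05):
    """One SIR particle-filter step for the state s_k = [x, vx, y, vy]^T.

    a_k: measured acceleration (ax, ay); z_k: SL-VLP range to the LED at led_xy.
    sigma_a / sigma_z are assumed process / measurement noise levels.
    """
    N = len(particles)
    phi = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]])
    varphi = np.array([[0.5 * dt**2, 0], [dt, 0], [0, 0.5 * dt**2], [0, dt]])
    # Predict: propagate each particle through the motion model plus process noise.
    particles = particles @ phi.T + np.asarray(a_k) @ varphi.T \
        + rng.normal(0, sigma_a, particles.shape)
    # Weight: Gaussian likelihood of the SL-VLP range measurement per particle.
    pred_range = np.hypot(particles[:, 0] - led_xy[0], particles[:, 2] - led_xy[1])
    weights = weights * np.exp(-0.5 * ((z_k - pred_range) / sigma_z) ** 2)
    weights /= weights.sum()
    # Resample: draw N particles by weight, then reset all weights to 1/N.
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx]
    weights = np.full(N, 1.0 / N)
    estimate = particles.mean(axis=0)  # state estimate at time k
    return particles, weights, estimate
```

In use, the particle set is initialized around a prior position and `pf_step` is called once per frame with the latest IMU acceleration and SL-VLP range; during an SL-VLP outage the weighting step would simply be skipped so that the filter coasts on the inertial prediction.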
In addition, linear minimum variance estimation is adopted to fuse the data of the IMU and SL-VLP. The linear minimum square error estimate for multi-sensor information fusion is expressed as:
$$\hat{S} = \sum\limits_{i = 1}^L {{A_i}{{\hat{S}}_i}} $$
where ${\hat{S}_i},i = 1,2,\ldots ,L$ is the unbiased estimate from the $i$th sensor and ${A_i},i = 1,2,\ldots ,L$ is the optimal weight of ${\hat{S}_i}$. The optimal weighting matrix is given by:
$$\boldsymbol{A} = \frac{{{\boldsymbol{P}^{ - 1}}\boldsymbol{e}}}{{{\boldsymbol{e}^T}{\boldsymbol{P}^{ - 1}}\boldsymbol{e}}}$$
where $\boldsymbol{P}$ is the covariance matrix of the estimation error and $\boldsymbol{e}$ is a column matrix with all elements set to 1.

3. Experimental setup and results discussion
In this paper, to simulate a real scene, the ratio of the experimental setup to the real scene is 1:10. In the experiment, the distance between the vehicle and the LED light is set to 50 cm and the diameter of the LED light is 2 cm. Other parameters are shown in Table 1. Figure 4 shows the experimental setup of the PF-assisted SL-VLP and inertial fusion positioning scheme. The LED is fixed as the transmitter. The device integrating the PD, CMOS camera (iPhone 7) and IMU serves as the receiver and moves on a track that can drive it at constant velocities. In the experiment, data at speeds from 3 m/s to 9 m/s with outage rates from 0 to 22% are measured and analyzed. Considering the sparse LED layout, there is an SL-VLP outage area between two LEDs, and the SL-VLP outage rate is expressed as:
$$outage\;rate = \frac{{{L_{outage}}}}{L}$$
where ${L_{outage}}$ is the length of the SL-VLP outage area, and $L$ is the total length of the coverage area of the two LEDs plus the SL-VLP outage area.

3.1 Time overhead for the proposed positioning scheme
In this paper, the computation time of the proposed positioning scheme is measured 564 times at SL-VLP outage rates of 0, 5.5%, 11%, and 22% while the vehicle moves at 5 m/s, and the results are shown in Fig. 5(a)-(d). The x-axis (frame number) in Fig. 5 indexes the frames, so the value at $x = i$ is the time overhead of frame $i$. The time overhead ${T_o}$ of the proposed scheme is expressed as:
3.2 Mean positioning error for the proposed vehicle positioning scheme
Figure 6 shows the positioning errors under different positioning schemes when the vehicle moves at 5 m/s with an SL-VLP outage rate of 22%. To obtain the positioning error, the measurement is scaled by a factor of 10 according to the ratio of the experimental setup to the real scene. It can be seen that in the area without LED coverage, SL-VLP is unavailable, and the proposed positioning scheme mainly depends on the PF-based IMU scheme. In addition, due to cumulative errors, even if PF is used to optimize the positioning performance, the positioning error of the IMU still gradually increases. In this experiment, the mean error of the PF-based IMU positioning scheme is 0.48 m, the mean error of the PF-based SL-VLP positioning scheme is 0.29 m, and the mean positioning error of the proposed PF-assisted SL-VLP and inertial fusion positioning scheme is 0.17 m, which is about a 41% improvement in positioning accuracy over the PF-based SL-VLP positioning scheme. In addition, the proposed positioning scheme can overcome the outage problem of SL-VLP and alleviate the cumulative errors caused by the IMU.
The impact of the SL-VLP outage rate on the mean positioning error at different speeds is shown in Fig. 7(a)-(c). To investigate the effect of different LED layout sparsities on the proposed positioning scheme, the outage rate is set to 0, 5.5%, 11%, and 22%. The variance of PF-based SL-VLP is 0.045, the variance of the IMU is 0.051, and the variance of the proposed scheme is 0.015. It is observed that the mean positioning error of the PF-assisted SL-VLP and inertial fusion positioning scheme is lower and more stable across different outage rates and speeds. Furthermore, for speeds from 5 m/s to 7 m/s, the mean positioning errors at SL-VLP outage rates of 0, 5.5%, 11%, and 22% are 0.09 m, 0.11 m, 0.15 m, and 0.18 m, respectively. The mean positioning error of the proposed positioning scheme increases with the SL-VLP outage rate.
At an SL-VLP outage rate of 11%, the mean positioning errors of the SL-VLP, PF-based SL-VLP, IMU, PF-based IMU, and SL-VLP and inertial fusion schemes at different speeds are shown in Fig. 8. For moving speeds from 3 m/s to 9 m/s, 130 to 380 data points are collected at each speed, for a total of 1530 data points. It can be seen that the mean positioning error is 0.34 m for SL-VLP, 0.28 m for PF-based SL-VLP, 0.38 m for the IMU, and 0.31 m for the PF-based IMU. After using PF, the mean positioning errors of the SL-VLP and IMU-based positioning schemes decrease by 17.7% and 18.4%, respectively. This is because PF mitigates the effect of Gaussian and non-Gaussian noise in these two positioning schemes. The mean positioning error of the PF-assisted SL-VLP and inertial fusion positioning scheme is 0.16 m at an outage rate of 11% across all speeds, which is 42.9% lower than that of the PF-based SL-VLP scheme. Thus, the proposed PF-assisted SL-VLP and inertial fusion positioning scheme can improve the positioning accuracy.
The CDF curves of different positioning schemes are shown in Fig. 9. It can be seen that 90% of the positioning errors of the PF-based SL-VLP scheme are within 0.47 m, 90% of the positioning errors of the PF-based IMU positioning scheme are within 0.70 m, and 90% of the positioning errors of the proposed SL-VLP and inertial fusion scheme are within 0.27 m. For all data with different outage rates at different speeds, the mean positioning error is about 0.14 m by using the proposed positioning scheme.
4. Conclusions
In this paper, a PF-assisted SL-VLP and inertial fusion positioning scheme has been proposed and experimentally demonstrated. The LED light transmits its position information through OOK modulation, and the PD installed in the vehicle receives the signal and decodes the world coordinates of the LED. Then, the position of the vehicle is obtained through PF-assisted data fusion combining the SL-VLP and IMU positioning schemes. The PD-assisted CMOS camera-based SL-VLP scheme reduces the time overhead of positioning. In addition, by combining SL-VLP with the IMU-based positioning scheme, the positioning accuracy of the vehicle is improved in the case of SL-VLP outage. The experimental results show that the proposed positioning scheme achieves a mean positioning error of 0.14 m at speeds from 3 m/s to 9 m/s in scenarios with VLP outage rates from 0 to 22%.
Funding
National Natural Science Foundation of China (61775054); Natural Science Foundation of Hunan Province (2020JJ4210).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. J. Moon, I. Bae, and S. Kim, “Real-time near-optimal path and maneuver planning in automatic parking using a simultaneous dynamic optimization approach,” in 2017 IEEE Intelligent Vehicles Symposium (2017), pp. 193–196.
2. A. Bazzi, B. M. Masini, A. Zanella, and A. Calisti, “Visible Light Communications as a Complementary Technology for the Internet of Vehicles,” Comput. Commun. 93(1), 39–51 (2016). [CrossRef]
3. Y. Zhuang, L. Hua, L. N. Qi, J. Yang, P. Cao, Y. Cao, Y. P. Wu, J. Thompson, and H. Hass, “A Survey of Positioning Systems Using Visible LED Lights,” IEEE Commun. Surv. Tutorials 20(3), 1963–1988 (2018). [CrossRef]
4. O. Kaiwartya, Y. Cao, J. Lloret, S. Kumar, N. Aslam, R. Kharel, A. H. Abdullah, and R. R. Shah, “Geometry-Based Localization for GPS Outage in Vehicular Cyber Physical Systems,” IEEE Trans. Veh. Technol. 67(5), 3800–3812 (2018). [CrossRef]
5. C. -H. Park and J. -H. Han, “Performance Evaluation of GNSS and Motion Sensor Integrated Positioning Algorithm for Land Vehicle Monitoring,” in 2020 International Conference on Information and Communication Technology Convergence (ICTC) (2020), pp. 1592–1595.
6. Y. Wang, X. Yin, G. Cai, G. Wang, S. Guo, and K. Liang, “Tunnel Vehicle RSSI Positioning Algorithm Based on LMLF Model,” in 2019 4th International Conference on Cloud Computing and Internet of Things (CCIOT) (2019), p. 32.
7. Z. Li, L. Feng, and A. Yang, “Fusion Based on Visible Light Positioning and Inertial Navigation Using Extended Kalman Filters,” Sensors 17(5), 1093–1103 (2017). [CrossRef]
8. T. Kim, J. Lee, and T. Park, “Fusing Lidar, Radar, and Camera Using Extended Kalman Filter for Estimating the Forward Position of Vehicles,” in IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM) (2019), pp. 374–379.
9. D. Trong-Hop and Y. Myungsik, “An in-Depth Survey of Visible Light Communication Based Positioning Systems,” Sensors 16(5), 678–717 (2016). [CrossRef]
10. W. Xu, J. Wang, H. Shen, H. Zhang, and X. You, “Indoor Positioning for Multiphotodiode Device Using Visible-Light Communications,” IEEE Photonics J. 8(1), 1–11 (2016). [CrossRef]
11. J. He and B. Zhou, “Vehicle positioning scheme based on visible light communication using a CMOS camera,” Opt. Express 29(17), 27278–27290 (2021). [CrossRef]
12. J. He, K. Tang, J. He, and J. Shi, “Effective Vehicle-to-Vehicle Positioning Method Using Monocular Camera Based on VLC,” Opt. Express 28(4), 4433–4443 (2020). [CrossRef]
13. B. Zhou, A. Liu, V. Lau, J. Wen, S. Mumtaz, A. K. Bashir, and S. H. Ahmed, “Performance Limits of Visible Light-Based Positioning for Internet-of-Vehicles: Time-Domain Localization Cooperation Gain,” IEEE Trans. Intell. Transport. Syst. 22(8), 5374–5388 (2021). [CrossRef]
14. W. Guan, L. Huang, B. Hussain, and C. P. Yue, “Robust Robotic Localization Using Visible Light Positioning and Inertial Fusion,” IEEE Sens. J. 22(6), 4882–4892 (2022). [CrossRef]
15. R. Zhang, W. -D. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photonics J. 9(4), 1–9 (2017). [CrossRef]
16. H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A Single LED Visible Light Positioning System Based on Geometric Features and CMOS Camera,” IEEE Photonics Technol. Lett. 32(17), 1097–1100 (2020). [CrossRef]
17. C. Qin and X. Zhan, “VLIP: Tightly Coupled Visible-Light/Inertial Positioning System to Cope With Intermittent Outage,” IEEE Photonics Technol. Lett. 31(2), 129–132 (2019). [CrossRef]
18. R. Zhang, Z. Liu, K. Qian, S. Zhang, P. Du, C. Chen, and A. Alphones, “Outage Bridging and Trajectory Recovery in Visible Light Positioning Using Insufficient RSS Information,” IEEE Access 8, 162302–162312 (2020). [CrossRef]