
Vehicle positioning scheme based on particle filter assisted single LED visible light positioning and inertial fusion

Open Access

Abstract

Visible light positioning (VLP) can provide high positioning accuracy based on existing lighting facilities, which gives it great potential for intelligent transportation systems (ITS). However, in real scenarios, the performance of VLP is limited by the outage problem caused by the sparse distribution of light emitting diodes (LEDs) and by the time cost of the positioning algorithm. In this paper, a particle filter (PF) assisted single LED VLP (SL-VLP) and inertial fusion positioning scheme is proposed and experimentally demonstrated. It improves the robustness of VLP in sparse LED scenarios. In addition, the time cost and the positioning accuracy at different outage rates and speeds are investigated. The experimental results show that the proposed vehicle positioning scheme achieves mean positioning errors of 0.09 m, 0.11 m, 0.15 m and 0.18 m at SL-VLP outage rates of 0, 5.5%, 11% and 22%, respectively.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Vehicle positioning systems are essential for the internet of vehicles (IOV), supporting applications such as intelligent transportation, assisted parking, and autonomous navigation [1,2]. The global positioning system (GPS) is widely used and provides an easy-to-access solution for localization services. Although GPS is very popular, performance degradation is increasingly observed due to link blockages and multipath effects, which makes it unfeasible in urban canyons, inside tunnels, or underground [3]. To address these problems, a geometry-based positioning scheme for the case of GPS interruption has been proposed [4]. In [5], an integrated positioning algorithm of GPS and an inertial measurement unit (IMU) is developed. In [6], a received signal strength indication (RSSI) localization algorithm based on a linear modified log function model is proposed. However, these schemes remain limited by their dependence on GPS. Inertial navigation using an IMU is a positioning technique that offers high short-term accuracy and resistance to interference, but it cannot provide accurate positioning in the long term due to cumulative errors [7]. In addition, vehicle positioning technologies such as light detection and ranging (LiDAR) and radio detection and ranging (radar) have been reported for IOV; they provide high positioning accuracy but require additional devices [8]. Visible light positioning (VLP), in contrast, is an emerging technology that can be deployed without additional positioning equipment: it is based on street lights and vehicle lights and can therefore greatly reduce implementation cost [9]. Meanwhile, VLP can compensate for weak or absent GPS signals by providing positioning services as vehicles receive the signals sent from LED lights.

In VLP systems, the common receivers are photodetectors (PDs) and complementary metal oxide semiconductor (CMOS) cameras. PD-based VLP struggles to provide reliable positioning in outdoor environments due to intense solar radiation and interference from other ambient light [10]. CMOS camera-based VLP, owing to the spatial separability of CMOS cameras, can effectively separate interference such as sunlight from the LED light sources, and it can provide reliable localization services in outdoor environments with interference sources [11]. However, image processing algorithms with high computational complexity are required [12,13].

In addition, traditional VLP systems face many challenges, especially the shortage of LEDs, line-of-sight (LOS) blockages, and the limited field-of-view (FOV) of the image sensor [14]. Although single-LED VLP (SL-VLP) schemes have been proposed [15,16], it is still difficult to guarantee a known LOS LED for every corner and area of a real scenario. Existing solutions for VLP outage [7,14,17,18] are mostly applicable to low-speed scenes and do not take into account the time overhead of the positioning algorithm. Moreover, the positioning error caused by the measurement noise of image sensors is the accuracy bottleneck of CMOS camera-based VLP.

In this paper, a PF-assisted SL-VLP and inertial fusion positioning scheme is proposed and experimentally demonstrated. At the transmitter, the LED sends a modulated signal carrying identification (ID) information. At the receiver, the vehicle is equipped with a device integrating a PD, a CMOS camera and an IMU for vehicle positioning. The CMOS camera is responsible for precise positioning based on a single LED, and the PD is used to receive and decode the ID information from the LEDs, which reduces the computational complexity and improves the system performance of the VLP. In addition, a PF-assisted data fusion scheme is used to fuse SL-VLP and inertial positioning, improving the positioning accuracy and solving the outage problem of SL-VLP.

2. Proposed SL-VLP and inertial fusion scheme

Figure 1 shows the proposed PF-assisted SL-VLP and inertial fusion positioning scheme. At the transmitter, each LED light broadcasts a signal carrying its location. Since each LED occupies a different geographical location, each LED has unique ID data. Firstly, the ID data of each LED light is packaged into a data packet. The packet consists of a 4-bit header, a multi-bit payload and a 1-bit tail. The header and tail are inserted at the beginning and end of each packet and carry the information needed for synchronous detection of each packet. The multi-bit payload contains the position data, including the ID information of the LED, and the exact payload length is determined by the number of LEDs. In addition, Manchester coding is used to avoid flickering of the LEDs. Finally, the packet is transmitted via on-off keying (OOK) modulation.
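For illustration, the following sketch assembles and Manchester-encodes such a packet in Python. The concrete header bit pattern, the payload width, and the Manchester polarity are assumptions made for this example, since the paper does not specify them.

```python
import numpy as np

HEADER = [1, 0, 1, 0]  # hypothetical 4-bit header pattern (not specified in the paper)
TAIL = [1]             # 1-bit tail

def manchester(bits):
    """Manchester-encode each bit: 1 -> 10, 0 -> 01 (polarity assumed)."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def make_ook_frame(led_id, payload_bits=8):
    """Build one OOK frame: header + LED-ID payload + tail, Manchester coded."""
    payload = [(led_id >> i) & 1 for i in range(payload_bits - 1, -1, -1)]
    packet = HEADER + payload + TAIL
    return np.array(manchester(packet))  # 1 = LED on, 0 = LED off per chip

frame = make_ook_frame(led_id=5)  # drives the LED at the OOK chip rate
```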

Fig. 1. The proposed PF-assisted SL-VLP and inertial fusion positioning scheme.

After transmission over the free-space channel, the signals sent from the LEDs with location information are collected by the PD installed on the vehicle and processed by the microcontroller unit (MCU) to obtain the ID information of the LEDs, thereby yielding the world coordinates of the neighboring LEDs. In addition, an IMU-based vehicle positioning scheme is used to analyze the inertial data of the vehicle collected through the IMU. Meanwhile, in the central processing unit (CPU), the SL-VLP measurement is calculated using photogrammetry analysis. Moreover, a PF-assisted data fusion scheme is used to fuse the results of the SL-VLP and IMU-based positioning schemes. Finally, by combining the world coordinates of the neighboring LEDs, accurate vehicle position information can be obtained. By using the proposed positioning scheme, the vehicle can still maintain effective positioning when the SL-VLP system is interrupted and obtain more accurate positioning results when it is not.

2.1. SL-VLP algorithm

The schematic of the 3D world coordinate system, the 3D camera coordinate system, and the 2D image plane coordinate system is shown in Fig. 2. The position of the LED in the world coordinate system is ${P_i} = {[{X_i},{Y_i},{Z_i}]^T},(i = 1,2,\ldots ,N)$, and ${P_i}$ is mapped through the camera lens to the position ${p_i} = {[{x_i},{y_i}]^T}$ in the image coordinate system, where ${P_i}$ is known information once the LED is deployed.

Fig. 2. The schematic of world coordinates, camera coordinates, and image coordinate systems.

Assuming a calibrated pinhole camera with a perspective projection model, the VLP model is expressed as:

$${P_{ci}} = \boldsymbol{R}({{P_i} - {P_{co}}} )$$
where ${P_{ci}} = {[{{X_{ci}},{Y_{ci}},{Z_{ci}}} ]^T},({i = 1,2, \ldots N} )$ is the position, in the camera coordinate system, of the LED located at ${P_i}$, and ${P_{co}} = {[{{X_{co}},{Y_{co}},{Z_{co}}} ]^T}$ is the origin of the camera coordinate system expressed in the world coordinate system, which is the position of the positioning terminal $S = {[{{X_s},{Y_s},{Z_s}} ]^T}$. $\boldsymbol{R}$ is the rotation matrix from the 3D world coordinate system to the 3D camera coordinate system, given by:
$${R = {R_x}(\alpha )\mathrm{\ \cdot }{R_y}(\beta )\mathrm{\ \cdot }{R_z}(\gamma )}$$
$$= \left[ {\begin{array}{ccc} 1&0&0\\ 0&{cos\alpha }&{sin\alpha }\\ 0&{ - sin\alpha }&{cos\alpha } \end{array}} \right]\mathrm{\ \cdot }\left[ {\begin{array}{ccc} {cos\beta }&0&{ - sin\beta }\\ 0&1&0\\ {sin\beta }&0&{cos\beta } \end{array}} \right]\mathrm{\ \cdot }\left[ {\begin{array}{ccc} {cos\gamma }&{ - sin\gamma }&0\\ {sin\gamma }&{cos\gamma }&0\\ 0&0&1 \end{array}} \right]$$
where $\alpha $, $\beta $ and $\gamma $ are the rotation angles of the X, $Y$ and Z axes of the camera coordinate system relative to the world coordinate system. The three rotation angles can be estimated by the IMU installed on the vehicle. In this paper, the vehicle moves in the horizontal plane, so ${R_x}(\alpha )$ and ${R_y}(\beta )$ are constant matrices ${C_\alpha }$ and ${C_\beta }$, and the camera plane is always parallel to the LED plane. Ideally, ${R_z}(\gamma )$ would then be the identity matrix; in practice, however, a tiny angular deviation always remains between the camera plane and the LED light plane, so ${R_z}(\gamma )$ is not exactly an identity matrix. This deviation affects the results of the VLP, but its effect can be greatly mitigated by using the PF.
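A minimal sketch of this rotation matrix, transcribing the matrices written out above (numpy is used purely for illustration):

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """R = Rx(alpha) . Ry(beta) . Rz(gamma), with the sign conventions above."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
    Ry = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

R = rotation_matrix(0.0, 0.0, 0.01)  # e.g. a tiny residual yaw deviation
```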

The vertical distance from the camera plane to the LED plane is given by:

$$H = {Z_i} - {Z_s} = \frac{{df}}{{d^{\prime}}}$$
where ${Z_s}$ is the $Z$ coordinate of the positioning terminal, $f$ is the focal length, $d$ is the physical diameter of the LED and $d^{\prime}$ is the diameter of the LED in the image. The relationship between the coordinates ${P_{ci}} = {[{{X_{ci}},{Y_{ci}},{Z_{ci}}} ]^T}$ of the LED in the camera coordinate system and the coordinates ${p_i} = {[{{x_i},{y_i}} ]^T}$ in the image coordinate system is expressed as:
$$\frac{{{X_{ci}}}}{{{x_i}}} = \frac{{{Y_{ci}}}}{{{y_i}}} = \frac{{{Z_{ci}}}}{{ - f}}$$

Substituting (2), (3) and (4) into (1), the relationship among the world coordinates ${P_i} = {[{{X_i},{Y_i},{Z_i}} ]^T}$ of the LED, the image coordinates ${p_i} = {[{{x_i},{y_i}} ]^T}$ of the LED and the world coordinates $S = {[{{X_s},{Y_s},{Z_s}} ]^T}$ of the positioning terminal is given by:

$${\left[ {\begin{array}{c} {{x_i}}\\ {{y_i}}\\ {d/d^{\prime}} \end{array}} \right] = {C_\alpha }\mathrm{\ \cdot }{C_\beta }({ - f/H} )\left[ {\begin{array}{c} {{X_i} - {X_s}}\\ {{Y_i} - {Y_s}}\\ {{Z_s} - {Z_i}} \end{array}} \right]}$$
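To make the geometry concrete, the following sketch inverts the first two rows of this model for a level camera (${C_\alpha } \approx {C_\beta } \approx I$, as in the horizontal-motion setup above). The focal length value is a placeholder, not a parameter from the paper:

```python
import numpy as np

def sl_vlp_position(xi, yi, d_img, Xi, Yi, d=0.02, f=0.004):
    """Estimate the terminal position (Xs, Ys) from a single LED image.

    xi, yi : LED centre on the image plane (m); d_img : LED diameter in the
    image (m); Xi, Yi : LED world coordinates (m); d : physical LED diameter
    (2 cm in the experiment); f : focal length (placeholder value).
    Assumes a level camera, i.e. C_alpha and C_beta close to identity.
    """
    H = d * f / d_img      # vertical camera-to-LED distance, H = d f / d'
    Xs = Xi + xi * H / f   # invert x_i = (-f/H)(X_i - X_s)
    Ys = Yi + yi * H / f   # invert y_i = (-f/H)(Y_i - Y_s)
    return Xs, Ys, H
```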

2.2. IMU-based inertial positioning scheme

To estimate the pose of the vehicle over time, the vehicle motion can be described as a nonlinear dynamic system, given by:

$${\widehat {{s_t}} = f({{s_{t - 1}}} )+ {w_t}}$$
where $f({\cdot} )$ is a nonlinear state transition function for the state prediction of inertial sensors, and ${w_t}$ is Gaussian process noise. ${s_t} = {[{x_t},{y_t},{{\dot{x}}_t},{{\dot{y}}_t}]^T}$ is the state of the vehicle at time $t$. Considering that the vehicle moves in the plane spanned by the X and Y axes, the state ${s_t}$ contains the position of the vehicle in world coordinates (${x_t}$, ${y_t}$) and the velocities along the X and Y axis directions (${\dot{x}_t}$, ${\dot{y}_t}$).

The vehicle attitude information $\left[ {\begin{array}{ccc} {{a_x}}&{{a_y}}&{{a_z}}\\ r&p&y\\ {{\omega_r}}&{{\omega_p}}&{{\omega_y}} \end{array}} \right]$ is acquired from the IMU sensor by a quaternion-to-Euler-angle transformation of the raw sensor data, where ${a_x}$, ${a_y}$ and ${a_z}$ are the linear accelerations along the X, Y, and Z axes, respectively; ${\omega _r}$, ${\omega _p}$ and ${\omega _y}$ are the angular velocities of the roll, pitch and yaw angles, respectively; and r, p and y denote the roll, pitch and yaw angles. Then, the state of the vehicle is given by:

$${{s_t} = {\boldsymbol{R}_{\boldsymbol{s}}}{s_{t - 1}} + {\boldsymbol{R}_{\boldsymbol{c}}}}$$
where ${\boldsymbol{R}_{\boldsymbol{s}}} = \left[ {\begin{array}{cccc} 1&0&{\mathrm{\Delta }t}&0\\ 0&1&0&{\mathrm{\Delta }t}\\ 0&0&1&0\\ 0&0&0&1 \end{array}} \right]$ and ${\boldsymbol{R}_{\boldsymbol{c}}} = \left[ {\begin{array}{c} {0.5{a_x}\mathrm{\Delta }{t^2}}\\ {0.5{a_y}\mathrm{\Delta }{t^2}}\\ {{a_x}\mathrm{\Delta }t}\\ {{a_y}\mathrm{\Delta }t} \end{array}} \right]$, and $\mathrm{\Delta }t$ is the time to transfer the state ${s_{t - 1}}$ to the state ${s_t}$.
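A minimal sketch of this dead-reckoning update, using the state ordering $[x, y, \dot{x}, \dot{y}]^T$ implied by ${\boldsymbol{R}_{\boldsymbol{s}}}$ and ${\boldsymbol{R}_{\boldsymbol{c}}}$:

```python
import numpy as np

def imu_predict(s_prev, ax, ay, dt):
    """One inertial propagation step: s_t = Rs @ s_{t-1} + Rc.

    s_prev : state [x, y, vx, vy]; ax, ay : IMU linear accelerations (m/s^2);
    dt : sampling interval (s).
    """
    Rs = np.array([[1, 0, dt, 0],
                   [0, 1, 0, dt],
                   [0, 0, 1, 0],
                   [0, 0, 0, 1]], dtype=float)
    Rc = np.array([0.5 * ax * dt**2, 0.5 * ay * dt**2, ax * dt, ay * dt])
    return Rs @ np.asarray(s_prev, dtype=float) + Rc

s = imu_predict([0.0, 0.0, 5.0, 0.0], ax=0.2, ay=0.0, dt=0.04)
```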

2.3. PF-assisted SL-VLP and inertial fusion positioning scheme

The flowchart of the PF-assisted SL-VLP and inertial fusion positioning scheme is shown in Fig. 3. By using a particle filter based on sampling importance resampling (SIR), the positioning error caused by hardware noise and angle deviation is reduced. With $\mathrm{\Delta }t$ the interval between adjacent sampling times, at sampling time $k$ the vehicle state is expressed as ${s_k} = {[{x_k},{{\dot{x}}_k},{y_k},{{\dot{y}}_k}]^T}$, where $({{x_k},{y_k}})$ is the position of the vehicle at time $k$ and $({{{\dot{x}}_k},{{\dot{y}}_k}})$ are the velocities of the vehicle along the x-axis and y-axis at time $k$. The state evolution of the vehicle is given by:

$${{s_{k + 1}} = \boldsymbol{\phi }{s_k} + \boldsymbol{\varphi }{a_k} + {\omega _k}}$$
where $\boldsymbol{\phi } = \left[ {\begin{array}{cccc} 1&{\mathrm{\Delta }t}&0&0\\ 0&1&0&0\\ 0&0&1&{\mathrm{\Delta }t}\\ 0&0&0&1 \end{array}} \right]$ and $\boldsymbol{\varphi } = \left[ {\begin{array}{cc} {0.5\mathrm{\Delta }{t^2}}&0\\ {\mathrm{\Delta }t}&0\\ 0&{0.5\mathrm{\Delta }{t^2}}\\ 0&{\mathrm{\Delta }t} \end{array}} \right]$, ${a_k}$ is the acceleration of the vehicle along the x-axis and y-axis (${a_x},{a_y}$), and ${\omega _k}$ is the process noise.

Fig. 3. The flowchart of PF-assisted SL-VLP and inertial fusion positioning.

The observation function based on the SL-VLP vehicle positioning scheme is given by:

$${{z_k} = \; \sqrt {{{({{x_k} - {x_s}} )}^2} + {{({{y_k} - {y_s}} )}^2}} + {v_k}}$$
where ${z_k}$ is the distance between the vehicle and the target LED obtained by the SL-VLP vehicle positioning scheme, (${x_s}$, ${y_s}$) is the position of the LED light in world coordinates, and ${v_k}$ is the measurement noise at time $k$.

The total number of particles is N, and $w_k^i$ is the weight of particle $s_k^i$. After obtaining the observation ${z_k}$ at sampling time $k$, the particles can be predicted from the particle set $\{{s_{k - 1}^i,w_{k - 1}^i} \}_{i = 1}^N$ at sampling time $k - 1$. The prediction process is as follows. Firstly, the particles $\{{s_k^i} \}_{i = 1}^N$ at sampling time $k$ are drawn from the prior probability density:

$$s_k^i\sim p({{s_k}\textrm{|}s_{k - 1}^i} )$$

Secondly, for $i = 1,2, \ldots ,N$, the weight of each particle is calculated according to the likelihood function:

$$w_k^i = w_{k - 1}^ip({{z_k}\textrm{|}s_k^i} )$$

Then, the weight of each particle is normalized:

$$w_k^i = \frac{{w_k^i}}{{\mathop \sum \nolimits_{j = 1}^N w_k^j}}$$

Finally, a resampling algorithm is applied to the particle set $\{{s_k^i,w_k^i} \}_{i = 1}^N$ to generate a new particle set $\{{\tilde{s}_k^i,\tilde{w}_k^i} \}_{i = 1}^N$ based on the weights, and the weight of each particle is reset to $1/N$. The vehicle state at sampling time $k$ is then used to estimate the vehicle position. It is expressed as:

$$\hat{s} = \mathop \sum \limits_{i = 1}^N \tilde{s}_k^i\tilde{w}_k^i$$

Therefore, the PF can be used to mitigate both the Gaussian and non-Gaussian noise from SL-VLP and improve the localization accuracy of SL-VLP.
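The following sketch implements one SIR iteration over the prediction, weighting, normalization, resampling and estimation steps above. The Gaussian range likelihood and the noise standard deviations are assumptions made for this example; the paper does not state them:

```python
import numpy as np

def sir_step(particles, weights, a_k, z_k, led_xy, dt, sigma_w=0.05, sigma_v=0.1):
    """One SIR particle-filter iteration for the range observation model.

    particles : (N, 4) states [x, vx, y, vy]; weights : (N,) summing to 1;
    a_k : (ax, ay); z_k : SL-VLP range to the LED at led_xy = (xs, ys);
    sigma_w, sigma_v : assumed process / measurement noise std devs.
    """
    N = len(particles)
    phi = np.array([[1, dt, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 1, dt],
                    [0, 0, 0, 1]])
    var = np.array([[0.5 * dt**2, 0],
                    [dt, 0],
                    [0, 0.5 * dt**2],
                    [0, dt]])
    # Predict: draw each particle from the state transition prior
    particles = particles @ phi.T + var @ np.asarray(a_k) \
        + np.random.normal(0.0, sigma_w, particles.shape)
    # Weight: Gaussian likelihood of the measured range (assumed form)
    rng = np.hypot(particles[:, 0] - led_xy[0], particles[:, 2] - led_xy[1])
    weights = weights * np.exp(-0.5 * ((z_k - rng) / sigma_v) ** 2)
    weights /= weights.sum()  # normalize
    # Resample with probability proportional to weight, reset weights to 1/N
    idx = np.random.choice(N, size=N, p=weights)
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    s_hat = weights @ particles  # weighted-mean state estimate
    return particles, weights, s_hat
```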

In addition, linear minimum variance estimation is adopted to fuse the data of the IMU and SL-VLP. The linear minimum variance estimate for multi-sensor information fusion is expressed as:

$${\hat{S}_0} = {A_1}{\hat{S}_1} + {A_2}{\hat{S}_2} + \ldots + {A_L}{\hat{S}_L}$$
where ${\hat{S}_i},i = 1,2,\ldots ,L$ is the unbiased estimate from the $i$th sensor, ${A_i},i = 1,2,\ldots ,L$ is the optimal weight of ${\hat{S}_i}$, and the optimal weighting matrix is given by:
$$A = \; {[{{A_1} \ldots \; {A_L}} ]^T} = \; {P^{ - 1}}e{({{e^T}{P^{ - 1}}e} )^{ - 1}}$$
where P is the covariance matrix of the estimation errors, and e is a column vector whose elements are all set to 1.
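A small sketch of this fusion rule; the covariance values in the usage comment are placeholders, not measured quantities:

```python
import numpy as np

def fuse_min_variance(estimates, P):
    """Fuse L unbiased sensor estimates with weights A = P^-1 e (e^T P^-1 e)^-1.

    estimates : (L,) per-sensor estimates of the same scalar quantity;
    P : (L, L) covariance matrix of the estimation errors.
    """
    L = len(estimates)
    e = np.ones((L, 1))
    Pinv = np.linalg.inv(P)
    A = Pinv @ e @ np.linalg.inv(e.T @ Pinv @ e)  # optimal weights, sum to 1
    return float(A.T @ np.asarray(estimates))

# e.g. fusing one coordinate from SL-VLP and IMU, assuming independent errors:
x_fused = fuse_min_variance([1.02, 0.95], np.diag([0.09**2, 0.31**2]))
```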

3. Experimental setup and results discussion

In this paper, to simulate a real scene, the ratio of the experimental setup to the real scene is 1:10. In the experiment, the distance between the vehicle and the LED light is set to 50 cm and the diameter of the LED is 2 cm; other parameters are shown in Table 1. Figure 4 shows the experimental setup of the PF-assisted SL-VLP and inertial fusion positioning scheme. The LED is fixed as the transmitter. The receiver, a device integrating a PD, a CMOS camera (iPhone 7) and an IMU, is moved along a track that can drive it at constant velocities. In the experiment, data for speeds from 3 m/s to 9 m/s with outage rates from 0 to 22% are measured and analyzed. Considering the sparse LED layout, there is an SL-VLP outage area between two LEDs, and the SL-VLP outage rate is expressed as:

$${R_{outage}} = \frac{{{L_{outage}}}}{L}$$
where ${L_{outage}}$ is the length of the SL-VLP outage area, and L is the total length of the coverage area of two LEDs and the SL-VLP outage area.

Fig. 4. Experimental setup of the PF-assisted SL-VLP and inertial fusion positioning scheme.

Table 1. The experimental parameters

3.1 Time overhead for the proposed positioning scheme

In this paper, the computation time of the proposed positioning scheme is measured 564 times at SL-VLP outage rates of 0, 5.5%, 11%, and 22% while the vehicle moves at 5 m/s, and the results are shown in Fig. 5(a)-(d). The x-axis (frame number) in Fig. 5 indexes the frames, so the point at $x = i$ is the time overhead of frame $i$. The time overhead ${T_o}$ of the proposed scheme is expressed as:

$${T_o} = \left\{ \begin{array}{l} {T_{PF - IMU}},{\kern 1cm} \textrm{ if SL - VLP outage}\\ {T_{SL - VLP}} + \; \; {T_f},{\kern 1cm}\textrm{else} \end{array} \right.$$
where ${T_{PF - IMU}}$ is the time cost of the PF-based IMU scheme, ${T_{SL - VLP}}$ is the time cost of the SL-VLP scheme, and ${T_f}$ is the time cost of the PF-assisted SL-VLP and inertial fusion scheme. The time overhead of the proposed scheme is the sum of the SL-VLP positioning and the PF-assisted fusion when SL-VLP is running; otherwise, it is the time cost of the PF-based IMU scheme. The average computation time of the positioning scheme is 22 ms, 21 ms, 17 ms, and 16 ms at SL-VLP outage rates of 0, 5.5%, 11%, and 22%, respectively. Because the PD is used for LED ID identification, the time cost of stripe decoding of images by the CMOS camera is eliminated and the system performance is improved. The average time overhead of the proposed positioning scheme is 19 ms. In this paper, the frame rate of the camera is 25 fps, so the interval between two adjacent frames is 40 ms; thus, the proposed positioning scheme can complete the positioning of the current frame before the arrival of the next frame. In addition, the average computation time of the proposed positioning scheme decreases as the SL-VLP outage rate increases, because the SL-VLP positioning is not executed when the vehicle leaves the LED coverage.
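The per-frame control flow implied by this timing model can be sketched as below; the callables form a hypothetical interface, not the authors' code:

```python
def positioning_step(led_visible, imu_pf_step, sl_vlp_step, fusion_step):
    """Run one 40 ms frame of the scheme.

    led_visible : whether the camera currently sees an LED (SL-VLP available);
    the three *_step arguments are caller-supplied callables (hypothetical).
    """
    if not led_visible:
        return imu_pf_step()        # T_o = T_PF-IMU during SL-VLP outage
    vlp_fix = sl_vlp_step()         # T_o = T_SL-VLP ...
    return fusion_step(vlp_fix)     #       ... + T_f when SL-VLP is running
```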

Fig. 5. The measured positioning time with SL-VLP outage rates of (a) 0, (b) 5.5%, (c) 11%, (d) 22%.

3.2 Mean positioning error for the proposed vehicle positioning scheme

Figure 6 shows the positioning errors of the different positioning schemes when the vehicle moves at 5 m/s with an SL-VLP outage rate of 22%. To obtain the positioning error, the measurements are scaled by a factor of 10 according to the 1:10 ratio of the experimental setup to the real scene. It can be seen that in the area without LED coverage, SL-VLP is not available, and the proposed positioning scheme depends mainly on the PF-based IMU scheme. In addition, due to cumulative errors, even though the PF is used to optimize the positioning performance, the positioning error of the IMU still gradually increases. In this experiment, the mean error of the PF-based IMU positioning scheme is 0.48 m, that of the PF-based SL-VLP positioning scheme is 0.29 m, and that of the proposed PF-assisted SL-VLP and inertial fusion positioning scheme is 0.17 m, which is about a 41% improvement in positioning accuracy over the PF-based SL-VLP positioning scheme. In addition, the proposed positioning scheme can overcome the outage problem of SL-VLP and alleviate the cumulative errors caused by the IMU.

Fig. 6. Positioning error under different positioning schemes.

The impact of the SL-VLP outage rate on the mean positioning error at different speeds is shown in Fig. 7(a)-(c). To investigate the effect of LED layout sparsity on the proposed positioning scheme, the outage rate is set to 0, 5.5%, 11%, and 22%. The error variance of the PF-based SL-VLP scheme is 0.045, that of the IMU is 0.051, and that of the proposed scheme is 0.015. It is observed that the mean positioning error of the PF-assisted SL-VLP and inertial fusion positioning scheme is lower and more stable across outage rates and speeds. Furthermore, for speeds from 5 m/s to 7 m/s, the mean positioning errors at the four SL-VLP outage rates are 0.09 m, 0.11 m, 0.15 m, and 0.18 m, respectively; the mean positioning error of the proposed scheme increases with the SL-VLP outage rate.

Fig. 7. Mean positioning error versus SL-VLP outage rate at speeds of (a) 5 m/s, (b) 6 m/s, (c) 7 m/s.

At an SL-VLP outage rate of 11%, the mean positioning errors of the SL-VLP, PF-based SL-VLP, IMU, PF-based IMU, and SL-VLP and inertial fusion schemes at different speeds are shown in Fig. 8. For moving speeds from 3 m/s to 9 m/s, 130 to 380 samples are collected at each speed, for a total of 1530 samples. It can be seen that the mean positioning error is 0.34 m for SL-VLP, 0.28 m for PF-based SL-VLP, 0.38 m for IMU, and 0.31 m for PF-based IMU. After applying the PF, the mean positioning errors of the SL-VLP and IMU-based positioning schemes decrease by 17.7% and 18.4%, respectively, because the PF mitigates the effect of Gaussian and non-Gaussian noise in these two positioning schemes. The mean positioning error of the PF-assisted SL-VLP and inertial fusion positioning scheme is 0.16 m at an outage rate of 11% across all speeds, which is 42.9% lower than that of the PF-based SL-VLP scheme. Thus, the proposed PF-assisted SL-VLP and inertial fusion positioning scheme improves the positioning accuracy.

Fig. 8. Mean positioning error of different positioning schemes at different velocities.

The CDF curves of the different positioning schemes are shown in Fig. 9. It can be seen that 90% of the positioning errors of the PF-based SL-VLP scheme are within 0.47 m, 90% of those of the PF-based IMU scheme are within 0.70 m, and 90% of those of the proposed SL-VLP and inertial fusion scheme are within 0.27 m. Over all data with different outage rates at different speeds, the mean positioning error of the proposed positioning scheme is about 0.14 m.

Fig. 9. CDF curves of different positioning schemes.

4. Conclusions

In this paper, a PF-assisted SL-VLP and inertial fusion positioning scheme is proposed and experimentally demonstrated. The LED light transmits its position information through OOK modulation; the PD installed on the vehicle receives the signal and decodes the world coordinates of the LED. Then, the position of the vehicle is obtained through PF-assisted data fusion combining the SL-VLP and IMU positioning schemes. The PD-assisted CMOS camera-based SL-VLP scheme reduces the time overhead of positioning. In addition, by combining SL-VLP with the IMU-based positioning scheme, the positioning accuracy of the vehicle is maintained in the case of SL-VLP outage. The experimental results show that the proposed positioning scheme achieves a mean positioning error of 0.14 m at speeds from 3 m/s to 9 m/s in scenarios with VLP outage rates from 0 to 22%.

Funding

National Natural Science Foundation of China (61775054); Natural Science Foundation of Hunan Province (2020JJ4210).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Moon, I. Bae, and S. Kim, “Real-time near-optimal path and maneuver planning in automatic parking using a simultaneous dynamic optimization approach,” in 2017 IEEE Intelligent Vehicles Symposium (2017), pp. 193–196.

2. A. Bazzi, B. M. Masini, A. Zanella, and A. Calisti, “Visible Light Communications as a Complementary Technology for the Internet of Vehicles,” Comput. Commun. 93(1), 39–51 (2016). [CrossRef]  

3. Y. Zhuang, L. Hua, L. N. Qi, J. Yang, P. Cao, Y. Cao, Y. P. Wu, J. Thompson, and H. Haas, “A Survey of Positioning Systems Using Visible LED Lights,” IEEE Commun. Surv. Tutorials 20(3), 1963–1988 (2018). [CrossRef]

4. O. Kaiwartya, Y. Cao, J. Lloret, S. Kumar, N. Aslam, R. Kharel, A. H. Abdullah, and R. R. Shah, “Geometry-Based Localization for GPS Outage in Vehicular Cyber Physical Systems,” IEEE Trans. Veh. Technol. 67(5), 3800–3812 (2018). [CrossRef]  

5. C. -H. Park and J. -H. Han, “Performance Evaluation of GNSS and Motion Sensor Integrated Positioning Algorithm for Land Vehicle Monitoring,” in 2020 International Conference on Information and Communication Technology Convergence (ICTC) (2020), pp. 1592–1595.

6. Y. Wang, X. Yin, G. Cai, G. Wang, S. Guo, and K. Liang, “Tunnel Vehicle RSSI Positioning Algorithm Based on LMLF Model,” in 2019 4th International Conference on Cloud Computing and Internet of Things (CCIOT) (2019), p. 32.

7. Z. Li, L. Feng, and A. Yang, “Fusion Based on Visible Light Positioning and Inertial Navigation Using Extended Kalman Filters,” Sensors 17(5), 1093–1103 (2017). [CrossRef]  

8. T. Kim, J. Lee, and T. Park, “Fusing Lidar, Radar, and Camera Using Extended Kalman Filter for Estimating the Forward Position of Vehicles,” in IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM) (2019), pp. 374–379.

9. T.-H. Do and M. Yoo, “An in-Depth Survey of Visible Light Communication Based Positioning Systems,” Sensors 16(5), 678–717 (2016). [CrossRef]

10. W. Xu, J. Wang, H. Shen, H. Zhang, and X. You, “Indoor Positioning for Multiphotodiode Device Using Visible-Light Communications,” IEEE Photonics J. 8(1), 1–11 (2016). [CrossRef]  

11. J. He and B. Zhou, “Vehicle positioning scheme based on visible light communication using a CMOS camera,” Opt. Express 29(17), 27278–27290 (2021). [CrossRef]  

12. J. He, K. Tang, J. He, and J. Shi, “Effective Vehicle-to-Vehicle Positioning Method Using Monocular Camera Based on VLC,” Opt. Express 28(4), 4433–4443 (2020). [CrossRef]  

13. B. Zhou, A. Liu, V. Lau, J. Wen, S. Mumtaz, A. K. Bashir, and S. H. Ahmed, “Performance Limits of Visible Light-Based Positioning for Internet-of-Vehicles: Time-Domain Localization Cooperation Gain,” IEEE Trans. Intell. Transport. Syst. 22(8), 5374–5388 (2021). [CrossRef]  

14. W. Guan, L. Huang, B. Hussain, and C. P. Yue, “Robust Robotic Localization Using Visible Light Positioning and Inertial Fusion,” IEEE Sens. J. 22(6), 4882–4892 (2022). [CrossRef]  

15. R. Zhang, W. -D. Zhong, Q. Kemao, and S. Zhang, “A single LED positioning system based on circle projection,” IEEE Photonics J. 9(4), 1–9 (2017). [CrossRef]  

16. H. Cheng, C. Xiao, Y. Ji, J. Ni, and T. Wang, “A Single LED Visible Light Positioning System Based on Geometric Features and CMOS Camera,” IEEE Photonics Technol. Lett. 32(17), 1097–1100 (2020). [CrossRef]  

17. C. Qin and X. Zhan, “VLIP: Tightly Coupled Visible-Light/Inertial Positioning System to Cope With Intermittent Outage,” IEEE Photonics Technol. Lett. 31(2), 129–132 (2019). [CrossRef]  

18. R. Zhang, Z. Liu, K. Qian, S. Zhang, P. Du, C. Chen, and A. Alphones, “Outage Bridging and Trajectory Recovery in Visible Light Positioning Using Insufficient RSS Information,” IEEE Access 8, 162302–162312 (2020). [CrossRef]  
