Indoor 3D NLOS VLP using a binocular camera and a single LED

Open Access

Abstract

In this paper, we propose a non-line-of-sight (NLOS) visible light positioning (VLP) system using a binocular camera and a single light emitting diode (LED) for the realization of 3D positioning with an arbitrary posture. The proposed system overcomes the challenges of the shadowing/blocking of the line-of-sight (LOS) transmission paths between transmitters and receivers (Rxs) and the need for a sufficient number of LEDs to be captured within the limited field of view of the camera-based Rx. We have developed an experimental testbed to evaluate the performance of the proposed system, with results showing that the lowest average error and root mean square error (RMSE) are 26.10 and 31.02 cm, respectively, following an error compensation algorithm. In addition, a label-based enhanced VLP scheme is proposed for the first time, which greatly improves the system performance, achieving average error and RMSE values of 7.31 and 7.74 cm and a 90th percentile accuracy of < 11 cm.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Location-based services, as one of the most critical items required in intelligent and context-aware Internet-of-Things (IoT) systems, are becoming increasingly important, especially in indoor environments [1], where the global positioning system (GPS) does not work well since the radio frequency (RF) signals are easily obstructed [2]. To provide enhanced indoor location-based services, several positioning technologies based on different wireless signals have been proposed, including wireless local area network (WLAN) [3], RF identification (RFID) [4], Bluetooth [5], ultra-wideband (UWB) [6], and visible light [7–11]. WLAN-, RFID-, and Bluetooth-based positioning systems have lower accuracy and must create and maintain the RF map frequently [12], whereas UWB-based positioning systems have higher accuracy but are costly [6]. Compared with the RF-based positioning technologies, the visible light positioning (VLP) system offers great potential because of its immunity to RF-induced electromagnetic interference, free and unrestricted spectrum, and a much higher level of security [13,14]. VLP uses light-emitting diode (LED) lights as the transmitters (Txs) and image sensors or photodiodes (PDs) as the receivers (Rxs) to detect the signals and to estimate the relationships between the Txs and the Rxs. Considering the widespread use of LED-based lights in buildings and of CMOS cameras for monitoring and in smart devices, we are seeing more research works on both VLP systems and the development of computer vision algorithms [15,16].

In VLP there are two key steps: acquiring the signal from the Tx and establishing a relationship between the world coordinate system (WCS) and the camera coordinate system (CCS). However, there are challenges to be addressed in both steps, including ($i$) the shadowing/blocking of the line-of-sight (LOS) paths between Txs and Rxs, which results in link failure; and ($ii$) how to compute the camera pose and the LED position in the CCS. To address these issues, several options have been investigated, including ($i$) VLP systems based on NLOS paths [17]; and ($ii$) the use of a geometric relationship constructed using multiple LEDs to determine the camera pose and LED positions [18,19]. However, capturing a sufficient number of LEDs within the limited field of view (FOV) of the camera is a problem. Therefore, VLP using a lower number of LEDs or a single LED is much more attractive and viable in many practical application scenarios. In such systems, LEDs with beacon signals together with a camera looking at the LED to estimate the roll angle are required [20–22]. In addition, the position of an LED in the CCS needs to be determined using the ratio of the LED diameter to that of its projection on the captured LED image. However, no works have been reported that overcome these challenges at the same time.

In this paper, we propose a NLOS VLP system using a binocular camera (BCam) and a single LED for the realization of 3D positioning with an arbitrary posture and demonstrate its operation. Furthermore, we determine the coordinates of the reflection points of the transmitted light on the ground in the CCS based on the binocular stereo vision model [23]. The camera pose is then measured using an inertial measurement unit (IMU). In addition, for the first time, we propose a label-based enhanced (LBE) VLP scheme to improve the performance of the NLOS VLP system.

The remainder of this paper is organized as follows. Section 2 provides a description of the system model. The presented NLOS VLP scheme based on a BCam is elaborated in Section 3, whereas Section 4 presents the analysis of the experimental positioning results. Finally, conclusions are provided in Section 5.

2. System model

The proposed NLOS VLP system is composed of a NLOS optical camera communication (OCC) subsystem for obtaining the received signal and a binocular stereo vision (BSV) model for transforming the coordinate of a single point from the pixel coordinate system (PCS) into CCS. In this section, first, we present an OCC signal recovery model followed by a BSV model. Next, we present an algorithm to estimate the position of the camera and an error compensation algorithm to optimize its performance. The schematic block diagram of the signal recovery process in OCC is depicted in Fig. 1.

Fig. 1. The processes of signal recovery in OCC.

2.1 OCC signal recovery model

In OCC, which is different from PD-based visible light communication (VLC), the focus is more on the estimation of the gray value of a single pixel on the image, since a pixel is the smallest unit of the imaging process. The received radiation intensity is given in [24]; here we take the ambient light into consideration, so that it can be expressed as:

$$r(u,v,t) = a(u,v)+l(u,v)s(t),$$
where $a(u,v)$ is the radiation intensity of the ambient light, $l(u,v)$ is the integral of the radiation intensity of the LED with no modulated signal over the region of pixel $(u,v)$, and $s(t)$ is the modulated signal. The image formation model is defined as:
$$i(u,v) = k\int_{-\infty }^{\infty } r(u,v,t)f(u,v,t)dt+n(u,v),$$
where $k$ is the sensor gain that converts radiance to pixel gray value, and $n(u,v)$ is the image noise. Note, ($i$) the image noise is a random variation of brightness or color information in the captured images, which is a degradation in the image signal caused by external sources and is always present in digital images during image acquisition, coding, transmission, and processing. Noise increases with the sensitivity setting of the camera, the length of the exposure, and the temperature, and it even varies among different camera models; and ($ii$) $f(u,v,t)$ is the exposure function of the camera. The line-by-line exposure characteristic of a rolling shutter-based camera can be expressed as a rectangular time window function related to a pixel row $v$ as given by:
$$f(u,v,t)=e(t_{v}-t),$$
where $e(t_{v}-t)$ is a shutter function, which is defined as:
$$e(t_{v}-t)=\begin{cases} 1,t\in(t_{v}-\Delta t, t_{v})\\ 0,t\notin (t_{v}-\Delta t, t_{v}) \end{cases},$$
where $\Delta t$ is the exposure time per row of pixels and $t_{v}$ is the exposure time of a pixel in row $v$. According to Eq. (3), Eq. (2) can be rewritten as:
$$i(u,v) = k\Delta t a(u,v)+kl(u,v)g(v)+n(u,v),$$
where $g(v)$ is the convolution of shutter function $e(t_{v}-t)$ and signal function $s(t)$, which can be expressed as:
$$g(v)=g'( t_{v})=\int_{-\infty }^{\infty } e(t_{v}-t)s(t)dt=e*s( t_{v})=e*s( v\Delta t ).$$

It is known that $k\Delta t a(u,v)$ is the ambient light component, which is a constant; when the LED is turned off, Eq. (5) reduces to:

$$i_{a}(u,v) = k\Delta t a(u,v)+n(u,v).$$

Note, provided the image has a high signal-to-noise ratio (SNR), the influence of $n(u,v)$ can be ignored. Here, we use frame subtraction, i.e., subtracting Eq. (7) from Eq. (5), to filter out the ambient light component, therefore we have:

$$\Delta i(u,v) = kl(u,v)g(v).$$

Summing the pixel values per row and performing the Fourier transform results in:

$$I(\omega )=L(\omega )*(E(\omega )S(\omega )).$$
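As an illustrative sketch of this step (our own, assuming the frame-subtracted image $\Delta i(u,v)$ is available as a 2D array with rows indexed by $v$), the row sum and Fourier transform can be computed as follows:

```python
import numpy as np

def row_spectrum(delta_i: np.ndarray) -> np.ndarray:
    """Sum the frame-subtracted image over each pixel row and return
    the spectrum of the resulting 1D row signal.

    delta_i : 2D array of shape (rows, cols); rows correspond to v.
    """
    # Summing over u collapses each row v to a single sample proportional
    # to g(v), since k*l(u,v) only scales the amplitude.
    row_signal = delta_i.sum(axis=1)
    # Under rolling-shutter exposure the row index v plays the role of time,
    # so the FFT over rows gives I(omega) = L(omega) * (E(omega) S(omega)).
    return np.fft.fft(row_signal)
```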

Note, the exposure window in Eq. (4) can be changed artificially, which means that we can choose different exposure time slots per pixel row. Two images with different exposures (one long and one short) are captured sequentially. Since the long exposure period is much longer than the period of the temporal signal, the long-exposure image can be considered to carry the same temporal signal as the short-exposure image [24]. Therefore, we can recover the signal function from the two different exposures via Eq. (9), which gives:

$$I_{1}(\omega )*(E_{2}(\omega )S(\omega ))-I_{2}(\omega )*(E_{1}(\omega )S(\omega ))=0.$$

The temporal signal $s(t)$ consists of a small, discrete set of temporal frequencies $\Omega =[\omega_{1},\omega_{2},\ldots,\omega_{m}]$, so $\vec {I_{1}}$ can be regarded as the vector of frequency components of $I_{1}(\omega )$ over $\Omega$; $\vec {I_{2}}$, $\vec {E_{1}}$, $\vec {E_{2}}$, and $\vec {S}$ are defined analogously. Equation (10) can then be expressed in matrix form as:

$$(\mathbf{I_{1}E_{2}} -\mathbf{I_{2}E_{1}} )\vec{S}=0,$$
where $\mathbf {I_{1}}$ and $\mathbf {I_{2}}$ are Toeplitz matrices defined by $\vec {I_{1}}$ and $\vec {I_{2}}$, respectively. $\mathbf {E_{1}}$ and $\mathbf {E_{2}}$ are diagonal matrices defined by $\vec {E_{1}}$ and $\vec {E_{2}}$, respectively. $\vec {S}$ can be solved by the linear equations in (11).
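As a minimal sketch (our own, under the assumption that the Toeplitz matrices $\mathbf {I_{1}}$, $\mathbf {I_{2}}$ and the diagonal matrices $\mathbf {E_{1}}$, $\mathbf {E_{2}}$ have already been assembled from the measured and known spectra), $\vec {S}$ can be recovered as the null-space direction of the coefficient matrix in Eq. (11):

```python
import numpy as np

def recover_signal_spectrum(I1, I2, E1, E2):
    """Solve (I1 E2 - I2 E1) S = 0 for the signal spectrum S (Eq. (11)).

    The homogeneous system is solved via SVD: S is the right-singular
    vector associated with the smallest singular value, i.e. the
    (approximate) null space of the coefficient matrix.
    """
    A = I1 @ E2 - I2 @ E1
    _, _, vh = np.linalg.svd(A)
    S = vh[-1]                        # null-space direction (defined up to scale)
    return S / np.linalg.norm(S)      # normalize for convenience
```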

2.2 Binocular stereo vision model

A BSV model can be used to determine the coordinate of a single point in space in the CCS from the pixel coordinates of its projection points on the left and right cameras. Assuming the coordinates of a point $P$ in space in the left and right CCSs are $\mathbf {P_{l}}$ and $\mathbf {P_{r}}$, respectively, see Fig. 2, the transformation is given by:

$$\mathbf{P_{l}=RP_{r}+T},$$
where $\mathbf {R}$ and $\mathbf {T}$ are the rotation matrix and translation vector from the right CCS to the left CCS, whose values can be determined by calibrating the BCam. We assume that the projections of point $P$ on the left and right cameras are $P_{1}$ and $P_{2}$, with coordinates ($u_{1}$, $v_{1}$) and ($u_{2}$, $v_{2}$), respectively. Let $z_{l}$ and $z_{r}$ represent the $z$-axis coordinates in the CCSs of the left and right cameras, respectively. Thus, the relationship between the PCS and CCS is given by:
$$\begin{cases} z_{l}\mathbf{t_{1}=K_{l}P_{l}} \\ z_{r}\mathbf{t_{2}=K_{r}P_{r}} \end{cases},$$
where $\mathbf {t_{1}}$=($u_{1}$, $v_{1}$, 1), $\mathbf {t_{2}}$=($u_{2}$, $v_{2}$, 1), and $\mathbf {K_{l}}$ and $\mathbf {K_{r}}$ are the intrinsic parameter matrices of the left and right cameras. Note, the value of $\mathbf {P_{l}}$ is calculated by solving Eqs. (12) and (13).
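For illustration, a minimal sketch of recovering $\mathbf {P_{l}}$ from Eqs. (12) and (13) by linear triangulation is given below; the cross-product formulation and least-squares solve are our own choices, and the calibration quantities $\mathbf {K_{l}}$, $\mathbf {K_{r}}$, $\mathbf {R}$, $\mathbf {T}$ are assumed known from the BCam calibration:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def triangulate(t1, t2, K_l, K_r, R, T):
    """Linear triangulation of P_l from Eqs. (12)-(13).

    t1, t2 : homogeneous pixel coordinates (u, v, 1) of P1 and P2.
    Returns the 3D coordinate of P in the left CCS.
    """
    # Left view:  t1 is parallel to K_l P_l, hence t1 x (K_l P_l) = 0.
    A_l = skew(t1) @ K_l
    b_l = np.zeros(3)
    # Right view: z_r t2 = K_r R^{-1} (P_l - T), hence
    # t2 x (K_r R^{-1} P_l) = t2 x (K_r R^{-1} T).
    M = K_r @ np.linalg.inv(R)
    A_r = skew(t2) @ M
    b_r = skew(t2) @ (M @ T)
    # Stack both constraints and solve in the least-squares sense.
    A = np.vstack([A_l, A_r])
    b = np.concatenate([b_l, b_r])
    P_l, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P_l
```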

Fig. 2. The model of binocular stereo vision.

2.3 BCam position estimation (BPE) algorithm

We can establish a transformation of the point $P$ from the WCS to the left CCS, thus having:

$$\mathbf{P_{l}=R_{l}P_{w}+T_{l}},$$
where $\mathbf {R_{l}}$ and $\mathbf {T_{l}}$ are the rotation matrix and translation vector, respectively. The position of the left camera in WCS can be expressed by:
$$\mathbf{W_{l}={-}R_{l}^{{-}1}T_{l}}.$$

The value of $\mathbf {R_{l}}$ in Eqs. (14) and (15) can be measured by an IMU, so that the value of $\mathbf {T_{l}}$ can be determined, and therefore the target position of the left camera can be readily obtained. In practice, Eqs. (12) and (13) may have no exact solution due to the system error, which involves the camera parameter error and the estimation error of the projection coordinates in the PCS. In this work, we transform the problem of solving Eqs. (12) and (13) into finding the minimum of the reprojection errors of $P_{1}$ and $P_{2}$. The optimum solution minimizing the 2D reprojection error function is given as:

$$\mathbf{P_{l}}^{{\ast}} = \arg\min _{\mathbf{P_{l}}} \left \{ \begin{Vmatrix} \mathbf{t_{1}-K_{l}P_{l}}/z_{l} \end{Vmatrix}^{2} + \begin{Vmatrix} \mathbf{t_{2}-K_{r}R^{{-}1}(P_{l}-T)}/z_{r} \end{Vmatrix}^{2} \right \} .$$

Finally, the corresponding target position expression can be obtained as given by:

$$\mathbf{W_{l}^{*}=P_{w}-R_{l}^{{-}1}P_{l}^{*}}.$$
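A minimal sketch of this BPE step (our own formulation of Eqs. (16) and (17), using the linear triangulation result as the initial guess and SciPy's general-purpose least-squares solver) is shown below:

```python
import numpy as np
from scipy.optimize import least_squares

def refine_P_l(P0, t1, t2, K_l, K_r, R, T):
    """Refine P_l by minimizing the 2D reprojection errors of Eq. (16).

    P0 : initial guess for P_l (e.g. from linear triangulation).
    """
    R_inv = np.linalg.inv(R)

    def residuals(P_l):
        # Left-camera reprojection residual: t1 - K_l P_l / z_l.
        p_l = K_l @ P_l
        r_l = t1[:2] - p_l[:2] / p_l[2]
        # Right-camera reprojection residual: t2 - K_r R^{-1}(P_l - T) / z_r.
        p_r = K_r @ (R_inv @ (P_l - T))
        r_r = t2[:2] - p_r[:2] / p_r[2]
        return np.concatenate([r_l, r_r])

    return least_squares(residuals, P0).x

def camera_position(P_w, P_l_star, R_l):
    """Eq. (17): world-coordinate position of the left camera."""
    return P_w - np.linalg.inv(R_l) @ P_l_star
```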

2.4 BPE error compensation algorithm

From Eqs. (16) and (17), we can see that there are three main sources of system error: the camera parameters, the IMU angle estimation, and the estimation error of the symmetric point in the PCS. In order to simplify the error analysis, the errors arising from the camera parameters and the IMU angle estimation are not taken into consideration. We focus on the impact of the estimation error of the symmetric point on the system performance and propose an error compensation algorithm to compensate for it. Therefore, we first derive the system error function from Eqs. (16) and (17), which is given as:

$$\mathbf{\Delta W_{l}=W_{l}^{*}-W_{l}^{'}=R_{l}^{{-}1}(P_{l}^{'}-P_{l}^{*})=R_{l}^{{-}1}\Delta P_{l}},$$
where $\mathbf {\Delta W_{l}}$ is the system error, including the estimation errors on the $x$, $y$, and $z$ axes, and $\mathbf {W_{l}^{'}}$ is the estimated value of the proposed system. $\mathbf {P_{l}^{'}}$ and $\mathbf {\Delta P_{l}}$ are the estimated value and the estimation error of point $P$ in the CCS of the left camera, respectively. Since the IMU angle estimation error is ignored, the value of $\mathbf {R_{l}}$ in Eq. (18) can be determined. According to [25], the estimation error on the $z$-axis is much larger than that on the $x$ and $y$ axes in $\mathbf {\Delta P_{l}}$, which is also consistent with the results of the later experiment. Because of this, we consider the estimation error of $z_{l}$ to be the most important factor influencing the system error. Therefore, we design an error compensation (EC) algorithm for BPE, which compensates for the system error on the $z$-axis by optimizing $z_{l}$. In EC, we only discuss the error of $z_{l}$. Expanding Eqs. (12) and (13), $z_{l}$ can be expressed as:
$${z_{l}=f_{lx}f_{rx}t_{0}/(f_{rx}(u_{1}-u_{l})-f_{lx}(u_{2}-u_{r}))},$$
where $u_{l}$ is the abscissa of the projection of the optical center of the left camera in the PCS, and $f_{lx}$ is the ratio of the focal length to the pixel width of the left camera; $u_{r}$ and $f_{rx}$ are defined analogously for the right camera. We take $z_{l}^{'}$ as the measured value of $z_{l}$ and regard $\Delta z_{l}$ as the error of $z_{l}$, which is expressed as:
$$\Delta z_{l}=z_{l}-z_{l}^{'}= z_{l}z_{l}^{'}(f_{lx}\Delta u_{2}-f_{rx}\Delta u_{1})/ f_{lx}f_{rx}t_{0},$$
where $\Delta u_{1}$ and $\Delta u_{2}$ are the estimation errors of $u_{1}$ and $u_{2}$, respectively. Obviously, the pixel size of the highlighted area around the symmetric point in the PCS decreases with increasing $z_{l}$; this is verified experimentally, with the result shown in Fig. 3(a). Note, assuming consistent relative errors, we can consider $\Delta u_{1}$ and $\Delta u_{2}$ to be inversely proportional to $z_{l}$. With this relationship, Eq. (20) implies that $\Delta z_{l}$ is linear in $z_{l}^{'}$.

Fig. 3. Error analysis: (a) linear fitting of the square of the highlight area surrounding the symmetric point against the reciprocal of the camera height, and (b) linear fitting of the absolute value of $\Delta z_{l}$ against the measured value of the height of the camera to the symmetric point of the LED.

In this work, we have developed an experimental testbed, where the LED is fixed at a height of 40 cm above the ground and the camera is aimed directly at the reflection point of the LED to ensure that $\mathbf {R_{l}}$ is the identity matrix, eliminating the influence of the IMU angle estimation error. Fig. 3(b) shows the measured data points and the linear fitting of the absolute value of $\Delta z_{l}$ as a function of $z_{l}$, confirming the linear relationship. Note, we found that the results obtained are affected by the camera sensitivity and the flatness of the floor. The camera ISO was set to 50 to ensure that the LED edges can be captured and detected. Although the linear effect is not pronounced, the $z$-axis error increases with the camera height above the LED symmetric point in the CCS. Therefore, to compensate for the error we use the coefficient $\lambda =0.112$ obtained from the linear fitting, which gives:

$${z_{l}^{*}=z_{l}^{'}(1+\sigma \lambda/(1-\lambda ) ) },$$
where $\sigma$ is a Boolean variable, which is used to decide whether the measured value of $z_{l}$ needs to be optimized.

The implementation of the EC algorithm is illustrated in the flow chart in Fig. 4, in which $H$ is the highlighted area around the symmetric point. Assuming $\sigma =1$, we can calculate both $(u_{1},v_{1})$ and $(u_{2},v_{2})$ for $z_{l}=z_{l}^{*}$ through Eq. (13). Note, ($i$) if both are within the highlighted area around the symmetric point, then $\sigma =1$; otherwise, $\sigma =0$; ($ii$) with $\sigma$ determined, $z_{l}$ can be optimized; and ($iii$) the target value on the $z$-axis is then determined by the EC algorithm by substituting into Eq. (17). We take the target values on the $x$ and $y$ axes from the BPE algorithm and the target value on the $z$-axis from the EC algorithm as the final results.
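The EC decision logic can be sketched as follows (a rough illustration only; the helper routines project(), implementing Eq. (13), and in_highlight(), testing membership in the highlighted area $H$, are assumed to exist, and only $\lambda =0.112$ comes from the measurement above):

```python
LAMBDA = 0.112  # linear-fitting coefficient from Fig. 3(b)

def compensate_z(z_meas, P_l_meas, project, in_highlight):
    """EC algorithm: decide sigma and return the compensated z_l (Eq. (21)).

    z_meas       : measured depth z_l' from the BPE algorithm.
    P_l_meas     : estimated point P in the left CCS (NumPy array).
    project      : maps a candidate P_l (with its z replaced by z_l*) to the
                   pixel coordinates (u1, v1), (u2, v2) via Eq. (13).
    in_highlight : tests whether a pixel lies inside the highlighted area H.
    """
    # Tentatively assume sigma = 1 and form the compensated depth.
    z_star = z_meas * (1 + LAMBDA / (1 - LAMBDA))
    P_candidate = P_l_meas.copy()
    P_candidate[2] = z_star
    (u1, v1), (u2, v2) = project(P_candidate)
    # Keep the compensation only if both reprojections stay inside H.
    sigma = 1 if in_highlight(u1, v1) and in_highlight(u2, v2) else 0
    return z_meas * (1 + sigma * LAMBDA / (1 - LAMBDA))
```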

Fig. 4. The implementation process of the EC algorithm.

3. Proposed system

The proposed system block diagram is depicted in Fig. 5. It is composed of two main subsystems: NLOS OCC and BCam-based position estimation. Note, the signal acquisition at the Rx in the NLOS OCC subsystem is the main feature of the proposed system, which can also be realized with the OCC signal recovery model [26]. For the position estimation subsystem, we have proposed two different schemes. The first estimates the camera position through the BSV model and the BPE algorithm by considering the symmetric point of the LED about the ground as the point $P$. Note that the NLOS VLP works on the floor of an ordinary room. However, the system performance decreases as the ground roughness increases. Worse still, when the roughness is too large, the highlight area may not be detectable in the image and the NLOS VLP may not work. In order to overcome this problem, we have proposed an LBE VLP scheme, where the point $P$ in the BSV model is replaced with a label, which is the orthographic projection of the LED on the reflecting surface and needs to be marked in advance. Compared with the NLOS VLP, which detects the highlight area by the pixel gray value, the LBE VLP identifies the label by its shape, so rough reflection has less influence on it.

Fig. 5. A block diagram of the proposed system.

3.1 NLOS VLP

At the Tx, the location of the LED in the WCS (${x_{0}}$, ${y_{0}}$, ${z_{0}}$) is first coded prior to the intensity modulation of the LED for transmission over the free-space channel. At the Rx, a BCam captures the light reflected from the ground using both the long and short exposure modes, and the background scene when the LED is off. The ZED Explore software is used to control the two cameras at the same time. Note that the signal recovery process in OCC shown in Fig. 1 applies to a single camera, which can be either the left or the right camera of the BCam; the left camera is chosen in this work. Note the following: ($i$) an IMU module at the left camera measures the camera pose; ($ii$) the OCC signal recovery model is used to obtain the LED position (${x_{0}}$, ${y_{0}}$, ${z_{0}}$); ($iii$) the WCS coordinate of the symmetric point of the LED about the ground is determined, i.e., (${x_{0}}$, ${y_{0}}$, ${-z_{0}}$); and ($iv$) the projections of the LED on the left and right cameras via NLOS paths are equivalent to the projections of the symmetric point via LOS paths. Because of the latter, the symmetric point can be considered as the point $P$ in the BSV model to realize NLOS VLP. The coordinate of the symmetric point in the PCS is then taken as the geometric center of the ellipse fitted to the highlighted area surrounding the symmetric point, see Fig. 6(a). Next, following the extraction of the projection coordinates of the symmetric point in the PCS under a long exposure time, and with the WCS coordinate of the symmetric point, the camera position in the WCS can be estimated using the BPE algorithm.
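A minimal sketch of the symmetric-point extraction by ellipse fitting, assuming the highlighted area can be segmented with a simple gray-level threshold (the threshold value and OpenCV-based pipeline are illustrative choices, not taken from the paper):

```python
import cv2

def symmetric_point_pcs(gray_img, thresh=200):
    """Return the (u, v) center of the ellipse fitted to the highlight area."""
    # Segment the bright highlight area surrounding the symmetric point.
    _, mask = cv2.threshold(gray_img, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest bright blob and fit an ellipse to its contour.
    blob = max(contours, key=cv2.contourArea)
    ellipse = cv2.fitEllipse(blob)   # ((cx, cy), (major, minor), angle)
    return ellipse[0]                # geometric center (u, v) in the PCS
```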

Fig. 6. Pixel coordinate extraction of reference points: (a) pixel coordinate extraction of the symmetric point by ellipse fitting and of the label from the pictures captured by the left and right cameras (photos taken by the left and right cameras are combined into a single image of $3840\times 1080$ pixels), and (b) corner point detection for the label.

3.2 Label-based enhanced VLP

The LBE VLP scheme is basically the same as the NLOS VLP except that the point $P$ in the BSV model is replaced with a label, which is the orthographic projection of the LED on the reflecting surface and should be marked in advance. Because of this, in the LBE VLP the coordinate of the label in the WCS can be calculated from the LED coordinate. As in the NLOS VLP, we use the NLOS OCC subsystem to receive the LED coordinate, while the BCam is used to identify the label. The detailed process involved in this scheme is very similar to the NLOS VLP except that the label is considered as the reference point. The WCS coordinate of the label is determined by the geometric relation between the label and the LED, which is (${x_{0}}$, ${y_{0}}$, ${0}$). We use a corner point detection method to process the photos from the left and right cameras to recognize the label, which is the intersection of two straight lines. Note that the coordinate of the label in the PCS is the average of the four corner point coordinates, see Fig. 6(b).
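For the label, a minimal corner-extraction sketch is given below, using the Shi-Tomasi detector (cv2.goodFeaturesToTrack) as a stand-in for the corner point detection method described above; the cropped label region and parameter values are assumptions:

```python
import cv2

def label_point_pcs(gray_roi):
    """Return the label coordinate in the PCS as the mean of its four corners."""
    corners = cv2.goodFeaturesToTrack(gray_roi, maxCorners=4,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None or len(corners) < 4:
        raise RuntimeError("label corners not detected")
    corners = corners.reshape(-1, 2)   # four (u, v) corner coordinates
    return corners.mean(axis=0)        # average of the four corner points
```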

4. Experimental implementation and results

4.1 Experiments setup

The experimental testbed for the proposed system is illustrated in Fig. 7(a). The Tx is composed of an STM32 microcontroller unit (MCU) operating at a frequency of 3.3 kHz, a driver module, and an LED positioned at a height of 1.96 m above the floor, with its illumination projection on the floor defining the WCS. At the Rx, a ZED2 binocular camera is used to capture images from the left and right cameras, while an IMU measures the camera pose. First, we calibrated the ZED2 to obtain the intrinsic parameter matrices of its two cameras. We found that the rotation matrix between the two cameras is an identity matrix with displacement along the $x$-axis only. Thus, we have $\mathbf {R=I}$ and $\mathbf {T}=[t_{0},0,0]$ in Eq. (12), where $t_{0}$ is a constant. Note that for each experiment we adjusted the position of the ZED2 to ensure that its projection on the floor is a test point, as shown in Fig. 7(b). We measured the height of the ZED2 as the $z$-axis coordinate of the test point. By comparing the coordinates of the test point with the coordinates of the ZED2 estimated using the proposed system, we determine the system error. We carried this out for a total of 60 points, see Fig. 7(b), and estimated the root mean square error (RMSE) and the cumulative error distribution function (CDF). All the key system parameters used are given in Table 1.
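The error statistics reported below (average error, RMSE, 90th percentile accuracy, and the empirical CDF) can be computed from the 60 estimated/ground-truth pairs as in the following sketch (array names are illustrative):

```python
import numpy as np

def error_statistics(estimated, ground_truth):
    """Per-axis and 3D error statistics for the test points.

    estimated, ground_truth : arrays of shape (N, 3), coordinates in cm.
    """
    abs_err = np.abs(estimated - ground_truth)              # per-axis errors
    err_3d = np.linalg.norm(estimated - ground_truth, axis=1)
    return {
        "mean_xyz": abs_err.mean(axis=0),                   # average per-axis error
        "mean_3d": err_3d.mean(),                           # average 3D error
        "rmse_3d": np.sqrt(np.mean(err_3d ** 2)),           # 3D RMSE
        "p90_3d": np.percentile(err_3d, 90),                # 90th percentile accuracy
    }

def empirical_cdf(errors):
    """Return (sorted errors, cumulative probabilities) for a CDF plot."""
    x = np.sort(errors)
    p = np.arange(1, len(x) + 1) / len(x)
    return x, p
```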

Fig. 7. Proposed system: (a) hardware testbed, and (b) the test environment.

Table 1. System parameters.

4.2 NLOS VLP

Here, we have determined the RMSE values and the average positioning errors for 60 data sets from the absolute value of the difference between the estimated values of the proposed system and the measured values on the $x$, $y$, and $z$ axes. The experimental results are shown in Table 2, where the lowest 3D average error and RMSE are 37.2 and 44.6 cm, respectively. In addition, we have calculated the average errors over the 60 data sets as 6.96, 6.01, and 35.32 cm for the $x$, $y$, and $z$ axes, respectively. Fig. 8(a) depicts the 2D positioning performance in the $x-y$ plane for the test points and the NLOS VLP, from which we can see that the NLOS VLP system generally has good performance in the $x-y$ plane, except for some test points far away from the LED that show a large deviation. Fig. 8(b) displays the CDF plots for the $x$, $y$, and $z$ axes and for 3D, respectively. The CDF plots for the $x$ and $y$ axes are consistent with the results of Fig. 8(a), achieving $90^{th}$ percentile accuracies of less than 15 cm. However, the performance obtained for the $z$-axis and 3D, i.e., $80^{th}$ percentile accuracies of < 60 cm, is not satisfactory. Compared with the NLOS VLP with no EC, the lowest average error and RMSE values are decreased to 26.10 and 31.02 cm, respectively. Note that ($i$) the $90^{th}$ percentile accuracy has dropped to 50 cm; and ($ii$) the CDF curve shows the effect of the EC algorithm, see Fig. 9.

Fig. 8. System performance of NLOS VLP: (a) 2D positioning performance in the $x-y$ plane, and (b) CDF of the $x$, $y$, and $z$ axes and 3D.

Fig. 9. CDF of the $z$-axis and 3D with and without EC in the NLOS VLP scheme.

Table 2. System performance.

4.3 LBE VLP

We have carried out tests, measurements, and performance evaluation for the LBE VLP scheme, where we estimated the RMSE and the average positioning errors for the 60 sets of data as in the NLOS VLP. The experimental results are shown in Table 2, where ($i$) the lowest average error and RMSE are 7.31 and 7.74 cm, respectively; and ($ii$) the average errors for the $x$, $y$, and $z$ axes are 3.62, 3.24, and 4.50 cm, respectively. Fig. 10 displays the CDF plots for the $x$, $y$, and $z$ axes and for 3D, respectively, in which the $x$, $y$, and $z$ axes all achieve $90^{th}$ percentile accuracies of less than 8 cm, and the 3D performance achieves a $90^{th}$ percentile accuracy of < 11 cm.

Fig. 10. LBE VLP: CDF of the $x$, $y$, and $z$ axes and 3D.

Compared with the NLOS VLP, the LBE VLP clearly has improved performance at the test points farther from the LED in the $x-y$ plane, see Fig. 11(a). Fig. 11(b) depicts the CDF performance as a function of the error distance for the $z$-axis and 3D for the LBE VLP, NLOS VLP, and NLOS VLP with the EC algorithm. The LBE VLP has ($i$) a smaller estimation error of the reference point (the label or the symmetric point) in the PCS, and ($ii$) a smaller estimation error in the CCS, because the distance from the symmetric point to the camera is much larger than the distance from the label to the camera (note that the distance between the LED and the ground is 196 cm). Therefore, it is obvious from Fig. 11 that the LBE VLP scheme offers the best performance compared with the NLOS VLP with EC and the NLOS VLP.

Fig. 11. Performance comparisons: (a) 2D positioning performance in the $x-y$ plane compared between LBE VLP and NLOS VLP, and (b) comparison of the $z$-axis and 3D errors among the LBE VLP, NLOS VLP, and NLOS VLP with EC.

5. Conclusion

We proposed a NLOS VLP system based on a binocular camera and a single LED for the realization of 3D positioning with an arbitrary posture, which overcomes the shadowing/blocking experienced in LOS paths between Txs and Rxs and the need for a sufficient number of LEDs to be captured within the camera's FOV. We developed a dedicated testbed and evaluated the system performance of two different schemes by considering 60 locations in a volume of $2\times 2\times 2\,\textrm{m}^{3}$. For the NLOS VLP with the symmetric point of the LED, the lowest average error and RMSE were 37.27 and 44.59 cm, respectively. Using an error compensation algorithm, the average error and RMSE were reduced to 26.10 and 31.02 cm, respectively. With the proposed LBE VLP scheme using a label on the reflecting surface, the average error and RMSE were further reduced to 7.31 and 7.74 cm, respectively, achieving a $90^{th}$ percentile accuracy of less than 11 cm.

Funding

Natural Science Foundation of Fujian Province (2022J01499); Science and Technology Program of Quanzhou (2020C069, 2020G18, 2021C005R); Science and Technology Development Project of Fujian Provincial Department of Housing and Urban-Rural Development (2022K52); STS Project of CAS and Fujian Province (2020T3026); EU COST Action NEWFOCUS (CA19111).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. Chen, S. Thombre, K. Järvinen, E. S. Lohan, A. Alén-Savikko, H. Leppäkoski, M. Z. H. Bhuiyan, S. Bu-Pasha, G. N. Ferrara, S. Honkala, J. Lindqvist, L. Ruotsalainen, P. Korpisaari, and H. Kuusniemi, “Robustness, security and privacy in location-based services for future iot: A survey,” IEEE Access 5, 8956–8977 (2017). [CrossRef]  

2. S. Sadowski, P. Spachos, and K. N. Plataniotis, “Memoryless techniques and wireless technologies for indoor localization with the internet of things,” IEEE Internet Things J. 7(11), 10996–11005 (2020). [CrossRef]  

3. W. Shao, H. Luo, F. Zhao, H. Tian, S. Yan, and A. Crivello, “Accurate indoor positioning using temporal–spatial constraints based on wi-fi fine time measurements,” IEEE Internet Things J. 7(11), 11006–11019 (2020). [CrossRef]  

4. A. A. N. Shirehjini and S. Shirmohammadi, “Improving accuracy and robustness in hf-rfid-based indoor positioning with kalman filtering and tukey smoothing,” IEEE Trans. Instrum. Meas. 69(11), 9190–9202 (2020). [CrossRef]  

5. T.-M. T. Dinh, N.-S. Duong, and K. Sandrasegaran, “Smartphone-based indoor positioning using ble ibeacon and reliable lightweight fingerprint map,” IEEE Sens. J. 20(17), 10283–10294 (2020). [CrossRef]  

6. X. Zhu, J. Yi, J. Cheng, and L. He, “Adapted error map based mobile robot uwb indoor positioning,” IEEE Trans. Instrum. Meas. 69(9), 6336–6350 (2020). [CrossRef]  

7. Y. Zhuang, L. Hua, L. Qi, J. Yang, P. Cao, Y. Cao, Y. Wu, J. Thompson, and H. Haas, “A survey of positioning systems using visible led lights,” IEEE Commun. Surv. Tutorials 20(3), 1963–1988 (2018). [CrossRef]  

8. M. Maheepala, A. Z. Kouzani, and M. A. Joordens, “Light-based indoor positioning systems: A review,” IEEE Sens. J. 20(8), 3971–3995 (2020). [CrossRef]  

9. N. Chaudhary, L. N. Alves, and Z. Ghassemblooy, “Current trends on visible light positioning techniques,” in 2019 2nd West Asian Colloquium on Optical Wireless Communications (WACOWC), (IEEE, 2019), pp. 100–105.

10. P. Chen, M. Pang, D. Che, Y. Yin, D. Hu, and S. Gao, “A survey on visible light positioning from software algorithms to hardware,” Wireless Communications and Mobile Computing 2021, 9739577 (2021). [CrossRef]  

11. J. Luo, L. Fan, and H. Li, “Indoor positioning systems based on visible light communication: State of the art,” IEEE Commun. Surv. Tutorials 19(4), 2871–2893 (2017). [CrossRef]  

12. P. Davidson and R. Piché, “A survey of selected indoor positioning methods for smartphones,” IEEE Commun. Surv. Tutorials 19(2), 1347–1370 (2016). [CrossRef]  

13. M. Pang, G. Shen, X. Yang, K. Zhang, P. Chen, and G. Wang, “Achieving reliable underground positioning with visible light,” IEEE Trans. Instrum. Meas. 71, 1–11 (2022). [CrossRef]  

14. B. Hussain, Y. Wang, R. Chen, H. C. Cheng, and C. P. Yue, “Lidr: Visible-light-communication-assisted dead reckoning for accurate indoor localization,” IEEE Internet Things J. 9(17), 15742–15755 (2022). [CrossRef]  

15. H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, and Z. Chen, “A fast and high-accuracy real-time visible light positioning system based on single led lamp with a beacon,” IEEE Photonics J. 12(6), 1–8 (2020). [CrossRef]  

16. M. S. Rahman and K.-D. Kim, “Indoor location estimation using visible light communication and image sensors,” Int. J. Smart Home 7, 99–113 (2013).

17. F. Yang, S. Li, H. Zhang, Y. Niu, C. Qian, and Z. Yang, “Visible light positioning via floor reflections,” IEEE Access 7, 97390–97400 (2019). [CrossRef]  

18. Y.-S. Kuo, P. Pannuto, K.-J. Hsiao, and P. Dutta, “Luxapose: Indoor positioning with mobile phones and visible light,” in Proceedings of the 20th annual international conference on Mobile computing and networking, (2014), pp. 447–458.

19. H. Song, S. Wen, C. Yang, D. Yuan, and W. Guan, “Universal and effective decoding scheme for visible light positioning based on optical camera communication,” Electronics 10(16), 1925 (2021). [CrossRef]  

20. W. Guan, S. Wen, H. Zhang, and L. Liu, “A novel three-dimensional indoor localization algorithm based on visual visible light communication using single led,” in 2018 IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), (IEEE, 2018), pp. 202–208.

21. R. Zhang, W.-D. Zhong, Q. Kemao, and S. Zhang, “A single led positioning system based on circle projection,” IEEE Photonics J. 9, 1–9 (2017). [CrossRef]  

22. L. Huang, S. Wen, Z. Yan, H. Song, S. Su, and W. Guan, “Single led positioning scheme based on angle sensors in robotics,” Appl. Opt. 60(21), 6275–6287 (2021). [CrossRef]  

23. M.-g. Moon, S.-i. Choi, J. Park, and J. Y. Kim, “Indoor positioning system using led lights and a dual image sensor,” J. Opt. Soc. Korea 19(6), 586–591 (2015). [CrossRef]  

24. K. Jo, M. Gupta, and S. K. Nayar, “Disco: Display-camera communication using rolling shutter sensors,” ACM Trans. Graph. 35(5), 1–13 (2016). [CrossRef]  

25. H. Liu, C. Gong, Z. Xu, and J. Luo, “Positioning error analysis of indoor visible light positioning using dual cameras,” in 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), (IEEE, 2019), pp. 1–6.

26. F. Yang, S. Li, Z. Yang, C. Qian, and T. Gu, “Spatial multiplexing for non-line-of-sight light-to-camera communications,” ACM Trans. Graph. 18(11), 2660–2671 (2018). [CrossRef]  
