
Object localization and tracking in three dimensions by space-to-time encoding


Abstract

In this paper, we present a novel method for measuring the location and estimating the dynamics of fast-moving small objects in free space. The proposed 3D localization method is realized by a space-to-time optical transform and measurement of time-of-flight. We present the underlying physical and mathematical model of the method and provide an example based on a simple configuration. In the simplest mode, the method is implemented by two plane mirrors, a spherical light pulse illuminator, and a single fast response photodetector. The 3D spatial information is retrieved from the temporal measurements by solving an inverse problem that uses a sparse approximation of the scene. System simulation shows the ability to track fast small objects that are moving in space using only a single time-resolved detector.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

One of the most common methods for three-dimensional spatial mapping is based on measurement of time-of-flight (TOF). The principle underlying this method is to transmit a short optical pulse and measure the time from transmission to the reception of the return signal. An optical pulse propagates in free space at the speed of light, and electronic systems are fast enough to measure the time from transmission to the retrieval of a reflection from the object; it is thus possible to measure the distance of the object from the system accurately. Measurements of this kind are one of the basic principles on which systems such as radio detection and ranging (RADAR) [1–3] and light detection and ranging (LiDAR) [4,5] are based. Using the TOF principle alone makes it possible to map the radii of objects in space but not the spatial angles at which the objects are located. Spatial angle information is usually obtained by using an optical pulse with a small angular divergence and a mechanical system for angular scanning. Another common method is to use an array of detectors, where each detector is associated with a different angular location in space. With this approach, optical reflections from different angles arrive at different pixel positions, and thus the position of the objects in space can be calculated. Both methods have significant disadvantages for mapping moving objects. The scanning method requires a relatively long scanning time and is thus suitable mainly for mapping stationary objects. The method of using multiple detectors or an array of detectors, although suitable for moving objects, is costly and generally provides low angular separation. The spatial coding method is based on similar principles, in which the examined area is illuminated by a short pulse.
The spatial separation capability is achieved by using a spatial encoder such as a digital micro-mirror device (DMD) [6] or other devices [7,8]. In this way, many pulses are sent to the target, and for each pulse a different spatial coding pattern is generated. Knowing the coding patterns and return times allows us, under different assumptions, to hypothesize the locations of the targets. This interesting method can yield good results for a stationary scene, since the rate of pattern changes is often slow. As with the other methods, however, tracking of objects in motion may be quite limited.

In this article, we suggest a new method to measure the state vectors (position, velocity, and acceleration) of fast (|v| ∼ 10³ to 10⁵ m/s), small objects in free space, which uses a space-to-time transform and solves an inverse problem via a compressive sensing technique [9,10]. The presented method may be implemented without any moving parts or analysis of projected sensing patterns. It can be implemented in many ways and at different wavelengths, but basically two mirrors, one pulse illuminator, and one fast photodetector are sufficient for the basic implementation. Throughout the article, we treat small objects (of sub-centimeter size) as point-objects having a Lambertian surface [11,12] that reflects the light.

2. Background

2.1 Multi-path phenomenon

Let us first deliberately consider the simple case of identifying the location of object1 by a LiDAR device without a scanning system and using a single light detector, as depicted in Fig. 1. As the distance between the LiDAR device and object1 increases, the transmitted light pulse diverges in such a way that part of the pulse covers object1 and the rest hits the ground. If object1, object2, and the ground have Lambertian surfaces, the radiance will be uniform in all directions, and multiple pulses will be returned from the nearby object2 in addition to the directly returned pulse from object1. These multiple returned pulses distort the detected signal and cause fading, as a result of which the location of object1 may not be identifiable at all. This phenomenon is known as the multi-path effect [13,14]. If, conversely, object2 has a specular surface like a mirror, all the incident light rays will be scattered out of the LiDAR detection range except for one ray, following the law of reflection; this creates a ghost effect, i.e., detection of an imaginary object1 position in addition to the real one.


Fig. 1. Multipath detection effect caused by other objects (Object2) near the target object (Object1).


2.2 Compressive sensing

Compressive sensing (CS) [15–19] is a revolutionary compression technique first published by Donoho and by Candès et al. in 2006. Compressive sensing provides a framework to capture and recover signals from fewer measurements than required by traditional systems designed to comply with the Shannon–Nyquist sampling theorem. CS theory rests on three conditions: 1) a sparse signal model, 2) an appropriate sensing design, and 3) appropriate reconstruction algorithms. The CS measurement process is described by

$$\mathbf{y} = \mathbf{Ax} + \boldsymbol{\mathrm{\epsilon}}$$
where y∈ℝM is the multiplexed measured signal, A∈ℝM×N is the sensing matrix with M << N, x∈ℝN represents the signal vector with K nonzero elements where K < M << N, and ε∈ℝM is the measurement error, i.e., the noise. The goal is to reconstruct the unknown vector x from y and A by a sparse approximation algorithm in minimal time. Several algorithms [20–23] have been developed to solve Eq. (1). A common class of recovery algorithms estimates the signal $\mathbf{\hat{x}}$∈ ℝN by solving the minimization problem
$$\mathop {\mathrm{argmin}}\limits_{\mathbf{\hat{x}}} \left\{ {\|{\mathbf{y} - \mathbf{A\hat{x}}} \|_{2}^{2} + \tau {{\|{\mathbf{\hat{x}}} \|}_1}} \right\}$$
where τ is a regularization parameter and ${\|\cdot \|_p}$ denotes the ℓp norm.

CS is relevant for our problem because we aim to detect the positions of K sparsely distributed point-objects with a reduced number of measurements. To detect the state vector S(t) = (x(t), v(t), a(t))T of the moving point-objects in 3D space, we need to realize an appropriate sensing matrix that lets us reconstruct the state vector from the measurement vector of the detected returned light signals.
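As a concrete illustration of the recovery step, the following sketch implements a minimal greedy OMP-style solver in Python with NumPy. The tiny sensing matrix, positions, and coefficients are made-up toy values, not the paper's space-to-time matrix; the example only shows how a sparse x is recovered from y = Ax.

```python
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Greedy OMP: repeatedly pick the column of A most correlated with
    the residual, then least-squares refit y on the selected support."""
    residual = y.astype(float)
    support, coef = [], np.zeros(0)
    while len(support) < k and np.linalg.norm(residual) > tol:
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j in support:          # no further progress possible
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# toy sensing matrix: M = 4 time bins, N = 8 candidate positions;
# the first four columns are orthonormal, the rest weak mixtures
A = np.hstack([np.eye(4), 0.1 * np.ones((4, 4))])
x_true = np.zeros(8)
x_true[0], x_true[2] = 2.0, 1.0   # two "objects" on the grid
y = A @ x_true
x_rec = omp(A, y, k=2)
print(np.flatnonzero(x_rec))      # -> [0 2]
```

In the full system the columns of A would be the position-dependent time signatures introduced in Sec. 3, and K would be the number of objects in the scene.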

3. Proposed measurement technique

The proposed measurement technique takes advantage of the optical multi-path phenomenon to create a unique temporal measurement signature for every position in 3D space. We demonstrate the concept with the simplest configuration that creates an optical multi-path: two planar mirrors standing parallel to each other, as shown in Fig. 2. A transceiver generates spherical light pulses, which are reflected from the two point-objects P1 and P2. The reflected light propagates back through multiple paths towards the detector, thus generating a pulse train unique to each object's location, as illustrated in Fig. 2(b). The multipath geometry of the two parallel planar mirrors configuration, shown in Fig. 2(a), is analyzed in Supplement 1-A. It is shown that the total optical path length (OPL) of the transmitted light from the transceiver and back through object P1 is given by

$${{D}_\textrm{n}} = |{\mathbf{B} - \mathbf{A}} |+ {d}_{n}$$
where A is the position vector of object P1, B is the position vector of the transceiver, and dn is the distance from object P1 to the order-n image of the transceiver. This distance can be calculated by the following equation:
$$\begin{aligned} {d}_{n} &= |{\mathbf{A} - {\mathbf{B}_{n}}} |= \sqrt {{{({x_A} - {a_{{xB}}}({n}))}^{2}} + {{({{y}_{A}} - {{y}_{B}})}^{2}} + {{({{z}_{A}} - 0)}^{2}}} \\ \mathbf{A} &= ({{x}_{A}},{{y}_{A}},{{z}_{A}});{\mathbf{B}_{n}} = ({a_{{xB}}}({n}),{{y}_{B}},0) \end{aligned}$$
where n = -S,…,-2,-1,0,1,2,…,S is the order number, Bn is the position vector of the order-n mirror image of the transceiver, and B0 is the position vector of the transceiver itself at order 0.
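Eqs. (3)-(4) can be evaluated numerically. The exact form of a_xB(n) is given in Supplement 1-A, which is not reproduced here; the sketch below assumes the standard unfolding rule for two parallel mirrors placed in the planes x = 0 and x = W, and uses arbitrary example coordinates.

```python
import math

def image_x(x_b, width, n):
    """x-coordinate a_xB(n) of the order-n mirror image of a point at
    x = x_b between parallel mirrors at x = 0 and x = width, under the
    standard unfolding rule (an assumption; see Supplement 1-A)."""
    return n * width + x_b if n % 2 == 0 else (n + 1) * width - x_b

def total_opl(A, B, width, n):
    """Eq. (3): D_n = |B - A| + d_n, with d_n from Eq. (4) using the
    order-n image B_n = (a_xB(n), y_B, 0)."""
    xa, ya, za = A
    xb, yb, _ = B
    d_n = math.sqrt((xa - image_x(xb, width, n)) ** 2
                    + (ya - yb) ** 2 + za ** 2)
    return math.dist(A, B) + d_n

A_obj = (0.5, 2.0, 0.3)   # object P1 (example values)
B_tr = (0.2, 0.5, 0.0)    # transceiver
print([round(total_opl(A_obj, B_tr, 1.0, n), 3) for n in (-1, 0, 1)])
```

The list printed is the set of OPLs for orders -1, 0, and 1; each order adds a pulse to the returned train in Fig. 2(b).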


Fig. 2. Space-to-time encoding. (a) Generating multiple paths by placing the transceiver and point-objects P1 & P2 between parallel mirrors. (b) Illustration of the pulses received from objects P1 (up) and P2 (bottom).


Another configuration of mirrors, in which the received pulse intensities maintain the same order of magnitude and which is therefore more practical to use, is that of two planar mirrors standing adjacent to each other with a given mirrors opening angle (MOA) of α between them. The multipath geometry is analyzed in Supplement 1-B. In the following, we assume a perpendicular-mirrors configuration, i.e., an MOA of α=90°, as shown in Fig. 3(a). A single measurement consists of issuing a short spherical light pulse over the measurement space that includes the point-object A; the detector B then collects the returned light pulses from object A, both directly and via the mirrors. The position of detector B is set outside the illumination cone of the light source to avoid direct glare on the detector.


Fig. 3. (a) Two-perpendicular-mirrors configuration. (b) Objects’ reflections in a two-perpendicular-mirrors configuration and the optical multi-path of the detected returned light signals from object A to detector B. The pulse of light is sent towards object A from pulse illuminator Tr.


The mirrors create a finite number of reflection orders, and thus a finite number of distinct optical paths for each point-object position in front of the mirrors, depending on the total number of orders at a given α. At α=90° the mirrors create four reflection orders and four distinct optical paths, as shown in Fig. 3. The principle of calculating the OPL of the returned signals is the same as in the previous mirrors configuration: it involves calculating the distances from the real point-object to the real and reflected detectors. The total OPL of each reflection order, from the pulse illuminator to the detector via the point-object at any position in the space in front of the mirrors, can be calculated by the following equation:

$${D_{{i,n}}} = Dt{p_{i}} + Dp{d_{{i,n}}} = |{\mathbf{Tr} - {\mathbf{P}_{i}}} |+ |{{\mathbf{P}_{i}} - \mathbf{De}{\mathbf{t}_{n}}} |$$
where n = -S,…,-2,-1,0,1,2,…,S is the reflection order number, S is the highest reflection order number, i = 1,2,3,…,N is the object lexicographic position index, Dtpi is the distance from the pulse illuminator to object i, and Dpdi,n is the distance from object i to the detector at order n. Tr is the position vector of the pulse illuminator, Pi is the position vector of the object at position i, Detn is the position vector of the reflected detector at order $n \ne 0$, and Det0 is the position vector of the real detector. The signal is expressed at the detector as power versus time and is determined by the set of OPLs associated with each object location.

The arrival time of the pulses, according to each OPL, can be calculated by the following equation:

$${{t}_{{i,n}}} = \frac{{{D}_{{i,n}}}}{{c}} \cdot {10^{9}}\,[{\mathrm{ns}}]$$
where c = 3·10⁸ m/s is the speed of light. The light power [24] of the detected returning light pulses for each OPL can be calculated by the following equation:
$${{I}_{{i,n}}} = {{I}_{{tr}}} \cdot \frac{{A}{{p}_{i}}}{{{\Omega}_{{tr}} \cdot {Dt}{{p}_{i}}^{2}}} \cdot \frac{{{A}_{{det}}}}{{{\Omega}_{{p_i}} \cdot {Dp}{{d}_{{i,n}}}^{2}}}$$
where Itr is the power of the transmitted light pulse [W], Api is the cross-sectional area [m2] of the object i, Adet is the detector sensing area [m2], Ωtr and Ωpi are the solid angles [sr] of the transmitted light pulse and the returned light from each object i respectively, Dtpi and Dpdi,n are the distances from the pulse illuminator to object i and from object i to the detector for each order n.
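Eqs. (6)-(7) translate directly into code. In this sketch the solid angles, detector area, and distances are placeholder values chosen only to exercise the formulas (a 0.5 sr illumination cone and a 2π sr Lambertian hemisphere are assumptions, not values from the paper); the 1/r² dependence on both legs of the path is the point being illustrated.

```python
import math

C_LIGHT = 3e8  # speed of light in m/s, as used in Eq. (6)

def arrival_time_ns(opl_m):
    """Eq. (6): time of flight in nanoseconds for an OPL in metres."""
    return opl_m / C_LIGHT * 1e9

def received_power(I_tr, Ap, A_det, omega_tr, omega_p, Dtp, Dpd):
    """Eq. (7): power reaching the detector along one optical path."""
    return I_tr * (Ap / (omega_tr * Dtp ** 2)) * (A_det / (omega_p * Dpd ** 2))

# placeholders: 10 W pulse, 4 mm x 4 mm object, 1 cm^2 detector,
# 0.5 sr illumination cone, Lambertian hemisphere re-emission
p = received_power(10, 1.6e-5, 1e-4, 0.5, 2 * math.pi, 3.0, 3.0)
print(f"{arrival_time_ns(6.63):.1f} ns, {p:.2e} W")
```

Doubling the object-to-detector distance Dpd reduces the received power by a factor of four, which is why the parallel-mirror configuration, with its widely spread image distances, produces pulses of very unequal strength.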

As a result of the geometry of this mirrors configuration (Fig. 3), the optical path lengths for each object position are of the same order of magnitude; the gap between them is in the range of the diameter of the imaginary circle of reflections of object A. Therefore, the powers of the received pulses are also of the same order of magnitude. Likewise, good time separation between the detected signals can be achieved, depending on the detector position. When a pulse of light is sent towards a single object at an arbitrary position in the space, several distinct signals are obtained in accordance with the reflection orders that the mirrors create. Figure 4(a) shows the returning pulses for a mirrors configuration with an MOA of α=90°; the detector detects four returned light signals. Figure 4(b) shows the returning pulses for a configuration with an MOA of α=60°; the detector detects six returned light signals. Each configuration has its typical detector output signal, i.e., its time signature. The time signatures in Fig. 4 are unique to position A. Thus, any position in the space in front of the mirrors can be encoded by this mirrors configuration with a specific optical multi-path, expressed at the detector as light power vs. time, i.e., as a time signature.


Fig. 4. The detected signal from object A at an arbitrary position in a configuration of mirrors with (a) MOA of α=90° and with (b) MOA of α=60°.


4. Measurement technique and sensing region

To explain our method, we take as an example a configuration of two perpendicular planar mirrors with an MOA of α=90° (Fig. 3), with mirror dimensions of 3 m × 1 m, a 10 W pulse illuminator, and a detector located at (0,0,1) and (0.5,2.25,0) respectively, as shown in Fig. 5(a). Let us consider a simple Cartesian grid of 9 × 8 positions, covering a region of 4.5 m × 4 m (18 m²) in the measurement space, at fixed intervals of 0.5 m, on a plane at a height of 0.5 m in front of the mirrors' opening and at a distance of 2.15 m from the edges of the mirrors (Fig. 5(a)). Let us assume that object A, with a cross-sectional area of 4 mm × 4 mm, moves along each row of the Cartesian grid from left to right; that the temporal resolution of this mirrors configuration is defined by the detector response time Δtdet = 1.3 ns; and that the velocity of the object is such that it does not cross more than a single grid cell during 1.3 ns, i.e., Vpr = 0.5 m/1.3 ns. The signals received by the detector from any grid point can be calculated by Eqs. (5)–(7) and are represented as a time signature, as shown in Fig. 5. The time window Tsig of each time signature is defined by the shortest and longest time-of-flight of the nearest and farthest grid points respectively, as follows:

$$\begin{aligned} &{{T}_{{sig}}} = {rd}({{{t}_{{max}}}}) - {ru}({{{t}_{{min}}}})\\ &ru:\ \text{round up};\quad rd:\ \text{round down}\\ &{{t}_{{max}}} = \mathrm{max}\{{{{t}_{{i,n}}}}\};\quad {{t}_{{min}}} = \mathrm{min}\{{{{t}_{{i,n}}}}\}\end{aligned}$$
where tmax and tmin are the maximum and minimum of the detection times of all the returned signals, calculated by Eq. (6). For example, tmin is that of the light reflected from an object at the 6th grid point (in our lexicographical ordering) that travels directly to the real detector B, and tmax is that of the longest ghost pulse returning from an object at the 72nd grid point. The measurement time window can be discretized into M equidistant points by the following equation:
$${M} = \frac{{{T}_{{sig}}}}{{\mathrm{\Delta}{{t}_{{det}}}}} + 1$$
where Δtdet is the detector’s time resolution. In this case, the temporal measurement range extends from 22.1 ns to 63.7 ns with a time interval Δtdet = 1.3 ns, therefore M = 33. Hence, the time signature for each position in our grid can be represented as a vector of 33 pulse values, of which in practice only four are greater than zero, while the others equal zero.
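The windowing of Eqs. (8)-(9) can be sketched as follows. We round t_min down and t_max up onto the detector's time grid so that every arrival falls inside the window (a conservative reading of Eq. (8)); the four pulse times in the example are invented multiples of 1.3 ns, not values computed from the paper's geometry.

```python
import math

def time_window(times_ns, dt_ns, eps=1e-6):
    """Eqs. (8)-(9): snap the extreme arrival times onto the detector
    grid and count the number M of time bins."""
    k_min = math.floor(min(times_ns) / dt_ns + eps)
    k_max = math.ceil(max(times_ns) / dt_ns - eps)
    return k_min * dt_ns, k_max * dt_ns, k_max - k_min + 1

def signature(arrivals_ns, powers, t0_ns, M, dt_ns):
    """Bin one object's returned pulses into an M-element time signature."""
    sig = [0.0] * M
    for t, p in zip(arrivals_ns, powers):
        sig[round((t - t0_ns) / dt_ns)] += p
    return sig

t0, t1, M = time_window([22.1, 40.0, 63.7], 1.3)
print(M)                                   # -> 33, as in the text
sig = signature([23.4, 28.6, 31.2, 44.2], [1, 1, 1, 1], t0, M, 1.3)
print(sum(1 for v in sig if v > 0))        # -> 4 nonzero bins
```

The 33-element vector with four nonzero entries is exactly the column format used for the sensing matrix in the next section.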


Fig. 5. (a) The measurement region is represented by a Cartesian grid of 72 sensing positions in front of two perpendicular planar mirrors (i.e., MOA of α=90°), pulse illuminator Tr and detector B. The optical paths of object A are at the first position at the upper left corner and at the last position at the lower right corner. (b) The time signatures of the two first positions and of the last position in the grid (position number is set by a lexicographical ordering of the grid).


Assuming a finite number of reflectors, i.e., objects, over the measurement grid, we can invoke the CS model described in Sec. 2.2. The CS model given in Eq. (1) is illustrated in Fig. 6(a). The sensing matrix A∈ℝM×N is the position-to-time sensing matrix, where M = 33 and N = 72. Each column of A represents the time signature obtained at one position of the defined positions grid. Thus, each position in the grid acts as a sensing position, and the positions grid acts as a sensing grid of the measurement region.


Fig. 6. (a) The Forward Model for five objects, and (b) the multiplexed measured signal, detected by detector B, of five objects at different positions of the Cartesian grid {3,10,20,37,40}.


In this work, all the time signatures corresponding to a Cartesian grid in the measurement space were calculated numerically using Eqs. (5)–(7). In general, the sensing matrix can be obtained in two ways. One way is to use the known system geometry and Eqs. (5)–(7) to specify the sensing matrix. The second way is to measure the time signature for each position point. This can be achieved by placing a reference object at each physical location of interest in the measurement space and recording the time signature reflected from it. This process is more tedious and would be required in circumstances where the geometry of the system is not fully known, or when there are technical issues such as unknown curvature of the mirrors.
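The first (geometric) route to the sensing matrix can be sketched as follows. Several simplifications are assumptions of this sketch, not the paper's setup: the two perpendicular mirrors are taken to lie in the planes x = 0 and y = 0, the grid is a small hypothetical 3 × 3 patch, and Eq. (7)'s radiometric weights are replaced by unit pulse amplitudes.

```python
import math

def detector_images(det):
    """Det_n for a 90-degree MOA: the detector plus its three mirror
    images, assuming mirrors in the planes x = 0 and y = 0."""
    x, y, z = det
    return [(x, y, z), (-x, y, z), (x, -y, z), (-x, -y, z)]

def build_sensing_matrix(grid, tr, det, dt_ns, c=0.299792458):  # c in m/ns
    """One column per grid position: that position's binned time signature."""
    times = [[(math.dist(tr, p) + math.dist(p, img)) / c
              for img in detector_images(det)] for p in grid]
    flat = [t for row in times for t in row]
    k0 = math.floor(min(flat) / dt_ns)
    M = math.ceil(max(flat) / dt_ns) - k0 + 1
    A = [[0.0] * len(grid) for _ in range(M)]
    for i, row in enumerate(times):
        for t in row:
            A[round(t / dt_ns) - k0][i] += 1.0  # unit amplitude per pulse
    return A

# hypothetical 3 x 3 grid on the plane z = 0.5 in front of the mirrors
grid = [(1 + 0.5 * ix, 1 + 0.5 * iy, 0.5) for ix in range(3) for iy in range(3)]
A = build_sensing_matrix(grid, tr=(0.5, 0.5, 1.0), det=(1.0, 2.0, 0.0), dt_ns=0.1)
print(len(A), "time bins x", len(A[0]), "positions")
```

Each column carries at most four nonzero bins, one per reflection order, mirroring the 33 × 72 matrix of the worked example.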

The measurement signal y∈ℝM is the multiplexed measured signal of several objects at different positions of the positions grid. The vector x∈ℝN holds the lexicographic indexes i = 1,2,3,…,N of the positions grid, i.e., of the detected objects and their relative sizes. For example, if five moving objects with unit reflectivity are situated at the moment of measurement at positions {3,10,20,37,40}, the obtained measurement signal y will be as shown in Fig. 6(b). The measurement signal y is the linear combination of the columns of the sensing matrix A weighted by the vector x. Given the measurement y, the object locations can be determined with the recovery algorithm by solving an inverse problem. Once x is recovered, the positions of the objects are known, and their velocities can then be ascertained.

The process of recovering the position values P(t) of the point-objects is based on knowledge of the system's functional model or, in other words, on knowledge of the system's sensing matrix A. The sensing matrix is determined by the multipath geometry described in Sec. 3. Using the measured information from the sensing system and knowing the sensing matrix, it is possible, under the assumption of sparse representation, to solve the system's equation and find the positions P(t) of the objects. The result of this process yields a kind of frozen image of the objects and their locations in three-dimensional space. Then, to find the average velocity v(t) of each object, the sensing action is repeated once more. In this way, an average velocity of the objects is obtained by the following equation:

$${\mathbf{v}} \cong \frac{{\mathbf{P}({t + \Delta t}) - {\mathbf{P}}({t})}}{{\mathrm{\Delta}{t}}}$$

The above depends on the time period between measurements, Δt = Tm, i.e., on the light pulse repetition rate, which is related to the intervals between the sensing grid's points and to the known or estimated velocity of the fastest object in the measurement region.

Similarly, the process can be repeated one more time. Measuring the positions of the objects a third time allows their acceleration to be calculated. By finding the change in velocity, the average acceleration is obtained as follows:

$${\mathbf{a}} \cong \frac{{\mathbf{v}({t + \Delta t}) - {\mathbf{v}}({t})}}{{\mathrm{\Delta}{t}}} = \frac{{\mathbf{P}({t + 2\Delta t}) - 2{\mathbf{P}}({t + \Delta t}) + {\mathbf{P}}({t})}}{{{{(\mathrm{\Delta}{t})}^{2}}}}$$

Therefore, theoretically, at least 3 measurements are sufficient to map all the kinematic values of all the objects in the system space.
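Eqs. (10)-(11) are plain finite differences over three recovered position snapshots. The sketch below checks them on a made-up constant-acceleration trajectory; the velocity and acceleration values are arbitrary examples.

```python
def state_from_positions(p0, p1, p2, dt):
    """Eq. (10): forward-difference velocity over the first interval;
    Eq. (11): central-difference acceleration from three snapshots."""
    v = tuple((x1 - x0) / dt for x0, x1 in zip(p0, p1))
    a = tuple((x2 - 2 * x1 + x0) / dt ** 2 for x0, x1, x2 in zip(p0, p1, p2))
    return v, a

dt = 1e-3                                   # s, example repetition interval
v0, acc = (100.0, 0.0, 0.0), (0.0, -9.8, 0.0)
pos = [tuple(v0[k] * (i * dt) + 0.5 * acc[k] * (i * dt) ** 2 for k in range(3))
       for i in range(3)]                   # three synthetic snapshots
v_est, a_est = state_from_positions(*pos, dt)
print(v_est, a_est)
```

For a quadratic (constant-acceleration) trajectory the central difference of Eq. (11) is exact; the forward-difference velocity is the average over the first interval, i.e., v + ½aΔt.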

One of the difficulties that arise under the process described is when different objects cannot be distinguished due to certain ambiguous geometrical and physical conditions, which may prevent direct determination of the relationship between the locations of the objects from three measurements. Such ambiguities can be resolved by tracking moving objects in multiple temporal snapshots. Therefore, we propose the following sequence of actions to overcome ambiguity and recover the state vectors of the objects:

Step 1: Primary recovery of the position indexes from each measurement by solving Eq. (1) for the space-to-time matrix system A (Fig. 6(a)) using the OMP algorithm [20]. The OMP algorithm may provide primary results of the encoded positions, i.e., the indexes of the positions that the objects might have on the Cartesian grid. The recovery process provides the lexicographic vector that is converted to a 3D array to represent each object in the 3D grid. However, there might be uncertainty in the detected set of positions, for the reasons explained above. This is overcome by following the motion trajectory in Step 2.

Step 2: Finding the motion trajectory lines by using the Hough transform [25–27]. To find the trajectories of the objects, we assume that over the short observation window the motion of each object across the sensing region can be approximated as a straight line. The trace of such an object in the position-time space is therefore a straight line describing its trajectory. In our simulation, we take the detected points in our Cartesian grid together with their detection time stamps and arrange them in a 3D array. We then apply the Hough transform to this 3D array to detect the points that lie on straight lines.

Step 3: Calculating the state vectors. Once the trend lines are obtained as position indexes vs. measurement times, the position, velocity, and acceleration vectors of the objects can be calculated.
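Step 2 can be illustrated with a stripped-down Hough vote. The full method works in the 3D position-time array; this sketch collapses position to a single index and votes over a hypothetical set of integer slopes (grid cells per frame), which is enough to separate two synthetic trajectories from a spurious detection.

```python
from collections import Counter

def hough_vote(points, slopes):
    """Vote in (slope, intercept) space: each detected (time, position)
    sample supports every candidate line passing through it."""
    acc = Counter()
    for t, s in points:
        for m in slopes:
            acc[(m, round(s - m * t))] += 1
    return acc

# two synthetic objects (1 and 2 cells/frame) plus one spurious detection
pts = ([(t, 3 + t) for t in range(5)]
       + [(t, 2 * t) for t in range(5)]
       + [(2, 9)])
acc = hough_vote(pts, slopes=[0, 1, 2, 3])
lines = sorted(lc for lc, votes in acc.items() if votes >= 4)
print(lines)   # -> [(1, 3), (2, 0)]
```

The vote threshold plays the role of trajectory persistence: the isolated false detection never accumulates enough support, which is exactly how Step 2 resolves the ambiguities left after Step 1.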

5. Discussion and results

5.1 Detection resolution

The sensing position detection resolution is fundamentally affected by the temporal resolution of the system, i.e., the measurement time resolution, which in turn determines the sensing matrix (see Sec. 4). Each sensing position has a sensing boundary around it. When an object moves within this boundary, its time signature is the same as at the sensing position, for a given measurement time resolution τ = max(Δtdet, ΔtLpulse), where Δtdet and ΔtLpulse are the detector's time resolution and the light pulse width, respectively. The boundary is not equidistant around the sensing position; it is determined by the optical path differences and depends on the object's proximity to the sensing position. The optical path differences can be calculated for all n reflection orders. When at least one of them is greater than or equal to a distance of 0.5·cτ, where c is the speed of light, the time signature changes and will no longer be recognized as the time signature of the sensing position by the OMP algorithm.

Since we cannot give an analytic expression, we calculated the sensing range boundary around each sensing position numerically. The pseudocode is detailed in Supplement 1-C, which elaborates the procedure to estimate the sensing boundary, i.e., the positions around each sensing position at which at least one of the optical path differences is greater than or equal to 0.5·cτ. Figure 7 illustrates the sensing boundary around a sensing position for the sensing grid described in Sec. 4, with measurement time resolutions of 1.3 ns, 0.65 ns, and 0.3 ns (τ = max(Δtdet, ΔtLpulse)). These sensing boundaries were calculated numerically with φ=90°, since the sensing region is defined as a planar grid. As Fig. 7 shows, when the detector response time and the light pulse width increase, the sensing range boundary around the sensing positions increases as well, thus increasing the position uncertainty.
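The boundary test of Supplement 1-C can be sketched as a simple predicate: a displaced point keeps the reference time signature as long as every optical path difference stays below half a resolution cell, which we read as 0.5·cτ (an interpretation of the text's threshold as a distance via c and τ). The detector images and coordinates below are placeholder values.

```python
import math

def opl_set(pos, tr, det_images):
    """Optical path lengths through all reflection orders (cf. Eq. (5))."""
    return [math.dist(tr, pos) + math.dist(pos, img) for img in det_images]

def same_signature(pos, ref, tr, det_images, tau_ns, c=0.299792458):
    """True while every path difference stays under 0.5 * c * tau
    (c in m/ns), i.e. pos lies inside ref's sensing boundary."""
    limit = 0.5 * c * tau_ns
    return all(abs(a - b) < limit
               for a, b in zip(opl_set(ref, tr, det_images),
                               opl_set(pos, tr, det_images)))

imgs = [(1, 2, 0), (-1, 2, 0), (1, -2, 0), (-1, -2, 0)]  # example images
tr = (0.5, 0.5, 1.0)
ref = (2.0, 2.0, 0.5)
print(same_signature((2.001, 2.0, 0.5), ref, tr, imgs, 1.3))  # -> True
print(same_signature((2.5, 2.0, 0.5), ref, tr, imgs, 1.3))    # -> False
```

Sweeping such a predicate over a fine offset grid around each sensing position traces out the boundaries plotted in Fig. 7; a smaller τ shrinks the region where the predicate holds.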


Fig. 7. The sensing boundary around the first sensing position at different temporal resolutions: 1.3 ns (orange line), 0.65 ns (blue line), and 0.35 ns (green line).


When we define a sensing region by the positions grid, two conditions must hold to prevent ambiguity and additional errors in the recovery process: 1) all the returned light signals from each sensing position's orders must be detected; 2) the differential time between all the detected signals of all the sensing positions must not be less than the measurement time resolution τ = max(Δtdet, ΔtLpulse), which is determined by the detector position.

5.2 Validation of the presented method

To evaluate the system's performance statistically, we simulated the movement of four objects and tested the system's ability to track them. The simulation tracks objects moving along or across the positions grid, with the initial position of each object determined at random. For a set of four objects, we documented their estimated trajectories and compared them to their true locations in the simulation. To obtain a statistical picture, we repeated this process hundreds of times to get results as reliable as possible. To consider the effects of the design parameters on the system performance, we repeated each set of simulations for different design parameters of spatial and temporal separation. The simulated system parameters are provided in Table 1.


Table 1. The Simulated Systems.

The object-tracking simulation yields results in the following groups: 1) full identification of all four objects only; 2) full identification of all four objects plus false alarms; 3) missed detections with or without false alarms. To illustrate typical results, we provide some examples in Fig. 8. When we track a single object, the result of the tracking may be either a correct detection (CD), a false detection (FD, Fig. 8(b), 8(d)), or a missed detection (MD, Fig. 8(c), 8(d)).


Fig. 8. Four moving objects at different trajectories & velocities are marked in green. The densest green circles depict an object moving at the lowest speed. The most spaced green circles depict an object moving at the highest speed. (a) Precise recovery i.e., correct detection (CD) of the four trajectories of the objects are marked in red. (b) Detection of all the four trajectories with one false detection (FD). (c) Detection of three trajectories with one missed detection (MD) (d) Detection of three trajectories, one false detection and one missed detection (see Visualization 1, Fig. 8(b-c) depict incorrect detections).


Figure 9 shows the results of 400 repetitions of the four-object tracking simulation. In Fig. 9, each correct trajectory identification is shown as a green square and each incorrect trajectory identification as a red square. The number of correct trajectories can be between 0 and 4, and the number of incorrect trajectories can be greater than 0. From Fig. 9 we can see that the probability of finding more than three trajectories with at most one false trajectory in each graph is as follows: (1) 0.988; (2) 0.99; (3) 0.975; (4) 0.958; (5) 0.995; (6) 0.99; (7) 0.993; (8) 0.965.


Fig. 9. The result distribution for eight different measurement time resolutions at three different sensing grid sizes and intervals. The precise recovered trajectory is shown in green. The false alarm of recovering an incorrect trajectory is shown in red.


In summary, the probability of identifying the four objects with the presented parameters is between 0.96 and 0.99 when one false detection is tolerated (e.g., Fig. 8(b) shows all four trajectories correctly detected plus one additional false detection). For precise identification of the four objects, the results indicate that the higher the temporal and spatial separation of the system, the better the ability to identify objects. Nevertheless, the temporal separation can be set according to the required statistics: if the requirement is perfect identification of the four objects with probability 0.85, a temporal separation of 0.097 ns is sufficient.

6. Conclusions

In this paper, we have introduced a technique for tracking fast-moving objects based on a space-to-time transformation. We have shown that by deliberately generating multi-path optical reflections, it is possible to estimate the position, velocity, and acceleration of each object using a single time-resolving detector. We thus present an optical configuration and a principle that can be used to convert spatial information into temporal information. We used compressed sensing theory and the Hough transform to calculate the position, velocity, and acceleration of the objects. The simulation results show that the presented method can be realized with different grid sizes and measurement time resolutions, and these should be chosen to achieve optimal results. The presented method can be implemented with different illumination sources and detectors. To obtain angular resolution, we do not need an array of detectors; rather, we can use a single fast detector. In this work, we assumed that the operational optical wavelength is very small compared to the size of the object. We demonstrated the working principle with a geometry of two perpendicular mirrors. Other mirror geometries, with different angles, can be used to generate more reflections, i.e., more optical paths that uniquely define the positions; however, this comes at the expense of reducing the sensing volume. The principle presented may be generalized to multiple detectors and multiple mirrors, depending on the purpose of the measurement.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. M. Skolnik, Radar Handbook (1962).

2. J. J. Kovaly, Synthetic Aperture Radar (1976).

3. C. J. Oliver, “Synthetic-aperture radar imaging,” J. Phys. D: Appl. Phys. 22(7), 871–890 (1989).

4. G. Heritage and A. Large, Laser Scanning for the Environmental Sciences (2009).

5. J. Shan and C. K. Toth, Topographic Laser Ranging and Scanning: Principles and Processing (2008).

6. M. F. Duarte, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).

7. Y. Sher, L. Cohen, D. Istrati, and H. S. Eisenberg, “Low intensity LiDAR using compressed sensing and a photon number resolving detector,” in Emerging Digital Micromirror Device Based Systems and Applications, Proc. SPIE 10546, 105460J (2018).

8. E. Hahamovich, S. Monin, Y. Hazan, and A. Rosenthal, “Single pixel imaging at megahertz switching rates via cyclic Hadamard masks,” Nat. Commun. 12(1), 4516 (2021).

9. H. Dai, “Adaptive compressed photon counting 3D imaging based on wavelet trees and depth map sparse representation,” Opt. Express 24(23), 26080–26096 (2016).

10. G. A. Howland, “Compressive sensing LIDAR for 3D imaging,” in Conference on Lasers and Electro-Optics (CLEO) (2011).

11. L. Tu, Z. Qin, L. Yang, F. Wang, J. Geng, and S. Zhao, “Identifying the Lambertian property of ground surfaces in the thermal infrared region via field experiments,” Remote Sens. 9, 481 (2017).

12. A. D. Ryer, The Light Measurement Handbook (International Light Technologies, 1997).

13. A. Habib and S. Moh, “Wireless channel models for over-the-sea communication: a comparative study,” Appl. Sci. 9(3), 1–32 (2019).

14. R. H. Raekken and G. Lovnes, “Multipath propagation and its influence on digital mobile communication systems,” 109–126 (1995).

15. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

16. E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Probl. 23(3), 969–985 (2007).

17. R. G. Baraniuk, “Compressive sensing,” IEEE Signal Process. Mag. 24(4), 118–121 (2007).

18. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University Press, 2012).

19. A. Stern, Optical Compressive Imaging (CRC Press, 2016).

20. T. T. Cai and L. Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Trans. Inf. Theory 57(7), 4680–4688 (2011).

21. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16(12), 2992–3004 (2007).

22. J. M. Bioucas-Dias and M. A. T. Figueiredo, “Two-step algorithms for linear inverse problems with non-quadratic regularization,” in IEEE International Conference on Image Processing (2007).

23. J. Yang, Y. Zhang, and W. Yin, “A fast TVL1-L2 minimization algorithm for signal reconstruction from partial Fourier data,” 4(2), 288–297 (2008).

24. P. McManamon, Field Guide to Lidar (SPIE Press, 2015).

25. V. F. Leavers, Shape Detection in Computer Vision Using the Hough Transform (Springer, London, 1992).

26. P. Mukhopadhyay and B. B. Chaudhuri, “A survey of Hough transform,” Pattern Recognit. 48(3), 993–1010 (2015).

27. A. S. Hassanein, S. Mohammad, M. Sameer, and M. E. Ragab, “A survey on Hough transform, theory, techniques and applications,” Journal of Computer Science 12(1), 139–156 (2015).

Supplementary Material (2)

Supplement 1: Multipath geometry analysis
Visualization 1: Detections of several precise recoveries along with their trajectories


Figures (9)

Fig. 1. Multipath detection effect caused by other objects (Object2) near the target object (Object1).
Fig. 2. Space-to-time encoding. (a) Generating multiple paths by placing the transceiver and point-objects P1 & P2 between parallel mirrors. (b) Illustration of the pulses received from objects P1 (up) and P2 (bottom).
Fig. 3. (a) Two-perpendicular-mirrors configuration. (b) Objects’ reflections in a two-perpendicular-mirrors configuration and the optical multi-path of the detected returned light signals from object A to detector B. The pulse of light is sent towards object A from pulse illuminator Tr.
Fig. 4. The detected signal from object A at an arbitrary position in a configuration of mirrors with (a) MOA of α=90° and with (b) MOA of α=60°.
Fig. 5. (a) The measurement region is represented by a Cartesian grid of 72 sensing positions in front of two perpendicular planar mirrors (i.e., MOA of α=90°), pulse illuminator Tr and detector B. The optical paths of object A are at the first position at the upper left corner and at the last position at the lower right corner. (b) The time signatures of the two first positions and of the last position in the grid (position number is set by a lexicographical ordering of the grid).
Fig. 6. (a) The Forward Model for five objects, and (b) the multiplexed measured signal, detected by detector B, of five objects at different positions of the Cartesian grid {3,10,20,37,40}.
Fig. 7. The sensing boundary around the first sensing position at different temporal resolutions: 1.3 ns (orange line), 0.65 ns (blue line), 0.35 ns (green line).
Fig. 8. Four moving objects at different trajectories & velocities are marked in green. The densest green circles depict an object moving at the lowest speed. The most spaced green circles depict an object moving at the highest speed. (a) Precise recovery, i.e., correct detection (CD) of the four trajectories of the objects, marked in red. (b) Detection of all four trajectories with one false detection (FD). (c) Detection of three trajectories with one missed detection (MD). (d) Detection of three trajectories, one false detection and one missed detection (see Visualization 1; Fig. 8(b-c) depict incorrect detections).
Fig. 9. The result distribution for eight different measurement time resolutions at three different sensing grid sizes and intervals. The precise recovered trajectory is shown in green. The false alarm of recovering an incorrect trajectory is shown in red.

Tables (1)

Table 1. The Simulated Systems.

Equations (11)


(1) $y = Ax + \epsilon$

(2) $\arg\min_{\hat{x}} \left\{ \|y - A\hat{x}\|_2^2 + \tau \|\hat{x}\|_1 \right\}$

(3) $D_n = |BA| + d_n$

(4) $d_n = |AB_n| = \sqrt{(x_A - x_B(n))^2 + (y_A - y_B)^2 + (z_A - 0)^2}$, with $A = (x_A, y_A, z_A)$ and $B_n = (x_B(n), y_B, 0)$

(5) $D_{i,n} = D_{tp_i} + D_{pd_{i,n}}$, where $D_{tp_i} = |TrP_i|$ and $D_{pd_{i,n}} = |P_i Det_n|$

(6) $t_{i,n} = \dfrac{D_{i,n}}{c} \cdot 10^{9}~[\mathrm{ns}]$

(7) $I_{i,n} = I_{tr} \cdot \dfrac{A_{p_i}}{\Omega_{tr} D_{tp_i}^2} \cdot \dfrac{A_{det}}{\Omega_{p_i} D_{pd_{i,n}}^2}$

(8) $T_{sig} = r_d(t_{max}) - r_u(t_{min})$, where $r_u$ denotes rounding up, $r_d$ rounding down, $t_{max} = \max\{t_{i,n}\}$, and $t_{min} = \min\{t_{i,n}\}$

(9) $M = \dfrac{T_{sig}}{\Delta t_{det}} + 1$

(10) $v \approx \dfrac{P(t + \Delta t) - P(t)}{\Delta t}$

(11) $a \approx \dfrac{v(t + \Delta t) - v(t)}{\Delta t} = \dfrac{P(t + 2\Delta t) - 2P(t + \Delta t) + P(t)}{(\Delta t)^2}$
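The distance, time-of-flight, and sampling relations above can be exercised numerically. In the sketch below, the positions of the illuminator Tr, the object, and the detector images (direct path plus one mirror image), as well as the 0.65 ns detector resolution, are arbitrary assumptions for illustration, not values from the paper.

```python
import numpy as np

# Minimal numeric sketch of the time-of-flight forward model; all
# positions and the detector resolution are illustrative assumptions.
c = 3e8  # speed of light [m/s]

tr = np.array([0.0, 0.0, 0.0])               # pulse illuminator Tr
p = np.array([5.0, 3.0, 2.0])                # object position P_i
det_images = np.array([[0.1, 0.0, 0.0],      # real detector B (direct path)
                       [-2.1, 0.0, 0.0]])    # mirror image of B (multipath)

# D_{i,n} = |Tr P_i| + |P_i Det_n|, and t_{i,n} = D_{i,n} / c * 1e9 [ns]
d_tp = np.linalg.norm(p - tr)
d_pd = np.linalg.norm(det_images - p, axis=1)
t_ns = (d_tp + d_pd) / c * 1e9               # one arrival time per optical path

# Signal window T_sig (round down the latest arrival, round up the earliest)
# and number of time bins M for a detector resolution of 0.65 ns
dt_det = 0.65
t_sig = np.floor(t_ns.max()) - np.ceil(t_ns.min())
M = int(t_sig / dt_det) + 1
```

Each mirror image of the detector contributes one more arrival time to `t_ns`, which is how the multipath geometry turns a 3D position into a distinct temporal signature.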