
Nanosecond pulsed CMOS LED for all-silicon time-of-flight ranging

Open Access

Abstract

Light detection and ranging (LIDAR) is a widely used technique for measuring distance. With recent advancements in integrated photonics, there is a growing interest in miniaturizing LIDAR systems through on-chip photonic devices, but a LIDAR light source compatible with current integrated circuit technology remains elusive. In this letter, we report a pulsed CMOS LED based on native Si, which spectrally overlaps with Si detectors’ responsivity and can produce optical pulses as short as 1.6 ns. A LIDAR prototype is built by incorporating this LED and a Si single-photon avalanche diode (SPAD). By utilizing time-correlated single-photon counting (TCSPC) to measure the time-of-flight (ToF) of reflected optical pulses, our LIDAR successfully estimated the distance of targets located approximately 30 cm away with sub-centimeter resolution, approaching the Cramér-Rao lower bound set by the pulse width and instrument jitter. Additionally, our LIDAR is capable of generating depth images of natural targets. This all-Si LIDAR demonstrates the feasibility of integrated distance sensors on a single photonic chip.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Light detection and ranging (LIDAR) systems have long been used for distance measurements in meteorology [1], geology [2], robotics [3], astronomy [4], and various other fields [5]. A typical LIDAR contains three main components for light emission, beam steering, and light detection. Usually, a high-bandwidth laser generates optical pulses or modulated signals and a mechanical scanner directs the light to a target. A photodetector then records the reflected signals, and the time-of-flight (ToF) is used to estimate the target distance [6]. Recently, driven by emerging applications such as autonomous vehicles and face recognition, highly integrated, cost-effective, miniaturized LIDARs have gained significant interest. This has prompted tremendous effort to replace bulky, free-space LIDAR parts with on-chip photonic devices that are complementary metal–oxide–semiconductor (CMOS)-compatible [6–8]. For example, beam steering has been achieved using chip-based optical phased arrays (OPAs) [9,10], which direct the emission by manipulating on-chip optical phase distributions. Another strategy involves microelectromechanical systems (MEMS)-actuated optical switches positioned at the back focal plane of a lens [11]. A desired emission direction is achieved by routing light to a specific switch in the array according to the spatial Fourier transform of the lens. On the light detection side, CMOS single-photon avalanche diodes (SPADs) and SPAD arrays have been reported with single-photon sensitivity and sub-nanosecond timing precision [12–17]. Furthermore, various emitters, such as light-emitting diodes (LEDs) [18], vertical cavity surface emitting lasers (VCSELs) [19,20], and soliton microcombs [21,22], have been proposed for miniaturized LIDARs.
Compared with laser sources, LEDs have relatively low bandwidth and power density, but they offer several advantages, such as being low-cost, consuming less power, being eye-safe, and not requiring specific thermal management, making them ideal for short-range tasks [23,24]. For example, Griffiths et al. used millimeter-scale commercial LEDs with optical pulses of $\approx 10$ ns, along with a SPAD camera, to generate multispectral depth images of targets located $\approx 1$ m away with a standard deviation of $3.41$ cm, limited by their timing module [25]. Additionally, they reported ranging results on targets located from $0$ to $40$ cm away with a standard deviation of $0.64$ cm using the same LEDs but different SPAD and timing electronics. Compact LED-based ToF sensors are commercially available from, for example, Terabee (France), whose smallest model (a few cm) can estimate distances ranging from $0.2$ to $30$ m with approximately $1$-$2{\%}$ range accuracy [26]. LEDs have also been heterogeneously integrated with CMOS chips to achieve higher compactness. Recently, Carreira et al. [18] demonstrated the integration of an $8\times 8$ blue ($450$ nm) micro-LED array and a SPAD on a single CMOS chip for ToF ranging. They performed distance estimations at ranges from $0.2$ to $1.2$ m with errors of approximately $8$ cm, which was limited by the LED’s optical pulse width of $19$ ns. The optical pulse width could be improved to $300$ ps by incorporating state-of-the-art ultraviolet on-chip LEDs [27].

The aforementioned emitters are not compatible with microelectronics or imaging CMOS platforms due to their underlying materials. Alternatively, native Si emitters may be used to address this compatibility problem. However, due to the indirect bandgap of Si, the intensity of Si emitters is typically limited to $10^{-4}$ - $10^{-1}$ W/cm$^2$, which is several orders of magnitude lower than that of their III-V counterparts [28–33]. This photon-starved regime requires the detector to be sensitive at the single-photon level, while efficient single-photon detectors at the Si bandgap energy around $1100$ nm are still in development. InGaAs-based SPADs and superconducting nanowire single-photon detectors (SNSPDs) are commonly used in this wavelength regime, but they are not CMOS-compatible [34]. Ge-on-Si SPADs are relatively new devices and can potentially be fabricated monolithically with Si, but currently, they require cryogenic environments and are not commercially available [35–37]. A Si-based SPAD can potentially be used, but its detection efficiency drops rapidly approaching $1100$ nm. Another challenge of using a Si emitter for LIDAR is the requirement for high bandwidth, as ToF detection requires either pulsed emission or frequency/amplitude modulation. It is therefore necessary to reduce the emitter’s active volume to lower the associated capacitance. However, most Si emitters are designed to be tens of $\mu$m in size in order to suppress defect-assisted non-radiative recombination by maintaining a low surface-to-volume ratio [32]. Due to the trade-off between optical intensity and device speed, the reported modulation of on-chip Si LEDs is limited to below $1$ MHz, mainly in the context of chip-based communication links. Researchers have reported reverse-biased avalanche-mode LEDs (AMLEDs) for intra-chip communication up to $1$ Mbps [38,39], and inter-chip communication up to $100$ kHz [40].
For forward-biased Si LEDs, intra-chip communication at $1$ kHz has been reported [41]. Recently, a $3$-dB switching bandwidth of $10$ MHz was experimentally demonstrated in Si LEDs fabricated in an SOI process [42], and the authors claimed that GHz switching is possible by further reducing the device active region. Meanwhile, reports on pulsed Si LEDs are scarce.

In this letter, we report a forward-biased CMOS LED that can produce optical pulses as short as $1.6$ ns. The speed is achieved by reducing the active region to sub-micrometer scale and utilizing fast carrier extraction. Additionally, the emission spectrum of our LED is sufficiently broad to overlap with the responsivity of typical Si detectors. An all-Si LIDAR prototype is built based on this LED and a Si SPAD. Using our LIDAR, we demonstrate distance estimations with sub-centimeter resolution and depth imaging on natural targets.

2. Pulsed CMOS LED

The LED’s static performance has been detailed in our recent report [43]; here, we extend the operation of these devices to high-speed, large-signal modulation. Our LED was fabricated in a $55$ nm unmodified microelectronic CMOS node. A schematic top view and layer structure are presented in Fig. 1 (a) and (b), respectively. The LED is an n+/n/p vertical junction based on an n-poly-Si gate, n-well, and p-substrate. A negative gate bias ($-6$ V) was initially applied to introduce gate oxide hard breakdown, during which a Si filament formed as an electrical conduction path in the gate oxide [44,45]. After breakdown, the LED is turned on with the poly-Si gate grounded and the substrate at positive bias. Near the gate oxide/n-well interface, electrons are injected from the top contact and recombine with the holes accumulated in the n-well, which generates a highly localized emission spot. Once the electrons enter the substrate, they recombine with majority holes in the substrate, leading to a dim, spatially broad background. In Fig. 1 (c), the sub-micron ($< 400$ nm diameter) emission spot overlaid on a micrograph of the LED is presented. Note that the emission spot is smaller than the emission wavelength, resulting in high spatial coherence of the emission. In our previous work [43], the spatial coherence was established by generating holograms. The sub-wavelength emission profile also contributes to the high quality of the collimated beam. A measure of the beam quality is that light from the emission spot can be coupled into a single-mode fiber with over $40{\%}$ coupling efficiency. As we will discuss later in this work, this high beam quality plays a crucial role in achieving the desired lateral resolutions of our LIDAR system, as it ensures a diffraction-limited beam divergence described by the half-angle $\theta = \lambda/(\pi w_0)$, where $\lambda$ represents the wavelength and $w_0$ is the collimated beam waist radius of a Gaussian beam.
For example, as will be presented later, in our setup, $w_0$ is approximately 2 mm, resulting in $\theta \approx 160$ $\mu$rad.
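
As a quick sanity check, the divergence quoted above follows directly from the Gaussian-beam relation. The sketch below assumes an effective wavelength of about $1$ $\mu$m near the emission peak (the exact wavelength used for the estimate is an assumption):

```python
import math

def gaussian_divergence(wavelength_m, waist_radius_m):
    """Diffraction-limited half-angle divergence of a Gaussian beam,
    theta = lambda / (pi * w0)."""
    return wavelength_m / (math.pi * waist_radius_m)

# Values quoted in the text: ~1 um emission, 2 mm collimated waist radius
theta = gaussian_divergence(1.0e-6, 2.0e-3)
print(f"half-angle divergence: {theta * 1e6:.0f} urad")  # ~159 urad
```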


Fig. 1. Summary of the LED characteristics. (a) Schematic top view of the LED. Gate oxide and the back-end-of-line dielectrics (BEOL) are not shown. (b) Schematic side view of the LED. The hollow and solid circles indicate holes and electrons, respectively. (c) Micrograph of the emission pattern overlaid on a micrograph of the LED captured using a CMOS camera. The integration time for the emission pattern was $5$ s and the LED was biased at $5.0$ V. Electrons are injected from the taper in the dashed box. The other taper is not used in this work. (d) Typical emission spectra of the LED under DC and $5$ ns pulsed bias, compared with the quantum efficiency (QE) of a commercial Si SPAD. The emission spectra were measured using a custom-built InGaAs spectrometer. The DC bias is $6$ V and the pulsed bias has a voltage swing of $-1.5$ - $6.5$ V. The raw spectrum (circles) is smoothed (solid line) by a Savitzky–Golay filter (order 3, frame length 21). The SPAD QE is digitized from the product data sheet.


The emission pattern in Fig. 1 (c), captured using a CMOS camera, suggests that the LED’s emission spectrum overlaps with typical Si detectors’ responsivity. In Fig. 1 (d), we present the LED’s emission spectra and the quantum efficiency (QE) of a commercial Si SPAD. While the spectral peak of the emission is set by band-to-band recombination at around $1100$ nm, the spectral full-width-half-maximum (FWHM) is approximately $170$ nm with $5$ ns pulsed bias and $190$ nm with DC bias. The spectral width of our LED is approximately $2\times$ that of Si LEDs with micrometer-scale injectors (FWHM $\approx 50$ - $100$ nm [28–30]), and is comparable with other Si LEDs with nanoscale injectors [31,46,47]. This broadband emission is a result of our electron injection structure: the Si filament, acting as a nanoscale electron injector, elevates the carrier temperature of the injected electrons, leading to broadband emission. The emitted photons with wavelengths shorter than $1000$ nm enhance the detection signal with Si detectors. The emission spectrum of our LED corresponds to a wavelength-weighted detection QE of $\approx 2{\%}$ under pulsed bias, making an all-Si LIDAR possible.
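
The wavelength-weighted detection QE quoted above is an overlap integral of the emission spectrum and the detector QE, normalized by the emission. The sketch below illustrates the calculation with stand-in curves (a Gaussian spectrum with the quoted $170$ nm FWHM and a simple linear ramp for the SPAD QE, both assumptions rather than the measured data), so its numerical output is only indicative:

```python
import numpy as np

# Stand-in emission spectrum: Gaussian centered at 1100 nm, 170 nm FWHM
wl = np.linspace(800, 1300, 501)        # wavelength grid, nm
sigma = 170 / 2.355                     # FWHM -> standard deviation
spectrum = np.exp(-0.5 * ((wl - 1100) / sigma) ** 2)

# Stand-in Si SPAD QE: linear fall from ~40% at 800 nm to 0% at 1100 nm
# (illustrative only, not the actual data-sheet curve)
qe = np.clip(0.40 * (1100 - wl) / 300, 0.0, None)

# Wavelength-weighted detection QE: overlap normalized by emission
weighted_qe = float((spectrum * qe).sum() / spectrum.sum())
print(f"wavelength-weighted QE ~ {weighted_qe * 100:.1f}%")
```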

The LED emission under pulsed bias is characterized and presented in Fig. 2. Time-resolved optical pulses are recorded using the commercial Si SPAD in Fig. 1 (d) and TCSPC electronics. We first investigate the switching dynamics. In Fig. 2 (a), top panel, the emission is resolved with the LED biased with a long pulse of $250$ ns width and $0$ - $4$ V swing. By fitting the rising and falling edges to single exponentials, the characteristic times are estimated to be $\tau _\text {rise} \approx 14$ ns and $\tau _\text {fall} \approx 3.5$ ns. On the rising edge, considering that the radiative recombination rate scales as the product of the concentration of the injected electrons and the concentration of the holes accumulated in the n-well, the emission power $P$ can be expressed as

$$P \approx \eta\, \hbar \omega\, n \,p\, V$$
where $\hbar \omega$ is the photon energy, $n$ is the concentration of the injected electrons, $p$ is the concentration of the holes accumulated in the n-well, $V$ is the active volume, and $\eta$ is a coefficient accounting for the bimolecular recombination coefficient and the injection efficiency. While $n$ has a fast rise time and is mainly limited by the current turn-on of the external circuit, the rise of $p$ is relatively slow, limited by the diffusion capacitance of the n+/n/p junction. On the falling edge, while Eq. (1) still holds, the extraction of holes from the junction is assisted by the built-in electric field, as discussed in Ref. [48]. The emission turn-off is faster, $\tau _\text {fall} \approx 1.6$ ns, with a larger voltage swing (Fig. 2 (a), bottom panel) due to the stronger reverse electric field. Similar fast carrier extraction in p-i-n diodes has been reported in Refs. [42,49]. The rising edge deviates from a single exponential fit when approaching steady state, which is more prominent with a higher maximum voltage and can be avoided using a bias pulse shorter than $10$ ns (Fig. 2 (a), bottom panel). This phenomenon may result from multiple processes, including velocity saturation in the Si filament due to high electron kinetic energy, and the increase of the diffusion capacitance with voltage.
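
The single-exponential edge fits used for Fig. 2 (a) can be reproduced with a simple log-linear regression. The sketch below applies it to a noise-free synthetic falling edge with the quoted $\tau _\text {fall} \approx 3.5$ ns, not the measured trace:

```python
import numpy as np

def fit_decay_time(t_ns, signal):
    """Fit signal ~ A * exp(-t / tau) by linear regression on log(signal)."""
    slope, _ = np.polyfit(t_ns, np.log(signal), 1)
    return -1.0 / slope

# Synthetic falling edge with tau = 3.5 ns, as quoted for the 0-4 V swing
t = np.linspace(0.0, 10.0, 50)
signal = np.exp(-t / 3.5)
print(f"tau_fall ~ {fit_decay_time(t, signal):.2f} ns")  # ~3.50 ns
```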


Fig. 2. Pulse characterization. (a) Time-resolved emission of the CMOS LED under long pulsed bias ($250$ ns) with $0$ - $4$ V (top panel) and $-1.5$ - $6.5$ V (bottom panel) voltage swings. (b) Time-resolved emission of the CMOS LED under nanosecond pulsed bias with $-1.5$ - $6.5$ V swing. The inset shows optical pulse width versus electrical pulse width. (c) Time-resolved emission of the CMOS LED under sub-nanosecond pulsed bias. The inset shows the electrical pulse shape. (d) Average optical power with pulse repetition rates of $50$ MHz (left panel) and $1$ MHz (right panel). The voltage swings in (d) are $-1.5$ - $6.5$ V except the $0.9$ ns data point, which is approximately $0$ - $5$ V. The optical pulses in (a) - (c) are measured by a Si SPAD using TCSPC. The powers in (d), except the $0.9$ ns point, are measured using a low-bandwidth, amplified InGaAs photodiode. The power with $0.9$ ns electrical pulse is estimated by scaling the power with $5$ ns pulse using the ratio of the corresponding photon counts in (b) and (c).


In Fig. 2 (b), we present the time-resolved optical pulses when the LED is biased with nanosecond electrical pulses, with the inset showing the corresponding optical pulse width. The optical pulse width decreases from approximately $3$ ns to $1.6$ ns as the electrical pulse width decreases from $5$ ns to $2$ ns. When the electrical pulse width is further reduced to $\approx 0.9$ ns, the optical pulse width remains approximately $1.6$ ns (Fig. 2 (c)), indicating that it has approached the limit of the current experimental conditions. Note that this result is the convolution of the intrinsic device response with the jitter of the SPAD ($\approx 0.4$ ns) and the rising/falling time of the bias source ($\approx 0.5$ ns), and therefore serves as an upper bound on the intrinsic optical pulse width and the corresponding pulse jitter.
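
If the intrinsic response, the SPAD jitter, and the source edges are all approximated as Gaussian (an assumption; the text only claims an upper bound), their widths add in quadrature under convolution, and a rough intrinsic pulse width can be backed out:

```python
import math

measured = 1.6      # recorded optical pulse width, ns
spad_jitter = 0.4   # SPAD timing jitter, ns
source_edge = 0.5   # rising/falling time of the bias source, ns

# Gaussian approximation: widths add in quadrature under convolution
intrinsic = math.sqrt(measured**2 - spad_jitter**2 - source_edge**2)
print(f"intrinsic pulse width <= ~{intrinsic:.2f} ns")  # ~1.47 ns
```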

In Fig. 2 (d), we present the average optical power of the pulsed emission, measured by a low-bandwidth InGaAs photodiode. At $50$ MHz and $1$ MHz repetition rates, the highest average optical powers are approximately $14$ pW and $39$ pW with $5$ ns and $250$ ns electrical pulses, respectively. It can be seen that with the same duty cycle, reducing the repetition rate (increasing the bias pulse width) increases the average optical power, which is consistent with the slow rising region in Fig. 2 (a).

Our LED can produce nanosecond optical pulses ($\approx 1.6$ ns) mainly because of the fast carrier extraction and the low capacitance associated with the small active region. This small, low-capacitance active region is enabled by our device design, which includes a transparent poly-Si top contact to enhance light extraction, an active region embedded in the substrate to dissipate heat, and confinement of holes by the electric field [43]. Additionally, the nanoscale injector generates high electron temperatures, resulting in broadband emission that spectrally overlaps with Si detectors. These two advantages, combined with its intrinsic compatibility with CMOS platforms, make our Si LED ideal for active on-chip applications, including miniaturized LIDARs.

3. LIDAR experiment

Our LIDAR apparatus, as shown in Figure 3 (a), incorporates the LED that is pulse-biased by an arbitrary waveform generator (AWG). The LED emission is collimated by an objective lens with a numerical aperture (NA) of $0.95$ and directed towards a target approximately $30$ cm away by a mirror. The reflected optical signal is collected by a bi-convex lens positioned behind the mirror, which focuses the photons onto a commercial Si-based SPAD. The SPAD produces pulses in response to the detected photons, and these pulses, along with synchronous pulses from the AWG, are sent to a TCSPC module that builds a histogram of the arrival times of single photons.


Fig. 3. LIDAR apparatus. (a) Schematic of the all-Si LIDAR setup. The red solid arrow indicates the direction of the collimated LED emission towards a target, while the dashed arrows and the shaded area indicate the signal reflected by the target. The electrical and optical pulse trains are sketched in blue and red, respectively. (b) Bias pulse waveform with $50$ MHz repetition rate, $5$ ns pulse width, and $-1.5$ - $6.5$ V swing and (c) the corresponding time-resolved optical power of the LED. The components used in the setup are labeled as follows: M: mirror of $\approx 5 \times 5$ mm$^2$; OBJ: 100X, 0.95NA objective; AWG: arbitrary waveform generator; RF AMP: radio-frequency amplifier; L: bi-convex lens with a $50$ mm clear aperture and $60$ mm focal length; LP: $830$ nm longpass filter; Si SPAD: $500$ $\mu$m-diameter, passively quenched SPAD (ID Quantique); TCSPC: reversed start-stop timing electronics and histogrammer (B&H).


Alternatively, the mirror can be rotated $90^\circ$ (clockwise in Fig. 3) to direct the LED emission into the SPAD directly and generate a reference histogram. In Figure 3 (b) and (c), a typical electrical pulse train and the corresponding reference optical pulse train are presented, respectively. The cross-correlation coefficients of the reference histogram and the reflection histogram are computed, and, assuming that the optical pulse shape does not vary significantly during travel, the maximum cross-correlation coefficient corresponds to the round-trip ToF from the mirror to the target, allowing for distance estimation.
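
The ToF extraction described above amounts to locating the cross-correlation peak with sub-bin precision. A minimal sketch, using synthetic Gaussian histograms in place of the measured ones and a three-point parabolic fit around the peak bin (the same refinement used for Fig. 4 (b)):

```python
import numpy as np

def tof_from_histograms(reference, reflection, bin_ns):
    """Estimate ToF as the sub-bin lag maximizing the cross-correlation."""
    corr = np.correlate(reflection, reference, mode="full")
    k = int(np.argmax(corr))
    # Three-point parabolic interpolation around the peak bin
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag_bins = k - (len(reference) - 1) + delta
    return lag_bins * bin_ns

# Synthetic test: Gaussian pulse delayed by 2.0 ns on a 0.1 ns grid
t = np.arange(0.0, 20.0, 0.1)
ref = np.exp(-0.5 * ((t - 5.0) / 1.4) ** 2)
refl = np.exp(-0.5 * ((t - 7.0) / 1.4) ** 2)
print(f"ToF ~ {tof_from_histograms(ref, refl, 0.1):.2f} ns")  # ~2.00 ns
```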

4. Ranging results

Examples of single measurements are shown in Fig. 4 (a) and (b). A white target made of a cardboard sheet was placed at approximately $25$ cm and $30$ cm from the mirror (M in Fig. 3). The LED was biased with a $5$ ns electrical pulse width, $50$ MHz repetition rate, and a $-1.5$ V to $6.5$ V voltage swing to balance the optical pulse width ($\approx 3$ ns) and the average optical power ($\approx 14$ pW). The SPAD captured approximately $300$ reflected photons per second. Examples of the normalized reflection histograms with $5$ s integration, along with the reference histogram, are presented in Fig. 4 (a). The corresponding cross-correlation coefficients are presented in Fig. 4 (b). The peak positions were estimated by fitting the cross-correlation curves to parabolas near the maximum time bins. It can be seen that the ToFs from the two targets can be well differentiated from the cross-correlation coefficients. In these specific examples, the ToFs are approximately $1.995$ ns and $1.635$ ns, corresponding to distances of $29.9$ cm and $24.5$ cm, respectively.


Fig. 4. Resolution test with a white target. (a) Examples of reflection histograms at two different positions with $5$ s integration. (b) Corresponding cross-correlation coefficients of the reference histogram and the reflection histograms in (a). The peaks are extracted by fitting the curves to parabolas. (c) Statistics of $20$ distance estimations at three different positions and with various integration times. The error bars are the standard deviations. (d) Resolution versus integration time compared with the Cramér-Rao lower bound (CRLB). Here the resolution is defined as $2 \times$ standard deviation.


We characterize the resolution of distance measurements by repeating a single measurement $20$ times with the targets positioned at $25$ cm, $28$ cm, and $30$ cm and with various integration times. The means and the standard deviations of the results are presented in Fig. 4 (c). In all the measurement conditions, the mean is within $\approx 1$ cm of the ground truth, and the standard deviation decreases with the integration time, as expected from the Poisson statistics of photon arrival. We quantify the resolution as twice the standard deviation, which corresponds to the condition that the error bars of two estimations do not overlap in Fig. 4 (c). Here, the three targets can be resolved from each other with $2$ s integration, and the highest resolution is approximately $0.67$ cm with $20$ s integration.

The resolution of our system depends on the shape of the optical pulse and the integration time. Assuming that the reference histogram, which can be integrated for a long time, is noiseless and that both photon noise and dark noise follow Poisson statistics, we derived the Cramér-Rao lower bound (CRLB) on the resolution as follows:

$$R = c \frac{1}{\sqrt{N_p \Delta T}} \left( \sum_i \frac{ \left [ f_i^\prime \right]^2 } { f_i + \frac{N_d}{N_p} } \right)^{-\frac{1}{2}}$$
where $N_p$ is the photon rate captured by the SPAD, $N_d$ is the dark count rate per time bin, $\Delta T$ is the integration time, $c$ is the speed of light, $f_i$ is the normalized noiseless count in the $i$th time bin that satisfies $\sum _i f_i = 1$, and $f_i^\prime$ represents the first-order derivative with respect to time. The derivation of Eq. (2) is presented in Supplement 1. This result is virtually identical to the CRLB on the optical position estimator derived by Winick [50]. When $N_d \ll N_p$, $f$ has a Gaussian shape, and the time bin width is small compared to the pulse width, Eq. (2) reduces to:
$$R = \frac{c \sigma}{\sqrt{N_p \Delta T}}$$
where $\sigma$ is the standard deviation of the Gaussian shape. In our setup, $N_p \approx 300$ s$^{-1}$ and $N_d \approx 0.38$ s$^{-1}$. For simplicity, the pulse shape is fit by a Gaussian profile with $\sigma \approx 1.4$ ns; the fitting procedure is described in Supplement 1. The resultant CRLB is presented in Fig. 4 (d) along with the experimental resolutions. It can be observed that above $2$ s, the experimental resolution is tightly bounded by the CRLB, indicating good efficiency of our estimator. However, below $1$ s, the resolution is more than two times worse than the CRLB. As discussed by Ianniello [51], this is likely due to anomalous errors in correlation-based estimations, whose probability exhibits a threshold effect as the signal-to-noise ratio (SNR) decreases.
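
Plugging the quoted values into the Gaussian-pulse limit of Eq. (3) reproduces the bound at the longest integration time:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def crlb_resolution(sigma_s, photon_rate, integration_s):
    """Gaussian-pulse CRLB of Eq. (3): R = c * sigma / sqrt(Np * dT)."""
    return C * sigma_s / math.sqrt(photon_rate * integration_s)

# Values from the text: sigma ~ 1.4 ns, Np ~ 300 /s, 20 s integration
r = crlb_resolution(1.4e-9, 300.0, 20.0)
print(f"CRLB ~ {r * 100:.2f} cm")  # ~0.54 cm
```

This is consistent with the experimental best of approximately $0.67$ cm at $20$ s integration in Fig. 4 (d).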

We further demonstrate the capability of our LIDAR by imaging natural targets. Here, the pulse condition is the same as that in the resolution test. The targets are two wooden chess pieces on a motorized $xy$ scanning stage. The two targets are separated by approximately $2$ cm in both $x$ and $z$, as shown in Fig. 5 (a) and (b). A white cardboard sheet is positioned at $z \approx 30$ cm as the background. The stage is raster-scanned with a $2.5$ mm step size and a total of $21 \times 21$ steps. The scanned area, as indicated by the dashed box, is confirmed by the image of the reflection counts (Fig. 5 (c)), where low-count pixels appear at the target boundaries due to large-angle scattering. The resulting distance images are presented in Fig. 5 (d) and (f), with $2$ s and $1$ s integration per pixel, respectively. In both images, the shapes of the two targets and their relative positions are resolved. For example, the features of the knight’s head and the space between the two targets can be observed. Note that the resolving power of LIDARs depends on the resolution in all three dimensions. In our setup, the $xy$ resolution is approximately the beam width of the collimated emission ($\approx 4$ mm), while the $z$ resolution varies due to the inhomogeneous reflection across the targets but is on average $1$-$2$ cm with $2$ s integration in the relatively flat areas. Relatively noisy pixels are found around the target boundaries, especially in Fig. 5 (f), which is consistent with the low-count pixels in Fig. 5 (c).


Fig. 5. Distance images of natural targets. (a) Top view and (b) front view of two wooden chess pieces within the scanned area. The black dashed box indicates the approximate scanned area. (c) Reflection image of the targets obtained by summing the reflected photon counts. (d, f) Raw pixel-wise distance estimations of the targets. (e, g) Distance estimations after TV denoising. An empirical $\lambda = 0.5$ is used. The integration time is $2$ s for (c), (d), and (e), and $1$ s for (f) and (g).


The piece-wise smoothness of natural images can be used as a prior to improve the accuracy of the distance estimations. We implemented a straightforward total variation (TV) denoising algorithm as follows [52]:

$$\hat{z}(x,y) = \text{argmin}_z ||z - z_0||_2^2 + \lambda \left( ||\nabla_x z||_1 +||\nabla_y z||_1 \right)$$
where $\hat {z}$ and $z_0$ are the denoised and the raw distance estimations, respectively, $||\cdot ||_i$ denotes the $\ell_i$ norm, $\nabla$ stands for the first-order derivative, and $\lambda$ tunes the strength of the regularization. The TV regularization preserves edges and generates images with piece-wise smooth regions. Eq. (4) is solved with the split Bregman method and the fast cosine transform [52–54]. A detailed description of the computational method is presented in Supplement 1. The denoised distance images are shown in Fig. 5 (e) and (g) with an empirical $\lambda = 0.5$. Compared with the raw images, the two targets, as well as the background, are better resolved. In Fig. 5 (e), the distances of the knight and the bishop relative to the background are approximately $5$ cm and $3$ cm, respectively, which are accurate to within $1$ cm. Some artifacts can be observed in the denoised images. The distance estimations on the boundaries are less accurate because of the low SNR as well as possible multiple reflections from different targets. Additionally, over-smoothing occurs when the feature sizes are small, which is noticeable, for example, at the space between the targets in Fig. 5 (g). These artifacts may be mitigated by using algorithms that take the reflection intensity into account, as described in Refs. [55,56]. Deep-learning-based image processing algorithms are also promising for natural images in these low-SNR situations [57].
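
Eq. (4) is solved in this work with the split Bregman method and the fast cosine transform (Supplement 1). The sketch below instead minimizes the same anisotropic objective with plain subgradient descent, which is slower and less exact but exposes the structure of the problem:

```python
import numpy as np

def tv_denoise(z0, lam=0.5, step=0.05, n_iter=400):
    """Approximately minimize ||z - z0||_2^2 + lam*(||grad_x z||_1 +
    ||grad_y z||_1) by subgradient descent (Eq. (4); the paper uses
    split Bregman instead)."""
    z = z0.astype(float).copy()
    for _ in range(n_iter):
        grad = 2.0 * (z - z0)                 # data-fidelity gradient
        for ax in (0, 1):                     # anisotropic TV subgradient
            s = np.sign(np.diff(z, axis=ax))  # sign of forward differences
            before = [(0, 0), (0, 0)]; before[ax] = (1, 0)
            after = [(0, 0), (0, 0)]; after[ax] = (0, 1)
            # d(TV)/dz_j = sign(d_{j-1}) - sign(d_j), zero at boundaries
            grad += lam * (np.pad(s, before) - np.pad(s, after))
        z -= step * grad
    return z

# Piecewise-constant "depth map" plus noise: denoising should reduce error
rng = np.random.default_rng(0)
truth = np.zeros((20, 20)); truth[5:15, 5:15] = 1.0
noisy = truth + 0.3 * rng.standard_normal(truth.shape)
denoised = tv_denoise(noisy)
mse_n = float(np.mean((noisy - truth) ** 2))
mse_d = float(np.mean((denoised - truth) ** 2))
print(f"MSE noisy {mse_n:.3f} -> denoised {mse_d:.3f}")
```

As in the edge-preserving behavior described above, the flat regions are smoothed strongly while the step between regions survives.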

5. Discussion

In this letter, we present an all-Si LIDAR prototype utilizing an on-chip Si LED and a Si SPAD. With an optical pulse width of approximately $3$ ns and an average optical power of $14$ pW from the LED, our LIDAR demonstrates sub-centimeter distance resolution with $20$ s integration, approaching the theoretical lower bound. The LED’s high bandwidth and broad emission spectrum overlapping with the Si SPAD responsivity are key factors contributing to our LIDAR’s performance. We also performed depth imaging on natural targets and successfully resolved two targets separated by approximately $2$ cm. We further enhanced the depth images using a simple yet effective total variation denoising algorithm.

The limitations of the current system can be better understood by considering how the resolutions scale with target distance. In principle, a beam reducer, such as a Galilean telescope, can be used to reduce the collimated beam width and enhance the lateral resolution. However, reducing the beam width leads to increased beam expansion during propagation due to diffraction, thereby limiting the lateral resolution at larger target distances. Assuming a Gaussian beam profile, the maximum distance range over which the beam width remains approximately unchanged is described by the Rayleigh length $z_R = \pi w_0^2/{\lambda }$, where $w_0$ and $\lambda$ represent the beam waist radius and the wavelength, respectively. For example, for a Rayleigh length of $30$ cm, which aligns with the typical target distances in this work, the minimum beam width is approximately $0.6$ mm. Note that the above estimation disregards imperfections of the optics and assumes high beam quality such that the collimated beam divergence is diffraction-limited.

Additionally, according to Eq. (3), the depth resolution depends on the number of reflected photons, which in turn relies on the distance and reflectance of the target. A comprehensive discussion can be found in Ref. [36]. If the target has a Lambertian surface, the reflected photon number follows an inverse square law, resulting in the depth resolution scaling approximately linearly with the distance. In the current setup, if the target distance increases to $1$ m, achieving the same resolution would require approximately $10$ times longer integration time. Alternatively, if there is a strong backreflection (for example normal incidence on a mirror-like surface), the depth resolution is significantly improved and remains unaffected by the distance to the target as long as the attenuation of the environment is minimal. In Supplement 1, we present a modified LIDAR setup employing the same LED source and demonstrate sub-centimeter depth resolutions for a high reflectivity ($>90 {\%}$) target positioned approximately $1$ m away with $1$ s integration. Natural targets typically fall between the extremes of a Lambertian surface and a mirror-like surface, and the depth resolution will vary depending on the specific surface properties.
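
The scaling argument above can be made explicit: for a Lambertian target the photon rate falls as $1/d^2$, so by Eq. (3) holding the resolution fixed requires the integration time to grow as $d^2$, consistent with the roughly $10\times$ figure quoted for a $1$ m target:

```python
def integration_scale(d_new_m, d_ref_m):
    """Factor by which the integration time must grow to keep the CRLB of
    Eq. (3) fixed for a Lambertian target (photon rate ~ 1/d^2)."""
    return (d_new_m / d_ref_m) ** 2

# Moving the target from ~30 cm (this work) to 1 m
print(f"{integration_scale(1.0, 0.3):.0f}x longer integration")  # ~11x
```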

Our work demonstrates the potential of native Si LEDs in CMOS acting as LIDAR emitters. While the LED, the SPAD, and the timing electronics are on separate CMOS chips in the reported setup, it is feasible to integrate them monolithically to realize a highly miniaturized on-chip LIDAR. This type of system could potentially be produced using the CMOS node in this work (55BCD), which has already seen the recent development of SPADs and demonstration of inter-chip optical interconnects [16,17,32]. To enhance performance, improvements can be made to the emitter and the detector. For example, moving the substrate contact of our LED closer to the active region or siliciding the contact taper can improve carrier injection and reduce series resistance, thereby enhancing the device speed. If the optical pulse width of our LED can be reduced to the sub-ns level without sacrificing the optical power, $1$ s integration in our LIDAR should achieve sub-centimeter resolution. Further exploration of other Si-based emitters, such as AMLEDs [39], is also intriguing. Compared to forward-biased CMOS LEDs, AMLEDs require larger lateral carrier injection regions, which limits the integration density of a miniaturized LIDAR, and they also have lower quantum efficiency. However, when fabricated with SPADs, AMLEDs might have advantages since both require high voltage to introduce junction breakdown, and the visible emission of AMLEDs enhances the detection signal. On the detection side, the trade-off between timing jitter and detection efficiency for conventional Si SPADs may limit the performance of LIDARs. In our current setup, since the recorded optical pulse width is not limited by the SPAD jitter, using a SPAD with a thick absorption layer may be advantageous. Advanced strategies to break this trade-off include using nanoscale light-trapping structures, as reported by Zang et al. [12], which improved near-infrared detection efficiency by $2.5$ times while maintaining the timing jitter at $25$ ps. Other emerging CMOS-compatible detectors, such as Ge-on-Si SPADs, which are orders of magnitude more efficient in the near-infrared than Si SPADs, may be used with further improvements [35]. Currently, these SPADs rely on cryogenic environments to suppress dark counts.

While it is straightforward to implement scanning mirrors in our setup, we point out that it is especially attractive to fabricate individually addressed $2$D arrays of LEDs and position the chip at the back focal plane of a fixed lens for solid-state beam steering, similar to the approach described in Ref. [11]. Leveraging the nanometer precision and large scale of CMOS manufacturing, it is feasible to fabricate an array with $\mu$m pitch and mm dimensions. With a focal length comparable to the dimension of the array, a field-of-view of more than $50^\circ$ can be achieved. Some challenges associated with this configuration may include thermal management of the LED array, photon leakage to the on-chip SPAD, and potential electrical cross-talk between transmission lines.
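The beam-steering geometry above follows from simple back-focal-plane optics: an LED at lateral offset $x$ from the axis emits at angle $\theta = \arctan(x/f)$ after the lens. The array dimension, focal length, and pitch below are assumed numbers consistent with the text, not a specific design:

```python
import math

def steering_angle_deg(offset, focal_length):
    """Far-field emission angle for an emitter at lateral offset x from
    the optical axis, placed at the back focal plane of a lens with
    focal length f: theta = atan(x / f)."""
    return math.degrees(math.atan2(offset, focal_length))

# Assumed numbers: mm-scale array, focal length comparable to the array
array_dim = 1.0e-3   # m, array dimension
focal = 1.0e-3       # m, lens focal length
pitch = 1.0e-6       # m, um-pitch LED array

fov = 2 * steering_angle_deg(array_dim / 2, focal)   # full field of view
angular_pitch = steering_angle_deg(pitch, focal)     # step between neighbors near axis
print(round(fov, 1))  # 53.1 degrees
```

With a focal length equal to the array dimension, the full field of view is $2\arctan(0.5) \approx 53^\circ$, matching the "more than $50^\circ$" estimate, with an angular step of roughly $0.06^\circ$ between adjacent emitters near the axis.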

Funding

National Research Foundation Singapore; Singapore-MIT Alliance for Research and Technology Centre.

Acknowledgments

The authors are grateful for the funding from the National Research Foundation (NRF), Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. The Disruptive & Sustainable Technologies for Agricultural Precision (DiSTAP) is an interdisciplinary research group (IRG) of the Singapore MIT Alliance for Research and Technology (SMART) centre.

Disclosures

The authors declare no conflicts of interest.

Data availability

The data that support the findings of this study are available from the authors upon request.

Supplemental document

See Supplement 1 for supporting content.

References

1. G. G. Goyer and R. Watson, “The laser and its application to meteorology,” Bull. Am. Meteorol. Soc. 44(9), 564–570 (1963). [CrossRef]  

2. M. Jaboyedoff, T. Oppikofer, A. Abellán, M.-H. Derron, A. Loye, R. Metzger, and A. Pedrazzini, “Use of LIDAR in landslide investigations: a review,” Nat. Hazards 61(1), 5–28 (2012). [CrossRef]  

3. U. Weiss and P. Biber, “Plant detection and mapping for agricultural robots using a 3D LIDAR sensor,” Robotics Auton. Syst. 59(5), 265–273 (2011). [CrossRef]  

4. S. Nozette, P. Rustan, L. P. Pleasance, et al., “The Clementine mission to the Moon: Scientific overview,” Science 266(5192), 1835–1839 (1994). [CrossRef]  

5. N. Li, C. P. Ho, J. Xue, L. W. Lim, G. Chen, Y. H. Fu, and L. Y. T. Lee, “A progress review on solid-state LiDAR and nanophotonics-based LiDAR sensors,” Laser Photonics Rev. 16, 2100511 (2022). [CrossRef]  

6. I. Kim, R. J. Martins, J. Jang, T. Badloe, S. Khadir, H.-Y. Jung, H. Kim, J. Kim, P. Genevet, and J. Rho, “Nanophotonics for light detection and ranging technology,” Nat. Nanotechnol. 16(5), 508–524 (2021). [CrossRef]  

7. S. Royo and M. Ballesta-Garcia, “An overview of LIDAR imaging systems for autonomous vehicles,” Appl. Sci. 9(19), 4093 (2019). [CrossRef]  

8. X. Sun, L. Zhang, Q. Zhang, and W. Zhang, “Si photonics for practical LiDAR solutions,” Appl. Sci. 9(20), 4225 (2019). [CrossRef]  

9. C. V. Poulton, A. Yaacobi, D. B. Cole, M. J. Byrd, M. Raval, D. Vermeulen, and M. R. Watts, “Coherent solid-state LIDAR with silicon photonic optical phased arrays,” Opt. Lett. 42(20), 4091–4094 (2017). [CrossRef]  

10. C.-P. Hsu, B. Li, B. Solano-Rivas, A. R. Gohil, P. H. Chan, A. D. Moore, and V. Donzella, “A review and perspective on optical phased array for automotive LiDAR,” IEEE J. Sel. Top. Quantum Electron. 27(1), 1–16 (2021). [CrossRef]  

11. X. Zhang, K. Kwon, J. Henriksson, J. Luo, and M. C. Wu, “A large-scale microelectromechanical-systems-based silicon photonics LiDAR,” Nature 603(7900), 253–258 (2022). [CrossRef]  

12. K. Zang, X. Jiang, Y. Huo, X. Ding, M. Morea, X. Chen, C.-Y. Lu, J. Ma, M. Zhou, Z. Xia, Z. Yu, T. I. Kamins, Q. Zhang, and J. S. Harris, “Silicon single-photon avalanche diodes with nano-structured light trapping,” Nat. Commun. 8(1), 628 (2017). [CrossRef]  

13. C. Niclass, A. Rochas, P.-A. Besse, and E. Charbon, “Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes,” IEEE J. Solid-State Circuits 40(9), 1847–1854 (2005). [CrossRef]  

14. F. Villa, R. Lussana, D. Bronzi, S. Tisa, A. Tosi, F. Zappa, A. Dalla Mora, D. Contini, D. Durini, S. Weyers, and W. Brockherde, “CMOS imager with 1024 SPADs and TDCs for single-photon timing and 3-D time-of-flight,” IEEE J. Sel. Top. Quantum Electron. 20(6), 364–373 (2014). [CrossRef]  

15. K. Morimoto, A. Ardelean, M.-L. Wu, A. C. Ulku, I. M. Antolovic, C. Bruschini, and E. Charbon, “Megapixel time-gated SPAD image sensor for 2D and 3D imaging applications,” Optica 7(4), 346–354 (2020). [CrossRef]  

16. F. Gramuglia, P. Keshavarzian, E. Kizilkan, C. Bruschini, S. S. Tan, M. Tng, E. Quek, M.-J. Lee, and E. Charbon, “Engineering breakdown probability profile for PDP and DCR optimization in a SPAD fabricated in a standard 55 nm BCD process,” IEEE J. Sel. Top. Quantum Electron. 28(2: Optical Detectors), 1–10 (2022). [CrossRef]  

17. W.-Y. Ha, E. Park, D. Eom, H.-S. Park, D. Chong, S. S. Tan, M. Tng, E. Quek, C. Bruschini, E. Charbon, W.-Y. Choi, and M.-J. Lee, “Single-photon avalanche diode fabricated in standard 55 nm bipolar-CMOS-DMOS technology with sub-20 V breakdown voltage,” Opt. Express 31(9), 13798–13805 (2023). [CrossRef]  

18. J. Carreira, A. Griffiths, E. Xie, B. Guilhabert, J. Herrnsdorf, R. Henderson, E. Gu, M. Strain, and M. D. Dawson, “Direct integration of micro-LEDs and a SPAD detector on a silicon CMOS chip for data communications and time-of-flight ranging,” Opt. Express 28(5), 6909–6917 (2020). [CrossRef]  

19. M. E. Warren, D. Podva, P. Dacha, M. K. Block, C. J. Helms, J. Maynard, and R. F. Carson, “Low-divergence high-power VCSEL arrays for LIDAR application,” Proc. SPIE 10552, 105520E (2018). [CrossRef]  

20. Y.-Y. Xie, P.-N. Ni, Q.-H. Wang, Q. Kan, G. Briere, P.-P. Chen, Z.-Z. Zhao, A. Delga, H.-R. Ren, H.-D. Chen, C. Xu, and P. Genevet, “Metasurface-integrated vertical cavity surface-emitting lasers for programmable directional lasing emissions,” Nat. Nanotechnol. 15(2), 125–130 (2020). [CrossRef]  

21. M.-G. Suh and K. J. Vahala, “Soliton microcomb range measurement,” Science 359(6378), 884–887 (2018). [CrossRef]  

22. J. Riemensberger, A. Lukashchuk, M. Karpov, W. Weng, E. Lucas, J. Liu, and T. J. Kippenberg, “Massively parallel coherent laser ranging using a soliton microcomb,” Nature 581(7807), 164–170 (2020). [CrossRef]  

23. P. Olivier, “Leddar optical time-of-flight sensing technology: a new approach to detection and ranging,” LeddarTech Inc., p. 13 (2016).

24. T. Oggier, B. Büttgen, F. Lustenberger, G. Becker, B. Rüegg, and A. Hodac, “SwissRanger SR3000 and first experiences based on miniaturized 3D-TOF cameras,” Proc. of the First Range Imaging Research Day at ETH Zurich (2005).

25. A. D. Griffiths, H. Chen, D. D.-U. Li, R. K. Henderson, J. Herrnsdorf, M. D. Dawson, and M. J. Strain, “Multispectral time-of-flight imaging using light-emitting diodes,” Opt. Express 27(24), 35485–35498 (2019). [CrossRef]  

26. Terabee, Specification sheet: TeraRanger Neo ES (2022).

27. J. J. McKendry, B. R. Rae, Z. Gong, K. R. Muir, B. Guilhabert, D. Massoubre, E. Gu, D. Renshaw, M. D. Dawson, and R. K. Henderson, “Individually addressable AlInGaN micro-LED arrays with CMOS control and subnanosecond output pulses,” IEEE Photonics Technol. Lett. 21(12), 811–813 (2009). [CrossRef]  

28. M. A. Green, J. Zhao, A. Wang, P. J. Reece, and M. Gal, “Efficient silicon light-emitting diodes,” Nature 412(6849), 805–808 (2001). [CrossRef]  

29. W. L. Ng, M. Lourenco, R. Gwilliam, S. Ledain, G. Shao, and K. Homewood, “An efficient room-temperature silicon-based light-emitting diode,” Nature 410(6825), 192–194 (2001). [CrossRef]  

30. H.-C. Lee and C.-K. Liu, “Si-based current-density-enhanced light emission and low-operating-voltage light-emitting/receiving designs,” Solid-State Electron. 49(7), 1172–1178 (2005). [CrossRef]  

31. T. Hoang, P. LeMinh, J. Holleman, and J. Schmitz, “Strong efficiency improvement of SOI-LEDs through carrier confinement,” IEEE Electron Device Lett. 28(5), 383–385 (2007). [CrossRef]  

32. J. Xue, J. Kim, A. Mestre, K. Tan, D. Chong, S. Roy, H. Nong, K. Lim, D. Gray, D. Kramnik, A. Atabaki, E. Quek, and R. J. Ram, “Low-voltage, high-brightness silicon micro-LEDs for CMOS photonics,” IEEE Trans. Electron Devices 68(8), 3870–3875 (2021). [CrossRef]  

33. A. Shakoor, R. Lo Savio, P. Cardile, S. L. Portalupi, D. Gerace, K. Welna, S. Boninelli, G. Franzò, F. Priolo, T. F. Krauss, M. Galli, and L. O’Faolain, “Room temperature all-silicon photonic crystal nanocavity light emitting diode at sub-bandgap wavelengths,” Laser Photonics Rev. 7(1), 114–121 (2013). [CrossRef]  

34. R. H. Hadfield, “Single-photon detectors for optical quantum information applications,” Nat. Photonics 3(12), 696–705 (2009). [CrossRef]  

35. P. Vines, K. Kuzmenko, J. Kirdoda, D. C. Dumas, M. M. Mirza, R. W. Millar, D. J. Paul, and G. S. Buller, “High performance planar germanium-on-silicon single-photon avalanche diode detectors,” Nat. Commun. 10(1), 1086 (2019). [CrossRef]  

36. K. Kuzmenko, P. Vines, A. Halimi, R. J. Collins, A. Maccarone, A. McCarthy, Z. M. Greener, J. Kirdoda, D. C. Dumas, L. F. Llin, M. M. Mirza, R. W. Millar, D. J. Paul, and G. S. Buller, “3D LIDAR imaging using Ge-on-Si single–photon avalanche diode detectors,” Opt. Express 28(2), 1330–1344 (2020). [CrossRef]  

37. H. Wang, Y. Shi, Y. Zuo, Y. Yu, L. Lei, X. Zhang, and Z. Qian, “High-performance waveguide coupled germanium-on-silicon single-photon avalanche diode with independently controllable absorption and multiplication,” Nanophotonics 12(4), 705–714 (2023). [CrossRef]  

38. V. Agarwal, S. Dutta, A.-J. Annema, R. J. Hueting, P. G. Steeneken, and B. Nauta, “Low power wide spectrum optical transmitter using avalanche mode LEDs in SOI CMOS technology,” Opt. Express 25(15), 16981–16995 (2017). [CrossRef]  

39. V. Agarwal, S. Dutta, A.-J. Annema, R. J. Hueting, J. Schmitz, M. Lee, E. Charbon, and B. Nauta, “Optocoupling in CMOS,” in 2018 IEEE International Electron Devices Meeting (IEDM), (IEEE, 2018), pp. 32–1.

40. A. Chatterjee, P. Mongkolkachit, B. Bhuva, and A. Verma, “All Si-based optical interconnect for interchip signal transmission,” IEEE Photonics Technol. Lett. 15(11), 1663–1665 (2003). [CrossRef]  

41. S. Saito, N. Sakuma, Y. Suwa, H. Arimoto, D. Hisamoto, H. Uchiyama, J. Yamamoto, T. Sakamizu, T. Mine, S. Kimura, T. Sugawara, M. Aoki, and T. Onai, “Observation of optical gain in ultra-thin silicon resonant cavity light-emitting diode,” in 2008 IEEE International Electron Devices Meeting, (IEEE, 2008), pp. 1–4.

42. V. Puliyankot, G. Piccolo, R. J. Hueting, and J. Schmitz, “Toward GHz switching in SOI light emitting diodes,” IEEE Trans. Electron Devices 65(10), 4413–4420 (2018). [CrossRef]  

43. Z. Li, J. Xue, M. de Cea, J. Kim, H. Nong, D. Chong, K. Y. Lim, E. Quek, and R. J. Ram, “A sub-wavelength Si LED integrated in a CMOS platform,” Nat. Commun. 14(1), 882 (2023). [CrossRef]  

44. J. H. Stathis, “Percolation models for gate oxide breakdown,” J. Appl. Phys. 86(10), 5757–5766 (1999). [CrossRef]  

45. E. Miranda and J. Sune, “Electron transport through broken down ultra-thin SiO2 layers in MOS devices,” Microelectron. Reliab. 44(1), 1–23 (2004). [CrossRef]  

46. N. Akil, V. Houtsma, P. LeMinh, J. Holleman, V. Zieren, D. De Mooij, P. Woerlee, A. Van Den Berg, and H. Wallinga, “Modeling of light-emission spectra measured on silicon nanometer-scale diode antifuses,” J. Appl. Phys. 88(4), 1916–1922 (2000). [CrossRef]  

47. J. Mihaychuk, M. Denhoff, S. McAlister, W. McKinnon, and A. Chin, “Broad-spectrum light emission at microscopic breakdown sites in metal-insulator-silicon tunnel diodes,” J. Appl. Phys. 98(5), 054502 (2005). [CrossRef]  

48. E. F. Schubert, Light-emitting diodes (Cambridge University, 2006).

49. Q. Xu, B. Schmidt, S. Pradhan, and M. Lipson, “Micrometre-scale silicon electro-optic modulator,” Nature 435(7040), 325–327 (2005). [CrossRef]  

50. K. A. Winick, “Cramér–Rao lower bounds on the performance of charge-coupled-device optical position estimators,” J. Opt. Soc. Am. A 3(11), 1809–1815 (1986). [CrossRef]  

51. J. Ianniello, “Time delay estimation via cross-correlation in the presence of large estimation errors,” IEEE Trans. Acoust., Speech, Signal Process. 30(6), 998–1003 (1982). [CrossRef]  

52. T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM J. Imaging Sci. 2(2), 323–343 (2009). [CrossRef]  

53. Z. Li, N. Persits, D. J. Gray, and R. J. Ram, “Computational polarized raman microscopy on sub-surface nanostructures with sub-diffraction-limit resolution,” Opt. Express 29(23), 38027–38043 (2021). [CrossRef]  

54. M. K. Ng, R. H. Chan, and W.-C. Tang, “A fast algorithm for deblurring models with Neumann boundary conditions,” SIAM J. Sci. Comput. 21(3), 851–866 (1999). [CrossRef]  

55. J. Rapp and V. K. Goyal, “A few photons among many: Unmixing signal and noise for photon-efficient active imaging,” IEEE Trans. Comput. Imaging 3(3), 445–459 (2017). [CrossRef]  

56. J. Tachella, Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, S. Mclaughlin, and J.-Y. Tourneret, “Bayesian 3D reconstruction of complex scenes from single-photon LIDAR data,” SIAM J. Imaging Sci. 12(1), 521–550 (2019). [CrossRef]  

57. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2018), pp. 9446–9454.

Supplementary Material (1)

Supplement 1: supplementary materials




Figures (5)

Fig. 1. Summary of the LED characteristics. (a) Schematic top view of the LED. Gate oxide and the back-end-of-line (BEOL) dielectrics are not shown. (b) Schematic side view of the LED. The hollow and solid circles indicate holes and electrons, respectively. (c) Micrograph of the emission pattern overlaid on a micrograph of the LED captured using a CMOS camera. The integration time for the emission pattern was $5$ s and the LED was biased at $5.0$ V. Electrons are injected from the taper in the dashed box; the other taper is not used in this work. (d) Typical emission spectra of the LED under DC and $5$ ns pulsed bias, compared with the quantum efficiency (QE) of a commercial Si SPAD. The emission spectra were measured using a custom-built InGaAs spectrometer. The DC bias is $6$ V and the pulsed bias has a voltage swing of $-1.5$ to $6.5$ V. The raw spectrum (circles) is smoothed (solid line) by a Savitzky–Golay filter (order 3, frame length 21). The SPAD QE is digitized from the product data sheet.
Fig. 2. Pulse characterization. (a) Time-resolved emission of the CMOS LED under long pulsed bias ($250$ ns) with $0$–$4$ V (top panel) and $-1.5$ to $6.5$ V (bottom panel) voltage swings. (b) Time-resolved emission of the CMOS LED under nanosecond pulsed bias with $-1.5$ to $6.5$ V swing. The inset shows optical pulse width versus electrical pulse width. (c) Time-resolved emission of the CMOS LED under sub-nanosecond pulsed bias. The inset shows the electrical pulse shape. (d) Average optical power at pulse repetition rates of $50$ MHz (left panel) and $1$ MHz (right panel). The voltage swings in (d) are $-1.5$ to $6.5$ V except for the $0.9$ ns data point, which is approximately $0$–$5$ V. The optical pulses in (a)–(c) are measured by a Si SPAD using TCSPC. The powers in (d), except the $0.9$ ns point, are measured using a low-bandwidth, amplified InGaAs photodiode. The power with the $0.9$ ns electrical pulse is estimated by scaling the power with the $5$ ns pulse by the ratio of the corresponding photon counts in (b) and (c).
Fig. 3. LIDAR apparatus. (a) Schematic of the all-Si LIDAR setup. The red solid arrow indicates the direction of the collimated LED emission towards a target, while the dashed arrows and the shaded area indicate the signal reflected by the target. The electrical and optical pulse trains are sketched in blue and red, respectively. (b) Bias pulse waveform with $50$ MHz repetition rate, $5$ ns pulse width, and $-1.5$ to $6.5$ V swing and (c) the corresponding time-resolved optical power of the LED. The components used in the setup are labeled as follows: M: mirror of $\approx 5 \times 5$ mm$^2$; OBJ: 100X, 0.95 NA objective; AWG: arbitrary waveform generator; RF AMP: radio-frequency amplifier; L: bi-convex lens with a $50$ mm clear aperture and $60$ mm focal length; LP: $830$ nm longpass filter; Si SPAD: $500$ $\mu$m-diameter, passively quenched SPAD (ID Quantique); TCSPC: reversed start-stop timing electronics and histogrammer (Becker & Hickl).
Fig. 4. Resolution test with a white target. (a) Examples of reflection histograms at two different positions with $5$ s integration. (b) Corresponding cross-correlation coefficients of the reference histogram and the reflection histograms in (a). The peak positions are extracted by fitting the curves with parabolas. (c) Statistics of $20$ distance estimations at three different positions and with various integration times. The error bars are the standard deviations. (d) Resolution versus integration time compared with the Cramér–Rao lower bound (CRLB). Here the resolution is defined as $2\times$ the standard deviation.
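The estimator of Fig. 4(b) — cross-correlating the reference histogram with each reflection histogram and fitting a parabola to the correlation peak — can be sketched as below. The Gaussian pulse shape, $50$ ps bin width, and delay are synthetic stand-ins rather than our measured data:

```python
import numpy as np

def tof_estimate(reference, reflection, bin_width):
    """Estimate time-of-flight: cross-correlate the two histograms,
    then refine the peak location with a three-point parabolic fit."""
    corr = np.correlate(reflection, reference, mode="full")
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # sub-bin peak offset
    lag = (k - (len(reference) - 1)) + delta
    return lag * bin_width

# Synthetic histograms: ~0.8 ns RMS Gaussian pulses on 50 ps bins
t = np.arange(400) * 50e-12
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 0.8e-9) ** 2)

# True delay of 2.02 ns, i.e. a 0.4-bin fraction beyond an integer lag
tof = tof_estimate(pulse(5.0e-9), pulse(7.02e-9), 50e-12)
```

On this noiseless example the parabolic interpolation recovers the sub-bin delay to well below one $50$ ps bin; with Poisson-noisy histograms the residual spread is what the CRLB of Eq. (2) bounds.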
Fig. 5. Distance images of natural targets. (a) Top view and (b) front view of two wooden chess pieces within the scanned area. The black dashed box indicates the approximate scanned area. (c) Reflection image of the targets obtained by summing the reflected photon counts. (d, f) Raw pixel-wise distance estimations of the targets. (e, g) Distance estimations after TV denoising. An empirical $\lambda = 0.5$ is used. The integration time is $2$ s for (c), (d), and (e), and $1$ s for (f) and (g).

Equations (4)


$$P \propto \eta\,\hbar\omega\, n p V \tag{1}$$
$$R = c\sqrt{\frac{1}{N_p \Delta T}}\left(\sum_i \frac{[f_i']^2}{f_i + N_d/N_p}\right)^{-\frac{1}{2}} \tag{2}$$
$$R = \frac{c\,\sigma}{\sqrt{N_p \Delta T}} \tag{3}$$
$$\hat{z}(x,y) = \operatorname*{argmin}_z \; \|z - z_0\|_2^2 + \lambda\left(\|\nabla_x z\|_1 + \|\nabla_y z\|_1\right) \tag{4}$$
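Eq. (4) is the total-variation (TV) denoising applied to the depth maps of Fig. 5, solved in this work with the split Bregman method [52]. As a minimal stand-in, the sketch below minimizes a 1D version of the same objective by gradient descent on a smoothed $L_1$ term; the signal, noise level, and solver parameters are illustrative assumptions:

```python
import numpy as np

def tv_denoise_1d(z0, lam=0.5, eps=1e-3, step=0.01, n_iter=500):
    """Minimize ||z - z0||_2^2 + lam * ||D z||_1 (a 1D Eq. (4)), with the
    L1 norm smoothed as sum(sqrt((Dz)^2 + eps)) so plain gradient descent
    applies. D is the forward-difference operator."""
    z = z0.astype(float).copy()
    for _ in range(n_iter):
        g = np.diff(z)                    # D z
        p = g / np.sqrt(g * g + eps)      # derivative of smoothed |g|
        dtp = -np.diff(p, prepend=0.0, append=0.0)  # adjoint D^T p
        z -= step * (2.0 * (z - z0) + lam * dtp)
    return z

# Synthetic piecewise-constant depth profile with Gaussian noise
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])
noisy = truth + 0.2 * rng.standard_normal(truth.size)
denoised = tv_denoise_1d(noisy, lam=0.5)  # empirical lambda, as in Fig. 5

mse_noisy = np.mean((noisy - truth) ** 2)
mse_denoised = np.mean((denoised - truth) ** 2)
```

The TV penalty suppresses noise in flat regions while largely preserving the step, which is why the piecewise-smooth depth maps in Figs. 5(e) and 5(g) look cleaner than the raw estimates; the split Bregman solver reaches the same minimizer far more efficiently than this gradient-descent sketch.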