Optica Publishing Group

Micro-Doppler effect based vibrating object imaging of coherent detection GISC lidar

Open Access

Abstract

Vibration, rotation and movement of an object generally have adverse effects on detection and imaging in active and passive sensing. However, by combining coherent detection with ghost imaging (GI), we propose a novel lidar system called coherent detection ghost imaging via sparsity constraints (GISC) lidar. The phase information of the signal light is measured by coherent detection, and by analysing the phase change of the signal light, the vibration information of the object is obtained. Coherent detection GISC lidar is experimentally demonstrated to image vibrating objects. Based on the short-time Fourier transform, two objects vibrating with different phases and amplitudes are imaged independently by coherent detection GISC lidar. The vibration information further suggests a capability for object identification. We believe that coherent detection GISC lidar will pave the way for imaging and identifying vibrating objects.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) is a novel staring imaging method proven to reconstruct objects with spatial resolution from a single-pixel detector [1–7]. By calculating the intensity correlation function between the echo from the objects and the transmitted spatially modulated light, GI can obtain an image of unknown objects. GI has aroused wide interest in remote sensing [8–12], hyperspectral microscopy [13] and biomedical imaging [14,15] for its high sensitivity, high resolution and robustness. Since the first outdoor GI lidar experiment was conducted in 2012 [8], GI lidar has attracted much attention as a new type of lidar [9]. Shortly after, GI lidars for three-dimensional imaging were successively reported [11,16]. These GI lidars are based on direct intensity detection, which requires that the intensity of the echo be greater than the background noise. In 2016, Yang et al. proposed a heterodyne 3D GI model which utilizes heterodyne detection of the echo photocurrent [17]. Heterodyne detection allows the signal to be weaker than the background noise as long as the noise is uncorrelated with the echo [18,19]. This suggests that GI via heterodyne detection may show its strength in low signal-to-noise ratio (SNR) situations. Deng et al. proposed pulse-compression ghost imaging via coherent detection [20,21], in which the movement of the object is detected by optical coherent detection. Experimental investigations of heterodyne GI and pulse-compression GI via coherent detection are reported in [22,23], and the factors that may influence the imaging quality of these two new types of ghost imaging lidar have been investigated as well [21,23]. Moreover, GI via coherent detection has been shown to image even when the echo power is as low as 5 pW, three orders of magnitude lower than that required by ghost imaging via direct intensity detection [22], demonstrating its imaging power under weak echoes.
Inspired by the optical Doppler effect, the micro-Doppler information introduced by the micro-motion of objects may also be helpful for imaging [24]. Considering that a common object usually consists of several parts with relative movement among them, the micro-Doppler effect may carry extra motion information, such as vibration frequency and amplitude, rather than only the speed information available from direct intensity detection. Therefore, it is worthwhile to verify the feasibility of GI via coherent detection in scenarios where objects exhibit micro-vibration.

More importantly, the micro-Doppler information serves as a signature to identify objects [25]. Objects vibrating in different states can be detected by coherent detection at the same time. In this paper, we propose a lidar named coherent detection GISC lidar, which is a time-frequency-domain GI lidar via coherent detection. The imaging ability of coherent detection GISC lidar is demonstrated. Time-frequency analysis is used to distinguish between objects in different vibration states and to reconstruct their images. The capability of identification is discussed.

2. Model and theory

The schematic of coherent detection GISC lidar is shown in Fig. 1. The source is a continuous-wave (CW) narrow-band fiber laser (NKT: Koheras Basik$^{\textrm{TM}}$) with wavelength $\lambda$ = 1550 nm. The laser light is intensity modulated by a Mach-Zehnder type electro-optical modulator (EOM, iXblue: MXLN-10) driven by a chirped signal generator (CSG). The chirped amplitude-modulated (CAM) light is then split into two paths: one serves as the local oscillator (LO) injected into the integrated coherent receiver (ICR, Optoplex: Integrated ${90^ \circ }$ Hybrid Coherent Receiver); the other is amplified by a single-frequency fiber amplifier (SFFA). The SFFA ensures that the linewidth of the laser does not broaden during fiber amplification. After the collimator, the CAM light illuminates the digital micromirror device (DMD, Texas Instruments: DLP 7000). The DMD modulates the spatial distribution of the CAM light according to the preloaded pattern ${I_r}({x_r})$. The lens images the light at the DMD plane onto the object plane. The object plane is moving, and within it there is relative movement between the objects. The objects are composed of two narrow slots attached to a speaker. The two slots are located separately on the surface of the speaker: one near the center, the other farther from the center. This arrangement ensures that the two slots vibrate in different states. The coupler collects the echo reflected from the objects and injects it into the ICR as the signal light. The ICR consists of a 2$\times$4 90$^{\circ }$ optical hybrid and two balanced detectors. In the ICR, the signal light and the LO are phase shifted and mixed before being injected into the balanced detectors. The outputs of the ICR are two orthogonal current signals, I and Q, from the two balanced detectors. The frequency of I/Q is the beat frequency of the LO and the signal light. The bandpass module (BPM) filters the out-of-band noise of I/Q.
The I/Q signals are acquired and stored in a PC. Combining I and Q into a complex signal $C$, time-frequency analysis is performed by the short-time Fourier transform (STFT), and the corresponding time-frequency spectrum ${I_s}(t,f)$ is obtained. By summing up the signal part of ${I_s}(t,f)$, the signal of the objects is obtained as ${I_s}$. It should be pointed out that the arbitrary function generator (AFG) works as a synchronization device to control the CSG, the DMD and the acquisition card in the PC. By measuring the intensity correlation between ${I_r}({x_r})$ and $I_s$, we obtain the objects’ information.

Fig. 1. Schematic of coherent detection GISC lidar. EOM: Electro-optical Modulator, CSG: Chirped Signal Generator, SFFA: Single Frequency Fiber Amplifier, DMD: Digital Micromirror Device, ICR: Integrated Coherent Receiver, LO: Local Oscillator, BPM: Bandpass Module, AFG: Arbitrary Function Generator.

According to GI theory [21], the intensity fluctuation correlation function between the preloaded patterns ${I_r}({x_r})$ and the intensity ${I_s}$ can be expressed as

$$G({x_r}) = \left\langle {\delta {I_r}({x_r})\delta {I_s}} \right\rangle,$$
where $\left \langle \bullet \right \rangle$ stands for ensemble average over independent DMD patterns, $\delta {I_r}({x_r}) = {I_r}({x_r}) - \left \langle {{I_r}({x_r})} \right \rangle$ and $\delta {I_s} = {I_s} - \left \langle {{I_s}} \right \rangle$.
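The correlation of Eq. (1) can be sketched numerically. Below is a minimal 1-D toy example, assuming random illumination patterns and a hypothetical two-pixel "slot" object; it is an illustration of the fluctuation-correlation principle, not the authors' processing code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D scene: recover a two-pixel "slot" by correlating
# random patterns I_r with the bucket signal I_s, as in Eq. (1).
n_pix, n_meas = 32, 4096
obj = np.zeros(n_pix)
obj[10:12] = 1.0                          # the "slot"

I_r = rng.random((n_meas, n_pix))         # patterns preloaded on the DMD
I_s = I_r @ obj                           # single-pixel (bucket) values

# G(x) = <dI_r(x) dI_s>: ensemble average of the intensity fluctuations
G = np.mean((I_r - I_r.mean(0)) * (I_s - I_s.mean())[:, None], axis=0)

assert np.argmax(G) in (10, 11)           # the peak sits on the slot
```

Because the patterns are independent across pixels, the covariance is nonzero only where the object reflects, so $G(x)$ reproduces the object's transmittance up to a scale factor.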

The CAM light can still be regarded as monochromatic light, since its temporal bandwidth is narrow relative to the optical frequency. The complex envelope of the CAM light after the EOM can be expressed as

$$E_s^n(x_0,t) = \left[ {1 + ms(t - nT)} \right]P(t - nT)A_s^n(x_0)\exp(j\phi_0),$$
where $n$ stands for the $n_{th}$ pulse, $m$ is the modulation depth, $T$ is the pulse interval, and $s(t)$ denotes the chirped signal from the CSG, with $s(t) = \cos (2\pi {f_0}t + \pi \beta {t^2})$. $f_0$ is the starting modulation frequency and $\beta = B/T_p$ is the frequency modulation rate, where $B$ is the bandwidth of the chirped signal and $T_p$ denotes the modulation pulse width. $A_s^n(x_0)$ is the amplitude of $E_s^n(x_0,t)$ and can be taken as a constant, and $\phi _0$ is the initial phase. $P(t)$ is a simple rectangular pulse waveform, denoted as
$$P(t) = \left\{ \begin{array}{ll} 1, &0 < t \le {T_p}\\ 0, &else \end{array} \right..$$

As the CAM light is spatially modulated by the DMD, the pattern after the DMD plane can be described as

$$E_s^n(x_s,t) = \left[ {1 + ms(t - nT)} \right]P(t - nT)E_s^n(x_s),$$
where $E_s^n(x_s)$ is the $n_{th}$ DMD pattern. The temporal-spatial modulation is thus accomplished. The patterns preloaded on the DMD should satisfy
$$\left\langle {E_s^n(x_s)E_s^{n, * }(x{'_s})} \right\rangle = {I_0}\delta \left( {{x_s} - x{'_s}} \right),$$
where $I_0$ is a constant and $\delta (x)$ is the Dirac’s delta function.

The propagation of the CAM light can be described by the extended Huygens–Fresnel principle [26]. The CAM light illuminates the objects, and the backscattered light from the objects propagates to the receiving plane. The light field at the object plane $z$ is

$$E_o^n({x_o},t) = \frac{{\exp \left[ {jk\left( {{l_1} + {l_2}} \right)} \right]}}{M}\exp \left( { - \frac{{j\pi }}{{\lambda Mf}}x_o^2} \right)E_s^n(\frac{{{x_o}}}{M},t - \frac{{{l_1} + {l_2}}}{c}),$$
where $x_o$ denotes the transverse coordinate at the object plane, $k$ denotes the wave number of the light, $l_1$ is the length between DMD and lens, $l_2$ is the length between lens and object, $M = {l_2}/{l_1}$ is the magnification of the lens, $c$ is the speed of the light.

When the objects vibrate along the optical axis, the range $l_2$ can be written as $l_2 = r_0 + A_p\sin(2\pi f_v t + \phi_p)$, where $r_0$ is the initial position of the objects, $A_p$ stands for the vibration amplitude of the $p_{th}$ object, $f_v$ denotes the vibration frequency of the object, and $\phi _p$ is the initial phase of the vibration of the $p_{th}$ object. In our case, the vibration direction of the speaker is parallel to the incident CAM light. The vibration amplitude of the speaker is less than 2 mm, which keeps the objects within the depth of field of the transmitting lens during their movement.
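The sinusoidal range model above implies an instantaneous micro-Doppler shift $f_d(t) = \frac{1}{2\pi}\frac{d(2kl_2)}{dt} = \frac{4\pi f_v A_p\cos(2\pi f_v t + \phi_p)}{\lambda}$. A short numerical check, with assumed illustrative values for $f_v$, $A_p$ and $\phi_p$ (not the experimental ones):

```python
import numpy as np

# Assumed vibration parameters for a target on the optical axis:
# l2(t) = r0 + A_p*sin(2*pi*f_v*t + phi_p), probed at 1550 nm.
lam, f_v, A_p, phi_p = 1550e-9, 500.0, 1e-6, 0.0   # m, Hz, m, rad

t = np.linspace(0, 4e-3, 4000)                     # two vibration periods
# micro-Doppler shift, Hz: derivative of the round-trip phase 2*k*l2(t)
f_d = 4*np.pi*f_v*A_p*np.cos(2*np.pi*f_v*t + phi_p)/lam

f_d_max = 4*np.pi*f_v*A_p/lam                      # peak Doppler excursion
print(f"peak micro-Doppler shift: {f_d_max/1e3:.2f} kHz")
```

Even a micron-scale vibration at 500 Hz produces a kHz-scale Doppler excursion at 1550 nm, which is why coherent detection can resolve it.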

Supposing the distances from the objects to the lens, $l_2$, and to the coupler, $l_3$, are equal, i.e., $l_2 = l_3$, then, following Ref. [20], the backscattered light field of the $p_{th}$ object at the coupler plane is

$$E_{c,p}^n({x_c},t) = {\frac{{\exp \left[ {jk{l_2}} \right]}}{{j\lambda {l_2}}}\int {d{x_o}} E_o^n({x_o},t - \frac{{{l_2}}}{c}){o_p}({x_o})\exp \left[ {\frac{{j\pi ({x_c} - {x_o})^2}}{{\lambda {l_2}}}} \right]} ,$$
where $x_c$ denotes the transverse coordinate at the coupler plane, $o_p(x_o)$ denotes the reflection coefficient of the $p_{th}$ object.

The signal light $E_s^n(t)$ is the sum of the CAM light returning from all the objects,

$$\begin{aligned} E_s^n(t) &= \sum_p {E_{s,p}^n(t)} = \sum_p {\int {d{x_c}} E_{c,p}^n({x_c},t)} \\ &= \sum_p {\left[ {1 + ms(t - nT-\tau)} \right]P(t - nT-\tau)A_{s,p}^n\exp \left[ {j\phi _{s,p}^n} \right]}, \end{aligned}$$
where $\tau = 2l_2/c$ denotes the time delay between the LO and the signal light, $A_{s,p}^n$ is the amplitude of $E_{s,p}^n(t)$, $\phi _{s,p}^n$ stands for the phase of $E_{s,p}^n(t)$. Then the signal light and the LO are injected into the ICR. The LO light field is assumed to be uniform, namely
$$E_{LO}^n(t) = \left[ {1 + ms(t - nT)} \right]P(t - nT)A_{LO}\exp \left[ {j\phi _{LO}^n} \right],$$
where $A_{LO}$ is the amplitude of $E_{LO}^n(t)$, $\phi _{LO}^n$ stands for the phase of $E_{LO}^n(t)$.

The LO and the signal light are temporally coherent, which ensures that coherent mixing takes place in the ICR. According to Ref. [20], the four outputs of the 2$\times$4 90$^{\circ }$ optical hybrid are

$$\left\{ \begin{array}{l} E_s^n(t) + E_{LO}^n(t)\\ E_s^n(t) - E_{LO}^n(t)\\ E_s^n(t) + E_{LO}^n(t)\exp (j\pi /2)\\ E_s^n(t) - E_{LO}^n(t)\exp (j\pi /2) \end{array} \right. .$$

In the balanced detectors, optical mixing takes place. The modulated light is demodulated and detected by the two photodiodes of each balanced detector. The current outputs I/Q of the two balanced detectors are

$$\left\{ \begin{array}{l} I:{\left| {E_s^n(t) + E_{LO}^n(t)} \right|^2} - {\left| {E_s^n(t) - E_{LO}^n(t)} \right|^2}\\ Q:{\left| {E_s^n(t) + E_{LO}^n(t)\exp (j\pi /2)} \right|^2} - {\left| {E_s^n(t) - E_{LO}^n(t)\exp (j\pi /2)} \right|^2} \end{array} \right. .$$

It can easily be seen that current I is the sum of the cross terms of the LO and signal light, and that current Q is I shifted by 90$^{\circ}$. The currents I/Q are

$$\left\{ \begin{array}{l} I:2\left[ {E_s^n(t)E{{_{LO}^n}^*}(t) + E{{_s^n}^*}(t)E_{LO}^n(t)} \right]\\ Q:2\left[ {E_s^n(t)E{{_{LO}^n}^*}(t)\exp ( - j\pi /2) + E{{_s^n}^*}(t)E_{LO}^n(t)\exp (j\pi /2)} \right] \end{array} \right. ,$$
where ${\ast}$ denotes complex conjugation. Through Eq. (12), the high-frequency light is converted into a baseband electrical signal with narrow bandwidth. Substituting Eq. (8) and Eq. (9) into Eq. (12), I/Q becomes
$$\left\{ \begin{aligned}&I:4\sum_p \begin{array}{c} A_{s,p}^n{A_{LO}}\left[ {1 + \frac{1}{2}{m^2}\cos (2\pi \beta \tau t + \theta _p^n) + ms(t - nT - \tau ) + ms(t - nT)} \right] \\ {\times}P(t - nT)\cos \left[ {2\pi {f_{d,p}}(t)t + \phi _{s,p}^n - \phi _{LO}^n} \right] \end{array} \\ &Q:4\sum_p \begin{array}{c} A_{s,p}^n{A_{LO}}\left[ {1 + \frac{1}{2}{m^2}\cos (2\pi \beta \tau t + \theta _p^n) + ms(t - nT - \tau ) + ms(t - nT)} \right] \\ {\times}P(t - nT)\sin \left[ {2\pi {f_{d,p}}(t)t + \phi _{s,p}^n - \phi _{LO}^n} \right] \end{array} ,\end{aligned}\right.$$
where ${\theta _p^n} = 2\pi {f_0}\tau - \pi \beta {\tau ^2}$, and $f_{d,p}(t)$ denotes the micro-Doppler frequency of the $p_{th}$ object, ${f_{d,p}}(t) = \frac {1}{{2\pi }}\frac {{d\phi }}{{dt}} = \frac {1}{{2\pi }}\frac {{d(2k{l_2})}}{{dt}} = \frac {{4\pi {f_v}{A_p}\cos (2\pi {f_v}t + {\phi _p})}}{\lambda }$. After being filtered by the BPM, I/Q is acquired by the acquisition card and processed in the PC; the complex signal $C = I + jQ$ is
$$C = \sum_p {4A_{s,p}^n{A_{LO}}} \left[ {1 + \frac{1}{2}{m^2}\cos (2\pi \beta \tau t + \theta _p^n)} \right]\exp \left\{ {j\left[ {2\pi {f_{d,p}}(t)t + \phi _{s,p}^n - \phi _{LO}^n} \right]} \right\}.$$
$C$ consists of two frequency components: one is $f_{d,p}(t)$; the other is the mixture of the ranging component $\beta \tau$ and $f_{d,p}(t)$. It should be pointed out that there exists a limit on the measurable vibration frequency, because electronic equipment works within a finite bandwidth. The bandwidths of the ICR and the acquisition card set the limit on the micro-Doppler frequency. According to Shannon’s sampling theorem [27], $2{f_{d,p}}(t) < \min \{ B{W_{ICR}},B{W_{ACQ}}\}$, where $B{W_{ICR}}$ and $B{W_{ACQ}}$ are the bandwidths of the ICR and the acquisition card, respectively. Therefore,
$${f_{d,p}} = \frac{{4\pi {f_v}{A_p}\cos (2\pi {f_v}t + {\phi _p})}}{\lambda } < \frac{1}{2}\min \{ B{W_{ICR}},B{W_{ACQ}}\} ,$$
where $\min \{ A,B\}$ returns the smaller of $A$ and $B$. According to Eq. (15), the limit on the measurable vibration frequency is
$${f_v} < \frac{\lambda }{{8\pi {A_p}}}\min \{ B{W_{ICR}},B{W_{ACQ}}\}.$$
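Eq. (16) is straightforward to evaluate numerically. The sketch below uses assumed (illustrative) receiver and digitizer bandwidths, not the bandwidths of the instruments in the experiment:

```python
import numpy as np

# Eq. (16): upper bound on the measurable vibration frequency.
# All numerical values below are assumptions for illustration only.
lam = 1550e-9                      # wavelength, m
A_p = 1e-6                         # vibration amplitude, m (assumed)
BW_ICR, BW_ACQ = 1.6e9, 1.0e9      # ICR / acquisition bandwidths, Hz (assumed)

f_v_max = lam/(8*np.pi*A_p)*min(BW_ICR, BW_ACQ)
print(f"f_v must stay below {f_v_max/1e6:.1f} MHz")
```

Note the trade-off the bound encodes: a larger vibration amplitude $A_p$ proportionally lowers the highest vibration frequency the electronics can follow.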

In the post-processing, $C$ is split into $N$ slices by the short-time Fourier transform. The $i_{th}$ slice of the time-frequency image is

$$STFT_C^n(i,f) = \sum_{g ={-} \infty }^\infty {C[g]H[g - iR]\exp ( - j2\pi fg)},$$
where $H(t)$ is a window function of length $L$ and $R$ is the hop size between two successive slices. Thus, a time-frequency image $STFT_C^n$ is obtained. By choosing an appropriate $L$, a $STFT_C^n$ suitable for observation is obtained. In the image $STFT_C^n$, the spectrum changes continuously; according to this characteristic, the spectrum of each object can be separated based on their different vibration states.
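The windowed-FFT form of Eq. (17) can be sketched directly with NumPy. The signal below is synthetic: a unit-amplitude complex $C = I + jQ$ whose phase carries a sinusoidal micro-Doppler, with assumed values for the sample rate, vibration amplitude and window/hop sizes:

```python
import numpy as np

# Assumed parameters: 2 MS/s complex record, 500 Hz vibration,
# 0.1 mm amplitude at 1550 nm (illustrative values only).
fs = 2e6
n_samp = 20000                             # 10 ms record
t = np.arange(n_samp)/fs
lam, f_v, A_p = 1550e-9, 500.0, 1e-4

phase = (4*np.pi/lam)*A_p*np.sin(2*np.pi*f_v*t)
C = np.exp(1j*phase)                       # synthetic C = I + jQ

# Eq. (17): Hann window of length L_win, hop R between slices
L_win, R = 128, 32
H = np.hanning(L_win)
n_slices = (n_samp - L_win)//R + 1
S = np.array([np.fft.fft(C[i*R:i*R + L_win]*H) for i in range(n_slices)])
freqs = np.fft.fftfreq(L_win, 1/fs)

ridge = freqs[np.argmax(np.abs(S), axis=1)]   # dominant frequency per slice
# the ridge traces f_d(t) = 4*pi*f_v*A_p*cos(2*pi*f_v*t)/lam (~405 kHz peak)
```

The window length trades frequency resolution ($fs/L$) against how much $f_d(t)$ changes within one slice; here the 64 µs window is short relative to the 2 ms vibration period, so the ridge follows the cosine.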

By summing up the continuous curve in the $STFT_C^n$ image, the magnitude of the object echo in one measurement $n$ is obtained as

$${I^n}(f) = \sum_i {\left| {STFT_C^n(i,{f_{peak}})} \right|} ,$$
where $f_{peak}$ denotes the Doppler frequency in the $i_{th}$ slice of $STFT_C^n$. Through the short-time Fourier transform, time-frequency analysis reveals the variation of the Doppler or micro-Doppler spectrum.

Finally, by utilizing the fluctuation correlation function, the object can be reconstructed as

$$G(x) = \frac{1}{M_c}\sum_{n = 1}^{M_c} {{I^n}(f)I_r^n(x)} - \frac{1}{M_c}\sum_{n = 1}^{M_c} {{I^n}(f)} \frac{1}{M_c}\sum_{n = 1}^{M_c} {I_r^n(x)},$$
where $M_c$ is the measurement number and ${I_r^n(x)}$ is the $n_{th}$ predetermined pattern. It should be pointed out that, according to Eq. (18), the power of the curve that contributes to ${I^n}(f)$ can be selected according to the motion characteristics of the objects. This means that objects in the same field of view can be imaged independently. The imaging principle of coherent detection GISC lidar is the same as that of traditional GI. What distinguishes it is that it can obtain the moving and vibrating information and image vibrating objects at the same time. Furthermore, coherent detection GISC lidar can image each object independently.
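The frequency-selective reconstruction of Eq. (19) can be sketched with a toy model. Here two hypothetical sub-objects beat at two distinct, assumed Doppler tones (a simplification of the time-varying micro-Doppler curves); selecting each spectral line yields a separate bucket signal, and each object is recovered independently:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "independent imaging": two sub-objects with assumed, well-separated
# Doppler tones f1, f2; all parameters are illustrative, not experimental.
n_pix, n_meas = 32, 2048
fs, n_t = 1e5, 1000
t = np.arange(n_t)/fs
f1, f2 = 10e3, 25e3                        # Hz, chosen on the FFT grid

obj1 = np.zeros(n_pix); obj1[8:10] = 1.0   # "left slot"
obj2 = np.zeros(n_pix); obj2[20:22] = 1.0  # "right slot"
I_r = rng.random((n_meas, n_pix))          # DMD patterns

b1, b2 = I_r @ obj1, I_r @ obj2            # per-pattern echo amplitudes
C = b1[:, None]*np.exp(2j*np.pi*f1*t) + b2[:, None]*np.exp(2j*np.pi*f2*t)

spec = np.abs(np.fft.fft(C, axis=1))
freqs = np.fft.fftfreq(n_t, 1/fs)
I1 = spec[:, np.argmin(np.abs(freqs - f1))]   # object 1's spectral line
I2 = spec[:, np.argmin(np.abs(freqs - f2))]   # object 2's spectral line

def G(Is):  # fluctuation correlation of Eq. (19)
    return np.mean((I_r - I_r.mean(0))*(Is - Is.mean())[:, None], axis=0)

assert np.argmax(G(I1)) in (8, 9) and np.argmax(G(I2)) in (20, 21)
```

Each spectral line acts as a separate single-pixel detector for its own object, which is the essence of imaging objects independently within one field of view.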

3. Experimental results

To verify the imaging capability of coherent detection GISC lidar, an experimental setup is established according to Fig. 1. The concrete parameters in the experiments are as follows: the objects consist of two slots, each with a width of 1 mm, and the distance between them is about 2 mm. The distance between the objects and the lens is 0.5 m. The vibration frequency of the speaker is 500 Hz. The patterns preloaded on the DMD are Hadamard patterns, and the number of patterns is 1024. The transverse size of the pattern at the DMD plane is set to 109 $\mu m$ $\times$ 109 $\mu m$, and we choose a reflection area of 32 $\times$ 32 pixels (one pixel is equal to the transverse size of the patterns) for projection. The measurement number is 1024. For simplicity, the experiments are divided into three parts: (1) imaging validation of coherent detection GISC lidar; (2) multiple measurements to improve image quality; (3) sub-object independent imaging experiment. In the experiments, the chirped amplitude modulation is used only for static-object imaging.

First, the imaging capability of coherent detection GISC lidar is investigated. The chirped electrical signal of the CSG is set to 500 MHz bandwidth with a starting frequency of 100 MHz, the pulse width $T_p$ is set to 5 ms, and the pulse repetition frequency is set to 100 Hz. The objects are static. The imaging results are shown in Fig. 2. It can be seen that the main body of the objects is reconstructed. However, the imaging quality is unsatisfactory. The distance $l_2$ is so short that the beat frequency is close to DC, where electrical noise is heavy; this may be the main reason for the unsatisfactory imaging quality.

Fig. 2. Experimental results of coherent detection GISC lidar. The left is the objects photographed with a near-infrared CCD; the right is the reconstruction result.

The following is the demonstration of coherent detection GISC lidar on vibrating objects. The objects vibrate at a frequency of 500 Hz. The DMD is set to expose for 5 ms in every 10 ms. Figure 3 illustrates the imaging results. Figure 3(a) is the image of $STFT_C^n$ in one measurement. As can be seen, the two slots vibrate with different phases and amplitudes, which is consistent with the experimental arrangement. The vibration period of the two slots is about 2 ms; thus, the corresponding vibration frequency is about 500 Hz. Figure 3(b) shows the objects photographed with a near-infrared CCD. Figure 3(c) is the reconstruction result. The imaging quality is better than that in Fig. 2, probably because the micro-Doppler frequency of the two slots is far from the low-frequency domain. During post-processing, low-frequency components are filtered out; by selecting effective signals, the imaging quality improves.

Fig. 3. Experimental results of coherent detection GISC lidar. (a) The image of $STFT_C^n$; (b) the objects photographed with a near-infrared CCD; (c) the reconstruction result for measurement number 1024.

Although the reconstruction result for the vibrating objects is better than that for the static objects, the imaging quality is still not satisfactory. Increasing the measurement number is an effective method to improve performance [28,29]. To improve the imaging results, the relationship between the frame number of single-pattern exposure and the imaging performance is investigated. The frame number is defined as the number of exposures of a single pattern on the DMD. The total number of Hadamard patterns remains 1024. The imaging results are shown in Fig. 4 (1)-(11), in which the frame number for single-pattern exposure varies from one to eleven. The imaging performance improves as the frame number increases. To evaluate the quality of the reconstructed images, the structural similarity (SSIM) index is introduced [30]. Assuming the object ground truth is $X$ and the reconstructed image is $Y$, the SSIM index of the two images can be determined as

$$SSIM(X,Y) = \frac{{(2{\mu _X}{\mu _Y} + {c_1})(2{\sigma _{XY}} + {c_2})}}{{(\mu _X^2 + \mu _Y^2 + {c_1})(\sigma _X^2 + \sigma _Y^2 + {c_2})}},$$
where $\mu _X$ and $\mu _Y$ are the mean values of $X$ and $Y$, $\sigma _X^2$ and $\sigma _Y^2$ are their variances, $\sigma _{XY}$ is the covariance of $X$ and $Y$, and $c_1 = (k_1L)^2$ and $c_2 = (k_2L)^2$ are constants used to maintain stability, where $L$ is the dynamic range of the pixel values; here $k_1= 0.01$, $k_2 = 0.03$ and $L = 255$. The right panel of Fig. 4 shows the SSIM curve of the reconstructed image with varying frame numbers for single-pattern exposure. It illustrates that when the frame number for single-pattern exposure is smaller than 4, the SSIM increases sharply; when it is over 4, the SSIM improves slowly and becomes steady.
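Eq. (20) can be implemented directly from the global statistics. The sketch below is the single-window variant written out in the text, not the sliding-window form of Ref. [30]; the test image is random, for illustration only:

```python
import numpy as np

def ssim(X, Y, k1=0.01, k2=0.03, L=255):
    """Global SSIM index of Eq. (20), computed from whole-image statistics
    (single-window variant, not the sliding-window form of Ref. [30])."""
    c1, c2 = (k1*L)**2, (k2*L)**2
    mx, my = X.mean(), Y.mean()
    vx, vy = X.var(), Y.var()
    cxy = np.mean((X - mx)*(Y - my))      # covariance of X and Y
    return ((2*mx*my + c1)*(2*cxy + c2)) / ((mx**2 + my**2 + c1)*(vx + vy + c2))

img = np.random.default_rng(2).integers(0, 256, (32, 32)).astype(float)
assert abs(ssim(img, img) - 1.0) < 1e-12  # identical images give SSIM = 1
```

For identical inputs the numerator and denominator coincide term by term, so the index is exactly 1; any mismatch in mean, variance or covariance pulls it below 1.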

Fig. 4. Experimental results for different frame numbers of single-pattern exposure. (1)-(11) The frame number for single-pattern exposure varies from one to eleven; the right is the SSIM curve.

Inspired by the image in Fig. 3(a), we try to reconstruct the two slots independently. Because the field of view is not large enough, the difference between the amplitudes of the two slots is not great, and the micro-Doppler vibration curves of the two slots cannot be distinguished clearly. Nevertheless, there are still some points in Fig. 3(a) that can be recognized as belonging independently to each of the two slots; these points lie around the micro-Doppler extremum frequencies. Using the 10-frame single-pattern exposure data, the intensities of these points are summed up independently, and the imaging result is shown in Fig. 5. Figure 5(a) shows the image of the short-time Fourier transform of $C$. Figures 5(b) and (c) are the reconstruction results of the left slot and the right slot, respectively. Since only a few points can be used to image the two slots independently, the imaging results are not fully satisfactory. However, this can still be regarded as a demonstration of the independent imaging of the two slots.

Fig. 5. Experimental results of independent imaging of two slots. (a) The image of $STFT_C^n$; (b) the reconstructed left slot; (c) the reconstructed right slot.

4. Discussions

As can be seen from Fig. 3(a), the micro-Doppler curve can be accessed and demodulated by coherent detection GISC lidar. Generally, a time-frequency image is used to extract frequency information only. In this work, the amplitude and phase of the micro-Doppler curve are used to reconstruct the objects' information. Compared with previous GI lidars, coherent detection GISC lidar can access motion information on a smaller scale. Moreover, objects move, vibrate and rotate with unique characteristics, and their motion information can be detected by coherent detection. According to their motion information, the objects can be imaged independently even when they are in the same field of view, which may greatly help object imaging and identification. Furthermore, in some special scenarios, with the help of the micro-Doppler curve, an object could even be identified before being imaged. If a database of these motion characteristics is established, identification before reconstruction can be realized. Figure 6 shows a simulated image of $STFT_C^n$ for a moving motor and two blades of a common DJI Phantom 4 unmanned aerial vehicle. According to [25], a micro-Doppler classification method can be applied.

Fig. 6. Simulated image of $STFT_C^n$ for a moving motor and two blades of a common DJI Phantom 4 unmanned aerial vehicle.

5. Conclusion

In conclusion, coherent detection GISC lidar is demonstrated. With coherent detection GISC lidar, objects vibrating in different states can be imaged independently, even when they are in the same field of view and illuminated at the same time, and their motion status can be recorded throughout. Through multi-frame exposure for a single pattern, improvement of the imaging quality has been demonstrated. This work indicates that optical micro-Doppler information may be useful for classification and identification.

Funding

Youth Innovation Promotion Association of the Chinese Academy of Sciences (2013162-2017); Defense Industrial Technology Development Program of China (D040301); Inter-satellite High-speed Laser Communication Machine Program (112004-AD2003).

Disclosures

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, “Quantum and classical coincidence imaging,” Phys. Rev. Lett. 92(3), 033601 (2004). [CrossRef]  

2. J. Cheng and S. Han, “Incoherent coincidence imaging and its applicability in x-ray diffraction,” Phys. Rev. Lett. 92(9), 093903 (2004). [CrossRef]  

3. D.-Z. Cao, J. Xiong, and K. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A 71(1), 013801 (2005). [CrossRef]  

4. D. Zhang, Y.-H. Zhai, L.-A. Wu, and X.-H. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30(18), 2354–2356 (2005). [CrossRef]  

5. M. D’Angelo and Y. Shih, “Quantum imaging,” Laser Phys. Lett. 2(12), 567–596 (2005). [CrossRef]  

6. J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. 11(4), 949–993 (2012). [CrossRef]  

7. H. Guo, R. He, C. Wei, Z. Lin, L. Wang, and S. Zhao, “Compressed ghost edge imaging,” Chin. Opt. Lett. 17(7), 071101 (2019). [CrossRef]  

8. C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101(14), 141123 (2012). [CrossRef]  

9. M. Chen, E. Li, W. Gong, Z. Bo, X. Xu, C. Zhao, X. Shen, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints in real atmosphere,” Opt. Photonics J. 03(02), 83–85 (2013). [CrossRef]  

10. B. I. Erkmen, “Computational ghost imaging for remote sensing,” J. Opt. Soc. Am. A 29(5), 782–789 (2012). [CrossRef]  

11. W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6(1), 26133 (2016). [CrossRef]  

12. C. Wang, X. Mei, L. Pan, P. Wang, W. Li, X. Gao, Z. Bo, M. Chen, W. Gong, and S. Han, “Airborne near infrared three-dimensional ghost imaging lidar via sparsity constraint,” Remote Sens. 10(5), 732 (2018). [CrossRef]  

13. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7(2), 186–191 (2020). [CrossRef]  

14. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36(3), 394–396 (2011). [CrossRef]  

15. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

16. H. Yu, E. Li, W. Gong, and S. Han, “Structured image reconstruction for three-dimensional ghost imaging lidar,” Opt. Express 23(11), 14541–14551 (2015). [CrossRef]  

17. X. Yang, Y. Zhang, C. Yang, L. Xu, Q. Wang, and Y. Zhao, “Heterodyne 3d ghost imaging,” Opt. Commun. 368, 1–6 (2016). [CrossRef]  

18. C. Allen and S. Gogineni, “A fiber-optic-based 1550-nm laser radar altimeter with rf pulse compression,” in IEEE 1999 International Geoscience and Remote Sensing Symposium. IGARSS’99 (Cat. No. 99CH36293), vol. 3 (IEEE, 1999), pp. 1740–1742.

19. C. Allen, Y. Cobanoglu, S. K. Chong, and S. Gogineni, “Performance of a 1319 nm laser radar using rf pulse compression,” in IGARSS 2001. Scanning the Present and Resolving the Future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217), vol. 3 (IEEE, 2001), pp. 997–999.

20. C. Deng, W. Gong, and S. Han, “Pulse-compression ghost imaging lidar via coherent detection,” Opt. Express 24(23), 25983–25994 (2016). [CrossRef]  

21. C. Deng, L. Pan, C. Wang, X. Gao, W. Gong, and S. Han, “Performance analysis of ghost imaging lidar in background light environment,” Photonics Res. 5(5), 431–435 (2017). [CrossRef]  

22. L. Pan, C. Deng, W. Gong, and S. Han, “Experimental demonstration of pulse-compression ghost imaging via coherent detection,” in AOPC 2020: Optical Sensing and Imaging Technology, vol. 11567 (International Society for Optics and Photonics, 2020), p. 115670S.

23. L. Pan, C. Deng, Z. Bo, X. Yuan, D. Zhu, W. Gong, and S. Han, “Experimental investigation of chirped amplitude modulation heterodyne ghost imaging,” Opt. Express 28(14), 20808–20816 (2020). [CrossRef]  

24. A. Eden, “The search for christian doppler,” in The Search for Christian Doppler, (Springer, 1992), pp. 1–4.

25. S. Björklund, T. Johansson, and H. Petersson, “Evaluation of a micro-doppler classification method on mm-wave data,” in 2012 IEEE Radar Conference, (IEEE, 2012), pp. 0934–0939.

26. J. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

27. C. E. Shannon, “Communication in the presence of noise,” Proc. IRE 37(1), 10–21 (1949). [CrossRef]  

28. F. Ferri, D. Magatti, L. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

29. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

30. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2 (IEEE, 2003), pp. 1398–1402.
