Abstract
At present, a large number of samplings is required to reconstruct an image of the objects in ghost imaging. When imaging moving objects, it is hard to perform enough samplings during the interval in which the objects can be regarded as immobile, so the reconstructed image of the objects deteriorates. In this paper, we propose a temporal intensity difference correlation ghost imaging scheme, in which a high-quality image of the moving objects within a complex scene can be extracted with far fewer samplings. The spatial sparsity of the moving objects is exploited, while only a linear algorithm is required. This method decreases the number of required samplings, thus relaxing the requirements of a high-refresh-rate illumination source and a high-speed detector for obtaining the information of moving objects with ghost imaging.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Ghost imaging (GI) is an active imaging scheme in which information about the object is obtained from the high-order coherence of the light field [1–5]. In a GI system, the distribution of the illumination light field imprinted on the object is recorded by a reference CCD camera, and the intensity of the light transmitted or reflected by the object is collected by a point-like detector. Neither of the two detectors can retrieve the image of the object independently; the image is reconstructed from the second-order correlation between the light intensity signals recorded by both detectors. In particular, when the distribution of the light field recorded by the reference CCD camera can be calculated or obtained in advance, the reference arm can be removed and GI can be performed with only a point-like detector. This is the basic idea of computational ghost imaging (CGI) [6,7]. CGI is a single-pixel imaging scheme, which allows us to image 2- or 3-dimensional objects using a point-like detector [8,9]. Moreover, the response and readout speed of point-like detectors are usually far faster than those of a common array sensor, so the light intensity can be recorded more quickly, which speeds up the imaging process of GI. A point-like detector also usually provides higher sensitivity than an array sensor, making it possible to acquire the image of the object at low photon flux [10]. In addition, with a single-pixel detector, GI can image objects at operating wavelengths such as X-ray or terahertz, where array sensors are usually expensive or hard to obtain [11–14]. GI can also be more robust against disordered media than traditional imaging in some cases [15–18]. Last but not least, the sampling process in GI can cooperate with machine-learning data processing to improve the quality of the reconstructed image [19,20].
In GI, the spatial information of the scene to be imaged is acquired from the second-order coherence of the illumination light field [21], which requires averaging over a sufficiently long time. During this process, which in practice means averaging over a large number of samplings, the object must be static or approximately immobile; otherwise the quality of the reconstructed image decreases. Moreover, the contrast-to-noise ratio of the reconstructed image is proportional to the square root of the number of samplings and inversely proportional to the size of the scene to be imaged [22,23]. That is, when the field of view of the scene is larger, more samplings are required within the interval in which the scene can be regarded as immobile. So when there are moving objects in the scene, both a high refresh rate of the illumination and high-speed detection are necessary; indeed, the faster the objects move, the faster the illumination and detection system must be [24]. For cases where the system is not as fast as required, modified imaging schemes and dedicated algorithms have been proposed to acquire the information of the object during its motion [25–27], but they usually require a high-precision tracking and aiming system or prior information about the moving object, which makes the imaging system complicated or makes it hard to track multiple objects. In addition, to increase the number of samplings per unit time, a multispectral light source can be used to illuminate the object, with the intensity of each spectral channel recorded individually [28]. However, this method does not actually reduce the number of samplings required. Compressive sensing (CS) is a highly effective way to reduce the number of samplings required to obtain information [29], and compressive GI has been proposed for tracking a moving object at low light levels with fewer samplings [30].
This benefits from the fact that the moving object is usually spatially sparse against a static background. However, image reconstruction in CS amounts to solving a convex optimization problem, which usually demands considerable time and memory. If a linear algorithm can be developed for this problem, it will undoubtedly be beneficial for the timeliness of information acquisition of the moving object.
In this paper, we propose temporal intensity difference correlation GI (TDGI) to image moving objects against a complex background. In our scheme, the image of the moving objects is reconstructed with a greatly reduced number of samplings compared with traditional GI (TGI). Besides, the spatial sparsity of the moving objects is exploited while only a linear algorithm is required, so the data processing of our scheme is much faster than that of compressive GI, and the time needed to reconstruct the image of the moving objects is greatly reduced. In the experiments, tracking and imaging of two moving objects with varying speed and direction are achieved. Since relative movement between multiple moving objects can be handled, our method also works when the shape of the object is changing. This makes the scheme promising for tracking and imaging moving objects.
2. Theoretical analysis
In a ghost imaging system with thermal light, the distribution of the light field on the reference CCD and that of the light field imprinted on the object are
In a GI system, the light transmitted or reflected by the object is collected by a point-like detector, which is called a bucket detector. For a temporally varying scene, the image of the scene at time $t$ can be reconstructed with,
A temporally varying scene can be discretized into a set of scene frames, $O(x,\;y,\;t_k), k=1,2\cdots M$, and each scene frame can be treated as static within the interval $\Delta t_k= t_{k+1}-t_k$. During $\Delta t_k$, $N$ frames of speckle patterns $I(\vec r_r,\;t_{kn}), n=1,2\cdots N$ can be used to illuminate the scene frame, and the corresponding bucket-detector signals $B(t_{kn}),\;n=1,2\cdots N$ can be obtained. For two different scene frames $O(x,\;y,\;t_k)$ and $O(x,\;y,\;t_l)$, the difference between the two sequences of bucket-detector signals at $t_k$ and $t_l$ is
3. Experimental results
The experimental setup is shown in Fig. 1. A laser beam with a wavelength of 532 nm is expanded by a 4-f system and then scattered by a rotating ground glass (RGG). After the RGG there is another 4-f system, with an aperture located in the focal plane, which allows the quasi-parallel light in the scattering field to pass. After the Fourier plane of lens $L4$, a beamsplitter divides the light into two arms, each of which passes through a 2-f system. The light field in the reference arm is recorded by CCD1 (AVT Stingray F-125 B). In the object arm the illumination pattern is reflected by a DMD (TI DN 2503686), which is used to display the scene to be imaged. For calculating the Mean Square Error (MSE) [33] between the intensity distribution of the displayed scene and the reconstructed image in GI, the displayed scene is imaged onto CCD2 (AVT Stingray F-125 B) by lens $L7$, so both traditional imaging (TI) and ghost imaging can be performed in this setup. When GI is performed, the intensity recorded by CCD2 is integrated and then discretized to 0–255. With this discretization, the dynamic range of the integrated signal from CCD2, which affects the result in GI, is closer to that of the signal from a real 8-bit bucket detector. In our scheme, the same sequence of speckle patterns is used to illuminate the scene at different durations, which requires the system to produce the same sequence of speckle patterns repeatedly. This is achieved by using a precision step motor (SHINANO Y07-43D1-4275) to rotate the RGG along a repeatable trajectory, so this setup is also a CGI system. The reference illumination patterns can be stored by CCD1 in advance; then the same sequence of illumination patterns is reproduced and TDGI can be performed with CCD2 alone.
Firstly, we consider the case of one moving object in a complex scene, to show that our method can largely reduce the number of samplings required in ghost imaging. The scene to be imaged is a top view of a block with a car going across it, as shown in Fig. 2(a). The scene can be discretized into five scene frames $S(\vec r_o,\;t_k),\;k=1,2\cdots 5$, consisting of the background $(S0)$ and the car in four different positions $(S1$-$S4)$, shown in the first row of Fig. 2(a). The duration of each scene frame is $\Delta t_k$, within which the movement distance of the car is less than the size of a speckle, so the car can be taken as immobile. At $t_1$, $N$ frames of speckle patterns $I(\vec r_r,\;t_{1n}), n=1,2\cdots N$ are used to illuminate the scene $(S0)$, which is the block without the car, and the bucket-detector signal sequence $B(t_{1n})$ is obtained and stored. At $t_l, l=2,3\cdots 5$ the same sequence of speckle patterns is used, and ${B(t_{ln})}$ is obtained. Then $B_{TD}(t_{ln})$ is calculated and the image of the car in the scene frame $S(\vec r_o,\;t_l)$ is reconstructed with Eq. (10). The results are shown in the third row of Fig. 2(a), each of which is from 500 samplings. For comparison, TGI with Eq. (5) is also performed to image the whole scene for each of S1-S4 with the same 500 samplings. The result is shown in the second row of Fig. 2(a), in which the position of the car is marked by a dashed box; the information of the car is submerged in the noise. In practice, the image of the car can also be composited with the image of the block, which can be obtained in advance, to acquire the relative position of the car with respect to the block. The size of the scene on CCD2 is $400 \times 400$ pixels and the average full width at half maximum (FWHM) of a speckle unit is about $10 \times 10$ pixels, so the average number of speckles imprinted on the scene is about $1600$.
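The sampling-and-subtraction procedure described above can be sketched numerically. The following Python simulation is an illustrative sketch, not the experimental code: the scene size, speckle statistics, and number of samplings are our own assumptions, far smaller than the $400 \times 400$-pixel experiment. Two scene frames are illuminated with the same speckle sequence, the bucket signals are subtracted, and the difference is correlated with the patterns, so only the changed (moving) part of the scene survives:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32      # scene size in pixels (illustrative assumption)
N = 2000        # samplings per scene frame (illustrative assumption)

# Static background plus a sparse change standing in for the moving object.
background = 0.5 * rng.random((H, W))
frame_k = background.copy()                 # scene frame at t_k (no object)
frame_l = background.copy()
frame_l[10:14, 10:14] += 1.0                # object present at t_l

# The SAME speckle sequence illuminates both scene frames.
I = rng.random((N, H, W))

# Bucket signals: total intensity collected from each frame.
B_k = np.tensordot(I, frame_k, axes=([1, 2], [0, 1]))
B_l = np.tensordot(I, frame_l, axes=([1, 2], [0, 1]))

# Temporal intensity difference of the bucket signals; the static
# background contribution cancels sample by sample.
dB = B_l - B_k

# Second-order correlation <dB I> - <dB><I> recovers only the change.
dG = np.tensordot(dB - dB.mean(), I - I.mean(axis=0), axes=(0, 0)) / N

i, j = np.unravel_index(np.argmax(dG), dG.shape)
print("brightest reconstructed pixel:", (i, j))  # should land inside the 4x4 object
```

Because the background cancels in `dB` before the correlation, the number of samplings needed scales with the sparse change rather than with the whole scene, which mirrors the reduction reported for Fig. 2.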
The size of the car is $1056$ pixels, so its sparsity relative to the whole scene is about 0.007. Each of the results in Fig. 2(a) is reconstructed from 500 samplings, so the sample rate is about 0.313. Benefiting from the spatial sparsity of the car, with the same number of samplings the quality of the TDGI results is far higher than that of TGI. To demonstrate that TDGI can reduce the number of samplings required in GI, the MSE of the reconstructed image in TGI and that in TDGI are also calculated for different numbers of samplings. The result is shown in Fig. 2(b): the bottom axis is the number of samplings of TDGI and the top axis is the number of samplings of TGI. In TDGI, a higher-quality image of the moving object can be obtained with a greatly reduced number of samplings compared with TGI.
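For reference, the MSE figure of merit used above can be computed as in the minimal sketch below; the rescaling of both images to a common $[0, 1]$ range is our own assumption, since the exact convention of [33] is not restated here:

```python
import numpy as np

def mse(recon, truth):
    """Mean square error between two images after rescaling each to [0, 1]."""
    def rescale(img):
        img = np.asarray(img, dtype=float)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)
    return float(np.mean((rescale(recon) - rescale(truth)) ** 2))

# Identical images give 0; fully inverted binary images give 1.
print(mse(np.eye(4), np.eye(4)))      # 0.0
print(mse(np.eye(2), 1 - np.eye(2)))  # 1.0
```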
To demonstrate the performance of our method in a more complex situation, a scene with multiple moving objects is also considered. The first row of Fig. 3(a) shows the five discretized scene frames to be imaged, which consist of the background $(S0)$ and two cars in four different positions ($S1$-$S4$). The speed and direction of each moving car vary, and there is also relative movement between the two cars, which can be extended to the case that the shape of the moving object changes during its evolution. The results obtained by TDGI are shown in the third row of Fig. 3(a). For comparison, TGI is also performed with the same number of samplings and the results are shown in the second row of Fig. 3(a); the positions of the moving cars are marked by dashed boxes. In this experiment, the average number of speckles imprinted on the scene is about $5625$. Each reconstructed image in Fig. 3(a) is obtained from 3000 samplings, so the sample rate for each image is about 0.533. To demonstrate that TDGI can reduce the number of samplings required in GI, the MSE of the reconstructed image in TGI and that in TDGI are again calculated for different numbers of samplings; the result is shown in Fig. 3(b). In TDGI, a higher-quality image of the moving objects can be obtained with a greatly reduced number of samplings compared with TGI. These results imply that our method can improve the performance of ghost imaging in tracking and imaging multiple moving objects, and that it remains effective when the shape of the moving object changes during its evolution.
CS is widely used in GI to improve the quality of the reconstructed image in cases where the image is not required in real time. In practice, however, tracking is usually achieved from sequential images of the moving object during its evolution, which requires "seeing" where the object is in the reconstructed image at a desirable frequency. Therefore, obtaining the image of the object in real time is significant. The GI process consists of sampling and data processing. At present, benefiting from the development of illumination sources with high refresh rates [34–37], tens of thousands of samplings can be performed within a subsecond in a common GI system. However, the time consumption of different data-processing methods varies greatly. This makes data processing play an important role in the timeliness of a tracking and imaging system, so its time consumption is an issue to be considered. From this point of view, we compare the time consumption of the data processing in our scheme with that of CS. The reconstructed images are shown in Fig. 4(a). The first row is the result from CS, in which the number of samplings is denoted by $N_C$; the second row is the result from TDGI, in which the number of samplings is denoted by $N_T$. The two images in the same dashed box numbered by $K$ are of the same quality as characterized by MSE. Figure 4(b) shows the MSE of the reconstructed images in Fig. 4(a); the bottom axis is the index $K$ of the reconstructed images. We can see that with fewer than 1600 samplings, the quality of the reconstructed images from TDGI and from the CS method is almost the same, yet the CS method requires more time than TDGI. As the number of samplings increases, the CS method can achieve an image with higher quality than TDGI for the same number of samplings.
As mentioned before, the time consumption of sampling in GI is subsecond, which is negligible compared with that of data processing. So the comparison is performed between the time consumption of the data processing in TDGI and that in the CS method, which is shown in Fig. 4(c). Comparing Figs. 4(b) and 4(c), the quality of the images reconstructed by TDGI and by the CS method is almost the same, but the time consumption of TDGI is about an order of magnitude less than that of CS, which here is based on gradient projection for sparse reconstruction in the two-dimensional discrete cosine transform (2D-DCT) domain [38]. Both algorithms ran on a desktop with an Intel Xeon CPU$\times$8 @ 2.10 GHz and 96 GB of RAM.
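The gap in processing time is easy to reproduce in miniature. The sketch below is our own illustration, not the paper's benchmark: the problem sizes are assumptions, and a plain ISTA loop stands in for the GPSR solver of [38]. It contrasts the single correlation product of TDGI with an iterative sparse-recovery loop on the same measurements:

```python
import time
import numpy as np

rng = np.random.default_rng(1)
N, P = 2000, 1024                       # samplings, scene pixels (assumed sizes)
A = rng.random((N, P))                  # speckle patterns, one row per sampling
x_true = np.zeros(P)
x_true[100:116] = 1.0                   # sparse moving object (flattened)
b = A @ x_true                          # bucket-signal differences

# Linear TDGI-style reconstruction: one centered correlation product.
t0 = time.perf_counter()
x_lin = (A - A.mean(axis=0)).T @ (b - b.mean()) / N
t_lin = time.perf_counter() - t0

# Iterative sparse recovery (plain ISTA, a simple stand-in for GPSR).
t0 = time.perf_counter()
x = np.zeros(P)
step = 1.0 / np.linalg.norm(A, "fro") ** 2    # conservative step size
lam = 0.05                                    # sparsity weight (assumed)
for _ in range(100):
    x = x + step * (A.T @ (b - A @ x))                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
t_ista = time.perf_counter() - t0

print(f"linear: {t_lin:.4f} s, iterative: {t_ista:.4f} s")
```

The linear reconstruction costs one matrix product, while each of the 100 iterations costs two, so the iterative route is slower by roughly the iteration count; the absolute timings depend on hardware, but the ordering matches Fig. 4(c).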
Moreover, in TDGI the image of the moving objects is reconstructed from the second-order correlation between the temporal difference of the bucket signals and the illumination patterns. TDGI is different from calculating the difference between two images of the scene reconstructed via TGI at two different durations. This will be demonstrated as follows. For two different scene frames $G(\vec r_r,\;t_k)$ and $G(\vec r_r,\;t_l)$, two different sequences of speckle patterns $I(\vec r_r,\;t_{kn})$ and $I(\vec r_r,\;t_{ln})$ are used to illuminate the scene, providing bucket-detector signals $B(t_{kn})$ and $B(t_{ln})$. The image of the temporally varying component of the scene $\Delta G'(\vec r_r,\;t_l)$ is reconstructed with
$$\Delta G'(\vec r_r,\;t_l)=\left[\langle B(t_{ln})I(\vec r_r,\;t_{ln})\rangle-\langle B(t_{ln})\rangle\langle I(\vec r_r,\;t_{ln})\rangle\right]-\left[\langle B(t_{kn})I(\vec r_r,\;t_{kn})\rangle-\langle B(t_{kn})\rangle\langle I(\vec r_r,\;t_{kn})\rangle\right].$$
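The distinction can be illustrated with a small simulation (a sketch under assumed parameters, not the paper's data): subtracting the bucket signals before correlating cancels the static background exactly, whereas subtracting two TGI images reconstructed from independent speckle sequences keeps the background-induced fluctuations of both reconstructions:

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 24      # scene size (assumed)
N = 1500        # samplings per scene frame (assumed)

background = rng.random((H, W))
frame_k = background.copy()
frame_l = background.copy()
frame_l[8:12, 8:12] += 1.0          # the moving object

def bucket(patterns, scene):
    """Bucket-detector signal for each illumination pattern."""
    return np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

def correlate(patterns, b):
    """Second-order correlation <b I> - <b><I>."""
    return np.tensordot(b - b.mean(), patterns - patterns.mean(axis=0),
                        axes=(0, 0)) / len(b)

# TDGI: one speckle sequence, subtract bucket signals, then correlate.
I_same = rng.random((N, H, W))
dG_td = correlate(I_same, bucket(I_same, frame_l) - bucket(I_same, frame_k))

# Image difference: two independent sequences, reconstruct, then subtract.
I_k, I_l = rng.random((N, H, W)), rng.random((N, H, W))
dG_im = (correlate(I_l, bucket(I_l, frame_l))
         - correlate(I_k, bucket(I_k, frame_k)))

# Residual fluctuations outside the object region.
mask = np.ones((H, W), bool)
mask[8:12, 8:12] = False
print("TDGI residual:", np.mean(dG_td[mask] ** 2))
print("image-difference residual:", np.mean(dG_im[mask] ** 2))
```

In the image-difference route the bucket-signal variance is set by the whole scene, while in TDGI it is set only by the sparse change, so the off-object residual of TDGI is markedly smaller for the same number of samplings.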
4. Discussion
In this paper, we have proposed a temporal intensity difference correlation GI scheme, in which the information of the static component of the scene is removed and the image of the moving objects can be obtained with far fewer samplings. Besides, data processing with a linear algorithm gives the scheme good real-time performance, which is significant for tracking. In the more general situation where the information of both the moving objects and the background is required, our scheme can be a supplement to TGI: the image of the moving objects can be obtained by our scheme in greatly reduced time, and then the image of the background can be obtained by TGI without the blurring caused by the moving objects. The performance improvement of our scheme benefits from the spatial sparsity of the moving objects, since the number of samplings required to image the moving objects is determined by the size of the whole scene in TGI, but by the size of the moving objects in TDGI. If the moving objects are so fast that enough samplings, as determined by the size of the objects, cannot be performed within the interval in which the objects can be regarded as immobile, the image reconstructed via TDGI will also deteriorate. TDGI is demonstrated with a CGI system here and can also be used in a single-pixel camera by modulating the light reflected from two different scene frames with the same mask sequence. This scheme can be applied to target tracking and live-tissue imaging, in which fewer samplings mean a lower probability of being perceived or less damage. Besides, with fewer samplings, this method can largely reduce the data volume required when GI is used for surveillance video.
Furthermore, in our scheme the sparsity of the moving objects is exploited with an intensity correlation algorithm, which is linear and runs far faster than solving the convex optimization problem in CS. This improves the timeliness of information acquisition, which is of practical significance in tracking and imaging moving objects. Besides, the spatial sparsity of the objects is exploited simply by subtracting the signals of the bucket detector. This is probably difficult to achieve with traditional imaging, since the quality of an image captured by a camera decreases if another image is simply subtracted from it. We believe our scheme can be a good starting point for taking advantage of the sparsity of objects with a linear algorithm.
Funding
National Natural Science Foundation of China (11774431, 61701511); Science and Technology Project of Hunan Province (2017RS3043); College of Advanced Interdisciplinary Studies.
References
1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]
2. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Correlated imaging, quantum and classical,” Phys. Rev. A 70(1), 013802 (2004). [CrossRef]
3. R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, “Quantum and classical coincidence imaging,” Phys. Rev. Lett. 92(3), 033601 (2004). [CrossRef]
4. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. H. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]
5. S. S. Hodgman, W. Bu, S. B. Mann, R. I. Khakimov, and A. G. Truscott, “Higher-Order Quantum Ghost Imaging with Ultracold Atoms,” Phys. Rev. Lett. 122(23), 233601 (2019). [CrossRef]
6. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]
7. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]
8. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]
9. W. Gong, C. Zhang, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6(1), 26133 (2016). [CrossRef]
10. P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6(1), 5913 (2015). [CrossRef]
11. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]
12. A. X. Zhang, Y. H. He, L. A. Wu, L. M. Chen, and B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018). [CrossRef]
13. Y. Altmann, S. McLaughlin, M. J. Padgett, V. K. Goyal, A. O. Hero, and D. Faccio, “Quantum-inspired computational imaging,” Science 361(6403), eaat2298 (2018). [CrossRef]
14. J. P. Zhao, Y. W. E, K. Williams, X. C. Zhang, and R. W. Boyd, “Spatial sampling of terahertz fields with sub-wavelength accuracy via probe-beam encoding,” Light: Sci. Appl. 8(1), 55 (2019). [CrossRef]
15. W. Tan, X. Huang, S. Nan, Y. Bai, and X. Fu, “Effect of the collection range of a bucket detector on ghost imaging through turbulent atmosphere,” J. Opt. Soc. Am. A 36(7), 1261–1266 (2019). [CrossRef]
16. Y. Zhang, W. Li, H. Wu, Y. Chen, X. Su, Y. Xiao, and Y. Gu, “High-visibility underwater ghost imaging in low illumination,” Opt. Commun. 441, 45–48 (2019). [CrossRef]
17. Y. K. Xu, W. T. Liu, E. F. Zhang, Q. Li, H. Y. Dai, and P. X. Chen, “Is ghost imaging intrinsically more powerful against scattering?” Opt. Express 23(26), 32993–33000 (2015). [CrossRef]
18. L. Li, Q. Li, S. Sun, H. Z. Lin, W. T. Liu, and P. X. Chen, “Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function,” Opt. Lett. 43(8), 1670–1673 (2018). [CrossRef]
19. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017). [CrossRef]
20. Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8(1), 6469 (2018). [CrossRef]
21. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, and K. Waki, “Ghost cytometry,” Science 360(6394), 1246–1251 (2018). [CrossRef]
22. K. W. Chan, M. N. O’Sullivan, and R. W. Boyd, “Optimization of thermal ghost imaging: high-order correlations vs. background subtraction,” Opt. Express 18(6), 5562–5573 (2010). [CrossRef]
23. J. Li, D. Yang, B. Luo, G. Wu, L. Yin, and H. Guo, “Image quality recovery in binary ghost imaging by adding random noise,” Opt. Lett. 42(8), 1640–1643 (2017). [CrossRef]
24. H. Li, J. Xiong, and G. H. Zeng, “Lensless ghost imaging for moving objects,” Opt. Eng. 50(12), 127005 (2011). [CrossRef]
25. E. R. Li, Z. W. Bo, M. L. Chen, W. L. Gong, and S. S. Han, “Ghost imaging of a moving target with an unknown constant speed,” Appl. Phys. Lett. 104(25), 251120 (2014). [CrossRef]
26. S. Jiao, M. Sun, Y. Gao, T. Lei, Z. Xie, and X. Yuan, “Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging,” Opt. Express 27(9), 12841–12854 (2019). [CrossRef]
27. D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017). [CrossRef]
28. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]
29. W. T. Liu, T. Zhang, J. Y. Liu, P. X. Chen, and J. M. Yuan, “Experimental quantum state tomography via compressed sampling,” Phys. Rev. Lett. 108(17), 170403 (2012). [CrossRef]
30. O. S. Magana-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. 102(23), 231104 (2013). [CrossRef]
31. I. Reed, “On a moment theorem for complex Gaussian processes,” IEEE Trans. Inf. Theory 8(3), 194–195 (1962). [CrossRef]
32. D. Z. Cao, J. Xiong, and K. G. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A 71(1), 013801 (2005). [CrossRef]
33. S. Sun, W. T. Liu, H. Z. Lin, E. F. Zhang, J. Y. Liu, Q. Li, and P. X. Chen, “Multi-scale adaptive computational ghost imaging,” Sci. Rep. 6(1), 37013 (2016). [CrossRef]
34. M. Bache, E. Brambilla, A. Gatti, and L. A. Lugiato, “Ghost imaging schemes: fast and broadband,” Opt. Express 12(24), 6067–6081 (2004). [CrossRef]
35. L. Wang and S. Zhao, “Fast reconstructed and high-quality ghost imaging with fast Walsh-Hadamard transform,” Photonics Res. 4(6), 240–244 (2016). [CrossRef]
36. Y. Wang, Y. Liu, J. Suo, G. Situ, C. Qiao, and Q. Dai, “High speed computational ghost imaging via spatial sweeping,” Sci. Rep. 7(1), 45325 (2017). [CrossRef]
37. Z. H. Xu, W. Chen, J. Penuelas, M. Padgett, and M. J. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018). [CrossRef]
38. M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). [CrossRef]