Optica Publishing Group

Fast tracking and imaging of a moving object with single-pixel imaging

Open Access

Abstract

Because of its low temporal resolution, it is difficult to image a moving object with single-pixel imaging. In previous studies, either the frame rate was limited, or the speed and direction of motion were assumed to be constant. In this work, a fast tracking and imaging method for moving objects is proposed. By using cake-cutting order Hadamard illumination patterns and the TVAL3 algorithm, a low-resolution image of each frame is obtained. The displacement is calculated via the cross-correlation between the low-resolution images, and the illumination patterns are modified according to the location results. Finally, a high-quality object image is obtained. This scheme is suitable for imaging moving objects with varying speeds and directions. The simulation and experimental results prove that for a 128 × 128 pixels scene, location and imaging can be realized with 30 samplings per time interval. We experimentally demonstrate that the proposed method can image a moving object with varying speed at a resolution of 128 × 128 pixels and a frame rate of 150 fps by using a 9 kHz digital micromirror device. The proposed scheme can be used for three-dimensional and long-distance moving object imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Single-pixel imaging (SPI) is a nonlocal imaging technique that can image a two- or three-dimensional scene using a single-pixel detector (SPD) [1–4]. In SPI, a series of illumination patterns is projected onto the object one by one, and the total intensity of the object scene is detected by the SPD. The object image is recovered from the illumination patterns and the detected intensity sequence. Because of its special imaging principle, SPI has garnered significant attention and has been used in numerous applications, such as imaging in complicated environments [5–8], X-ray SPI [9,10], and lidar detection [11,12].

However, SPI has the drawback of a long acquisition time [13]. In general, hundreds of illumination patterns are required to recover an object image of acceptable quality, whereas the refresh rate of digital micromirror devices (DMDs) or spatial light modulators (SLMs) is limited. Therefore, imaging a moving object is a challenge for SPI. At present, there are several schemes to address this problem.

The first type of scheme estimates the moving speed with an optimization algorithm and thereafter obtains high-quality imaging results [13,14]. This method can image a moving object with an unknown constant speed. However, if the moving direction or speed is not constant, the number of unknowns to be estimated increases, which further increases the complexity of the optimization algorithm.

The second type of scheme increases the imaging speed of SPI. To decrease the acquisition time of SPI, different illumination patterns, algorithms, and modulation technologies have been analyzed [15,16]. For instance, the Hadamard matrix [1] and the Fourier basis [17], rather than random speckle, can be used to decrease the sampling ratio. Machine learning [18] can reconstruct an image from fewer measurements than the orthogonal sampling methods. On the light-source side, an LED illumination module can increase the pattern rate to 500 kHz [16,19]. Based on these developments, single-pixel video methods have also been proposed [20,21].

The third type of scheme obtains the motion information of a moving object with a small number of measurements in a short time interval during which the object can be treated as immobile [22,23]. In each time interval, a rough, unclear image can be obtained. The cross-correlation [22] or low-order moments [23] of these images are calculated to obtain the motion information of the object. Thereafter, a high-quality image of the moving object can be reconstructed. Even though this type of method can image a slow-moving object, the sampling number in each time interval must be decreased further to achieve a higher frame rate.

In this study, a fast tracking and imaging method for moving objects based on SPI is proposed. We consider the situation in which the moving speed and direction are unknown and not constant. By using the cake-cutting Hadamard basis-sort illumination pattern sequence [24,25] and the TVAL3 (total variation minimization by augmented Lagrangian and alternating-direction algorithms) solver [26,27], a low-resolution image is obtained for each time interval. The location is determined via the cross-correlation between the low-resolution images. Thereafter, the illumination patterns are modified according to the location results. Finally, a high-quality object image is obtained. The simulation and experimental results prove that for a 128 × 128 pixels scene, tracking and imaging can be realized when 30 samplings are performed during each time interval. We experimentally demonstrate that the proposed method can image a moving object with varying speed at a frame rate of 150 fps by using a 9 kHz DMD.

2. Method

2.1. SPI based on compressive sensing (CS)

In an SPI system, suppose the object is I(x, y) and the N illumination patterns are $H = \{ {H_1},{H_2},\ldots ,{H_n},\ldots ,{H_N}\}$. The nth detected intensity of the SPD is

$${B_n} = \int\!\!\!\int {I(x,y) \times {H_n}(x,y)dxdy}$$
and $B = \{ {B_1},{B_2},\ldots ,{B_n},\ldots ,{B_N}\}$. For SPI based on CS, the matrix form of Eq. (1) is
$$b = Au$$
where u is the vectorized representation of I(x, y), and b is the column vector form of B. A is the measurement matrix, whose nth row is the vectorized representation of Hn. When u is assumed to be sparse, many optimization algorithms [28,29] can be used to solve Eq. (2). Specifically, the TVAL3 solver reconstructs the image u by minimizing the augmented Lagrangian:
$$\mathop {\min }\limits_u \sum\nolimits_i {{{||{{D_i}u} ||}_2} + \frac{\mu }{2}} ||{Au - b} ||_2^2,\quad s.t.\;\;u \ge 0$$
where ${||\cdot ||_2}$ is the L2 norm, Diu is the ith component of the discrete gradient of u, and μ is the penalty parameter of the model [27]. The TVAL3 solver is remarkably fast and often outperforms other state-of-the-art TV solvers for CS reconstruction [26]. As an outstanding CS reconstruction solver, TVAL3 has been used in many computational imaging areas [30,31] and is adopted in this paper.
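To make the measurement model of Eq. (2) concrete, here is a minimal sketch of the forward and inverse steps. TVAL3 itself is a MATLAB solver, so as a stand-in this example uses full orthogonal Hadamard sampling, for which the image is recovered exactly by a simple transpose; the scene and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

# Forward model of Eq. (2): b = A u, where each row of A is one
# vectorized illumination pattern H_n. With a full orthogonal Hadamard
# matrix, A^T A = (M*M) I, so the transpose inverts the measurement.
M = 8                              # image side length (illustrative)
u = np.zeros(M * M)                # ground-truth scene, vectorized
u[18:22] = 1.0                     # a small bright feature

A = hadamard(M * M).astype(float)  # rows are +/-1 Hadamard patterns
b = A @ u                          # single-pixel intensities B_n

u_rec = (A.T @ b) / (M * M)        # exact inverse for orthogonal A
print(np.allclose(u_rec, u))       # True
```

In the paper's setting, A keeps only a subset of Hadamard rows (sub-sampling), so this transpose step is replaced by the TV-regularized minimization of Eq. (3).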

2.2. Selection of illumination patterns

Cake-cutting Hadamard basis-sort illumination patterns [24,25] are used to decrease the sampling ratio and increase the frame rate. Suppose the frame number of the dynamic scene is K, the dynamic scene is ${I_0} = \{ {I^1},\ldots ,{I^k},\ldots ,{I^K}\}$, and Ik is the kth frame image. The sampling number in each time interval is N = N1 + N2. The first N1 illumination patterns are the top N1 patterns in the cake-cutting order of the Hadamard basis. The last N2 illumination patterns are selected successively from the remainder of the Hadamard basis according to the cake-cutting order. A schematic of this process is shown in Fig. 1. The illumination pattern sequence in the kth time interval is ${H^k} = \{ H_1^k,H_2^k,\ldots ,H_N^k\}$, and the illumination pattern sequence for the entire imaging process is $H = \{ {H^1},\ldots ,{H^k},\ldots ,{H^K}\}$. By projecting the illumination patterns $H = \{ {H^1},\ldots ,{H^k},\ldots ,{H^K}\}$ on the dynamic scene ${I_0} = \{ {I^1},\ldots ,{I^k},\ldots ,{I^K}\}$, the detected total intensity of the SPD is $B = \{ {B^1},\ldots ,{B^k},\ldots ,{B^K}\}$, in which ${B^k} = \{ B_1^k,B_2^k,\ldots ,B_N^k\}$.
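The index bookkeeping above can be sketched as follows. This is a hedged illustration: it follows the sequence design of Fig. 1 but does not compute the actual cake-cutting sort, and the indices are 0-based, so the paper's "highest order 2115" (Section 3) corresponds to index 2114.

```python
# Per-interval pattern index lists H^1 ... H^K: each interval reuses the
# top-N1 patterns of the sorted basis and appends the next N2 unused ones.
def pattern_indices(K, N1, N2):
    seqs = []
    cursor = N1                        # first sorted index not yet used
    for _ in range(K):
        seqs.append(list(range(N1)) + list(range(cursor, cursor + N2)))
        cursor += N2                   # advance into the remainder
    return seqs

seqs = pattern_indices(K=140, N1=15, N2=15)
print(len(seqs), max(seqs[-1]) + 1)    # 140 2115 (matches Section 3)
```

The total sampling number is K × N = 140 × 30 = 4200, also matching Section 3.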

Fig. 1. Design of illumination pattern sequence.

2.3. Tracking and imaging method

The procedure of location determination and imaging is shown in Fig. 2 and described in detail as follows:

  • 1. In the kth time interval, the illumination pattern sequence ${H^k} = \{ H_1^k,H_2^k,\ldots ,H_N^k\}$ and intensity sequence ${B^k} = \{ B_1^k,B_2^k,\ldots ,B_N^k\}$ are used to obtain the kth low-quality image ${I_{LR}}^k$ with the TVAL3 solver [26]. Over the entire imaging process, we obtain K low-quality frame images ${I_{LR}} = \{ {I_{LR}}^1,{I_{LR}}^2,\ldots ,{I_{LR}}^K\}$.
  • 2. Determine the location of the object through the K frame low-quality images. For the kth image, the cross-correlation peak between $I_{LR}^1$ and $I_{LR}^k$ is calculated. The displacement between the cross-correlation peak and the self-correlation peak of $I_{LR}^1$ is equal to the displacement of the object between two different time intervals $\{ \Delta {x^k},\Delta {y^k}\} (k = 1,2,\ldots ,K)$.
  • 3. The illumination patterns $H = \{ {H^1},\ldots ,{H^k},\ldots ,{H^K}\}$ are translationally shifted into $H^{\prime} = \{ H^{\prime1},\ldots ,H^{\prime k},\ldots ,H^{\prime K}\}$ according to the location result $\{ \Delta {x^k},\Delta {y^k}\} (k = 1,2,\ldots ,K)$.
  • 4. The new illumination pattern sequence $H^{\prime} = \{ {H^{\prime 1}},\ldots ,{H^{\prime k}},\ldots ,{H^{\prime K}}\}$ and intensity sequence $B = \{ {B^1},\ldots ,{B^k},\ldots ,{B^K}\}$ are calculated to obtain the high-quality image using the TVAL3 solver.
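Steps 2 and 3 can be sketched as follows. This is a minimal illustration, not the full pipeline: the displacement is taken from the peak of an FFT-based circular cross-correlation, the pattern shift uses np.roll (which assumes periodic boundaries), and the frames here are synthetic.

```python
import numpy as np

def displacement(ref, frame):
    """Shift at the peak of the circular cross-correlation of two frames."""
    corr = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(frame)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                 # map wrap-around peaks to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
low_res_1 = rng.random((64, 64))                      # stand-in for I_LR^1
low_res_k = np.roll(low_res_1, (3, -5), axis=(0, 1))  # object moved by (3, -5)
dy, dx = displacement(low_res_1, low_res_k)
print(dy, dx)                                         # 3 -5

# Step 3: shift an illumination pattern by the measured displacement.
pattern = rng.random((64, 64))
pattern_shifted = np.roll(pattern, (dy, dx), axis=(0, 1))
```

In step 4, the shifted patterns and the original intensity sequence are fed back to the TVAL3 solver for the high-quality reconstruction.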

Fig. 2. Procedure of location and imaging process.

3. Simulation results

In the simulation, an object image of 64 × 64 pixels, shown in Fig. 3(a), is used. The number of full samplings with the Hadamard basis is 4096. The moving speed of the object is constant, and the trajectory is shown in Fig. 3(b). Because the moving speed is unknown, a short time interval should be selected to ensure that the object is nearly stationary within it. Here, we set the sampling number in one time interval as N = 30, with N1 = N2 = 15. The frame number of the dynamic scene is K = 140. According to Section 2, the total sampling number for this 140-frame scene is 4200, and the highest order of the cake-cutting order Hadamard basis used is 2115. For comparison, we reconstruct images directly from the recorded single-pixel intensity data and the projected Hadamard patterns, which we call the conventional SPI method. The result of the conventional SPI method is shown in Fig. 3(c), which is blurred. The reconstruction result of the proposed scheme is shown in Fig. 3(d), in which the detailed structure of the object can be observed. The location results in the X and Y directions are shown in Figs. 3(e)–3(f). The mean square error (MSE) between the true and calculated locations is 0.3309 in the X direction and 0.4173 in the Y direction. The correlation coefficient (CC) between Fig. 3(d) and Fig. 3(a) is 0.9055.

Fig. 3. Imaging result for K=140 frame dynamic object with constant speed and direction. (a) Object image. (b) Moving trajectory of object. (c) Conventional SPI result. (d) Result of proposed method when N = 30, N1 = N2 = 15. (e)(f) Comparison between true and calculated locations of object in each frame. (e) Results in X direction. (f) Results in Y direction. The unit of Y-axis is pixel.

We also consider the situation in which the moving speed varies and the trajectory is chosen randomly. The displacement between two adjacent frames is 0 or 1 pixel in a randomly selected direction, as shown in Fig. 4(b). The 140-frame dynamic scene and the moving trajectory of the object are provided in the supplemental material Visualization 1. The SPI results with the conventional method and the proposed scheme are shown in Figs. 4(c) and 4(d). The location results in the X and Y directions are shown in Figs. 4(e)–4(f). The MSE between the true and calculated locations is 0.9209 in the X direction and 1.5693 in the Y direction. The CC value between Fig. 4(d) and 4(a) is 0.8890. Even though the location accuracy in Fig. 4 is lower than that in Fig. 3, the imaging quality is as good. Figures 3 and 4 show that the proposed scheme can correctly track and image a moving object when the moving speed is 0.033 pixel/sampling and the moving direction is random. For a commercially available DMD, the binary pattern display rate is approximately 10–20 kHz. Because the Hadamard matrix comprises only ±1 values, two projections are required for one Hadamard measurement. Therefore, about 0.003 s is required for 30 Hadamard measurements. Hence, in theory, the highest frame rate of our proposed scheme is approximately 333 fps at 64 × 64 pixels resolution.
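The frame-rate arithmetic above can be checked directly, assuming the 20 kHz upper end of the quoted DMD range:

```python
# Each +/-1 Hadamard pattern is displayed as two complementary binary
# DMD frames, so N = 30 measurements need 60 frames; at 20 kHz that is
# 3 ms per image, i.e. roughly 333 fps.
dmd_rate_hz = 20_000              # upper end of commercial DMD rates
N = 30                            # samplings per time interval
t_frame = 2 * N / dmd_rate_hz     # two projections per measurement
print(t_frame, round(1 / t_frame))   # 0.003 333
```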

Fig. 4. Imaging result for K=140 frame dynamic scene (see Visualization 1). (a) Object image. (b) Moving direction of object in two adjacent frames is selected randomly from the eight directions or maintained constant. (c) Conventional SPI result. (d) Result of proposed method when N = 30, N1 = N2 = 15. (e)(f) Comparison between true and calculated locations of object in each frame. (e) Results in X direction. (f) Results in Y direction. The unit of Y-axis is pixel.

In addition, the MSE of the location results and the CC of the recovered image against the sampling number per time interval are analyzed. Suppose the frame number of the dynamic scene is K = 40, and the sampling number in each time interval is N. When N is varied from 10 to 100 with N1 = N2 = N/2, the variations in the MSE between the true and calculated locations in the X and Y directions are shown in Fig. 5(a), together with the variation in the CC between the original and recovered images. In particular, the recovered images of the proposed method for N = 20, 40, and 60 are shown in Figs. 5(b)–5(d). Figure 5 indicates that both the location precision and the imaging quality increase with N.

Fig. 5. Influence of N on proposed method when K=40 frames. (a) Variation of MSE and CC value against N. (b)–(d) Recovered image of proposed method when N = 20, 40 and 60.

Therefore, for a given SPI system based on our proposed method, there exists a minimum N that ensures the image quality meets a pre-set criterion. When imaging a moving object with unknown speed and direction, this minimum N can be selected to achieve the highest frame rate. For instance, suppose the minimum N of the SPI system is set to N = 30. Then, for a moving object with an unknown speed, a high location accuracy can be achieved if the speed is slower than 0.033 pixel/sampling. If the speed is faster than 0.033 pixel/sampling, the location accuracy decreases, and the corresponding imaging quality also decreases.

4. Experimental results

To verify the simulation results, a series of experiments on moving objects is conducted with the SPI system illustrated in Fig. 6. A 632.8 nm laser beam is expanded by L1 (f1 = 5 cm) and L2 (f2 = 20 cm) and thereafter modulated by a DMD (VisionFly6500 GX DLP). The DMD has 1920 × 1080 pixels, each of size 7.56 μm × 7.56 μm, and its maximum binary pattern refresh rate is 9 kHz. The beam reflected from the DMD is projected onto the object plane by a 4-f system (L3 and L4, f3 = f4 = 10 cm). The image of the object is shown in Fig. 6(b); its size is 2.50 mm × 2.50 mm. The light transmitted through the object is converged by L5 (f5 = 5 cm) and thereafter detected by an SPD (PDA100A2, Thorlabs). The detected signal is displayed on an oscilloscope and subsequently transmitted to the computer.

Fig. 6. (a) Experiment setup. TIR prism is total internal reflection prism. f1 = 5 cm, f2 = 20 cm, f3 = f4 = 10 cm, f5 = 5 cm. (b) Object image. Red arrow points the moving direction of object. (c) Conventional SPI result with 3200 Hadamard illumination patterns.

In the experiment, cake-cutting order Hadamard patterns with 128 × 128 pixels are used. One Hadamard measurement is acquired by projecting two complementary “0–1” distributions onto the object. On the DMD, each block of 8 × 8 micromirrors is treated as one pixel; therefore, 1024 × 1024 micromirrors are used in this experiment. When the object is stationary, the conventional SPI result with 3200 Hadamard illumination patterns is shown in Fig. 6(c). Next, the object is placed on a manual linear translation stage and moved in the direction of the red arrow in Fig. 6(b). The resolution and error of the translation stage are 10 μm and 4 μm, respectively.
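The two modulation details above, the complementary “0–1” decomposition of a ±1 Hadamard pattern and the 8 × 8 micromirror binning, can be sketched as follows (the pattern size here is illustrative):

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(16)[5].reshape(4, 4)        # one +/-1 Hadamard pattern
H_pos = (H + 1) // 2                     # "0-1" mask for the +1 entries
H_neg = (1 - H) // 2                     # complementary "0-1" mask
# one Hadamard measurement is the difference of the two detections:
# B_n = B_plus - B_minus, because H = H_pos - H_neg
print(np.array_equal(H_pos - H_neg, H))  # True

# 8x8 binning: each logical pixel becomes an 8x8 block of micromirrors
dmd_mask = np.kron(H_pos, np.ones((8, 8), dtype=int))
print(dmd_mask.shape)                    # (32, 32)
```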

In the first example, the sampling number in one time interval is N = 100, with N1 = N2 = 50. The frame number is K = 20, so the total measurement number is 2000 and the highest cake-cutting order is 1050. When the displacement of the object exceeds the size of one pixel, 60.48 μm, the SPI results become blurry. When the average moving speed of the object is approximately vground truth = 0.45, 0.9, 1.35, and 2.7 mm/s (10, 20, 30, and 60 μm/frame), respectively, the results of the conventional SPI and of the proposed method are shown in Fig. 7. Figures 7(a)–7(d) show that when the moving speed is slow, conventional SPI is still able to recover the object image, but the result becomes blurred as the speed increases. In contrast, the image quality of the proposed method remains stable and high, as shown in Figs. 7(e)–7(h), and the structure of the object is distinguishable.

Fig. 7. Comparison of original SPI and proposed method. (a)-(d) Original SPI results and (e)-(h) proposed method results when K=20, N = 100, N1 = N2 = 50.

The estimated trajectories of the object in the X and Y directions are shown in Figs. 8(a) and 8(b). The solid lines are fits to the estimated positions, and their slopes in Figs. 8(a) and 8(b) are the estimated average speed components vx and vy in the X and Y directions, respectively. The estimated average moving speed of the object is $v = \sqrt {v_x^2 + v_y^2}$. Since the actual moving speed is not constant, we compare the estimated average speed v with the actual average speed vground truth (0.45, 0.9, 1.35, and 2.7 mm/s). Figure 8(c) proves that the estimated speed is close to the true speed. Figures 7 and 8 prove the feasibility and accuracy of our proposed method.
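The speed estimate can be sketched as follows, using synthetic per-frame positions with a known speed (the 0.9 mm/s case and the 150 fps frame rate are assumed here for illustration):

```python
import numpy as np

# Fit straight lines to the per-frame positions; the slopes are the
# velocity components vx, vy, and the speed is their quadrature sum.
frame_period = 1 / 150                     # s per frame at 150 fps
t = np.arange(20) * frame_period           # K = 20 frame times
rng = np.random.default_rng(1)
x = 0.9 * t + rng.normal(0, 1e-4, t.size)  # mm; true vx = 0.9 mm/s
y = rng.normal(0, 1e-4, t.size)            # no motion in Y

vx = np.polyfit(t, x, 1)[0]                # slope of the fitting line
vy = np.polyfit(t, y, 1)[0]
v = float(np.hypot(vx, vy))                # estimated average speed
print(round(v, 2))                         # 0.9
```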

Fig. 8. Moving trajectory of object in (a) X direction and (b) Y direction. The solid lines are fitting results whose slopes are estimated average moving speed component in X and Y directions. (c) Comparison between estimated and truth average moving speed of object.

If the moving speed of the object increases, the sampling number N in one time interval must be decreased, and the location precision and image quality also decrease. Figure 9 shows the imaging results for different values of N and K. Figure 9(a) shows the results when K = 40, N = 50, and v = 5.4 mm/s (the total measurement number is 2000 and the highest cake-cutting order is 1025). Figure 9(b) shows the results when K = 40, N = 30, and v = 9 mm/s (the total measurement number is 1200 and the highest cake-cutting order is 615). Figure 9(c) shows the results when K = 70, N = 30, and v = 9 mm/s (the total measurement number is 2100 and the highest cake-cutting order is 1065). Figures 9(a) and 9(b) prove that when N is decreased, the location precision also decreases, which leads to low image quality. In addition, Figs. 9(b) and 9(c) show that when the frame number K is increased, the total measurement number and the highest cake-cutting order also increase, which improves the imaging quality. The variation in the CC value between the imaging result and the original image against K when N = 50 and 100 is shown in Fig. 10.

Fig. 9. Image results of original SPI and proposed method for different values of N and K. (a) K = 40, N = 50, v=5.4 mm/s. (b) K = 40, N = 30, v=9 mm/s. (c) K = 70, N = 30, v=9 mm/s.

Fig. 10. Variation of CC value against K when N = 50 and 100.

In practical applications, the frame number K should be chosen according to the moving speed of the object and the required imaging resolution. For a moving object with unknown speed, we first determine the sampling number per time interval N according to the parameters of the experimental system, the moving speed of the object, and the required imaging resolution; this process is described in the last paragraph of Section 3. Second, the total measurement number is chosen according to an appropriate sampling ratio. Third, the frame number is calculated as K = Ntotal/N. For the experimental system in Fig. 6, the imaging resolution is set to 128 × 128 pixels. Suppose the moving speed of the object is slow and high localization accuracy is required; we can then set N = 100. Since the TVAL3 solver can obtain the reconstruction result at a low sampling ratio, the total measurement number can be set to Ntotal = 2000, and therefore K = 20 frames. The corresponding imaging results are shown in Fig. 7. If the moving speed of the object increases, N should be decreased and K increased accordingly; the corresponding imaging results are shown in Fig. 9.
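The three-step parameter choice above can be written out numerically for the Fig. 7 settings:

```python
# Step 1: N from the accuracy requirement; step 2: N_total from the
# sampling ratio; step 3: K = N_total / N.
resolution = 128 * 128             # full Hadamard basis size (16384)
N = 100                            # samplings per interval (slow object)
N_total = 2000                     # feasible for TVAL3 at a low ratio
sampling_ratio = N_total / resolution
K = N_total // N
print(round(sampling_ratio, 3), K)   # 0.122 20
```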

The results of the optical experiments are consistent with the simulation results in Section 3 and prove the feasibility of the proposed scheme. According to Fig. 9, an acceptable image quality can be achieved when the sampling number N in one time interval is 30. With the 9 kHz display rate of the DMD, the highest frame rate of our proposed location and imaging system is approximately 150 fps at an image resolution of 128 × 128 pixels.

5. Conclusion

In this study, we proposed a fast tracking and imaging scheme based on SPI. The cake-cutting order Hadamard illumination patterns and the TVAL3 algorithm are used to decrease the sampling number in one time interval. For each time interval, a low-quality image is obtained and the object is located. The illumination patterns are then modified according to the location results, and a high-quality image can be recovered. Experimental results prove that our system can recover 128 × 128 pixels dynamic scenes at 150 fps by using a single-pixel camera and a DMD with a 9 kHz projection rate. In comparison with previous schemes that estimate motion from neighboring frames, our proposed method requires fewer samplings, resulting in a higher frame rate. At the same time, our scheme is flexible and can image a moving object with varying speed and direction. In future work, imaging of rotating objects will be attempted. The proposed scheme can also be used for three-dimensional and long-distance moving object imaging.

Funding

National Natural Science Foundation of China (11947028); Fundamental Research Funds for the Central Universities (JUSRP12041); Open Foundation for CAS Key Laboratory of Quantum Information (KQI201).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

2. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

3. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019). [CrossRef]  

4. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

5. F. Q. Li, M. Zhao, Z. M. Tian, F. Willomitzer, and O. Cossairt, “Compressive ghost imaging through scattering media with deep learning,” Opt. Express 28(12), 17395–17408 (2020). [CrossRef]  

6. Z. Gao, J. Yin, Y. Bai, and X. Fu, “Imaging quality improvement of ghost imaging in scattering medium based on Hadamard modulated light field,” Appl. Opt. 59(27), 8472–8477 (2020). [CrossRef]  

7. Z. Gao, X. Cheng, K. Chen, A. Wang, Y. Hu, S. Zhang, and Q. Hao, “Computational ghost imaging in scattering media using simulation-based deep learning,” IEEE Photonics J. 12(5), 6803115 (2020). [CrossRef]  

8. M. Bina, D. Magatti, M. Molteni, A. Gatti, L. A. Lugiato, and F. Ferri, “Backscattering differential ghost imaging in turbid media,” Phys. Rev. Lett. 110(8), 083901 (2013). [CrossRef]  

9. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]  

10. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117(21), 219902 (2016). [CrossRef]  

11. H. Yu, E. Li, W. Gong, and S. Han, “Structured image reconstruction for three-dimensional ghost imaging lidar,” Opt. Express 23(11), 14541–14551 (2015). [CrossRef]  

12. N. Radwell, S. D. Johnson, M. P. Edgar, C. F. Higham, R. M. Smith, and M. J. Padgett, “Deep learning optimized single-pixel LiDAR,” Appl. Phys. Lett. 115(23), 231101 (2019). [CrossRef]  

13. S. M. Jiao, M. J. Sun, Y. Gao, T. Lei, Z. W. Xie, and X. C. Yuan, “Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging,” Opt. Express 27(9), 12841–12854 (2019). [CrossRef]  

14. E. R. Li, Z. W. Bo, M. L. Chen, W. L. Gong, and S. S. Han, “Ghost imaging of a moving target with an unknown constant speed,” Appl. Phys. Lett. 104(25), 251120 (2014). [CrossRef]  

15. G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: A review,” Opt. Express 28(19), 28190–28208 (2020). [CrossRef]  

16. Z. H. Xu, W. Chen, J. Penuelas, M. Padgett, and M. J. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express. 26(3), 2427–2434 (2018). [CrossRef]  

17. Z. B. Zhang, X. Ma, and J. G. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

18. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017). [CrossRef]  

19. W. G. Zhao, H. Chen, Y. Yuan, H. B. Zheng, J. B. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-speed color imaging with single-pixel detectors at low light level,” Phys. Rev. Appl. 12(3), 034049 (2019). [CrossRef]  

20. Z. B. Zhang, X. Y. Wang, G. A. Zheng, and J. G. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

21. C. F. Higham, R. M. Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018). [CrossRef]  

22. S. Sun, J. H. Gu, H. Z. Lin, L. Jiang, and W. T. Liu, “Gradual ghost imaging of moving objects by tracking based on cross correlation,” Opt. Lett. 44(22), 5594–5597 (2019). [CrossRef]  

23. D. Y. Yang, C. Chang, G. H. Wu, B. Luo, and L. F. Yin, “Compressive ghost imaging of the moving object using the low-order moments,” Appl. Sci. 10(21), 7941 (2020). [CrossRef]  

24. W. K. Yu, “Super sub-nyquist single-pixel imaging by means of cake-cutting hadamard basis sort,” Sensors 19(19), 4122 (2019). [CrossRef]  

25. P. G. Vaz, D. Amaral, L. Ferreira, A. Morgado, and J. Cardoso, “Image quality of compressive single-pixel imaging using different hadamard orderings,” Opt. Express 28(8), 11666–11681 (2020). [CrossRef]  

26. C. Li, W. Yin, H. Jiang, and Y. Zhang, “An efficient augmented Lagrangian method with applications to total variation minimization,” Comput. Optim. Appl. 56(3), 507–530 (2013). [CrossRef]  

27. O. Sefi, Y. Klein, E. Strizhevsky, I. P. Dolbnya, and S. Shwartz, “X-ray imaging of fast dynamics with single-pixel detector,” Opt. Express 28(17), 24568–24576 (2020). [CrossRef]  

28. J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007). [CrossRef]  

29. J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 14–20 (2008). [CrossRef]  

30. W. W. Zhang, D. Q. Yu, Y. C. Han, W. J. He, Q. Chen, and R. Q. He, “Depth estimation of multi-depth objects based on computational ghost imaging system,” Opt. Laser Eng. 148, 106769 (2022). [CrossRef]  

31. G. Calisesi, M. Castriotta, A. Candeo, A. Pistocchi, C. D’Andrea, G. Valentini, A. Farina, and A. Bassi, “Spatially modulated illumination allows for light sheet fluorescence microscopy with an incoherent source and compressive sensing,” Biomed. Opt. Express 10(11), 5776–5788 (2019). [CrossRef]  

Supplementary Material (1)

Visualization 1: Dynamic scene in Figure 4.


