
Single-pixel imaging of a randomly moving object

Open Access

Abstract

Single-pixel imaging enjoys the advantages of low budget, broad spectrum, and high imaging speed. However, existing methods cannot clearly reconstruct an object that rotates rapidly or moves randomly. In this work, we put forward an effective method to image a randomly moving object based on geometric moment analysis. To the best of our knowledge, this is the first work that reconstructs the shape and motion state of the target without prior knowledge of its speed or position. By using cake-cutting order Hadamard illumination patterns and low-order geometric moment patterns, we obtain a high-quality video stream of a target that moves at high and varying translational and rotational speeds. The efficient method, as verified by simulation and experimental results, has great potential for practical applications such as Brownian motion microscopy and remote sensing.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The ability to image a randomly moving object is of great importance in a wide variety of applications such as microscopic particle tracking in Brownian motion, biomedical imaging, industrial monitoring of rotating electrical machines, lidar detection, satellite attitude detection, and space docking. As an emerging computational imaging technique [1–4], single-pixel imaging (SPI) offers clear advantages of low budget, broad spectrum, and high imaging speed [5–8]. Existing SPI methods can be mainly divided into two categories, i.e., Hadamard SPI [9–11] and Fourier SPI [12–14], each of which has its own merits [15]. The former is more frequently used owing to its easy binarization and orthogonality [16–21]. With improvements in hardware, SPI can also be applied to multi-color imaging [22–25] and three-dimensional imaging [14,26–28], which shows great application potential.

Despite these breakthroughs, most existing methods focus on imaging static objects, often in combination with compressed sensing algorithms such as TVAL3 [29]. It remains a challenging task to track and reconstruct a randomly moving scene without any prior knowledge of the motion parameters, including the rotational center, rotational speed, and translational speed. In scenarios such as real-time monitoring, imaging a moving object is difficult because of the upper limit on the refresh rate of the digital micro-mirror device (DMD). Several attempts have been made to tackle this problem, with major achievements in translational single-pixel tracking. In 2019, Zhang et al. proposed an approach based on Fourier SPI [30], which adopted six Fourier basis patterns for tracking and achieved a temporal resolution of 1,666 frames per second using a DMD operating at 10 kHz. In 2021, Zha et al. presented a method termed geometric moment detection, which required only three geometric moment patterns to illuminate a moving object in one frame, enabling tracking of a moving object at frame rates up to 7.4 kHz [31].

Although single-pixel tracking can already capture the translational motion of fast-moving objects in real time, SPI itself can only image objects moving at low translational speed. Efforts to strengthen the performance fall into two categories. One is to improve the hardware, for example with a spinning mask [32,33] or an LED array [34], which reach frame rates of 100 fps and 1,000 fps, respectively. In this way, many more patterns can be projected within one frame, during which the object can be treated as static. The other is to design new algorithms. In [35], Wu et al. calculated the displacement of the object via the cross correlation between low-resolution images and then modified the illumination matrix to reconstruct the final high-quality image without prior knowledge of the motion state. However, these methods assume that the object can be regarded as stationary within each sub-period, which fails for a fast-moving object. For targets that translate and rotate simultaneously, imaging with SPI remains a great challenge due to the lack of rotational information. To tackle this problem, Sun et al. proposed a gradual ghost imaging method based on cross correlation in 2019 [36], which works well for slowly translating and rotating objects without prior knowledge of the motion state. When the motion type of the object is known, Jiao et al. imaged objects rotating at arbitrary speed [37]. Nevertheless, both methods have strong limitations: the former can only image objects with extremely low rotational speed, while the latter requires strong prior knowledge of the object and suffers from high computational cost.

In this work, we put forward an effective and efficient method to image a randomly moving object based on geometric moment analysis. To the best of our knowledge, this is the first work that reconstructs the shape and motion state of the target without any prior knowledge of the translational velocity, translational direction, rotational center, rotational velocity, or rotational direction. Inspired by geometric moment detection [31], we periodically insert five low-order moment patterns between the cake-cutting order Hadamard patterns [35] to reconstruct a high-quality video stream of the moving target. Specifically, we first calculate the motion parameters from the moment measurements and then update the Hadamard transfer matrix according to the relative motion. As demonstrated by both simulation and experimental results, our method can faithfully reconstruct a randomly moving object at a rotational speed of 1,800 revolutions per minute (r/min), which is very attractive for practical imaging applications such as biomedical imaging and remote sensing.

2. Method

Single-pixel imaging relies on the correlation measurements between the object and the measurement patterns, which can either be projected onto the scene, or be used as detection masks. In this paper, we consider a two-dimensional object O(x,y), and the measurement patterns P ={P1, P2, …, PN}. The recorded single-pixel intensity sequence In can be represented as

$${I_n} = \sum\limits_{x,y} {O({x,y} ){P_n}({x,y} )} \;,$$
where n is the index of the pattern. Due to the sparsity of the target, the number of measurements needed can be much smaller than the number of pixels of the scene. For static objects, the scene can be reconstructed with the measured intensity sequence using the regularized least squares method.
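To make the correlation measurement above concrete, the following minimal Python sketch simulates the single-pixel readings for a toy scene. The scene, the pattern count, and the use of random binary patterns in place of the Hadamard basis are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64 x 64 scene: a bright bar on a dark background.
N = 64
scene = np.zeros((N, N))
scene[20:30, 14:50] = 1.0

# Random binary patterns stand in here for the Hadamard basis.
n_patterns = 500
patterns = rng.integers(0, 2, size=(n_patterns, N, N)).astype(float)

# Each single-pixel reading is the inner product of the scene with one
# pattern: I_n = sum_{x,y} O(x,y) P_n(x,y).
intensities = np.einsum('nxy,xy->n', patterns, scene)
print(intensities[:5])
```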

To reconstruct a randomly moving object, we combine the cake-cutting order Hadamard patterns with the low-order geometric moment patterns. The former are used for compressed sensing imaging while the latter are used to estimate the motion state of the object. The geometric moment mpq of a two-dimensional image is defined as

$${m_{pq}} = \sum\limits_{x,y} {{x^p}{y^q}O({x,y} )} \;,$$
where (p + q) is the order of the geometric moment. The low-order moment information of the object can be obtained by designing the patterns G00, G10, G01, G20, G11, G02 as
$$\scalebox{0.9}{$\begin{array}{c} {G_{00}} = \left[ {\begin{array}{cccc} 1&1& \cdots &1\\ 1&1& \cdots &1\\ \vdots & \vdots & \vdots & \vdots \\ 1&1& \cdots &1 \end{array}} \right],\,{G_{10}} = \left[ {\begin{array}{cccc} 1&2& \cdots &N\\ 1&2& \cdots &N\\ \vdots & \vdots & \vdots & \vdots \\ 1&2& \cdots &N \end{array}} \right],\,{G_{01}} = \left[ {\begin{array}{cccc} 1&1& \cdots &1\\ 2&2& \cdots &2\\ \vdots & \vdots & \vdots & \vdots \\ N&N& \cdots &N \end{array}} \right],\\ {G_{20}} = \left[ {\begin{array}{@{}cccc@{}} {{1^2}}&{{2^2}}& \cdots &{{N^2}}\\ {{1^2}}&{{2^2}}& \cdots &{{N^2}}\\ \vdots & \vdots & \vdots & \vdots \\ {{1^2}}&{{2^2}}& \cdots &{{N^2}} \end{array}} \right],\,{G_{11}} = \left[ {\begin{array}{@{}cccc@{}} {1 \times 1}&{2 \times 1}& \cdots &{N \times 1}\\ {1 \times 2}&{2 \times 2}& \cdots &{N \times 2}\\ \vdots & \vdots & \vdots & \vdots \\ {1 \times N}&{2 \times N}& \cdots &{N \times N} \end{array}} \right],\,{G_{02}} = \left[ {\begin{array}{@{}cccc@{}} {{1^2}}&{{1^2}}& \cdots &{{1^2}}\\ {{2^2}}&{{2^2}}& \cdots &{{2^2}}\\ \vdots & \vdots & \vdots & \vdots \\ {{N^2}}&{{N^2}}& \cdots &{{N^2}} \end{array}} \right]. \end{array}$}$$
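As an illustration, the six patterns above can be generated and binarized in a few lines. Using PIL's mode-'1' conversion, which applies Floyd–Steinberg error diffusion by default, is our assumed stand-in for the dithering step; it is a sketch, not the authors' implementation.

```python
import numpy as np
from PIL import Image

N = 64
idx = np.arange(1, N + 1)
X, Y = np.meshgrid(idx, idx)   # X varies along columns, Y along rows

# Low-order geometric moment patterns as defined above.
G = {
    '00': np.ones((N, N)),
    '10': X.astype(float),
    '01': Y.astype(float),
    '20': (X ** 2).astype(float),
    '11': (X * Y).astype(float),
    '02': (Y ** 2).astype(float),
}

# Binarize each grayscale pattern for the DMD via Floyd-Steinberg dithering.
binary = {}
for key, g in G.items():
    g8 = (255 * g / g.max()).astype(np.uint8)
    binary[key] = np.asarray(Image.fromarray(g8).convert('1'), dtype=float)
```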

The grayscale geometric moment patterns are binarized in both the simulation and experiment by using the Floyd-Steinberg dithering method [13]. The centroid (xc, yc) of the object can be obtained using the zeroth-order and first-order geometric moments with

$${x_c} = \frac{{{m_{10}}}}{{{m_{00}}}}\;,\;{y_c} = \frac{{{m_{01}}}}{{{m_{00}}}}\;.$$
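A minimal sketch of this step is given below, assuming the readings under the ideal grayscale patterns G00, G10, G01 equal the moments m00, m10, m01 exactly (in practice the dithered patterns approximate them); the function name is hypothetical.

```python
import numpy as np

def centroid_from_measurements(I00, I10, I01):
    """Centroid from single-pixel readings under G00, G10, G01,
    which equal m00, m10, m01: x_c = m10/m00, y_c = m01/m00."""
    return I10 / I00, I01 / I00

# Quick check against a known scene.
N = 64
scene = np.zeros((N, N)); scene[10:20, 30:40] = 1.0
X, Y = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1))
m00, m10, m01 = scene.sum(), (X * scene).sum(), (Y * scene).sum()
print(centroid_from_measurements(m00, m10, m01))  # ~ (35.5, 15.5)
```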

In addition, according to the theory of principal component analysis, the direction (major axis) of the object can be expressed as the dominant eigenvector of the following matrix

$$C = \frac{1}{{{m_{00}}}}\left[ {\begin{array}{cc} {{\mu_{20}}}&{{\mu_{11}}}\\ {{\mu_{11}}}&{{\mu_{02}}} \end{array}} \right]\;,$$
where ${\mu _{20}} = {m_{20}}/{m_{00}} - x_c^2,\,{\mu _{11}} = {m_{11}}/{m_{00}} - {x_c}{y_c},\,\textrm{and}\,{\mu _{02}} = {m_{02}}/{m_{00}} - y_{c}^{2}$ [38]. We define the directional angle ${\boldsymbol{\theta}}$ of the object to be the angle from the positive direction of the x-axis to the major axis.
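The directional angle can then be extracted from the measured moments by an eigen-decomposition of C. The sketch below assumes noiseless moments, and the returned angle is defined only modulo π since the major axis has no preferred sign.

```python
import numpy as np

def orientation_angle(m00, m10, m01, m20, m11, m02):
    """Directional angle theta of the major axis from the matrix C above."""
    xc, yc = m10 / m00, m01 / m00
    mu20 = m20 / m00 - xc ** 2          # central second moments
    mu11 = m11 / m00 - xc * yc
    mu02 = m02 / m00 - yc ** 2
    C = np.array([[mu20, mu11], [mu11, mu02]]) / m00
    w, V = np.linalg.eigh(C)            # eigenvalues in ascending order
    major = V[:, -1]                    # dominant eigenvector = major axis
    return np.arctan2(major[1], major[0]) % np.pi   # angle from +x axis
```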

To image an object with unknown moving speed, we further divide one frame into N slices. We denote by NH, NG, and NS the number of cake-cutting Hadamard patterns, the number of geometric moment patterns, and the total number of patterns used in one slice, respectively. A schematic detection pattern sequence is shown in Fig. 1, where Hn represents a Hadamard pattern and Gxy denotes a geometric moment pattern. The parameters xc, yc denote the centroid of the object in each slice, xr, yr denote the rotational center in the horizontal and vertical directions, and ${\boldsymbol{\theta}}$ denotes the directional angle of the major axis. Based on geometric moment analysis, we calculate xc, yc and ${\boldsymbol{\theta}}$ from m00, m10, m01, m20, m02, m11. Since for most real-world objects the centroid does not coincide with the center of rotation, we approximate the rotational center by finding the slices with the closest directional angle

$$\Delta \theta^{\ast} = \mathop{\textrm{argmin}}\limits_{\Delta \theta} \; \textrm{mod}({\Delta \theta, 2\pi}) \;,$$
where Δθ is chosen from the set of all possible differences of the directional angle between two arbitrary slices. We then recover the rotational center xr, yr, the translational speeds vx, vy, and the angular speed ω from the geometric moment measurements.
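As a deliberately simplified illustration of the parameter-recovery step (it differences the per-slice estimates directly rather than performing the angle-matching search above, and it treats the centroid velocity as the translational velocity, which is exact only when the centroid coincides with the rotational center), consider:

```python
import numpy as np

def estimate_motion(xc, yc, theta, dt):
    """Rough per-slice motion estimate: omega from unwrapped angle
    differences, (vx, vy) from centroid differences. dt is the time
    between slices."""
    theta = np.unwrap(2 * np.asarray(theta)) / 2   # axis angle is mod pi
    omega = np.diff(theta) / dt
    vx = np.diff(np.asarray(xc)) / dt
    vy = np.diff(np.asarray(yc)) / dt
    return vx, vy, omega
```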


Fig. 1. Schematic diagram of the proposed method. The cake-cutting order Hadamard patterns are divided into 20 slices, as shown on the left. Dithered geometric moment patterns inserted at the end of each slice are shown in the upper right; they help to determine the centroid xc, yc and the directional angle ${\boldsymbol{\theta}}$. The parameters xr, yr, vx, vy and ω are then calculated and used to adjust the measurement matrix A. Four adjusted Hadamard patterns are shown in the bottom right.


Since the forward motion of an object is equivalent to the reverse motion of the patterns, the transfer matrix can be adjusted and then serves as the input of the compressed sensing algorithm. The rotational center of the object is calculated from the geometric moment measurements. Around this rotational center, a disk with a sufficiently large radius is chosen, and the corresponding locations of the subsequent Hadamard basis patterns (the red circles in the bottom right of Fig. 1) are rotated according to the motion parameters. Here, we utilize the alternating direction method of multipliers (ADMM) to solve the following optimization problem

$$\min_{\boldsymbol{x}} \;{||{D\boldsymbol{x}}||_1}\quad \textrm{s.t.}\;\;A\boldsymbol{x} = \boldsymbol{b}\;,$$
where the vector x is the albedo of the object, the vector b is the measured intensity sequence, and D represents the matrix of the discrete cosine transform.
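A toy ADMM sketch for this basis-pursuit problem follows, with D the orthonormal 2D DCT so that $D^T D = I$. The penalty parameter, iteration count, and direct factorization of $(I + A^T A)$ are our illustrative choices, practical only at small image sizes; this is not a description of the solver used in the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_sparse_recovery(A, b, n_side, rho=1.0, iters=300):
    """Minimal ADMM sketch for  min ||D x||_1  s.t.  A x = b."""
    n = n_side * n_side
    D  = lambda v: dct(dct(v.reshape(n_side, n_side), axis=0, norm='ortho'),
                       axis=1, norm='ortho').ravel()
    Dt = lambda v: idct(idct(v.reshape(n_side, n_side), axis=0, norm='ortho'),
                        axis=1, norm='ortho').ravel()
    # x-update solves (D^T D + A^T A) x = D^T(z-u) + A^T(b-w); D^T D = I.
    M = np.eye(n) + A.T @ A
    x = np.zeros(n); z = np.zeros(n)
    u = np.zeros(n); w = np.zeros(len(b))      # scaled dual variables
    for _ in range(iters):
        x = np.linalg.solve(M, Dt(z - u) + A.T @ (b - w))
        z = soft(D(x) + u, 1.0 / rho)          # prox step on z = Dx
        u += D(x) - z                          # dual update for z = Dx
        w += A @ x - b                         # dual update for Ax = b
    return x.reshape(n_side, n_side)
```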

In practice, it is difficult to dither the second-order geometric moment patterns directly, because their gray values grow with the square of the pixel index. A proper scale factor is therefore chosen to match the second-order geometric moment patterns to the first-order ones.
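To close the method description, here is a sketch of the pattern-adjustment idea: since forward motion of the object equals reverse motion of the patterns, each later Hadamard pattern is shifted and rotated back about the estimated rotational center before being stacked into the measurement matrix A. The helper below, its interpolation order, and the shift-rotate-shift composition are our assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def adjust_pattern(pattern, dx, dy, dtheta_deg, center=None):
    """Undo the object's estimated motion by moving the pattern the
    opposite way: translate by (-dx, -dy), then rotate by -dtheta.
    scipy's rotate pivots about the array center, so the rotational
    center (cy, cx) is first shifted there and shifted back after."""
    p = shift(pattern, (-dy, -dx), order=1, mode='constant')
    if center is not None:
        cy, cx = center
        off = (pattern.shape[0] / 2 - cy, pattern.shape[1] / 2 - cx)
        p = shift(p, off, order=1, mode='constant')
        p = rotate(p, -dtheta_deg, reshape=False, order=1, mode='constant')
        p = shift(p, (-off[0], -off[1]), order=1, mode='constant')
    else:
        p = rotate(p, -dtheta_deg, reshape=False, order=1, mode='constant')
    return p
```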

3. Simulation results

We carry out simulations of two scenarios to test the efficiency of the proposed method, namely a simple bar and the letters “THU”, each containing 16 frames. The object image contains 128 × 128 pixels and the slice number N is set to 20. For each slice, we use 164 Hadamard patterns and 5 additional geometric moment patterns. Since the zeroth-order moment of the scene remains unchanged during each frame, we measure the intensity of this pattern only once per frame. The total compression ratio is thus 20.63% (20 × 169 = 3,380 measurements for a 128 × 128 = 16,384-pixel image). The number of geometric moment patterns used to determine the motion state of the object is 100, which is only 3.05% of the number of Hadamard patterns used.

To fully demonstrate the feasibility of our method, we consider the complicated moving scenarios illustrated in Fig. 2. The varied motions include constant translational and rotational speeds, continuous variations of the translational and rotational speeds in magnitude, jumps in translational and rotational speed in both magnitude and direction, as well as ultralow rotational speed.


Fig. 2. Test scenario with variations of translational and rotational speeds. (a) Translational speeds in the horizontal and vertical directions. (b) Rotational speed.


Figure 3 illustrates the reconstructed trajectory and shape of the object. The proposed method provides faithful reconstructions of the moving object under randomly varying translational and rotational speeds, achieving high-quality reconstruction at a rotational speed of up to 1,800 r/min. The edges of the reconstructed bars in frames 3 and 11 are not sharp because of the approximation errors introduced by the constant-motion assumption within each frame. Nevertheless, the proposed method provides clear reconstructions. The moving process and the reconstructed video stream are provided in the supplemental material (Visualization 1). For better visualization, the interval between two adjacent frames is set to 2 s in the video, which corresponds to 0.17 s in reality.


Fig. 3. Reconstruction results of the bar. (a) Motion trajectory of the object. The red curve represents the ground truth and the blue curve represents the estimated trajectory of the rotational center. (b) Reconstructed albedo. Scale bars, 25 pixels.


To further demonstrate the accuracy of the proposed method, Table 1 lists the relative differences between the measured motion parameters and the ground truth, where Δω, Δvx, and Δvy refer to the errors of the rotational velocity and of the translational velocities in the horizontal and vertical directions.

Table 1. Relative errors of the reconstructed motion parameters of the bar.

Next, we verify the proposed method with a more complex object containing the three letters “THU”. The motion state is the same as in the previous example. To clearly show the rotational behavior, Fig. 4 presents the reconstruction results without adjusting the angular orientation. The moving process and the reconstructed video stream are provided in Visualization 2 of the supplemental material. The relative errors are listed in Table 2. Figure 5 shows the reconstruction results of the letters “THU” with gray-scaled albedo; the target is still reconstructed faithfully with distinguishable details.


Fig. 4. Reconstruction results of the letters “THU” with binary albedo. The first and second rows show the ground truth and the reconstruction results, respectively. Scale bars, 25 pixels.



Fig. 5. Reconstruction results of “THU” with gray-scaled albedo. Scale bars, 25 pixels.


Table 2. Relative errors of the reconstructed motion parameters of the letters “THU”.

To further illustrate the advantage of the proposed method, we compare in Fig. 6 our results with those obtained with the cross-correlation-based ghost imaging (CBGI) method [36]. Although CBGI can recover the object without rotation and provides a rough estimate of the image at a low rotational speed (2π rad/s), it does not work well when the rotational speed is high (40π rad/s). In contrast, the proposed method reconstructs the targets faithfully in all cases, with sharp boundaries and no background noise. It is worth noting that by increasing the slice number, objects can be reconstructed even at rotational speeds above 10,000 r/min, comparable to the speed of a servo motor.


Fig. 6. Comparisons of CBGI and the proposed method. The translational speed is 40 pixels/s in both the horizontal and vertical directions. The first column shows the results without rotation. The second column adds the rotational dimension with low rotational speed. The third column shows the reconstruction results at high rotational speed. Scale bars, 25 pixels.


4. Experimental results

The experimental setup is illustrated in Fig. 7. An expanded 532 nm laser beam (MLL-III-532 nm) illuminates DMD1 (F6500 Type A, Fldiscovery Technology), which is regarded as the object and has 1920 × 1080 pixels with a pixel size of 7.56 μm. After DMD1, a 4-f system (L1 and L2, ${f_1} = {f_2} = 175\;mm$) projects the reflected beam onto DMD2 (F4320 XGA, Fldiscovery Technology), which has 1024 × 768 pixels with a pixel size of 13.68 μm. The light transmitted through the 4-f system is further modulated by DMD2 and finally focused onto the detector (PDA36A2, Thorlabs). The signal is recorded by an oscilloscope. To calibrate and match the sizes of the two DMDs, 7 × 7 and 4 × 4 pixel binning are carried out on DMD1 and DMD2, respectively, leading to 928 × 928 and 512 × 512 pixels in practical use, with the effective area of both DMDs approximately 7 mm × 7 mm. The FPGA used for synchronization between the two DMDs is not shown in Fig. 7.


Fig. 7. Experimental setup.


The object “4” with 128 × 128 pixels is employed in this experiment. The speed setting is shown in Fig. 8. The estimated trajectory of the rotational center of the object is shown in Fig. 9(a) along with the true trajectory, while the relative errors are shown in Table 3. One can see that the two curves almost coincide and the relative errors of the velocity are small, indicating that our method can correctly recover the motion state of the object. The results shown in Fig. 9(b) illustrate that the proposed method can reconstruct the object faithfully. The moving process and the reconstructed video stream are provided in Visualization 3 of the supplemental material.


Fig. 8. The experimental setting with variations of translational and rotational speeds. (a) Translational speeds in the horizontal and vertical directions. (b) Rotational speed.



Fig. 9. Reconstruction results of the object “4”. (a) Motion trajectory of the object. The square-marked curve represents the ground truth. The star-marked curve represents the estimated trajectory of the rotational center. (b) Reconstructed albedo of the frames 1, 3, 8, 12 and 15. Scale bars, 25 pixels.


Table 3. Relative errors of the reconstructed motion parameters of the object “4”.

Furthermore, we compare in Fig. 10 the reconstruction results with the CBGI method. CBGI can hardly recover the object with the rotational speed of 2π rad/s and fails to reconstruct the target at 40π rad/s, while the proposed method can provide distinct reconstructions at both rotational speeds, indicating that the proposed method is more robust to measurement noise.


Fig. 10. Comparisons of reconstruction results of CBGI and the proposed method. The first column shows the results of the object rotating at a low speed. The second column corresponds to the results in the fast rotation scenario. Scale bars, 25 pixels.


5. Discussion

We have demonstrated that the relative error of the rotational speed is within 1%, indicating the applicability of the proposed scheme in the rotational dimension. As for the translational dimension, the deviation of the translational position stays within 1.23 pixels during one frame for both 40 pixels/s and 80 pixels/s, which is sufficiently small for an imaging task. For example, when imaging an unmanned aerial vehicle in an area of 1 m × 1 m, the translational error can be controlled below 10 mm per frame (1.23/128 × 1 m ≈ 9.6 mm).

To study the impact of the slice number, we simulate the scenario of the object “4” with different translational and rotational speeds. As shown in Fig. 11, faithful reconstructions are obtained at low speeds. As the speed increases, more slices are needed for high-quality reconstructions. Considering also that a larger slice number leads to a longer measurement time, we choose a moderate slice number of 20 in this work.


Fig. 11. Imaging results of the object “4” with different slice numbers. Scale bars, 25 pixels.


6. Conclusion

In this paper, we have proposed an effective method based on geometric moment analysis to image a randomly and rapidly moving object. Each frame is divided into 20 slices and the motion state of each slice is estimated. In both simulations and experiments, the proposed method provides faithful reconstructions of the target. Compared with existing motion estimation schemes, our method realizes the reconstruction of fast-rotating objects for the first time to the best of our knowledge, even when the speed changes irregularly. The proposed method is also promising for tasks such as real-time attitude detection and three-dimensional imaging.

Funding

National Natural Science Foundation of China (11971258, 12071244, 61975087).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
2. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008).
3. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009).
4. B. Sun, M. Edgar, R. Bowman, L. Vittert, S. Welsh, A. Bowman, and M. Padgett, “Differential computational ghost imaging,” in Computational Optical Sensing and Imaging (Optical Society of America, 2013), paper CTu1C-4.
5. G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: a review,” Opt. Express 28(19), 28190–28208 (2020).
6. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019).
7. M.-J. Sun and J.-M. Zhang, “Single-pixel imaging and its application in three-dimensional reconstruction: a brief review,” Sensors 19(3), 732 (2019).
8. S. Jiang, X. Li, Z. Zhang, W. Jiang, Y. Wang, G. He, Y. Wang, and B. Sun, “Scan efficiency of structured illumination in iterative single pixel imaging,” Opt. Express 27(16), 22499–22507 (2019).
9. M. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
10. Y. Zhang, M. P. Edgar, B. Sun, N. Radwell, G. M. Gibson, and M. J. Padgett, “3D single-pixel video,” J. Opt. 18(3), 035203 (2016).
11. M.-J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning,” Opt. Express 24(10), 10476–10485 (2016).
12. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
13. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
14. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Opt. Lett. 41(11), 2497–2500 (2016).
15. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017).
16. Z. Ye, B. Su, P. Qiu, and W. Gao, “Secured regions of interest (SROIs) in single-pixel imaging,” Sci. Rep. 9(1), 12030 (2019).
17. F. Sha, S. K. Sahoo, H. Q. Lam, B. K. Ng, and C. Dang, “Improving single pixel imaging performance in high noise condition by under-sampling,” Sci. Rep. 10(1), 10625 (2020).
18. P. G. Vaz, D. Amaral, L. R. Ferreira, M. Morgado, and J. Cardoso, “Image quality of compressive single-pixel imaging using different Hadamard orderings,” Opt. Express 28(8), 11666–11681 (2020).
19. M.-J. Sun, L.-T. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017).
20. D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017).
21. Y. Zhang, J. Cao, H. Cui, D. Zhou, B. Han, and Q. Hao, “Retina-like computational ghost imaging for an axially moving target,” Sensors 22(11), 4290 (2022).
22. M. Yao, Z. Cai, X. Qiu, S. Li, J. Peng, and J. Zhong, “Full-color light-field microscopy via single-pixel imaging,” Opt. Express 28(5), 6521–6536 (2020).
23. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018).
24. Z. Qiu, Z. Zhang, and J. Zhong, “Efficient full-color single-pixel imaging based on the human vision property—“giving in to the blues”,” Opt. Lett. 45(11), 3046–3049 (2020).
25. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013).
26. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016).
27. M. P. Edgar, M.-J. Sun, G. M. Gibson, G. C. Spalding, D. B. Phillips, and M. J. Padgett, “Real-time 3D video utilizing a compressed sensing time-of-flight single-pixel camera,” in Optical Trapping and Optical Micromanipulation XIII (SPIE, 2016), Vol. 9922, pp. 171–178.
28. W. Jiang, Y. Yin, J. Jiao, X. Zhao, and B. Sun, “2,000,000 fps 2D and 3D imaging of periodic or reproducible scenes with single-pixel detectors,” Photonics Res. 10(9), 2157–2164 (2022).
29. C. Li, W. Yin, H. Jiang, and Y. Zhang, “An efficient augmented Lagrangian method with applications to total variation minimization,” Comput. Optim. Appl. 56(3), 507–530 (2013).
30. Z. Zhang, J. Ye, Q. Deng, and J. Zhong, “Image-free real-time detection and tracking of fast moving object using a single-pixel detector,” Opt. Express 27(24), 35394–35401 (2019).
31. L. Zha, D. Shi, J. Huang, K. Yuan, W. Meng, W. Yang, R. Jiang, Y. Chen, and Y. Wang, “Single-pixel tracking of fast-moving object using geometric moment detection,” Opt. Express 29(19), 30327–30336 (2021).
32. E. Hahamovich, S. Monin, Y. Hazan, and A. Rosenthal, “Single pixel imaging at megahertz switching rates via cyclic Hadamard masks,” Nat. Commun. 12(1), 4516 (2021).
33. W. Jiang, J. Jiao, Y. Guo, B. Chen, Y. Wang, and B. Sun, “Single-pixel camera based on a spinning mask,” Opt. Lett. 46(19), 4859–4862 (2021).
34. Z.-H. Xu, W. Chen, J. Penuelas, M. Padgett, and M.-J. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018).
35. J. Wu, L. Hu, and J. Wang, “Fast tracking and imaging of a moving object with single-pixel imaging,” Opt. Express 29(26), 42589–42598 (2021).
36. S. Sun, J.-H. Gu, H.-Z. Lin, L. Jiang, and W.-T. Liu, “Gradual ghost imaging of moving objects by tracking based on cross correlation,” Opt. Lett. 44(22), 5594–5597 (2019).
37. S. Jiao, M. Sun, Y. Gao, T. Lei, Z. Xie, and X. Yuan, “Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging,” Opt. Express 27(9), 12841–12854 (2019).
38. J. Flusser, T. Suk, and B. Zitová, 2D and 3D Image Analysis by Moments (John Wiley & Sons, 2016), Chap. 2.

Supplementary Material (3)

Visualization 1: Reconstructed video result of the bar.
Visualization 2: Reconstructed video result of the letters “THU”.
Visualization 3: Reconstructed video result of the object “4” in the experiment.
