Optica Publishing Group

Bi-frequency 3D ghost imaging with Haar wavelet transform

Open Access

Abstract

Recently, ghost imaging has been attracting attention because its mechanism could lead to many applications inaccessible to conventional imaging methods. However, high-contrast and high-resolution imaging remains challenging, due to its low signal-to-noise ratio (SNR) and the demand for a high sampling rate in detection. To circumvent these challenges, we propose a ghost imaging scheme that exploits Haar wavelets as illuminating patterns with a bi-frequency light projecting system and frequency-selecting single-pixel detectors. This method provides a theoretically 100% image contrast and a high detection SNR, which reduces the required dynamic range of the detectors, enabling high-resolution ghost imaging. Moreover, it can greatly reduce the sampling rate (to far below the Nyquist limit) for a sparse object by adaptively abandoning unnecessary patterns during the measurement. These characteristics are experimentally verified with a resolution of $512\times 512$ and a sampling rate lower than 5%. A high-resolution ($1000\times 1000\times 1000$) 3D reconstruction of an object is also achieved from multi-angle images.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) has attracted increasing attention owing to its unique imaging mechanism [1–3]. In contrast to traditional optical imaging techniques, GI illuminates a scene with a sequence of optical patterns and uses a bucket detector to collect the total reflected or transmitted light from the object. The correlation between the bucket signal and the patterns reveals the image of the object. Computational ghost imaging (CGI) was later proposed [4] and experimentally demonstrated soon after [5], providing a practical scheme for the applications of GI. Without recourse to imaging lenses or spatially resolving detectors, GI possesses many advantages, such as lensless imaging capability [6], high detection sensitivity [7], and applicability to scenarios lacking array detectors [8–10]. GI therefore has many potential applications in fields ranging from optical imaging [11], X-ray imaging [12–14], and biological diagnostics [15] to atomic sensing [16,17].

However, high-resolution imaging and high-SNR detection remain challenging for GI. The imaging resolution is set by the size of the speckles projected onto the scene. Smaller speckles yield a higher resolution, but also cause more speckles to fall within the scene. Consequently, the mean value of the bucket signal increases while its variation becomes relatively small. As the resolution grows, the variation of the bucket signal becomes increasingly difficult to detect, demanding single-pixel detectors with a very high dynamic range. Moreover, for an $N$-pixel image (the average number of speckles on the scene is $N$), GI requires at least $N$ samples to recover the image, the so-called Nyquist limit of the measurement. In particular, GI with random speckles usually needs far more than $N$ samples. Thus, for high-resolution imaging, the large number of samples leads to a low imaging speed, standing in the way of practical applications of ghost imaging.

To overcome these challenges, we introduce the wavelet transform (WT) [18,19] into the light-source modulation of the CGI system. The speckle patterns are designed from modified 2D Haar wavelets [20], which are projected onto an object with a bi-frequency illuminating source. The reflected light from the object is measured with two frequency-selecting single-pixel detectors. The image is recovered based on the inverse wavelet transform (IWT). We term this method "Bi-frequency Wavelet Ghost Imaging (BiWave-GI)", which has the following three advantages. (1) It provides, in principle, 100% image contrast. Experimentally, we easily acquire a contrast of more than 90% at a resolution of $512\times 512$. (2) BiWave-GI detects an object from low resolution to high resolution. On each resolution level, it performs a bi-block scan (only two nonzero pixels in each wavelet matrix) of the object. This results in a very high detection SNR, which dramatically reduces the required dynamic range of the detectors. Even with 1-bit detectors (which only distinguish on from off), BiWave-GI still provides high-contrast, high-resolution results. (3) On each resolution level, we adaptively abandon unnecessary samples based on the measurement at the previous, lower resolution level. Thus, for a real-space sparse object, BiWave-GI can measure the object far below the Nyquist limit without any prior knowledge of the object or any iterative reconstruction algorithm. This differs from methods based on compressive sensing [21], which apply to objects sparse in any space but require prior knowledge of the object. We experimentally demonstrate high-resolution imaging at a sampling rate lower than 5%. These advantages enable us to obtain high-contrast, high-resolution images easily and efficiently with BiWave-GI, paving the way for high-resolution 3D ghost imaging.
Experimentally, we combine BiWave-GI with the space carving algorithm [22–24] and reconstruct a 3D image with $1000\times 1000\times 1000$ resolution.

Several studies have used multi-resolution patterns for ghost imaging. Zhang's group used phase-shifting sinusoidal structured illumination to acquire the Fourier spectrum of an object from low to high frequency, achieving high-quality images [25]. This method inevitably loses part of the object information (the high-frequency components) when imaging below the Nyquist limit. Zhou's group rearranged the Hadamard patterns to realize continuous multi-resolution imaging simply and quickly [26], but did not state whether their method can image below the Nyquist limit. Neither of these works discussed the detection SNR of the single-pixel measurement or investigated imaging with a detector of low dynamic range.

Recently, wavelet analysis has become an important tool for ghost imaging and single-pixel imaging [27–30]. Some studies used wavelet analysis to post-process images rather than using wavelet bases as 2D illuminating patterns [27–29], and thus did not benefit from the high detection SNR we obtain in our experiments. One work used 1D Haar wavelet patterns to illuminate a moving object and obtained fast imaging [30]. It could not achieve a low sampling rate, however, because it lacked 2D wavelet analysis during the measurement. Our method provides a comprehensive solution covering high contrast, high resolution and an adaptive low sampling rate, which is a significant step toward practical ghost imaging.

2. Principle

Wavelet basis functions map data onto different frequency components, so that each component can be studied at a resolution matched to its scale. They have advantages over traditional Fourier methods in analyzing signals containing discontinuities and sharp spikes [31]. We here select the Haar wavelets [32] (one of the simplest wavelet families), whose mother wavelet is the binary function:

$$\begin{aligned} \varphi (t)=\begin{cases} \ \ \ 1, & t\in [0,\tfrac{1}{2}) \\ {-1}, & t\in [\tfrac{1}{2},1) \\ \ \ \ 0, & \text{otherwise}\end{cases}. \end{aligned}$$
The Haar wavelet basis functions (daughter wavelets) are constructed from the mother wavelet by scaling and shifting operations. For an $N$-pixel Haar transform, the $j$-th daughter wavelet is written as:
$$H_j=\sqrt{2^{s-q}}\varphi(2^{s-q}\cdot t-k),$$
where $q=\log_2 N$; $s=0,1,2,\ldots ,q-1$ denotes the scaling level; $k=0,1,\ldots ,2^s-1$ is the shifting factor; and $j=2^s+k$ indexes the basis. We then reshape each daughter wavelet into an $n\times n$ ($n=\sqrt {N}$) 2D matrix. There are many possible 2D arrangements. Here, we choose the simplest one: dividing a basis into $n$ segments, each of which becomes a row of the 2D matrix. It can be formulated as:
$$M_j(x,y)=\sqrt{2^{s-q}}\varphi\left(2^{s-q}\cdot ((y-1)n+x)-k\right).$$
In the subsection "Low sampling rate", we will show another type of 2D matrix, whose construction is slightly more involved but which achieves a lower sampling rate and is better suited to 2D images.
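As an illustrative sketch (our own helper, not the authors' code; the function name `modified_haar_patterns` is ours), Eqs. (2)–(3) can be implemented in a few lines of NumPy: generate each length-$N$ daughter wavelet by scaling and shifting the mother wavelet, then reshape it row-by-row into an $n\times n$ pattern:

```python
import numpy as np

def haar_daughter(N, s, k):
    """1D Haar daughter wavelet of length N at scaling level s, shift k.

    Each sample t is mapped through the mother wavelet at 2^(s-q)*t - k,
    with q = log2(N), and scaled by sqrt(2^(s-q)) for unit norm.
    """
    q = int(np.log2(N))
    t = np.arange(N)
    u = 2.0 ** (s - q) * t - k           # argument of the mother wavelet
    w = np.zeros(N)
    w[(u >= 0.0) & (u < 0.5)] = 1.0      # "+1" half of the support
    w[(u >= 0.5) & (u < 1.0)] = -1.0     # "-1" half of the support
    return np.sqrt(2.0 ** (s - q)) * w

def modified_haar_patterns(n):
    """Reshape each length-N daughter wavelet into an n x n pattern (row-major)."""
    N = n * n
    q = int(np.log2(N))
    patterns = []
    for s in range(q):                   # scaling levels, coarse to fine
        for k in range(2 ** s):          # shifts within level s
            patterns.append(haar_daughter(N, s, k).reshape(n, n))
    return patterns

pats = modified_haar_patterns(4)         # 4x4 resolution: 15 wavelets (plus a DC term)
```

Together with the constant (DC) pattern, the flattened patterns form an orthonormal set, which is what later makes the inverse transform of Eq. (5) trivial.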

A schematic of the experimental setup is shown in Fig. 1. A sequence of the 2D matrices is projected onto an object with a DMD (digital micro-mirror device) at a resolution of $512\times 512$ pixels. Since the Haar wavelets take the values $+1$, $0$ and $-1$, we use two frequencies of light to represent $+1$ and $-1$, respectively. In the experiments, we use the DMD's embedded red and green light sources to implement the bi-frequency configuration. Correspondingly, the reflected light from the object is measured with two single-pixel detectors. Red and green band-pass filters are placed in front of the two detectors, making them frequency-selecting detectors. When the object is illuminated with $M_j$, the two detectors measure the intensities of the two frequencies of light, giving two bucket signals denoted $I^{1}_j$ and $I^{2}_j$. The total bucket signal is constructed as $B_j=I^{1}_j-I^{2}_j$. After projecting the full set of modified Haar 2D matrices onto the object, we obtain a set of bucket signals:

$$\begin{aligned}&\{M_1,M_2,\ldots,M_N\}\cdot Obj=\{B_1,B_2,\ldots,B_N\}\\ &\equiv \textbf{M} \cdot Obj=\textbf{B}. \end{aligned}$$
The image is then reconstructed by inverse wavelet transform:
$$Obj = \textbf{M}^{{-}1} \cdot \textbf{B}.$$
Theoretically, the reconstruction has zero background, unlike ghost imaging with random speckle patterns.
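The measurement model of Eqs. (4)–(5) can be illustrated with a small simulation (hypothetical names, and a toy $8\times 8$ scene instead of the experimental $512\times 512$). The sketch builds the orthonormal Haar matrix by the standard recursive construction, splits each row into its $+1$ and $-1$ parts to mimic the two frequency-selected detectors, and recovers the object from the differential bucket signals:

```python
import numpy as np

def haar_matrix(N):
    """Orthonormal Haar transform matrix (N a power of two), built recursively.

    Its rows are the DC (scaling) function plus the daughter wavelets of Eq. (2).
    """
    if N == 1:
        return np.array([[1.0]])
    H = haar_matrix(N // 2)
    top = np.kron(H, [1.0, 1.0])                   # coarser-scale rows, stretched
    bot = np.kron(np.eye(N // 2), [1.0, -1.0])     # finest-scale detail rows
    M = np.vstack([top, bot])
    return M / np.linalg.norm(M, axis=1, keepdims=True)

n = 8
M = haar_matrix(n * n)                             # rows = illumination bases
obj = np.zeros((n, n)); obj[2:5, 3:6] = 1.0        # toy reflective object
x = obj.ravel()

# Bi-frequency bucket detection: each row splits into its "+1" part (red) and
# "-1" part (green); B_j is the difference of the two single-pixel readings.
pos = np.clip(M, 0, None) @ x
neg = np.clip(-M, 0, None) @ x
B = pos - neg                                      # equals M @ x

recon = (M.T @ B).reshape(n, n)                    # inverse wavelet transform
assert np.allclose(recon, obj)                     # zero-background recovery
```

Because $\textbf{M}$ is orthonormal, $\textbf{M}^{-1}=\textbf{M}^{T}$, so the reconstruction needs no matrix inversion.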

Fig. 1. The schematic of the experimental setup. The light source is a DMD with the transmitting lens (focal length 2.5 mm). F1 and F2 are red and green filters; D1 and D2 are bucket detectors (Hamamatsu PMT H13320-03 with a photocathode area of 3.7$\times$13.0 mm$^2$). We tested the imager with a 3D object (a shuttlecock model, 7.5 cm$\times$6 cm$\times$6 cm) and a sparse object (the letters "XJTU" with a duty ratio of 1.5%). The distance between the DMD and the object is 0.4 m.

Multi-resolution (or multi-scale) analysis is an important characteristic of the Haar wavelets. As described by Eq. (3), at each scaling level the 2D Haar wavelets act as two light strips ($+1$ and $-1$) that scan the object (via the shifting operation) and reconstruct an image at that resolution level. This scanning operation yields a high detection SNR at every scaling level, and thus dramatically reduces the dynamic range required of the detectors in ghost imaging.

In the following, we will experimentally show that this imager provides an almost background-free image with a high resolution and very low dynamic range of detectors, as well as a low sampling rate for a sparse object.

3. Experiment results and discussions

In the experiment, we image the shuttlecock model shown in Fig. 2(a) using the modified Haar wavelet patterns $\{M_j\}$ at a $512\times 512$ resolution. The experimental results are shown in Fig. 2. As predicted by the theory above, the backgrounds of the images are almost zero. Note that we did not use any image enhancement to remove the background. Since we are able to capture high-contrast, high-resolution images from different angles, a 3D image can be acquired with passive 3D methods such as the space carving algorithm [22–24].

Fig. 2. The images of the shuttlecock at multiple angles using BiWave-GI with a resolution of $512\times 512$.

We first generated a big cube with a scale of $2048\times 2048\times 2048$ and divided it into $1000\times 1000\times 1000$ small cubes, each small cube (voxel) thus spanning 2.048 units. A total of 72 images at different angles (5 degrees apart) were obtained experimentally. The image at each angle is then binarized and the silhouette of the object extracted. Based on the silhouette, the big cube is carved along the viewing direction: the small cubes not containing the object are removed. The images from all angles are processed in this way, and the carving results at all angles are intersected to obtain the 3D reconstruction with $1000\times 1000\times 1000$ resolution shown in Fig. 3.
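The carving step can be sketched as follows (a toy version with three axis-aligned orthographic views on a $32^3$ grid and a hypothetical `carve` helper; the experiment uses 72 rotated views on a $1000^3$ grid, which additionally requires rotating the grid or the projection rays):

```python
import numpy as np

def carve(volume, silhouette, axis):
    """Remove voxels whose orthographic projection along `axis` falls
    outside the binary silhouette (one shape-from-silhouette view)."""
    sil = np.expand_dims(silhouette, axis)      # broadcast along the view axis
    return volume & sil

# Toy example: a 32^3 occupancy grid carved by the silhouettes of a centered ball.
n = 32
vol = np.ones((n, n, n), dtype=bool)            # start from a full cube
zz, yy, xx = np.indices((n, n, n)) - n // 2
ball = zz**2 + yy**2 + xx**2 <= (n // 3) ** 2   # ground-truth object

for ax in range(3):                             # one axis-aligned view per axis
    sil = ball.any(axis=ax)                     # binarized image -> silhouette
    vol = carve(vol, sil, ax)

assert vol[ball].all()                          # the true object survives carving
```

The intersection over all views is the visual hull: it always contains the object, and more (and better-spread) views carve it closer to the true shape.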

Fig. 3. The 3D imaging result of the shuttlecock with the space carving algorithm. The resolution is $1000\times 1000\times 1000$.

3.1 Low dynamic range requirements for detectors

In the process of ghost imaging, a set of light patterns is projected onto an object, reflected from it, and then detected by a bucket detector. The variation of the bucket signal reflects how the light patterns are modulated by the object. Therefore, how precisely this variation is measured determines the quality of the recovered image. However, as the resolution increases, the variance of the bucket signal decreases while its mean grows, which requires a detector with a sufficiently high dynamic range. In reality, the dynamic range of a detector is usually very limited. Thus, this intrinsic limitation prevents ghost imaging from reaching high resolution.

To evaluate the contrast of the variation of a bucket signal, we define a detection SNR (DSNR) as below:

$$DSNR = \Delta B/\bar{B},$$
where $\Delta B$ is the standard deviation of the bucket signal, and $\bar {B}$ is its mean. If we project random speckle patterns onto the object at a resolution of 512$\times$512, the mean value of the bucket signal is very high while its variance is quite small, as shown in Fig. 4(a); the DSNR is 0.0028. When Hadamard patterns of 512$\times$512 are used, the DSNR is 0.0062, better than that of the random speckles. From Fig. 4(b) we can see that the bucket signal still has a large mean and a very small variance most of the time, except for several spikes. These spikes result from a few special patterns that have a large continuous region of "+1". These two cases show that typical ghost imaging has a low DSNR. In contrast, the bucket signal of BiWave-GI is quite distinguishable (see Fig. 4(c)), with a large DSNR of 2.2747. This is mainly due to two features of BiWave-GI: (1) Each pattern has only two nonzero blocks (+1 and -1, represented by two different frequencies in our system); the sizes of the two blocks vary with the scaling level, from large blocks at low resolution down to single pixels at the highest. (2) Each bucket measurement is the difference of the light intensities from the two spots of the object illuminated by the +1 and -1 blocks (via the bi-frequency illumination), respectively. Therefore, one sees almost-zero data points in Fig. 4(c).
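The gap between the two regimes is easy to reproduce numerically. The toy simulation below (our own construction, not the experimental data) computes the DSNR of Eq. (6) for random binary speckles and for finest-level bi-block differences on a sparse scene; the absolute numbers differ from the reported 0.0028 and 2.2747, but a comparable orders-of-magnitude gap appears:

```python
import numpy as np

rng = np.random.default_rng(0)

def dsnr(bucket):
    """Detection SNR as defined in Eq. (6): std / mean of the bucket signal."""
    return np.std(bucket) / np.mean(bucket)

n = 64
obj = (rng.random((n, n)) < 0.2).astype(float)     # sparse test scene

# Random binary speckles: each pattern lights roughly half the scene, so the
# bucket mean is large and its fluctuation is relatively tiny.
rand_buckets = [(rng.integers(0, 2, (n, n)) * obj).sum() for _ in range(2000)]

# BiWave-style bi-block scan at the finest level: each reading is the
# difference between two adjacent illuminated pixels, so most readings are
# near zero and the variation dominates. We take |B| so the ratio is defined.
flat = obj.ravel()
haar_buckets = np.abs(flat[0::2] - flat[1::2])

print(dsnr(rand_buckets), dsnr(haar_buckets))      # small vs. order-unity DSNR
```

The DSNR is scale-invariant, so the missing wavelet normalization factors do not affect the comparison.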

Fig. 4. The normalized bucket signals of three schemes. (a) Random patterns scheme; (b) Hadamard patterns scheme; (c) BiWave-GI scheme.

If the precision required to distinguish the signal is $\epsilon$, the required dynamic range is at least $D=(\bar {B}+\Delta B/2)/\epsilon$, equivalent to $\log_2 D$ bits. It should be emphasized that only the variation of the bucket signal is meaningful to ghost imaging. For instance, if $\epsilon >\Delta B$ (although $D$ can be very large due to a large $\bar {B}$), the variation of $B$ cannot be sensed by the detector and no image can be recovered. To sense the variation, the minimum requirement is $\epsilon <\Delta B$, which defines a minimum dynamic range $D=1/2+1/DSNR$. Thus, the theoretically required dynamic ranges for Figs. 4(a)–4(c) are 9 bits, 8 bits and 1 bit, respectively. Note that meeting this minimum requirement does not guarantee that an image can be recovered. For instance, for ghost imaging with random patterns at a resolution of $512\times 512$, a dynamic range of 8 bits is still far from enough.
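The bit counts quoted above follow directly from $D=1/2+1/DSNR$; a minimal check (hypothetical helper name) using the DSNR values reported for Figs. 4(a)–4(c):

```python
import math

def required_bits(dsnr):
    """Minimum detector dynamic range, in bits, to sense the bucket
    variation: D = 1/2 + 1/DSNR, rounded up to whole bits (at least 1)."""
    D = 0.5 + 1.0 / dsnr
    return max(1, math.ceil(math.log2(D)))

print(required_bits(0.0028))   # random speckles -> 9 bits
print(required_bits(0.0062))   # Hadamard       -> 8 bits
print(required_bits(2.2747))   # BiWave-GI      -> 1 bit
```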

As analyzed above, BiWave-GI dramatically reduces the required dynamic range; it can recover a clear image even with a dynamic range of 2. For comparison, we carried out computational ghost imaging experiments with random speckles (RCGI), multi-scale Hadamard speckles (MSHCGI) and bi-frequency multi-scale Hadamard speckles (Bi-MSHCGI) at different dynamic ranges. To make a fair comparison with our method, we rearranged the Hadamard patterns from low to high resolution following the method of [26], yielding MSHCGI. We also combined MSHCGI with the bi-frequency illumination approach to obtain Bi-MSHCGI, in which each Hadamard pattern is separated into its "+1" part and "-1" part, projected separately at the two frequencies; the difference of their bucket measurements is the bucket signal for the whole pattern. Note that MSHCGI and the other typical Hadamard experiments use light and no light to represent "+1" and "-1", respectively. The quality of the recovered images is evaluated by the Structural Similarity Index (SSIM), defined as:

$$SSIM(x,y)\equiv \frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)},$$
where $\mu _x$, $\mu _y$, $\sigma _x$, $\sigma _y$, and $\sigma _{xy}$ are the local means, standard deviations, and cross-covariance of images $x$ and $y$, and $C_1 = (0.01L)^2$ and $C_2 = (0.03L)^2$, where $L$ is the specified dynamic range. To ensure a fair comparison, we reconstructed the image at the same resolution with the same number of speckles for all four methods: for a resolution of $512\times 512$, the maximum number of speckles illuminating the scene is $512\times 512=262144$.
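Equation (7) can be evaluated globally (one window covering the whole image) as below; note that library implementations such as scikit-image's `structural_similarity` instead average the same formula over local sliding windows, so their values differ:

```python
import numpy as np

def ssim(x, y, L=255.0):
    """Global SSIM per Eq. (7), computed over a single whole-image window."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()            # cross-covariance
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

a = np.random.default_rng(1).random((64, 64)) * 255
assert np.isclose(ssim(a, a), 1.0)                # identical images score 1
```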

Figure 5 shows images reconstructed with MSHCGI, Bi-MSHCGI and BiWave-GI, where the detector dynamic range was varied from $2$ (1 bit) to $65536$ (16 bits). Since RCGI failed to recover even a $64\times 64$-resolution image at a dynamic range of 16 bits, the RCGI results are not shown in the figure.

Fig. 5. Experimental results of MSHCGI, Bi-MSHCGI and BiWave-GI under different detector dynamic ranges (1, 4, 8 and 16 bits).

Figure 6 shows the SSIM of the recovered images at different dynamic ranges. The SSIM of BiWave-GI rises quickly as the dynamic range increases. Even at a 1-bit dynamic range, BiWave-GI still has an SSIM above 0.4, and an image of good quality is recovered, as shown in Fig. 5. In contrast, the SSIMs of both Bi-MSHCGI and MSHCGI remain as low as $\sim 0.1$ up to the 8-bit dynamic range. Bi-MSHCGI nevertheless outperforms MSHCGI, owing to the bi-frequency scheme.

Fig. 6. SSIM curves of MSHCGI, Bi-MSHCGI and BiWave-GI under different detector dynamic ranges (1 to 16 bits).

Additionally, we briefly investigate how noise affects BiWave-GI by introducing external noise into the bucket detection. We quantify the noise strength as a percentage ratio $R_{noise}=\bar {I}_{n}/\bar {B}$, where $\bar {I}_{n}$ is the mean of the noise and $\bar {B}$ is the mean of the bucket detection. The noise has a $1\%$ fluctuation throughout the bucket detection. Figure 7 shows the SNR of the recovered images versus $R_{noise}$ from 0% to 100%, which suggests that BiWave-GI is robust to such small-fluctuation noise. In reality, noise usually comes from environmental illumination, such as sunlight or lamp light. The fluctuation rate of such noise is usually much higher (thermal light's intensity fluctuation) or far lower (e.g., caused by oscillations of the alternating-current power) than our projection rate, so the noise fluctuations average out during the measurement; noise with a $1\%$ fluctuation therefore simulates many practical scenarios. In addition, BiWave-GI takes the difference of the two bucket measurements related to the "+1" and "-1" pixels, which effectively cancels out environmental noise. Furthermore, a low detection dynamic range makes the measurements insensitive to small noise.

Fig. 7. SNR of the BiWave-GI images versus the external noise in the bucket detection.

3.2 Low sampling rate

Another challenge of ghost imaging is that the number of required illuminating patterns grows with the required resolution. For orthogonal patterns such as Hadamard speckles, the number of patterns equals the number of pixels; for example, a resolution of $512\times 512$ requires $262144$ patterns (a sampling rate of 100%). If random speckles are used, many more patterns are needed (a sampling rate $\gg 100\%$). Projecting such a large number of patterns takes a very long time, limiting the application of ghost imaging at high resolution. Although compressed-sensing methods can reduce the sampling rate, they require prior knowledge of sparse objects and time-consuming computation.

We here demonstrate that BiWave-GI can greatly reduce the sampling rate for a sparse object without any prior knowledge or complicated computation. Because the Haar wavelets analyze a signal gradually from low to high scaling level (resolution), we design a self-adaptive projecting scheme that reduces the number of patterns at the next, higher resolution based on the reconstruction at the previous, lower resolution.

In the experiment, we group the wavelet bases at the same scaling level into a cluster. For a $512\times 512$ resolution there are $19$ clusters, with successively higher resolutions. We project the patterns onto the object cluster by cluster. At each scaling level, we recover an image at that resolution, from which we can tell which bases of the next scaling level would yield zero or very small bucket signals. We then remove those bases from the next cluster, reducing the number of projected patterns.
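A 1D sketch of this cluster-by-cluster scheme (hypothetical helper name; the experiment applies the same idea to the 2D patterns $M_j$): a child wavelet is measured only if the reconstruction so far is nonzero somewhere on its support, which is exact for a nonnegative object:

```python
import numpy as np

def adaptive_haar_measure(x, thresh=1e-9):
    """Adaptive coarse-to-fine Haar measurement of a nonnegative 1D signal.

    A wavelet at level s is measured only if the image recovered up to
    level s-1 is nonzero on its support; otherwise the pattern is
    abandoned, lowering the sampling rate. Returns (recon, patterns_used).
    """
    N = len(x); q = int(np.log2(N))
    used = 1
    recon = np.full(N, x.mean())                  # DC measurement (1 pattern)
    for s in range(q):
        block = N >> (s + 1)                      # half-support at level s
        for k in range(2 ** s):
            lo = k * (N >> s)
            if np.allclose(recon[lo:lo + 2 * block], 0, atol=thresh):
                continue                          # parent region empty: skip
            used += 1
            B = x[lo:lo + block].mean() - x[lo + block:lo + 2 * block].mean()
            recon[lo:lo + block] += B / 2         # refine means on both halves
            recon[lo + block:lo + 2 * block] -= B / 2
    return recon, used

x = np.zeros(256); x[10:20] = 1.0                 # sparse "object"
recon, used = adaptive_haar_measure(x)
assert np.allclose(recon, x)
print(used / len(x))                              # sampling rate well below 1
```

Only wavelets whose support overlaps the object are ever projected, so the sampling rate scales with the duty ratio rather than the resolution.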

Figure 8(b) shows the experimental result of BiWave-GI using only 12614 patterns (a sampling rate of $4.8\%$) for a sparse object, the letters "XJTU". Note that the duty ratio (object area/scene area) is 1.5%, which directly affects the sampling rate: the smaller the duty ratio, the lower the sampling rate. In practice, a target in the sky has a very small duty ratio, a feasible scenario for applying this imaging method.

Fig. 8. (a) The original object. (b) The recovered image of ‘XJTU’ at a sampling rate of $4.8\%$ with $M_j$. (c) The recovered image of ‘XJTU’ at a sampling rate of $2.4\%$ with $Q_j$.

Furthermore, different transformations from 1D Haar wavelets to 2D matrices affect the sampling rate. We here test a quadratic blocking transformation: at the lowest scaling level, the 2D scene is divided into four blocks (top-left, top-right, bottom-left, bottom-right); at the next level, each block is divided into four sub-blocks; this division is repeated until the highest scaling level is reached. To represent this type of 2D wavelet, its mother wavelet can be formulated as

$$\begin{aligned} \varphi (x,y)=\begin{cases} \ \ \ 1, & x\in [0,\frac{1}{2}]\cap y\in [0,\frac{1}{2}]\\ {-1}, & x\in [\frac{1}{2},1] \cap y\in [0,\frac{1}{2}] \\ \ \ \ 0, & otherwise\end{cases}. \end{aligned}$$
An $n\times n$ 2D wavelet basis function can be constructed from the mother wavelet:
$$Q_j(x,y)=\sqrt{2^{s-L}}\varphi(2^{s-L}\cdot x-\alpha,2^{s-L}\cdot y-\beta/2),$$
where $L=\log_2 n$; $s=0,1,2,\ldots ,L-1$ denotes the scaling level; $\alpha =0,1,\ldots ,2^s-1$ and $\beta =0,1,\ldots ,2^{s+1}-1$ are the shifting factors along the x and y axes, respectively; and $j=(4^{s+1}-1)/3+2^{s+1}\alpha +\beta$ indexes the basis. With $Q_j(x,y)$ as the illuminating patterns, the sampling rate needed to recover the same object drops to $2.4\%$, as shown in Fig. 8(c).

4. Conclusion

We have proposed and demonstrated a ghost imaging method that exploits modified Haar wavelets as illuminating patterns, enabling background-free, high-resolution imaging. The method also greatly reduces the dynamic range required of the bucket detectors. For a real-space sparse object, the imager recovers the image at a very low sampling rate, expediting the imaging speed. Furthermore, if prior knowledge of a sparse object is available, a special transformation $T$ could be designed to reduce the sampling rate further. These advantages pave the way for practical applications of ghost imaging.

Funding

National Natural Science Foundation of China (11503020).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. T. Pittman, Y. Shih, D. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]  

3. D. Zhang, Y. Zhai, L. Wu, and X. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30(18), 2354–2356 (2005). [CrossRef]  

4. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

5. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

6. G. Scarcelli, V. Berardi, and Y. Shih, “Can two-photon correlation of chaotic light be considered as correlation of intensity fluctuations?” Phys. Rev. Lett. 96(6), 063602 (2006). [CrossRef]  

7. P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6(1), 5913 (2015). [CrossRef]  

8. H. Peng, Z. Yang, D. Li, and L. Wu, “The application of ghost imaging in infrared imaging detection technology,” in Selected Papers of the Photoelectronic Technology Committee Conferences held June–July 2015, vol. 9795 (International Society for Optics and Photonics, 2015), p. 97952O.

9. H. Liu and S. Zhang, “Computational ghost imaging of hot objects in long-wave infrared range,” Appl. Phys. Lett. 111(3), 031110 (2017). [CrossRef]  

10. R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, “Photon-sparse microscopy: visible light imaging using infrared illumination,” Optica 2(12), 1049–1052 (2015). [CrossRef]  

11. W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6(1), 26133 (2016). [CrossRef]  

12. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117(11), 113902 (2016). [CrossRef]  

13. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]  

14. A. Zhang, Y. He, L. Wu, L. Chen, and B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018). [CrossRef]  

15. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, K. Fujiu, K. Waki, and H. Noji, “Ghost cytometry,” Science 360(6394), 1246–1251 (2018). [CrossRef]  

16. R. I. Khakimov, B. Henson, D. Shin, S. Hodgman, R. Dall, K. Baldwin, and A. Truscott, “Ghost imaging with atoms,” Nature 540(7631), 100–103 (2016). [CrossRef]  

17. K. Baldwin, R. Khakimov, B. Henson, D. Shin, S. Hodgman, R. Dall, and A. Truscott, “Ghost imaging with atoms and photons for remote sensing,” in Optics and Photonics for Energy and the Environment, (Optical Society of America, 2017), pp. EM4B-1.

18. S. G. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Anal. Mach. Intell. 11(7), 674–693 (1989). [CrossRef]  

19. I. Daubechies, Ten lectures on wavelets, vol. 61 (SIAM, 1992).

20. A. Haar, “Zur theorie der orthogonalen funktionensysteme,” Math. Ann. 69(3), 331–371 (1910). [CrossRef]  

21. W. Gong and S. Han, “High-resolution far-field ghost imaging via sparsity constraint,” Sci. Rep. 5(1), 9280 (2015). [CrossRef]  

22. R. Szeliski, “Rapid octree construction from image sequences,” Comput. Vis. Image Underst. 58(1), 23–32 (1993). [CrossRef]  

23. A. Laurentini, “How far 3D shapes can be understood from 2D silhouettes,” IEEE Trans. Pattern Anal. Mach. Intell. 17(2), 188–195 (1995). [CrossRef]  

24. K. N. Kutulakos and S. M. Seitz, “A theory of shape by space carving,” Int. J. Comput. Vis. 38(3), 199–218 (2000). [CrossRef]  

25. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

26. C. Zhou, T. Tian, C. Gao, W. Gong, and L. Song, “Multi-resolution progressive computational ghost imaging,” J. Opt. 21(5), 055702 (2019). [CrossRef]  

27. M. Aßmann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3(1), 1545 (2013). [CrossRef]  

28. W. Yu, M. Li, X. Yao, X. Liu, L. Wu, and G. Zhai, “Adaptive compressive ghost imaging based on wavelet trees and sparse representation,” Opt. Express 22(6), 7133–7144 (2014). [CrossRef]  

29. F. Rousset, N. Ducros, A. Farina, G. Valentini, C. D’Andrea, and F. Peyrin, “Adaptive basis scan by wavelet prediction for single-pixel imaging,” IEEE Trans. Comput. Imaging 3(1), 36–46 (2017). [CrossRef]  

30. M. Alemohammad, J. R. Stroud, B. T. Bosworth, and M. A. Foster, “High-speed all-optical Haar wavelet transform for real-time image compression,” Opt. Express 25(9), 9802–9811 (2017). [CrossRef]  

31. G. Strang, “Wavelet transforms versus Fourier transforms,” Bull. Amer. Math. Soc. 28(2), 288–306 (1993). [CrossRef]  

32. G. Strang and T. Nguyen, Wavelets and filter banks (SIAM, 1996).
