3D quantum ghost imaging

Abstract

We present current results of a novel, to the best of our knowledge, type of setup for quantum ghost imaging based on asynchronous single photon timing using single photon avalanche diode (SPAD) detectors, first presented in [Appl. Opt. 60, F66 (2021)]. The scheme enables photon pairing without fixed delays and thus overcomes some limitations of the widely used heralded setups for quantum ghost imaging [Nat. Commun. 6, 5913 (2015)]. In particular, it allows three-dimensional (3D) imaging by direct time-of-flight methods, the first demonstration of which is shown here. To our knowledge, it is also the first demonstration of 3D quantum ghost imaging altogether.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Quantum ghost imaging (QGI) is a novel imaging technique that separates an object’s illumination from its image acquisition; it was first realized by Pittman et al. [1]. Since its discovery, it has gathered increasing interest due to the minimal amount of illumination required, allowing image acquisition with less than one photon per pixel, and due to the possibility of spectrally separating imaging and illumination, allowing imaging at a given wavelength without the need for a camera operating in that regime [2]. Beyond this, there are some currently mostly theorized advantages over classical illumination schemes, which could be investigated further using this imaging scheme [3–5]. While some of these advantages are shared by the related classical ghost imaging or compressive sensing schemes using structured illumination, others stem from the exploitation of single photon correlations, maximizing the information content of individual photons [2].

QGI is based on creating two entangled photons, usually using spontaneous parametric downconversion (SPDC), and exploiting their temporal and spatial correlations in order to reconstruct the spatial properties of one photon from the measurement of its entangled partner. Owing to the nonlinear nature of SPDC, the entangled photons share their moment and location of creation, while their momenta are anti-correlated due to momentum conservation [6]. After their creation, they are separated, usually using either polarization or spectral filters. The partner in the illumination arm, here called the idler, is imaged onto a scene and detected by a single-pixel bucket detector, while the imaging photon, here called the signal, is imaged onto a spatially resolving detector. Due to the spatial correlation of the signal and idler beams arising from the correlations of the entangled photons, an image of the scene can be extracted by matching the spatially detected signal photons with their entangled idler partners. In order to correctly identify the paired photons, QGI uses their temporal coincidence, originating from their simultaneous moment of creation, as a filter mechanism. Using this coincidence detection combined with the spatial correlation, one is able to image with single photons, both signal and idler, and without sophisticated camera technology in the spectrum of interest.

Most current setups use a heralding scheme in order to match the entangled photons, as shown by Morris et al. [7]. Here the detection of the interacting photon triggers a camera that spatially detects the imaging photon. Due to the detection and trigger delay inherent in this system, the imaging photon has to be delayed so that it can be detected within the camera’s measurement time frame. However, in order to reconstruct an image of the scene, one additionally has to maintain the spatial correlation of the photon pair by carefully imaging the idler photons onto the scene in the same image plane as the imaging photon is imaged onto the detector. Thus, the imaging arm has to contain an image preserving delay line, which is one of the main disadvantages of this scheme. Since the detection of the interacting photon triggers the detection of the imaging photon, the delay has to be at least as long as its time of flight (ToF), making image preservation difficult for longer distances. Furthermore, the image preservation constraints complicate three-dimensional (3D) imaging since, in order to obtain 3D information, the delay line would also have to be adaptable. As a result of both the delay line and the limited depth resolution achievable with gated cameras, current QGI setups using synchronized detectors are not capable of efficiently realizing either remote sensing or 3D imaging.

We implemented a novel system by replacing the widely used ICCD cameras with a time-tagging single photon camera, in this work an array of single photon avalanche diodes (SPADs) [3]. SPAD arrays allow the direct measurement of single photons with timing resolutions in the ps range and do not rely on indirect detection like image intensifiers or similar technologies. In order to realize a time resolution in SPAD cameras, dedicated timing circuitry, so-called time-to-digital converters (TDCs), has to be integrated for every row, column, or even individual pixel, depending on the application and constraints. For this work, we use a SPAD array with an individual TDC for every pixel to enable both temporal and spatial single photon detection. Using this scheme, the detection of the signal photon is decoupled from the idler detection and can be done independently. The matching of entangled pairs is done after detection by timestamp comparison of the signal and idler detections. While uncorrelated photons have random delays relative to each other, entangled photons have a well-defined delay, determined by the ToF of the two photons. Since the camera is fixed in space, the delay depends only on the ToF of the idler photon and can be translated into depth information of the scene under illumination. Here we show our first 3D measurements using this asynchronous detection scheme, which to our knowledge is also the first realization of 3D QGI.
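
To illustrate the pairing step, the following is a minimal sketch (not the authors' code) of matching asynchronously recorded detections on a shared timebase: all signal–idler time differences within a search window are histogrammed, and entangled pairs show up as a peak at the delay set by the idler's time of flight, while uncorrelated detections form a flat background. Function and variable names, the window size, and the bin width are illustrative assumptions.

```python
# Minimal sketch of asynchronous coincidence evaluation (illustrative only).
import numpy as np

def coincidence_histogram(idler_t_ns, signal_t_ns, window_ns=50.0, bin_ns=0.21):
    """Histogram all idler-signal time differences within +/- window_ns.

    idler_t_ns, signal_t_ns: detection timestamps on a common timebase (ns).
    """
    idler_t_ns = np.sort(np.asarray(idler_t_ns, dtype=float))
    diffs = []
    for t in np.asarray(signal_t_ns, dtype=float):
        # Only idler detections close in time can be the entangled partner.
        lo = np.searchsorted(idler_t_ns, t - window_ns)
        hi = np.searchsorted(idler_t_ns, t + window_ns)
        diffs.extend(idler_t_ns[lo:hi] - t)
    bins = np.arange(-window_ns, window_ns + bin_ns, bin_ns)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges  # entangled pairs: peak; accidentals: flat floor
```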

2. SPAD DETECTORS

All photon detection in our scheme is based on SPADs: low-noise avalanche photodiodes operated slightly above their $p {-} n$ junction breakdown voltage. This allows a single incident photon to trigger an electron avalanche, amplifying the single detection to a measurable signal [8]. For this paper, we used two different SPADs: a commercial, fiber-coupled, single-pixel SPAD developed for communication purposes, and a spatially resolving SPAD array detector, which works as a camera. SPAD arrays have been widely investigated in the last 15 years, with a trend toward smaller pixel pitch and larger arrays. Megapixel sensors have been fabricated with a single-bit time-gated counter in each pixel [9], while arrays with per-pixel timestamping capabilities are so far limited in size to a few tens of kilopixels due to the relatively large pixel size [10,11]. The SPAD camera used in the setup was developed by Fondazione Bruno Kessler (FBK) and tailored to quantum imaging applications, in particular super-resolution optical quantum microscopes based on entangled photons [12]. In order to extract the spatiotemporal correlations existing among groups of photons, the SPAD camera consists of ${{32}} \times {{32}}\;{\rm{pixels}}$, each of which has a dedicated, compact TDC. The pixel readout time depends on the number of detections within a frame, and therefore on the illumination, due to the dedicated zero-suppression readout mechanisms implemented on-chip, namely the row-skipping method (in case of empty rows) and the frame-skipping method (in case of frames with a number of detections below a user-defined threshold). In most cases, however, the readout time is on the order of microseconds; in this work, for example, the mean timeout was about 6.9 µs.

In terms of noise and sensitivity, the SPADs have a mean dark count rate of 600 Hz and a photon detection probability (PDP) of about 27% at 420 nm (the peak of the PDP curve) and 15% at 550 nm with an excess bias of 3.0 V [13], as well as a state-of-the-art pixel fill factor (FF) of 20% at a 45 µm pixel pitch. The overall photon detection efficiency is the product of PDP and FF. The cross talk probability is mostly caused by photons generated by the avalanche process and propagating to adjacent pixels, and it has been measured to be on the order of ${10^{- 4}}$. In terms of timestamping capabilities, the TDCs have a resolution of 210 ps and an 8-bit counter (256 time bins), resulting in a maximum observation time in a single frame of 50 ns. The peak-to-peak differential (bin-to-bin variation) and integral (cumulative variation) nonlinearities across the observation window have been measured to be $1.28 \times$ and $1.92 \times$ the TDC bin width, respectively. Being static, these non-idealities can be partly compensated in post-processing. The imager has been mounted in a dedicated module (shown in Fig. 1) consisting of a custom printed circuit board hosting the sensor and a commercial field programmable gate array (FPGA)-based board. The FPGA generates the control signals for the SPAD imager and collects and transfers the data to a host PC, where dedicated software allows the user to set the main acquisition parameters, monitor the acquired signal, and save the raw data. In addition, the FPGA implements an electronic delay line and asynchronous synchronization logic to optimize the acquisition process for ghost imaging experiments [14].
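
As a concrete illustration of the per-pixel timestamping described above, the sketch below converts raw 8-bit TDC codes (nominally 210 ps bins, i.e., a roughly 50 ns observation window) into in-frame arrival times and optionally applies a static per-bin width table, e.g., obtained from a code-density calibration, to compensate the differential nonlinearity. This is an assumed post-processing approach, not FBK's firmware or software.

```python
# Minimal sketch of TDC code -> time conversion with static DNL compensation.
import numpy as np

TDC_BIN_PS = 210.0   # nominal bin width quoted in the text
N_BINS = 256         # 8-bit counter -> ~50 ns observation window

def tdc_code_to_time_ps(codes, bin_widths_ps=None):
    """Map raw TDC codes (0..255) to in-frame arrival times in picoseconds.

    bin_widths_ps: optional per-bin width table (e.g., from a code-density
    calibration) compensating the static differential nonlinearity.
    """
    codes = np.asarray(codes, dtype=int)
    if bin_widths_ps is None:
        bin_widths_ps = np.full(N_BINS, TDC_BIN_PS)
    # Arrival time = sum of the widths of all preceding bins + half a bin.
    edges_ps = np.concatenate(([0.0], np.cumsum(bin_widths_ps)))
    return edges_ps[codes] + 0.5 * bin_widths_ps[codes]
```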

Fig. 1. FBK SuperEllen detector. The SPAD image sensor module used in this work, consisting of a top printed circuit board hosting the sensor and a bottom board equipped with an FPGA and a USB interface.

3. BASIC SETUP

The two setups used within this work are shown in Fig. 2, while the underlying principle of asynchronous QGI is described in more detail in [15]. An entangled pair source consisting of a laser and a nonlinear crystal continuously produces photon pairs, each consisting of an IR and a VIS photon. The VIS photon is directly detected spatially and temporally, while the IR photon interacts with a scene and is then detected temporally. Comparing the times of arrival (ToA) of all detections reveals a distinct temporal delay between entangled partners, whose position depends solely on the ToF of the idler photons. Due to the spatial correlation of the entangled pairs, the IR photon distribution (i.e., an image of the scene) can be extracted directly from the spatial distribution of the entangled VIS photons. Further, by analyzing the ToF of the idler, depth information can be obtained for every pixel, allowing 3D imaging.

Fig. 2. Setups used for 3D imaging. (a) Michelson setup. The idler photons are split into two arms and reflected back by mirrors primed for imaging. The difference in ToF is achieved by different arm lengths and, thus, different optical path lengths. This scheme was chosen to enable 3D imaging on a single optical table for improved fiber coupling of the IR detector and simplified determination of path and time delays. (b) Free space setup. This setup was chosen to demonstrate the 3D capabilities of the scheme in a more application-oriented environment. For simplicity and clarity, the reflection off the primed beam splitter is omitted in the sketch, although it is also coupled into the IR detector. The setup had to be constructed over two optical tables to increase the spatial overlap while decreasing the angular offset of both reflected beams at the input fiber coupler of the IR-SPAD in order to achieve good incoupling.

In order to show the 3D capabilities of the setup, two scenes to be imaged were designed as shown in Fig. 2. One resembles a Michelson interferometer, hence dubbed the “Michelson setup,” and uses a 50:50 beam splitter to split the IR photons into two arms of different lengths, each primed with a dedicated transmission mask. The emission is afterward superimposed again on the beam splitter and (partly) detected. This setup was chosen to show that both images can be clearly distinguished, even for the same pixel. It was further used to test and improve the timing calibration of the setup since the optical path lengths can easily be measured and adapted. The second setup, dubbed the “free space setup,” is closer to the envisioned application, imitating a 3D scene with multiple objects to be imaged. It uses a 30:70 beam splitter (BS) in order to provide similar photon counts from the BS and from the mirror placed behind it. The IR photons are first directed at the beam splitter, with 30% of them being reflected directly onto the IR-SPAD. The transmitted photons hit the mirror placed behind the beam splitter, (partially) pass through the beam splitter again, and also hit the IR-SPAD. This setup was designed both to show a more realistic scenario and to test our current limitations, e.g., regarding fiber coupling.

As a source of entangled photons, we use SPDC, an optical nonlinear process based on second-order nonlinearities in an optical medium, commonly a crystal. Due to the phase-matching conditions resulting from energy and momentum conservation, this effect can usually only be exploited in birefringent media, whose refractive index depends on the polarization of light. Nonlinear crystals are of special interest for SPDC due to the possibility of crystal poling, a technique that allows the orientation of the crystal lattice to be tailored with sub-µm precision. This allows crystals with embedded periodic structures to be designed and manufactured, enabling quasi-phase matching and highly efficient collinear SPDC with tailored output spectra. Such crystals are currently the most adaptable and efficient pair sources; here we used a periodically poled KTP (ppKTP) crystal of 1 mm height, 2 mm length, and 6 mm width, of which only a 2 mm wide strip could be poled due to technical limitations. The poling period was chosen to support highly efficient collinear type-0 phase matching at 550 nm (signal) and 1550 nm (idler) when pumping the crystal with a 405 nm laser. These wavelengths were chosen in order to exploit mature laser and detector technology [405 nm is the wavelength used in Blu-ray, and 1550 nm is a common wavelength for (quantum) telecommunications], while ensuring good detection efficiency of the SPAD camera and at the same time demonstrating the capability of quantum sensing to separate illumination and imaging wavelengths.

In order to use SPDC for imaging, an unambiguous spatial correlation is needed, which can be achieved in two principal ways. One can use a collimated beam and exploit the resulting spatial correlation directly, or one can focus the pump laser into the crystal and use the resulting momentum anti-correlation of the photons. For this work, the focused scheme was used since efficient poled crystals are currently only available with small apertures, which strongly limits the resolution achievable with a collimated beam. The pump laser is focused into the SPDC crystal in order to obtain plane phase fronts within the crystal, which leads to an unambiguous momentum anti-correlation of the photon pairs. The focus properties have to be carefully maintained; otherwise, the correlation will be “washed out” and the image blurred, similar to the effects shown in [16].

Since the entangled photons are spectrally well separated, we use dichroic mirrors to split them into the signal (550 nm) and idler (1550 nm) arms. The idler photons are then imaged onto the scene, and photons scattered by it are collected by a single pixel single photon detector (ID230, with a 62.5 µm GRIN multimode fiber). In the signal arm, the residual pump light is removed using optical filters and a beam dump. The signal photons are then imaged onto the SPAD camera in the image plane corresponding to the plane in which the idler is imaged onto the scene, thus preserving the spatial correlations. This is achieved by using the same aspheric lenses in both arms and by fine-tuning their positions according to their focal lengths in the IR and VIS regimes.

Both the IR and VIS detections are time-tagged by shared time-correlated single photon counting (TCSPC) electronics, creating a common timebase and allowing us to reference the detections to one another. While the idler detection can be timestamped directly by the TCSPC electronics, the signal timestamp is obtained by registering the camera’s measurement trigger, which marks the start of each measurement window, and combining it with the relative timestamp provided by the SPAD array, referenced to the start of that window. Comparing the time differences of the VIS and IR detections enables coincidence measurement, resulting in a peak in the number of detections at a specific temporal offset. Since the path length of the signal arm is fixed, the position of this temporal offset depends solely on the idler’s path length and, thus, on the depth of the part of the scene the idler photons interact with. This, together with the spatial information extracted from the camera detections within this peak, allows a full 3D reconstruction of the scene under illumination. The minimum depth resolution of the setup is set by the coarsest temporal resolution in the detection chain, which for this setup is that of the SPAD array at roughly 208 ps, corresponding to a depth resolution of about 3 cm in air. However, this is only a lower limit since the actual resolution further depends on the timing properties of the IR-SPAD and the TCSPC electronics as well as the stability of the array’s TDCs.
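
The following sketch, with assumed names and units, summarizes the timestamp combination and the delay-to-depth conversion described above: the absolute signal timestamp is the frame trigger time plus the in-frame TDC time, and the coincidence delay maps to depth via the round-trip time of flight of the idler. With the roughly 208 ps timing resolution this reproduces the ~3 cm depth resolution limit quoted above.

```python
# Minimal sketch of timestamp combination and delay -> depth conversion.
C = 299_792_458.0  # speed of light in vacuum (m/s); n ~ 1 in air

def absolute_signal_time_ps(frame_trigger_ps, in_frame_time_ps):
    """Absolute signal timestamp = frame start (TCSPC) + in-frame TDC time."""
    return frame_trigger_ps + in_frame_time_ps

def delay_to_depth_m(delay_ps):
    """Round-trip idler time of flight -> one-way depth of the scene."""
    return 0.5 * C * delay_ps * 1e-12

print(delay_to_depth_m(208))  # ~0.031 m, i.e., the ~3 cm resolution limit
```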

The image can be improved further by estimating and subtracting part of the background noise arising from accidental, uncorrelated detections: their distribution is obtained from a non-peak section of the coincidence evaluation and scaled according to the ratio of the temporal windows. Similar systems using a timestamp comparison for coincidence detection can be found in [17,18].
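
A minimal sketch of this background estimate, under the assumption that the per-pixel counts have already been accumulated inside the coincidence window and inside an off-peak window of known width: the off-peak image is rescaled by the ratio of the window widths and subtracted. Names are illustrative.

```python
# Minimal sketch of accidental-coincidence background subtraction.
import numpy as np

def subtract_accidentals(peak_img, offpeak_img, peak_window_ns, offpeak_window_ns):
    """peak_img: per-pixel counts inside the coincidence peak window.
    offpeak_img: per-pixel counts inside an off-peak (background) window."""
    scale = peak_window_ns / offpeak_window_ns            # window-width ratio
    background = np.asarray(offpeak_img, dtype=float) * scale
    corrected = np.asarray(peak_img, dtype=float) - background
    return np.clip(corrected, 0.0, None)                  # no negative counts
```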

4. RESULTS

In order to test the 3D capabilities of the scheme, we used two test setups, one similar to a Michelson interferometer and one using a free space scene, as shown in Fig. 2. The fiber-coupled IR-SPAD (ID230, IDQ) made the use of diffuse reflections, as encountered in a real-world application, only partly feasible due to the greatly increased losses associated with the fiber-coupling constraints. Instead, we relied on mirrors and beam splitters with primed defects in order to perform imaging for both setups and to obtain good overlap and coupling efficiency for both reflections. Since the spot sizes of the VIS and IR emission were relatively large compared to the aperture of the fiber coupler and the camera’s active area, we additionally relied on beam expanders to match the emission detected by the camera to the spot coupled into the fiber and verified this by coincidence detection. These coincidence data were further used to refine the coarse timing correction described in [12] by analyzing the temporal progression of the coincidence peak for every pixel as a function of the timestamps returned by the SPAD array. This allowed the simple pixel-dependent linear timing correction factor to be determined more accurately and enabled coincidence analysis with full peak widths of about 2 ns and a FWHM of roughly 700 ps (see Figs. 3 and 4). For 3D imaging, this FWHM, an important measure for distinguishing different objects, corresponds to a depth resolution of about 10 cm.

In order to obtain the images shown in Figs. 3 and 4, we used further image processing similar to the methods shown in [15]. We first estimated the uncorrelated photon noise by obtaining the photon distribution of a non-coincidence window, weighting it according to the ratio of the window sizes, and then subtracting this distribution from the actual image. For this work, however, we dispensed with the subtraction of dark noise from the image since the amount of dark noise was very low compared to the measurement signal and its effect on the resulting image negligible. For both setups, we used an acquisition time of 2 h to obtain high contrast images with negligible noise effects; the number of coincidence photons, however, is significantly smaller for the Michelson setup than for the free space setup. This is due to the setup itself, which inherently has losses of 50% using an optimal beam splitter (50:50), and to the significantly reduced return signal caused by the transmission masks. The free space setup has, in comparison, inherent losses of only about 20%, which is roughly the amount of light reflected back out by the beam splitter (30:70) after being reflected from the mirror. With a frame measurement time of 60 ns and an average timeout of 6.9 µs, the 2 h measurement window results in an actual “active measurement time” of 1.04 min. Measurements on this time scale are expected to become achievable with a free-running SPAD camera; the frame-based camera readout is the current bottleneck regarding measurement time.
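
The per-pixel refinement of the linear timing correction can be pictured as follows (a hypothetical sketch, not the calibration code actually used): for each pixel the coincidence peak position is measured at two or more known reference delays, and a linear gain/offset correction is fitted so that all pixels place the peak at the true delay.

```python
# Hypothetical sketch of a per-pixel linear timing correction fit.
import numpy as np

def fit_pixel_timing_correction(measured_peak_ps, reference_delay_ps):
    """Fit corrected = gain * measured + offset for one pixel, using the
    coincidence peak positions measured at two or more known delays."""
    gain, offset = np.polyfit(measured_peak_ps, reference_delay_ps, 1)
    return gain, offset

# Synthetic example: a pixel whose TDC runs 2% fast with a 300 ps offset.
gain, offset = fit_pixel_timing_correction([1320.0, 3360.0], [1000.0, 3000.0])
print(gain, offset)  # ~0.98, ~-294 ps
```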

Fig. 3. Results of the Michelson setup: (a) Coincidence evaluation of the measurement. The first peak contains the image of the short arm, and the second peak contains the image of the long arm of the setup. The full peak is about 2 ns wide with a FWHM of 700 ps. This temporal resolution was not obtained from the raw data but was improved by dedicated post-processing, e.g., a compensation of the temporal drift described in [12]. (b) and (c) Images of the primed mirrors used for imaging, consisting of regular mirrors with 3D printed transmission masks glued on top: (b) the Fraunhofer logo placed in the short arm and (c) the FBK/IOSB logo in the long arm. (d) and (e) Images obtained with the setup, taken over 2 h. The images contain a total of $5.07 \times {10^4}$ and $4.68 \times {10^4}$ photons. A background noise of $3.82 \times {10^4}$ photons was estimated using a non-coincidence window, resulting in a total number of coincidence photons of (d) $1.25 \times {10^4}$ and (e) $8.59 \times {10^3}$, respectively. The IOSB label seen in (c) is only partially visible in (e) due to the limited field of view. In addition, the image shown is linearized and normalized to the largest detected value, leading to a suppression of the “I” and “S” due to the limited camera resolution and associated averaging effects.

Fig. 4. Results of the free space setup: (a) Coincidence evaluation of the measurement. The first peak comes from the reflection off the beam splitter, the second peak from the primed mirror. The peaks of this measurement also show a full width of about 2 ns with a FWHM of 700 ps, the same as for the Michelson setup shown in Fig. 3. (b) Beam splitter of the setup, primed with a roughly circular spot of black tape. Also indicated are the beam reflected directly to the detector (red) and the beam reflected by the posterior mirror and transmitted through the beam splitter to the detector (green). (c) Rear mirror, primed with a curved piece of black tape. (d) Image of the first peak, corresponding to the reflection off the beam splitter. The blocked spot is clearly visible. Furthermore, the image is slightly cut off on the left side because the incoupling of the fiber-coupled IR detector was optimized on the reflection of the rear mirror and not on the angle-shifted reflection of the beam splitter. This leads to both an overall decrease and a nonuniformity of the coupling efficiency, resulting in a nonuniform picture. (e) Image of the mirror, showing a smiley. Only the “smile” itself comes from the manipulation of the mirror. The right eye is a shadow of the beam splitter’s tape cast in the beam transmitted through the beam splitter onto the mirror, as can be seen in (d), while the left eye is a shadow of the same tape in the beam that is reflected from the mirror and transmitted through the beam splitter to be detected afterward. The background noise of the images was estimated to be $1.16 \times {10^5}$ photons for both 3.5 ns windows evaluated, while the actual coincidence photons were estimated to be $3.07 \times {10^4}$ for the first and $3.82 \times {10^4}$ for the second image.

5. OUTLOOK

In this work, we demonstrated the capability of 3D imaging by asynchronous QGI. As shown here, this scheme surpasses current heralding setups with regard to space consumption, calibration effort, and applicability to remote sensing.

Intense research in the field of SPAD image sensor development is currently focused on improving the overall signal-to-noise ratio while increasing the array size, with FBK at the forefront of these developments. In particular, SPAD-based image sensors would benefit greatly from the 3D integration of two layers of silicon, a top one including only sensing elements and a bottom one dedicated to the processing electronics. The benefits would be multifold: (i) higher FF, since the electronics would lie underneath the SPAD; (ii) decoupling of the manufacturing technology for the sensing and processing layers, which allows the SPAD performance to be optimized with limited or no compromises for the in-pixel electronics; and (iii) smaller pixel pitch, an important requirement for implementing large arrays of pixels (e.g., megapixel arrays).

Another significant limit is set by the currently implemented frame-based acquisition of SPAD-based image sensors. For example, the SPAD array used in this study operates on the basis of 50-ns-long observations and can achieve a maximum observation rate of 1 MHz, resulting in a maximum duty cycle of 5%, which is the current state of the art. Larger array implementations suffer even more from this limitation, as they generate more data per frame. For this reason, a frameless architecture with unbounded observation windows is desirable. Such a device would also fit very well into the asynchronous QGI scheme since all other devices already run continuously.
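
For reference, a short back-of-the-envelope check of the duty-cycle figures quoted in this work (the input values are taken from Sections 2 and 4; the calculation itself is ours): 50 ns observations at a 1 MHz observation rate correspond to the 5% maximum duty cycle, while the roughly 6.9 µs mean readout timeout measured here reduces the effective duty cycle to well below 1%, which is what shrinks the 2 h acquisition to roughly one minute of active measurement time.

```python
# Duty-cycle check using the numbers quoted in the text.
frame_ns = 60.0        # frame measurement time (Section 4)
timeout_us = 6.9       # mean readout timeout (Sections 2 and 4)
wall_clock_h = 2.0     # acquisition time per image

max_duty = 50e-9 * 1e6                          # 50 ns x 1 MHz = 0.05 (5%)
eff_duty = frame_ns * 1e-9 / (timeout_us * 1e-6)
active_min = wall_clock_h * 3600 * eff_duty / 60
print(max_duty, eff_duty, active_min)           # 0.05, ~0.0087, ~1.04 min
```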

The Fraunhofer IOSB will continue to work on improving the asynchronous QGI setup and its capabilities. It is currently investigating the resolution achievable with the setup and the factors influencing it, especially with regard to the manipulation of the photon source. Special interest is placed on the integration of new detectors and technologies, such as other time-tagging cameras [19] or free-space-coupled IR detectors, which became commercially available last year, due to the expected improvement in detection and coupling efficiency, especially in the remote sensing regime of interest. The exploitation of the spectral correlation of the photons is also very promising, possibly enabling single photon hyperspectral imaging and the simultaneous acquisition of multiple spectral bands. Further work will focus on the comparison of the setup with classical systems such as compressive sensing and on the possible quantum benefits as well as restrictions [3].

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, R3429–R3432 (1995).

2. M. J. Padgett and R. W. Boyd, “An introduction to ghost imaging: quantum and classical,” Philos. Trans. R. Soc. A 375, 20160233 (2017).

3. D. Walter, C. Pitsch, G. Paunescu, and P. Lutzmann, “Quantum ghost imaging for remote sensing,” Proc. SPIE 11134, 112–118 (2019).

4. R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, “Quantum and classical coincidence imaging,” Phys. Rev. Lett. 92, 033601 (2004).

5. P. B. Dixon, G. A. Howland, K. W. C. Chan, C. O’Sullivan-Hale, B. Rodenburg, N. D. Hardy, J. H. Shapiro, D. S. Simon, A. V. Sergienko, R. W. Boyd, and J. C. Howell, “Quantum ghost imaging through turbulence,” Phys. Rev. A 83, 051803 (2011).

6. D. C. Burnham and D. L. Weinberg, “Observation of simultaneity in parametric production of optical photon pairs,” Phys. Rev. Lett. 25, 84–87 (1970).

7. P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6, 5913 (2015).

8. S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, “Avalanche photodiodes and quenching circuits for single-photon detection,” Appl. Opt. 35, 1956–1976 (1996).

9. K. Morimoto, A. Ardelean, M.-L. Wu, A. C. Ulku, I. M. Antolovic, C. Bruschini, and E. Charbon, “Megapixel time-gated SPAD image sensor for 2D and 3D imaging applications,” Optica 7, 346–354 (2020).

10. R. K. Henderson, N. Johnston, F. M. D. Rocca, H. Chen, D. D.-U. Li, G. Hungerford, R. Hirsch, D. McLoskey, P. Yip, and D. J. S. Birch, “A 192 × 128 time correlated SPAD image sensor in 40 nm CMOS technology,” IEEE J. Solid-State Circuits 54, 1907–1916 (2019).

11. L. Gasparini, M. M. Garcia, M. Zarghami, A. Stefanov, B. Eckmann, and M. Perenzoni, “A reconfigurable 224 × 272-pixel single-photon image sensor for photon timestamping, counting and binary imaging at 30.0-µm pitch in 110 nm CIS technology,” in IEEE 48th European Solid State Circuits Conference (ESSCIRC) (IEEE, 2022).

12. M. Zarghami, L. Gasparini, L. Parmesan, M. Moreno-Garcia, A. Stefanov, B. Bessire, M. Unternährer, and M. Perenzoni, “A 32 × 32-pixel CMOS imager for quantum optics with per-SPAD TDC, 19.48% fill-factor in a 44.64-µm pitch reaching 1-MHz observation rate,” IEEE J. Solid-State Circuits 55, 2819–2830 (2020).

13. H. Xu, L. Pancheri, G.-F. D. Betta, and D. Stoppa, “Design and characterization of a p+/n-well SPAD array in 150 nm CMOS process,” Opt. Express 25, 12765–12778 (2017).

14. V. F. Gili, D. Dupish, A. Vega, M. Gandola, E. Manuzzato, M. Perenzoni, L. Gasparini, T. Pertsch, and F. Setzpfandt, “Sub-minute quantum ghost imaging in the infrared enabled by a ‘looking back’ SPAD array,” arXiv:2211.12913 (2022).

15. C. Pitsch, D. Walter, S. Grosse, W. Brockherde, H. Bürsing, and M. Eichhorn, “Quantum ghost imaging using asynchronous detection,” Appl. Opt. 60, F66–F70 (2021).

16. D. R. Guido and A. B. U’Ren, “Study of the effect of pump focusing on the performance of ghost imaging and ghost diffraction, based on spontaneous parametric downconversion,” Opt. Commun. 285, 1269–1274 (2012).

17. G. Christian, C. Akers, D. Connolly, J. Fallis, D. Hutcheon, K. Olchanski, and C. Ruiz, “Design and commissioning of a timestamp-based data acquisition system for the DRAGON recoil mass separator,” Eur. Phys. J. A 50, 75 (2014).

18. M.-A. Tetrault, J. F. Oliver, M. Bergeron, R. Lecomte, and R. Fontaine, “Real time coincidence detection engine for high count rate timestamp based PET,” IEEE Trans. Nucl. Sci. 57, 117–124 (2010).

19. A. Nomerotski, “Imaging and time stamping of photons with nanosecond resolution in Timepix based optical cameras,” Nucl. Instrum. Methods Phys. Res. A 937, 26–30 (2019).
