A spectral camera based on ghost imaging via sparsity constraints (GISC) acquires a three-dimensional (3D) spatial-spectral data cube of the target through a two-dimensional (2D) detector in a single snapshot. However, the spectral and spatial resolution are interrelated, because both are modulated by the same spatial random phase modulator. In this paper, we theoretically and experimentally demonstrate a system that equips the GISC spectral camera with a flat-field grating to disperse the light fields before the spatial random phase modulator, thereby decoupling the spatial and spectral resolution. By theoretical derivation of the imaging process we obtain a spectral resolution of 1 nm and a spatial resolution of 50 μm for the new system, which are verified by experiment. The new system not only modulates the spatial and spectral resolution separately, but also opens the possibility of optimizing the light field fluctuations of different wavelengths according to the imaging scene.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Spectral imaging is a multidimensional data acquisition technology combining spectroscopy and image analysis, which captures a three-dimensional (3D) spectral data-cube (x, y, λ) containing information about the imaging scene. With both spatial and spectral resolving capabilities, spectral imaging is extremely effective and vital for surveying scenes and extracting detailed information. Conventional spectral imaging, with its point-to-point imaging mode, requires time-scanning along either the spatial or the wavelength axis, since the 3D spectral data-cube is detected slice-by-slice using a two-dimensional (2D) detector [1,2]. Hence, its application is restricted in fields where ultra-fast imaging is needed. Recently, remarkable snapshot spectral imaging techniques have been widely developed to acquire a 3D spectral data-cube in a single exposure [4–6], such as the field-split imaging approach, which splits the field-of-view (FoV) and then spreads the 3D information onto a 2D plane with spectroscopic devices, and the computed tomography imaging approach, which utilizes an orthogonal grating or other spectroscopic devices to tomographically project the 3D information onto a 2D plane and then numerically reconstructs the 3D data-cube [8,9]. However, because the reconstruction algorithms of the above approaches do not exploit the correlation between the pixels and wavelengths of the image, their information acquisition efficiency falls below the Shannon limit [10,11]. Another important category of snapshot spectral imaging is coded aperture spectral imaging, such as the coded aperture snapshot spectral imager (CASSI), which utilizes a binary-coded mask and an equilateral prism to modulate the light fields and captures a spectral image with a single-shot 2D measurement [12–14]. By exploiting the compressive sampling principle, such spectral imagers improve the sampling efficiency.
Unlike other existing imaging systems, a spectral camera based on ghost imaging via sparsity constraints (GISC) was proposed [15,16]. The GISC spectral camera modulates the image over the whole spectral band using a spatial random phase modulator [17,18], which enables a 3D spectral data-cube to be captured through a single-shot 2D measurement. Furthermore, combined with compressive sensing (CS) theory [19–21], the GISC spectral camera can obtain the information at less than the Nyquist rate, which improves the utilization efficiency of the optical channel capacity and realizes compressive sensing of the information during the image acquisition process.
The previously proposed GISC spectral camera is incapable of modulating the spatial and spectral resolution independently. In order to decouple them, one can use spectroscopic devices to disperse the light fields after the modulator, as in many other snapshot spectral imaging systems [4,5]. However, this limits the possibility of optimizing the light field fluctuations of different wavelengths according to the imaging scene [22–25]. This paper presents an improved system that utilizes a flat-field grating [26,27] to spatially disperse the light fields of different wavelengths first. The dispersed light field then changes from thermal light into spatially fluctuating pseudo-thermal light through a spatial random phase modulator [17,18], generating uncorrelated speckles for different wavelengths and positions, and is recorded by a charge-coupled device (CCD) in a single exposure. Besides retaining the advantages of the previous GISC spectral camera, the new system can regulate the spatial and spectral resolution separately, through the spatial random phase modulator and the flat-field grating respectively, and achieves a much improved spectral resolution.
2.1. Schematic and system model
The basic schematic of the GISC hyperspectral camera based on a flat-field grating is shown in Fig. 1. The system is composed of four modules: (1) an imaging module (an objective lens), which projects the scene onto the first imaging plane ‘b’; (2) a dispersion module (a flat-field grating), which spatially disperses the light fields at different wavelengths and images them onto plane ‘c’; (3) a modulation module (a spatial random phase modulator, microscope objective and CCD), which modulates the light fields of different wavelengths and positions to generate uncorrelated speckles and then magnifies the speckles in the plane ‘d’ to be recorded by the CCD; (4) a demodulation module, which recovers the target image via an optimization algorithm.
Denoting by Ib (x0, y0, λ) the light intensity distribution of a monochromatic point source in the first imaging plane ‘b’, and by Id (x2, y2) the whole light intensity distribution over all wavelengths in the plane ‘d’, we have
To realize one-armed ghost imaging through calibrating the spatial intensity fluctuation of the pre-determined reference arm, monochromatic point sources at different pixels and wavelengths are used in the first imaging plane ‘b’ to acquire the incoherent intensity impulse response functions of the system before the imaging process. Therefore, a coherent monochromatic point source at pixel (x0′, y0′) with wavelength λ′, expressed as Ibr (x0, y0, λ; x0′, y0′, λ′) = δ(x0 − x0′, y0 − y0′, λ − λ′), is applied to illuminate the flat-field grating and spatial random phase modulator. Then the light intensity in the plane ‘d’, Idr (x2, y2; x0′, y0′, λ′), is described as
During the imaging process, the target image Ti (xi, yi, λ) in the plane ‘a’ is projected onto the first imaging plane ‘b’, where the light intensity may be denoted as T0 (x0, y0, λ). Then, according to Eqs. (1) and (2), the light intensity fluctuation Idt (x2, y2) in the speckle plane ‘d’ is given by Eq. (3). Following ghost imaging theory [28], Eq. (6) can be rewritten in correlation form.
In order to calculate the normalized second-order correlation function, we assume that the flat-field grating is large enough and that the groove profile is rectangular; the transmission function of the grating can therefore be expressed as in [16], and the corresponding correlation functions of the light fields are given by Eqs. (8)–(11).
According to flat-field grating diffraction theory, which combines the characteristics of concave mirrors with gratings, and under the Fresnel diffraction theorem, the light field in the diffractive plane ‘d’ originating from a monochromatic point source at pixel (x0′, y0′) with wavelength λ′ in the first imaging plane can be computed as Eqs. (12) and (13), where the quantities shown in Fig. 2 are respectively the distance from the diffractive plane to the centre of the grating and the included angle between the diffractive plane and the cross section of the grating. Furthermore, the first-order diffraction of the grating is selected during imaging, leading to n = 1 in Eq. (12). Substituting Eq. (13) into Eq. (8) yields Eq. (14).
From Eq. (11), 〈tp (Xm, ym, λ) tp* (Xm′, ym′, λ′)〉 can be expressed as Eq. (15), which can be further simplified into Eq. (16). Substituting Eq. (16) into Eq. (14), the normalized second-order correlation function (x0, y0, λ; x0′, y0′, λ′) is given by Eq. (17). From Eq. (18), the image of the target T(x0′, y0′, λ′) can be separated from the correlation function of intensity fluctuation ΔG(2) ((x2, y2)dr, (x2, y2)dt), as explained in ghost imaging theory.
Moreover, the spectral and spatial resolution of the system are respectively defined as the wavelength difference and the spatial distance determined by the normalized second-order correlation function (x0, y0, λ; x0′, y0′, λ′), whose width hence represents the resolution of the system. Compared with the previous system, the correlation function in Eq. (17) is related to βH and deff of the flat-field grating in the spectral dimension, which makes it possible to optimize the spectral resolution by adjusting these parameters.
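As a numerical aside (not from the paper; the speckle frames below are synthetic stand-ins), the role of the intensity-fluctuation correlation can be illustrated directly: calibration frames whose correlation stays high belong to unresolvable source positions or wavelengths, and the separation at which the correlation falls off gives the resolution.

```python
import numpy as np

def fluctuation_corr(i1, i2):
    """Normalized second-order intensity-fluctuation correlation
    <dI1 dI2> / (std(I1) std(I2)) between two speckle frames."""
    d1 = i1 - i1.mean()
    d2 = i2 - i2.mean()
    return float((d1 * d2).mean() / (i1.std() * i2.std()))

# Synthetic stand-ins for recorded speckle frames (exponential intensity
# statistics, as for fully developed speckle).
rng = np.random.default_rng(2)
a = rng.exponential(1.0, (256, 256))  # frame from one calibration source
b = rng.exponential(1.0, (256, 256))  # frame from a decorrelated source
print(fluctuation_corr(a, a))  # ~1.0 for identical frames
print(fluctuation_corr(a, b))  # near 0 for uncorrelated speckles
```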
2.2. The matrix form & the reconstruction algorithm
Equation (18) describes the reconstruction of the image using the correlation algorithm of ghost imaging and is based on ensemble statistics of the light field. Since the light field is approximately an ergodic random process, the ensemble average is equivalent to an average over the space or time domain. In the GISC hyperspectral camera, each pixel of the CCD represents a bucket detector in the test arm, which means that the correlation detection between the reference arm and the test arm in the time domain can be considered equivalent to that in the space domain. Hence, the ensemble average in Eq. (18) can be replaced by the spatial average over different pixels (x2, y2) of the CCD. At the same time, the signal sampling mode of ghost imaging is consistent with CS theory, and therefore it is also possible to reconstruct the image using CS techniques. Under the framework of CS theory, the discrete model of Eq. (3) is illustrated in Fig. 3. Thereby, the whole calibrating measurement can be regarded as acquiring incoherent intensity impulse response functions from the different 3D spectral data-cube locations. We denote the m-th speckle intensity with wavelength λk recorded by the CCD as Im (M = l × n pixels), and then reshape it into a column vector. After L × N measurements, the whole measurement matrix A (M × (N × L)) is obtained by stacking these column vectors.
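The assembly of the measurement matrix described above can be sketched as follows (illustrative only; array sizes are toy values, and the function name is ours):

```python
import numpy as np

def build_measurement_matrix(frames):
    """Reshape each calibration speckle frame (one per spatial pixel and
    spectral channel of the 3D data-cube) into a column vector and stack
    the columns into the measurement matrix A of shape (M, N*L)."""
    num, l, n = frames.shape          # num = N*L frames of l x n pixels
    return frames.reshape(num, l * n).T

# Toy example: 6 calibration frames of 4 x 5 pixels, so M = 20 and N*L = 6.
rng = np.random.default_rng(0)
frames = rng.random((6, 4, 5))
A = build_measurement_matrix(frames)
print(A.shape)  # (20, 6)
```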
During the imaging process, denote the unknown object image as a column vector XL×N = (x1, x2, · · · , xL×N)T, and the object intensity distribution recorded by the same CCD as It (M = l × n pixels), which is then reshaped into YM = (y1, y2, · · · , yM)T. We may express this measurement process in matrix form as Y = AX.
To reconstruct the 3D spectral object from the detected 2D signal, Eq. (19) must be solved. The image of a spectral object has the characteristics of spatial and spectral correlation, so its reconstruction can be posed as an l1-regularized minimization problem with a nonnegativity constraint. In our camera, an efficient TV-RANK algorithm is applied.
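The TV-RANK solver itself is not reproduced here; as a rough illustration of the l1-regularized nonnegative reconstruction that Eq. (19) calls for, the following proximal-gradient (ISTA) sketch recovers a sparse nonnegative signal from compressive measurements (all parameters are illustrative, not the paper's):

```python
import numpy as np

def ista_nonneg(A, y, lam=0.01, steps=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 subject to x >= 0 by
    proximal gradient descent; a simplified stand-in for TV-RANK."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - A.T @ (A @ x - y) / L     # gradient step
        x = np.maximum(x - lam / L, 0.0)  # soft-threshold + nonnegativity
    return x

# Toy problem: 40 measurements of a 100-element, 5-sparse nonnegative signal.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = ista_nonneg(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```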
3. Experimental results
3.1. Experimental setup
Figure 4 depicts the experimental setup of the GISC hyperspectral camera based on a flat-field grating, which adds a flat-field grating in front of the spatial random phase modulator to spatially disperse the light fields of different wavelengths.
The objective lens with a focal length of f = 150 mm projects the unknown target image onto the first imaging plane. A 536–545 nm band-pass filter located behind the objective lens (Tamron AF 70–300 mm f/4–5.6) ensures that only the spectral data-cube corresponding to the 536–545 nm band is measured by the system. A beam splitter with a 1:1 splitting ratio in front of the first imaging plane splits the light field into two paths, and a surveillance camera CCD1 (AVT Stingray F-504C, pixel size 3.45 μm × 3.45 μm) records the conventional image of one path as a reference. A flat-field grating (CELO GF106, focal length f = 70 mm, dispersion coefficient D(λ) = 30 nm/mm) disperses and images the light fields of different wavelengths onto the diffractive surface, and a spatial random phase modulator (SIGMAKOKI, DFSQ1-30C02-1000) modulates the light fields of different wavelengths to generate uncorrelated speckles. Then CCD2 (Apogee, pixel size 13 μm × 13 μm) records, in a single exposure, the intensity distribution of the speckles amplified by a microscope objective with magnification β = 10.
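A quick check (our arithmetic, not stated in the paper) of what the grating's dispersion coefficient implies: with D(λ) = 30 nm/mm, adjacent 1 nm spectral channels are displaced by roughly 33 μm on the diffractive surface before modulation, so each channel illuminates a laterally shifted region of the phase modulator.

```python
# Lateral displacement on the diffractive surface per spectral channel,
# implied by the flat-field grating's dispersion coefficient.
D = 30.0                           # dispersion: nm of wavelength per mm
d_lambda = 1.0                     # spectral channel spacing in nm
shift_um = d_lambda / D * 1000.0   # lateral shift in micrometres
print(f"{shift_um:.1f} um per 1 nm channel")
```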
Calibration of the measurement matrix is done with the calibration setup shown in the blue box of Fig. 4. A xenon lamp produces thermal light to illuminate the entrance of a monochromator (WDG30-Z), which generates quasi-monochromatic light of different wavelengths. The quasi-monochromatic light is then coupled into an optical fiber with a diameter of 20 μm to form a quasi-monochromatic point light source. The output end of the optical fiber is placed in the equivalent plane, which is located at the focal plane of a collimating lens (Olympus M.ZUIKO AF 40-150 mm). The objective lens of the system collects the parallel light from the collimating lens. During the calibration, the first imaging plane is divided into 113 × 113 pixels, determined by the spatial resolution in Eq. (14), and the number of spectral channels is 10, from 536 nm to 545 nm at intervals of 1 nm.
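From the figures quoted above, the size of the calibration task follows directly (a tally we add for illustration):

```python
# Number of calibration measurements = columns of the measurement matrix A.
n_spatial = 113 * 113   # pixels in the first imaging plane
n_spectral = 10         # spectral channels, 536-545 nm at 1 nm steps
n_columns = n_spatial * n_spectral
print(n_columns)  # one incoherent impulse response per data-cube location
```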
3.2. Imaging results
After obtaining the measurement matrix, the normalized second-order correlation function in Eq. (14) is calculated from the experimental data. Figure 5(a) shows the comparison between the theoretical and experimental results.
In order to verify the spectral resolution of the GISC hyperspectral camera based on a flat-field grating, given in Eq. (14), two point light sources generated by optical fibers with diameters of 20 μm, at central wavelengths of 539 nm and 540 nm, were used, as shown in Fig. 5(b); the distance between them was 40 μm. The modulated target intensity distribution Y was detected by CCD2. From the reconstructed result, we chose the middle row of the reconstructed image and arranged it by wavelength, as shown in Fig. 5(c), which further illustrates that the spectral resolution of the camera is consistent with the theoretical result shown in Fig. 5(a).
After the calibration process, the calibration setup shown in the blue box of Fig. 4 is replaced by a real object to perform imaging experiments. Our experiment used a transmissive institute-logo object and a colored toy illuminated with thermal light, as displayed in Fig. 6, where Fig. 6(a) is acquired by a conventional camera and Figs. 6(b) and 6(c) are detected by CCD1 of the system. Figure 6(d) shows the images reconstructed by the TV-RANK algorithm. In the reconstruction process, the sampling rate of the 3D data-cube is 60%, and the number of iterations of the algorithm is 160. Compared with the images in Figs. 6(a)–6(c), the reconstructed image is consistent with the actual object, which verifies the application value of the GISC hyperspectral camera based on a flat-field grating.
In conclusion, we demonstrated a new optical system, a GISC spectral camera with a flat-field grating in front of a spatial random phase modulator. The flat-field grating spatially disperses the light fields, which translates the illuminated position on the modulator by a certain distance for each wavelength, hence decoupling the spectral resolution from the spatial resolution and improving the spectral resolution. We theoretically and experimentally demonstrated a spectral resolution of about 1 nm and completed imaging experiments on spectral objects. The new system provides a basis for optimizing the measurement matrix for different wavelengths and facilitating achromatic imaging by designing the spatial random phase modulator according to the imaging scene, to improve the reconstruction quality of images in the future. As a new optical imaging system, the GISC hyperspectral camera based on a flat-field grating has the potential to be applied in ultra-fast measurement, atmospheric remote sensing imaging [31,32], and biological microscopy imaging.
Funding: Hi-Tech Research and Development Program of China (2013AA122902, 2013AA122901).
References and links
1. E. Herrala, J. T. Okkonen, T. S. Hyvarinen, M. Aikio, and J. Lammasniemi, “Imaging spectrometer for process industry applications,” Proc. SPIE 2248(33), 33–40 (1994). [CrossRef]
2. R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectrometry and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS),” Remote Sens. Environ. 65(3), 227–248 (1998). [CrossRef]
4. N. A. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]
5. L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Reports 616, 1–37 (2016). [CrossRef]
6. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017). [CrossRef]
8. M. W. Kudenov, J. M. Craven-Jones, C. J. Vandervlugt, E. L. Dereniak, and R. W. Aumiller, “Faceted grating prism for a computed tomographic imaging spectrometer,” Opt. Eng. 51(4), 044002 (2012). [CrossRef]
9. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE, Bellingham, WA, 2014).
10. T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley & Sons, 2012).
11. C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J. 27(3), 379–423 (1948). [CrossRef]
12. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Spectral image estimation for coded aperture snapshot spectral imagers,” Proc. SPIE 7076, 707602 (2008). [CrossRef]
13. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Process. Mag. 31(1), 105–115 (2014). [CrossRef]
15. J. Wu, X. Shen, H. Yu, Z. Chen, Z. Tao, S. Tan, and S. Han, “Snapshot compressive imaging by phase modulation,” Acta Optica Sinica 34(10), 1011005 (2014).
16. Z. Liu, S. Tan, J. Wu, E. Li, X. Shen, and S. Han, “Spectral camera based on ghost imaging via sparsity constraints,” Sci. Rep. 6, 25718 (2016). [CrossRef]
17. M. Giglio, M. Carpineti, and A. Vailati, “Space intensity correlations in the near field of the scattered light: a direct measurement of the density correlation function g (r),” Phys. Rev. Lett. 85(7), 1416 (2000). [CrossRef] [PubMed]
18. R. Cerbino, L. Peverini, M. A. C. Potenza, A. Robert, P. Bösecke, and M. Giglio, “X-ray-scattering information obtained from near-field speckle,” Nature Phys. 4(3), 238–243 (2008). [CrossRef]
19. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]
20. E.J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006). [CrossRef]
21. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University, 2012). [CrossRef]
22. M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Signal Process. 55(12), 5695–5702 (2007). [CrossRef]
23. M. Chen, E. Li, and S. Han, “Application of multi-correlation-scale measurement matrices in ghost imaging via sparsity constraints,” Appl. Opt. 53(13), 2924–2928 (2014). [CrossRef]
24. J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process. 18(7), 1395–1408 (2009). [CrossRef]
25. X. Xu, E. Li, X. Shen, and S. Han, “Optimization of speckle patterns in ghost imaging via sparse constraints by mutual coherence minimization,” Chin. Opt. Lett. 13(7), 071101 (2015). [CrossRef]
26. J. M. Lerner, R. J. Chambers, and G. Passereau, “Flat field imaging spectroscopy using aberration corrected holographic gratings,” Proc. SPIE 268, 122–128 (1981). [CrossRef]
27. E. Sokolova, “Holographic diffraction gratings for flat-field spectrometers,” J. Mod. Optic. 47(13), 2377–2389 (2000). [CrossRef]
28. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classicalcorrelation,” Phys. Rev. Lett. 93(9), 093602 (2004). [CrossRef] [PubMed]
29. B. Luo, Z. Wen, Z. Wen, and T. Zeng, “Design of concave grating for ultraviolet-spectrum,” Spectrosc. Spect. Anal. 32(6), 1717–1721 (2012).
30. S. Tan, Z. Liu, E. Li, and S. Han, “Hyperspectral compressed sensing based on prior images constrained,” Acta Optica Sinica 35(8), 0811003 (2015). [CrossRef]
32. F. F. Sabins, Remote Sensing: Principles and Applications (Waveland, 2007).