## Abstract

A spectral camera based on ghost imaging via sparsity constraints (GISC) acquires a three-dimensional (3D) spatial-spectral data cube of the target through a two-dimensional (2D) detector in a single snapshot. However, the spectral and spatial resolution are interrelated, because both are modulated by the same spatial random phase modulator. In this paper, we theoretically and experimentally demonstrate a system that equips the GISC spectral camera with a flat-field grating to disperse the light fields before the spatial random phase modulator, thereby decoupling the spatial and spectral resolution. Through a theoretical derivation of the imaging process, we obtain a spectral resolution of 1 nm and a spatial resolution of 50 *μ*m for the new system, which are verified by experiment. The new system not only can modulate the spatial and spectral resolution separately, but also offers the possibility of optimizing the light field fluctuations of different wavelengths according to the imaging scene.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Spectral imaging is a multidimensional data acquisition technology combining spectroscopy and image analysis, which captures a three-dimensional (3D) spectral data-cube (*x*, *y*, *λ*) containing information about the imaging scene. With both spatial and spectral resolving capabilities, spectral imaging is extremely effective and vital for surveying scenes and extracting detailed information. Conventional spectral imaging, with its point-to-point imaging mode, requires time-scanning along either the spatial or the wavelength axis, since the 3D spectral data-cube is detected slice-by-slice using a two-dimensional (2D) detector [1,2]. Hence, its application is restricted in fields where ultra-fast imaging is needed [3]. Recently, remarkable snapshot spectral imaging techniques have been widely developed to acquire a 3D spectral data-cube in a single exposure [4–6], such as the field-split imaging approach, which splits the field-of-view (FoV) and then spreads the 3D information onto a 2D plane by spectroscopic devices [7], and the computed tomography imaging approach, which utilizes an orthogonal grating or other spectroscopic devices to project the 3D information tomographically onto a 2D plane and then numerically reconstructs the 3D data-cube [8,9]. However, because the reconstruction algorithms of these approaches do not exploit the correlation between pixels and wavelengths of the image, their information acquisition efficiency is lower than the Shannon limit [10,11]. Another important category of snapshot spectral imaging approaches is coded aperture spectral imaging, such as the coded aperture snapshot spectral imager (CASSI), which utilizes a binary-coded mask and an equilateral prism to modulate the light fields and captures a spectral image with a single-shot 2D measurement [12–14]. By incorporating the compressive sampling principle, such a spectral imager improves the sampling efficiency.

Unlike other existing imaging systems, a spectral camera based on ghost imaging via sparsity constraints (GISC) was proposed [15,16]. The GISC spectral camera modulates the image over the whole spectral band using a spatial random phase modulator [17,18], which enables a 3D spectral data-cube to be captured through a single-shot 2D measurement. Furthermore, by combining with compressive sensing (CS) theory [19–21], the GISC spectral camera can obtain the information at less than the Nyquist rate, which improves the utilization efficiency of the optical channel capacity and realizes compressive sensing of the information during the imaging acquisition process [16].

The previously proposed GISC spectral camera is incapable of modulating the spatial and spectral resolution independently. To decouple the spatial and spectral resolution, one could use spectroscopic devices to disperse the light fields after the modulator, as many other snapshot spectral imaging systems do [4,5]. However, this limits the possibility of optimizing the light field fluctuations of different wavelengths according to the imaging scene [22–25]. This paper presents an improved system that utilizes a flat-field grating [26,27] to spatially disperse the light fields of different wavelengths first. The dispersed light field then changes from thermal light into a spatially fluctuating pseudo-thermal light through a spatial random phase modulator [17,18], which generates uncorrelated speckles for different wavelengths and positions, and is recorded by a charge-coupled device (CCD) in a single exposure. Besides possessing the advantages of the previous GISC spectral camera, the new system can regulate the spatial and spectral resolution separately, through the spatial random phase modulator and the flat-field grating respectively, and has a much improved spectral resolution.

## 2. Theory

#### 2.1. Schematic and system model

The basic schematic of the GISC hyperspectral camera based on a flat-field grating is shown in Fig. 1. The system is composed of four modules: (1) an imaging module (an objective lens), which projects a scene onto the first imaging plane ‘b’; (2) a dispersion module (a flat-field grating), which spatially disperses the light fields at different wavelengths and images them onto plane ‘c’; (3) a modulation module (a spatial random phase modulator, a microscope objective, and a CCD), which modulates the light fields of different wavelengths and positions to generate uncorrelated speckles and then magnifies the speckles in the plane ‘d’ to be recorded by the CCD; (4) a demodulation module, which recovers the target image via an optimization algorithm.

Denoting *I*_{b} (*x*_{0}, *y*_{0}, *λ*) as the light intensity distribution of a monochromatic point source in the first imaging plane ‘b’ and *I*_{d} (*x*_{2}, *y*_{2}) as the whole light intensity distribution with all the wavelengths in the plane ‘d’, we have

where *h*_{I} (*x*_{2}, *y*_{2}; *x*_{0}, *y*_{0}, *λ*) is the incoherent intensity impulse response function of the system from plane ‘b’ to plane ‘d’, (*x*_{0}, *y*_{0}) and (*x*_{2}, *y*_{2}) are respectively the coordinates of the planes ‘b’ and ‘d’, and *λ* is the wavelength of the light field.

To realize one-armed ghost imaging through calibrating the spatial intensity fluctuation of the pre-determined reference arm, monochromatic point sources at different pixels and wavelengths are used in the first imaging plane ‘b’ to acquire the incoherent intensity impulse response functions of the system before the imaging process. Therefore, a coherent monochromatic point source at pixel (*x*_{0}*′*, *y*_{0}*′*) with wavelength *λ′*, expressed as *I*_{br} (*x*_{0}, *y*_{0}, *λ*; *x*_{0}*′*, *y*_{0}*′*, *λ′*) = *δ*(*x*_{0} − *x*_{0}*′*, *y*_{0} − *y*_{0}*′*, *λ* − *λ′*), is applied to illuminate the flat-field grating and spatial random phase modulator. Then the light intensity in the plane ‘d’, *I*_{dr} (*x*_{2}, *y*_{2}; *x*_{0}*′*, *y*_{0}*′*, *λ′*), is described as

During the imaging process, the target image *T*_{i} (*x*_{i}, *y*_{i}, *λ*) in the plane ‘a’ is projected onto the first imaging plane ‘b’, where the light intensity may be denoted as *T*_{0} (*x*_{0}, *y*_{0}, *λ*). Then according to Eqs. (1) and (2), the light intensity fluctuation *I*_{dt} (*x*_{2}, *y*_{2}) in the speckle plane ‘d’ is given by

*I*_{dt} (*x*_{2}, *y*_{2}) is a weighted integration of the pre-determined reference arm intensity distribution *I*_{dr} (*x*_{2}, *y*_{2}; *x*_{0}, *y*_{0}, *λ*). Then the second-order correlation function of the spatial fluctuation between the pre-determined reference arm and the test arm can be expressed as [28]

Substituting Eq. (3) into Eq. (4), we have

where ${g}_{{d}_{r}}^{(2)}$ (*x*_{0}*′*, *y*_{0}*′*, *λ′*; *x*_{0}, *y*_{0}, *λ*) represents the normalized second-order correlation function:

In order to calculate the normalized second-order correlation function ${g}_{{d}_{r}}^{(2)}$, we assume the flat-field grating is large enough and the groove profile is rectangular; the transmission function of the grating can therefore be expressed as [29]

where *a* and *d*_{eff} are respectively the slit width and the effective grating constant, and ⊗ denotes the operation of convolution. The transmission function and the height auto-correlation function of the spatial random phase modulator [16], at the same time, are given by

where *h*(*x*_{m}, *y*_{m}) and *h*(*x*_{m}*′*, *y*_{m}*′*) denote the height functions at different coordinates in the plane of the modulator, and *ω* and *ζ* are respectively the height standard deviation and the lateral correlation length of the modulator.

According to flat-field grating diffraction theory, which combines the characteristics of concave mirrors with gratings, and under the Fresnel diffraction approximation, the light field in the diffractive plane ‘c’ originating from a monochromatic point source in the first imaging plane at the pixel (*x*_{0}*′*, *y*_{0}*′*) with the wavelength *λ′* can be computed as

where *X*_{k} = (*r*_{H} sin *β*_{H} + *x*_{k}) sec *β*_{H}, *k* = 1, 2, ⋯, *m*, describes the x-axis coordinate transformation relation from the cross section of the flat-field grating to the diffractive plane ‘c’, and *f*_{x} = (*x*_{1} + *x*_{0}*′z*_{1}/*z*_{0}). *r*_{H} and *β*_{H}, as shown in Fig. 2, are respectively the distance from the diffractive plane to the centre of the grating and the included angle between the diffractive plane and the cross section of the grating. Furthermore, the first-order diffraction of the grating is selected during imaging, leading to *n* = 1 in Eq. (12). Substituting Eq. (13) into Eq. (8) yields

From Eq. (11), 〈*t*_{p} (*X*_{m}, *y*_{m}, *λ*) *t**_{p} (*X*_{m}*′*, *y*_{m}*′*, *λ′*)〉 can be expressed as

where *M̃*_{H(Xm, ym)H(Xm′, ym′)} is the characteristic function with respect to the height functions *H*(*X*_{m}, *y*_{m}) and *H*(*X*_{m}*′*, *y*_{m}*′*). Since the surface characteristics of the spatial phase modulator satisfy the Gaussian distribution, the characteristic function in Eq. (15) can be expressed as

Then the normalized second-order correlation function ${g}_{{d}_{r}}^{(2)}$ (*x*_{0}, *y*_{0}, *λ*; *x*_{0}*′*, *y*_{0}*′*, *λ′*) is given by

Then according to Eqs. (5), (7) and (17), the correlation function of intensity fluctuations Δ*G*^{(2)} ((*x*_{2}, *y*_{2})_{dr}, (*x*_{2}, *y*_{2})_{dt}) is denoted as follows

where *T*(*x*_{0}*′*, *y*_{0}*′*, *λ′*) can be separated from the correlation function of intensity fluctuations Δ*G*^{(2)} ((*x*_{2}, *y*_{2})_{dr}, (*x*_{2}, *y*_{2})_{dt}) as explained in ghost imaging theory.
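As a toy illustration of this fluctuation-correlation reconstruction, the following sketch recovers a sparse 1D object from pre-calibrated reference speckles and bucket-style test measurements. All sizes, the random seed, and the negative-exponential speckle statistics are our own assumptions for the demo, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix, n_meas = 64, 20000   # 1D scene size and number of speckle realizations

# Ground-truth transmissive object: two bright points (hypothetical)
T = np.zeros(n_pix)
T[20], T[43] = 1.0, 0.6

# Pre-calibrated reference-arm speckle intensities (negative-exponential
# statistics, typical of fully developed speckle)
I_r = rng.exponential(scale=1.0, size=(n_meas, n_pix))

# Test-arm signal: each measurement is the object-weighted integration
# of the corresponding speckle pattern (a bucket value)
B = I_r @ T

# Correlation of intensity fluctuations: T_hat(x) ∝ <ΔB · ΔI_r(x)>
dI = I_r - I_r.mean(axis=0)
dB = B - B.mean()
T_hat = dI.T @ dB / n_meas

peaks = np.argsort(T_hat)[-2:]   # indices of the two largest estimates
print(sorted(peaks.tolist()))    # the two object pixels dominate
```

In the camera itself the ensemble average runs over CCD pixels rather than over repeated exposures, as discussed in Sec. 2.2, but the estimator has the same form.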

Moreover, the spatial and spectral resolution of the system are respectively defined as the spatial distance and the wavelength difference determined by the normalized second-order correlation function ${g}_{{d}_{r}}^{(2)}$ (*x*_{0}, *y*_{0}, *λ*; *x*_{0}*′*, *y*_{0}*′*, *λ′*); hence ${g}_{{d}_{r}}^{(2)}$ represents the resolution of the system. Compared with the previous system [16], ${g}_{{d}_{r}}^{(2)}$ in Eq. (17) is related to *β*_{H} and *d*_{eff} of the flat-field grating in the spectral dimension, which makes it possible to optimize the spectral resolution by adjusting these parameters.

#### 2.2. The matrix form & the reconstruction algorithm

Equation (18) describes the reconstruction of the image using the correlation algorithm of ghost imaging and is based on ensemble statistics of the light field. Since the light field is approximately an ergodic random process, ensemble averages taken over the space and time domains are equivalent. In the GISC hyperspectral camera, each pixel of the CCD acts as a bucket detector in the test arm, which means that the correlation detection between the reference arm and the test arm in the time domain can be treated as equivalent to that in the space domain. Hence, the ensemble average in Eq. (18) can be replaced by the spatial average over the different pixels (*x*_{2}, *y*_{2}) of the CCD. At the same time, the signal sampling mode of ghost imaging is consistent with CS theory, so it is also possible to reconstruct the image using CS techniques. Under the framework of CS theory, the discrete model of Eq. (3) is given by

where *A* is the measurement matrix, which can be obtained by calibrating the pre-determined reference arm. During the calibration, suppose that the whole spectral band is divided into *L* equispaced spectral channels and the field-of-view (FoV) is divided into *N* pixels. Assuming that all monochromatic point sources in the FoV are incoherent with each other, the calibration of the incoherent intensity impulse response functions for point sources at different pixels and wavelengths in the FoV is shown in Fig. 3. Thereby, the whole calibrating measurement can be regarded as acquiring incoherent intensity impulse response functions from different 3D spectral data-cube locations. We denote the *m*th speckle intensity with wavelength *λ*_{k} recorded by the CCD as *I*_{m} (*M* = *l* × *n* pixels), and then reshape it into a column vector ${A}_{m}^{{\lambda}_{k}}={\left({A}_{1,m}^{{\lambda}_{k}},{A}_{2,m}^{{\lambda}_{k}},\cdots ,{A}_{M,m}^{{\lambda}_{k}}\right)}^{T}$. After *L* × *N* measurements, the whole measurement matrix *A* (*M* × (*N* × *L*)) can be obtained as follows
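The column-stacking described above can be sketched as follows. The dimensions are shrunk and the speckle frames replaced by random stand-ins, since the real frames come from the calibration measurements:

```python
import numpy as np

# Hypothetical miniature dimensions (the experiment in Sec. 3.1 uses
# 113 x 113 FoV pixels and 10 spectral channels; shrunk here for clarity)
l, n = 8, 8            # CCD detector size, M = l * n
N = 16                 # number of FoV pixels
L = 3                  # number of spectral channels
M = l * n

rng = np.random.default_rng(1)

# One calibration frame per (pixel, wavelength) pair: the speckle intensity
# impulse response recorded by the CCD, reshaped into a column of A
A = np.empty((M, N * L))
for k in range(L):            # spectral channel index
    for m in range(N):        # FoV pixel index
        frame = rng.random((l, n))          # stand-in for a recorded speckle
        A[:, k * N + m] = frame.ravel()     # column vector A_m^{lambda_k}

print(A.shape)   # M rows, N*L columns
```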

During the imaging process, denote the unknown object image as a column vector *X*_{L×N} = (*x*_{1}, *x*_{2}, ⋯, *x*_{L×N})^{T}, and the object intensity distribution recorded by the same CCD as *I*_{t} (*M* = *l* × *n* pixels), which is then reshaped into *Y*_{M} = (*y*_{1}, *y*_{2}, ⋯, *y*_{M})^{T}. We may express this measurement process in matrix form as

where the subscript *p* denotes the *p*^{th} spatial location in the detector.

To reconstruct the 3D spectral object from the detected 2D signal, Eq. (19) must be solved. The image of a spectral object is correlated in both the spatial and spectral dimensions, so its reconstruction can be posed as an *l*_{1}-regularized minimization problem with a nonnegativity constraint. In our camera, an efficient TV-RANK algorithm is applied [30],

where ‖*X*‖_{*} is the nuclear norm, Φ is the sparsity transform, and *μ*_{1}, *μ*_{2} > 0 are the weight coefficients. In this work, we choose Φ to be the difference operator, so that ‖Φ*X*‖_{1} is the spatial total variation (TV).
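A minimal stand-in for this reconstruction step is sketched below. It solves only a plain *l*_{1}-regularized nonnegative least-squares problem by ISTA, not the full TV + nuclear-norm objective of the TV-RANK algorithm, and uses a synthetic Gaussian measurement matrix in place of the calibrated *A*:

```python
import numpy as np

def reconstruct(A, Y, mu=0.01, n_iter=2000):
    """ISTA for min_{X >= 0} 0.5*||A X - Y||^2 + mu*||X||_1 --
    a simplified surrogate for the TV-RANK objective."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of grad
    X = np.zeros(A.shape[1])
    for _ in range(n_iter):
        X = X - step * (A.T @ (A @ X - Y))      # gradient step on the data term
        X = np.maximum(X - step * mu, 0.0)      # nonnegative soft-threshold
    return X

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 48))               # synthetic measurement matrix
X_true = np.zeros(48)
X_true[[5, 30]] = 1.0                           # sparse spectral-spatial object
Y = A @ X_true                                  # noiseless measurement
X_hat = reconstruct(A, Y)
print(sorted(np.argsort(X_hat)[-2:].tolist()))  # support of the recovery
```

Swapping the soft-threshold for the proximal operators of the TV and nuclear-norm terms would turn this sketch into the solver actually used.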

## 3. Experimental results

#### 3.1. Experimental setup

Figure 4 depicts an experimental setup of GISC hyperspectral camera based on a flat-field grating, which adds a flat-field grating in front of the spatial random phase modulator to spatially disperse the light fields with different wavelengths.

The objective lens, with a focal length of *f* = 150 mm, projects the unknown target image onto the first imaging plane. A 536–545 nm band-pass filter located behind the objective lens (Tamron AF70–300 mm f/4–5.6) ensures that only the spectral data-cube corresponding to the 536–545 nm band is measured by the system. A beam splitter with a 1:1 splitting ratio in front of the first imaging plane splits the light field into two paths, and a surveillance camera CCD1 (AVT Sting F-504C, 3.45 *μ*m × 3.45 *μ*m pixels) records the conventional image of one path as a reference. A flat-field grating (CELO GF106, focal length *f* = 70 mm, dispersion coefficient *D*(*λ*) = 30 nm/mm) disperses and images the light fields of different wavelengths onto the diffractive surface, and a spatial random phase modulator (SIGMAKOKI, DFSQ1-30C02-1000) modulates the light fields of different wavelengths to generate uncorrelated speckles. Then CCD2 (Apogee, 13 *μ*m × 13 *μ*m pixels) records, in a single exposure, the intensity distribution of the speckles amplified by a microscope objective with magnification *β* = 10.
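To see how the grating separates the spectral channels, one can check how far adjacent channels are displaced on the diffractive surface. With *D*(*λ*) = 30 nm/mm, a 1 nm channel spacing corresponds to roughly a 33 *μ*m lateral shift; on our reading, this shift should exceed the lateral correlation length of the modulator so that neighbouring channels produce uncorrelated speckles:

```python
# Lateral displacement on the diffractive surface per spectral channel,
# from the flat-field grating's dispersion coefficient D = 30 nm/mm
D_nm_per_mm = 30.0
channel_spacing_nm = 1.0
shift_um = channel_spacing_nm / D_nm_per_mm * 1000.0  # mm -> um
print(f"{shift_um:.1f} um lateral shift per 1 nm spectral channel")
```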

Calibration of the measurement matrix is done with the calibration setup shown in the blue box of Fig. 4. A xenon lamp produces thermal light to illuminate the entrance of a monochromator (WDG30-Z), which generates quasi-monochromatic light of different wavelengths. The quasi-monochromatic light is then coupled into an optical fiber with a diameter of 20 *μ*m to form a quasi-monochromatic point light source. The output end of the optical fiber is placed in the equivalent plane, which is located at the focal plane of a collimating lens (Olympus M.ZUIKO AF40-150 mm). The objective lens of the system collects the parallel light from the collimating lens. During the calibration, the first imaging plane is divided into 113 × 113 pixels, as determined by the spatial resolution in Eq. (14), and the number of spectral channels is 10, spanning 536 nm to 545 nm at an interval of 1 nm.
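The calibration workload implied by these numbers is easy to tally: one incoherent impulse-response frame per (pixel, wavelength) pair, which also gives the column count of the measurement matrix *A*:

```python
# Calibration bookkeeping for the parameters of Sec. 3.1
N = 113 * 113        # spatial pixels in the first imaging plane
L = 10               # spectral channels, 536-545 nm at 1 nm steps
columns = N * L      # one calibration frame per (pixel, wavelength) pair
print(N, L, columns)
```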

#### 3.2. Imaging results

After obtaining the measurement matrix, the normalized second-order correlation function in Eq. (14) is calculated according to the experimental data. Figure 5(a) shows the comparison of ${g}_{{d}_{r}}^{(2)}$ between theoretical and experimental results.

To verify the spectral resolution of the GISC hyperspectral camera based on a flat-field grating given in Eq. (14), two point light sources generated by optical fibers with diameters of 20 *μ*m, at central wavelengths of 539 nm and 540 nm respectively, were used, with a distance of 40 *μ*m between them, as shown in Fig. 5(b). The modulated target intensity distribution *Y* was detected by CCD2 in the system. From the reconstructed result, we chose the middle row of the reconstructed image and arranged it by wavelength, as shown in Fig. 5(c), which further illustrates that the spectral resolution of the GISC hyperspectral camera based on a flat-field grating is consistent with the theoretical result shown in Fig. 5(a).

After the calibration process, the calibration setup shown in the blue box of Fig. 4 is replaced by a real object to perform imaging experiments. Our experiment used a transmissive institute logo and a colored toy illuminated with thermal light as objects, as displayed in Fig. 6, where Fig. 6(a) is acquired by a conventional camera and Figs. 6(b) and 6(c) are detected by CCD1 of the system. Figure 6(d) shows the images reconstructed by the TV-RANK algorithm. In the reconstruction process, the sampling rate of the 3D data-cube is 60%, and the number of iterations in the algorithm is 160. Compared with the images in Figs. 6(a)–6(c), the reconstructed image is consistent with the actual object, which verifies the application value of the GISC hyperspectral camera based on a flat-field grating.

## 4. Conclusion

In conclusion, we demonstrated a new optical system: a GISC spectral camera with a flat-field grating in front of a spatial random phase modulator. The flat-field grating spatially disperses the light fields, which translates the position of the modulator illuminated by light fields of different wavelengths by a certain distance, hence decoupling the spectral resolution from the spatial resolution and improving the spectral resolution. We theoretically and experimentally demonstrated a spectral resolution of about 1 nm and completed imaging experiments on spectral objects. The new system provides a basis for optimizing the measurement matrix for different wavelengths and for facilitating achromatic imaging by designing the spatial random phase modulator according to the imaging scene, so as to improve the reconstruction quality of images in the future. As a new optical imaging system, the GISC hyperspectral camera based on a flat-field grating has the potential to be applied in ultra-fast measurement [3], atmospheric remote sensing imaging [31,32], and biological microscopy imaging [33].

## Funding

Hi-Tech Research and Development Program of China (2013AA122901, 2013AA122902).

## References and links

**1. **E. Herrala, J. T. Okkonen, T. S. Hyvarinen, M. Aikio, and J. Lammasniemi, “Imaging spectrometer for process industry applications,” Proc. SPIE **2248**(33), 33–40 (1994). [CrossRef]

**2. **R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Fausta, B. E. Pavria, C. J. Chovita, M. Solisa, M. R. Olaha, and O. Williamsa, “Imaging spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS),” Remote Sens. Environ. **65**(3), 227–248 (1998). [CrossRef]

**3. **L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature **516**(7529), 74–77 (2014). [CrossRef] [PubMed]

**4. **N. A. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. **52**(9), 090901 (2013). [CrossRef]

**5. **L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Reports **616**, 1–37 (2016). [CrossRef]

**6. **S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica **4**(10), 1209–1213 (2017). [CrossRef]

**7. **L. Gao, R. T. Kester, and T. S. Tkaczyk, “Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express **17**(15), 12293–12308 (2009). [CrossRef] [PubMed]

**8. **M. W. Kudenov, J. M. Craven-Jones, C. J. Vandervlugt, E. L. Dereniak, and R. W. Aumiller, “Faceted grating prism for a computed tomographic imaging spectrometer,” Opt. Eng. **51**(4), 044002 (2012). [CrossRef]

**9. **J. Hsieh, *Computed Tomography: Principles, Design, Artifacts, and Recent Advances* (SPIEBellingham, WA, 2014).

**10. **T. M. Cover and J. A. Thomas, *Elements of Information Theory* (John Wiley & Sons, 2012).

**11. **C. E. Shannon, “A mathematical theory of communication,” Bell system technical journal **27**(3), 379–423 (1948). [CrossRef]

**12. **A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Spectral image estimation for coded aperture snapshot spectral imagers,” Proc. SPIE **7076**, 707602 (2008). [CrossRef]

**13. **G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Process. Mag. **31**(1), 105–115 (2014). [CrossRef]

**14. **A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express **17**(8), 6368–6388 (2009). [CrossRef] [PubMed]

**15. **J. Wu, X. Shen, H. Yu, Z. Chen, Z. Tao, S. Tan, and S. Han, “Snapshot compressive imaging by phase modulation,” Acta Phys. Sin-CH ED **34**(10), 1011005 (2014).

**16. **Z. Liu, S. Tan, J. Wu, E. Li, X. Shen, and S. Han, “Spectral camera based on ghost imaging via sparsity constraints,” Sci. Rep. **6**, 25718 (2016). [CrossRef]

**17. **M. Giglio, M. Carpineti, and A. Vailati, “Space intensity correlations in the near field of the scattered light: a direct measurement of the density correlation function g (r),” Phys. Rev. Lett. **85**(7), 1416 (2000). [CrossRef] [PubMed]

**18. **R. Cerbino, L. Peverini, M. A. C. Potenza, A. Robert, P. Bösecke, and M. Giglio, “X-ray-scattering information obtained from near-field speckle,” Nature Phys. **4**(3), 238–243 (2008). [CrossRef]

**19. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**(4), 1289–1306 (2006). [CrossRef]

**20. **E.J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**(2), 489–509 (2006). [CrossRef]

**21. **Y. C. Eldar and G. Kutyniok, *Compressed Sensing: Theory and Applications* (Cambridge University, 2012). [CrossRef]

**22. **M. Elad, “Optimized projections for compressed sensing,” IEEE T. Signal Process. **55**(12), 5695–5702(2007). [CrossRef]

**23. **M. Chen, E. Li, and S. Han, “Application of multi-correlation-scale measurement matrices in ghost imaging via sparsity constraints,” Appl. Optics **53**(13), 2924–2928(2014). [CrossRef]

**24. **J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE T. Image Process. **18**(7), 1395–1408(2009). [CrossRef]

**25. **X. Xu, E. Li, X. Shen, and S. Han, “Optimization of speckle patterns in ghost imaging via sparse constraints by mutual coherence minimization,” Chin. Opt. Lett. **13**(7), 071101(2015). [CrossRef]

**26. **J. M. Lerner, R. J. Chambers, and G. Passereau, “Flat field imaging spectroscopy using aberration corrected holographic gratings,” Proc. SPIE **268**, 122–128 (1981). [CrossRef]

**27. **E. Sokolova, “Holographic diffraction gratings for flat-field spectrometers,” J. Mod. Optic. **47**(13), 2377–2389 (2000). [CrossRef]

**28. **A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classicalcorrelation,” Phys. Rev. Lett. **93**(9), 093602 (2004). [CrossRef] [PubMed]

**29. **B. Luo, Z. Wen, Z. Wen, and T. Zeng, “Design of concave grating for ultraviolet-spectrum,” Spectrosc. Spect. Anal. **32**(6), 1717–1721 (2012).

**30. **S. Tan, Z. Liu, E. Li, and S. Han, “Hyperspectral compressed sensing based on prior images constrained,” Acta Optica Sinica **35**(8), 0811003 (2015). [CrossRef]

**31. **A. F. Goetz, G. Vane, J. E. Solomon, and B. N. Rock, “Imaging spectrometry for earth remote sensing,” Science **228**(4704), 1147–1153 (1985). [CrossRef] [PubMed]

**32. **F. F. Sabins, *Remote Sensing: Principles and Applications* (Waveland, 2007).

**33. **T. Zimmermann, J. Rietdorf, and R. Pepperkok, “Spectral imaging and its applications in live cell microscopy,” FEBS letters **546**(1), 87–92 (2003). [CrossRef] [PubMed]