Abstract

The acquisition of high-quality panchromatic images is vital to the pan-sharpening of multi-spectral images, especially for snapshot imaging spectrometers with low spatial resolution. As an aperture-division snapshot imaging spectrometer, the snapshot hyperspectral imaging Fourier transform spectrometer (SHIFT) has the characteristic that the images of all sub-apertures share almost the same spatial information, differing only by small shifts. With these sub-images, super-resolution is possible. In this paper, a high-quality panchromatic image acquisition method is proposed in which a pre-trained deep learning network is utilized without enlarging the instrument. The training dataset is obtained experimentally, and the network is designed to perform contrast enhancement and super-resolution simultaneously. The experimental results demonstrate that the proposed method performs well in high-quality panchromatic image acquisition.
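
The paper's actual pipeline is the learned CESRNN network described in Fig. 5; as background for the claim that sub-pixel-shifted sub-images enable super-resolution, the following is a minimal NumPy sketch of the classical shift-and-add idea (cf. Refs. 19–21). The frame shifts, scale factor, and function names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def shift_and_add_sr(sub_images, shifts, scale=2):
    """Fuse sub-pixel-shifted low-resolution frames onto a finer grid.

    sub_images : list of 2-D arrays (the aperture-division sub-images)
    shifts     : list of (dy, dx) sub-pixel shifts of each frame, in
                 low-resolution pixels, relative to the first frame
    scale      : super-resolution factor (illustrative)
    """
    h, w = sub_images[0].shape
    hr_sum = np.zeros((h * scale, w * scale))
    hr_cnt = np.zeros_like(hr_sum)

    for img, (dy, dx) in zip(sub_images, shifts):
        # Place every LR sample at the nearest position on the HR grid.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(hr_sum, np.ix_(ys, xs), img)
        np.add.at(hr_cnt, np.ix_(ys, xs), 1.0)

    hr_cnt[hr_cnt == 0] = 1.0        # leave unobserved HR pixels at zero
    return hr_sum / hr_cnt           # simple average; practical methods also deblur

# Example with synthetic data: four frames shifted by half a pixel.
rng = np.random.default_rng(0)
lr_frames = [rng.random((64, 64)) for _ in range(4)]
frame_shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hr_estimate = shift_and_add_sr(lr_frames, frame_shifts, scale=2)
print(hr_estimate.shape)  # (128, 128)
```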

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


1. G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19(1), 010901 (2014).
2. T. Zimmermann, J. Rietdorf, and R. Pepperkok, “Spectral imaging and its applications in live cell microscopy,” FEBS Lett. 546(1), 87–92 (2003).
3. H. Zhu, B. Chu, C. Zhang, F. Liu, L. Jiang, and Y. He, “Hyperspectral Imaging for Presymptomatic Detection of Tobacco Disease with Successive Projections Algorithm and Machine-learning Classifiers,” Sci. Rep. 7(1), 4125 (2017).
4. Y. Z. Feng and D. W. Sun, “Application of hyperspectral imaging in food safety inspection and control: a review,” Crit. Rev. Food Sci. Nutr. 52(11), 1039–1058 (2012).
5. G. Elmasry, D. F. Barbin, D. W. Sun, and P. Allen, “Meat quality evaluation by hyperspectral imaging technique: an overview,” Crit. Rev. Food Sci. Nutr. 52(8), 689–711 (2012).
6. C. M. Stellman, G. G. Hazel, F. Bucholtz, and J. V. Michalowicz, “Real-time hyperspectral detection and cuing,” Opt. Eng. 39(7), 1928–1935 (2000).
7. C. M. Biradar and P. S. Thenkabail, “Water productivity mapping methods using remote sensing,” J. Appl. Remote Sens. 2(1), 023544 (2008).
8. M. B. Sinclair, J. A. Timlin, D. M. Haaland, and M. W. Washburne, “Design, construction, characterization, and application of a hyperspectral microarray scanner,” Appl. Opt. 43(10), 2079–2088 (2004).
9. M. B. Sinclair, D. M. Haaland, J. A. Timlin, and H. D. T. Jones, “Hyperspectral confocal microscope,” Appl. Opt. 45(24), 6283–6291 (2006).
10. P. M. Kasili and T. Vo-Dinh, “Hyperspectral imaging system using acousto-optic tunable filter for flow cytometry applications,” Cytometry, Part A 69A(8), 835–841 (2006).
11. M. E. Gehm, M. S. Kim, C. Fernandez, and D. J. Brady, “High-throughput, multiplexed pushbroom hyperspectral microscopy,” Opt. Express 16(15), 11032–11043 (2008).
12. N. Bedard, R. A. Schwarz, A. Hu, V. Bhattar, J. Howe, M. D. Williams, A. M. Gillenwater, R. Richards-Kortum, and T. S. Tkaczyk, “Multimodal snapshot spectral imaging for oral cancer diagnostics: a pilot study,” Biomed. Opt. Express 4(6), 938–949 (2013).
13. G. Yang, “Bioimage informatics for understanding spatiotemporal dynamics of cellular processes,” Wiley Interdiscip. Rev.: Syst. Biol. Med. 5(3), 367–380 (2013).
14. L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Rep. 616, 1–37 (2016).
15. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013).
16. M. W. Kudenov and E. L. Dereniak, “Compact real-time birefringent imaging spectrometer,” Opt. Express 20(16), 17973–17986 (2012).
17. A. Hirai, T. Inoue, K. Itoh, and Y. Ichioka, “Application of Multiple-Image Fourier Transform Spectral Imaging to Measurement of Fast Phenomena,” Opt. Rev. 1(2), 205–207 (1994).
18. P. Milanfar, Super-Resolution Imaging (CRC Press, 2010).
19. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Proc. Mag. 20(3), 21–36 (2003).
20. M. Elad and A. Feuer, “Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images,” IEEE Trans. Image Process. 6(12), 1646–1658 (1997).
21. D. Keren, S. Peleg, and R. Brada, “Image sequence enhancement using sub-pixel displacements,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1988), pp. 742–746.
22. M. Abdullah-Al-Wadud, M. H. Kabir, M. A. A. Dewan, and O. Chae, “A Dynamic Histogram Equalization for Image Contrast Enhancement,” IEEE Trans. Broadcast Telev. Receivers 53(2), 593–600 (2007).
23. R. C. Gonzalez and P. Wintz, Digital Image Processing (Electronic Industry Press, 2007), pp. 484–486.
24. X. Guo, Y. Li, and H. Ling, “LIME: Low-Light Image Enhancement via Illumination Map Estimation,” IEEE Trans. Image Process. 26(2), 982–993 (2017).
25. L. Li, R. Wang, W. Wang, and W. Gao, “A low-light image enhancement method for both denoising and contrast enlarging,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2015), pp. 3730–3734.
26. X. Fu, D. Zeng, Y. Huang, X. Zhang, and X. Ding, “A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 2782–2790.
27. D. J. Jobson, Z. Rahman, and G. A. Woodell, “Properties and performance of a center/surround retinex,” IEEE Trans. Image Process. 6(3), 451–462 (1997).
28. W. Hou, X. Gao, D. Tao, and X. Li, “Blind Image Quality Assessment via Deep Learning,” IEEE Trans. on Neur. Net. Lear. 26(6), 1275–1286 (2015).
29. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
30. K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning Deep CNN Denoiser Prior for Image Restoration,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2808–2817.
31. C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
32. C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a Deep Convolutional Network for Image Super-Resolution,” in European Conference on Computer Vision (Springer, 2014), pp. 184–199.
33. J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.
34. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 105–114.
35. Y. Tai, J. Yang, and X. Liu, “Image Super-Resolution via Deep Recursive Residual Network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2790–2798.
36. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced Deep Residual Networks for Single Image Super-Resolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2017), pp. 1132–1140.
37. E. Schwartz, R. Giryes, and A. M. Bronstein, “DeepISP: Toward Learning an End-to-End Image Processing Pipeline,” IEEE Trans. Image Process. 28(2), 912–923 (2019).
38. M. Gharbi, J. Chen, J. T. Barron, S. W. Hasinoff, and F. Durand, “Deep bilateral learning for real-time image enhancement,” ACM Trans. Graph. 36(4), 1–12 (2017).
39. J. Sun, S. Kim, S. Lee, and S. Ko, “A novel contrast enhancement forensics based on convolutional neural networks,” Signal Process-Image 63, 149–160 (2018).
40. K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: A deep autoencoder approach to natural low-light image enhancement,” Pattern Recogn. 61, 650–662 (2017).
41. L. Tao, C. Zhu, G. Xiang, Y. Li, H. Jia, and X. Xie, “LLCNN: A convolutional neural network for low-light image enhancement,” in Proceedings of IEEE Conference on Visual Communications and Image Processing (IEEE, 2018), pp. 1–4.
42. S. Zhu, Y. Zhang, J. Lin, L. Zhao, Y. Shen, and P. Jin, “High resolution snapshot imaging spectrometer using a fusion algorithm based on grouping principal component analysis,” Opt. Express 24(21), 24624–24640 (2016).
43. C. Dong, C. C. Loy, and X. O. Tang, “Accelerating the Super-Resolution Convolutional Neural Network,” in European Conference on Computer Vision (Springer, 2016), pp. 391–407.
44. S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv:1502.03167v3 (2015).
45. A. Vedaldi and K. Lenc, “MatConvNet - Convolutional Neural Networks for MATLAB,” arXiv:1412.4564v2 (2015).
46. A. Horé and D. Ziou, “Image Quality Metrics: PSNR vs. SSIM,” in 2010 International Conference on Pattern Recognition (2010), pp. 2366–2369.
47. W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
48. T. Köhler, X. Huang, F. Schebesch, A. Aichert, A. Maier, and J. Hornegger, “Robust Multiframe Super-Resolution Employing Iteratively Re-Weighted Minimization,” IEEE Trans. Comput. Imaging 2(1), 42–58 (2016).



Figures (11)

Fig. 1. (a) Schematic of the SHIFT. Light passes through the 1:1 afocal telescope, the generating polarizer (G), the microlens array (MLA), the Nomarski prism (NP1), the half-wave plate (HWP), the Nomarski prism (NP2), and the analyzer (A) in turn, and then reaches the CCD detector. (b) The rotated BPI (which contains G, NP1, HWP, NP2, and A) and the rearranged 3D interferogram cube [16].
Fig. 2. Left: the high-resolution image; middle: the low-resolution image; right: their histograms.
Fig. 3. Sub-pixel shift in the sub-images.
Fig. 4. Schematic of (a) the data acquisition method and (b) the training process.
Fig. 5. The CESRNN network structure. One convolutional layer is used for data pre-processing, and another is used to generate the output image. Three convolutional modules (red dashed boxes) are introduced in the network. Some of the convolutional layers in the first module are followed by a batch normalization layer. The mean value of the input is cropped, duplicated, and then introduced at the end of the first module (an illustrative sketch of this layout is given after the figure list).
Fig. 6. The positions of the training sub-images; the side lengths of the truncated rectangle range from 2 to 12 sub-images.
Fig. 7. Network performance for different numbers of training sub-images: (a) PSNR, (b) SSIM, and (c) computing time for one result image on the GPU.
Fig. 8. Network performance for different numbers of training images: (a) PSNR and (b) SSIM.
Fig. 9. Experimental results for different output sizes. Left: the digital image before printing. Top: one sub-image resized by bicubic interpolation. Bottom: our results.
Fig. 10. Spectra of the four types of light sources used in the experiment: (a) warm-light LED, (b) cold-light LED, (c) halogen lamp, and (d) fluorescent lamp.
Fig. 11. Test results of the different methods. From left to right: the original images (printed photographs), the HR images obtained by the conventional image path, and the images obtained by bicubic interpolation, the MAP method, the VDSR method, and the proposed CESRNN method, respectively.
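
To make the description of Fig. 5 more concrete, the following is a rough, hypothetical PyTorch sketch of the kind of layout the caption describes: a pre-processing convolution, a first module with batch-normalized convolutions and a skip path carrying the duplicated input mean, further convolutional modules, and an output convolution. The layer counts, channel widths, kernel sizes, and class name are placeholders, not the paper's actual CESRNN.

```python
import torch
import torch.nn as nn

class ToyCESRNN(nn.Module):
    """Illustrative layout only: layer and channel counts are guesses."""

    def __init__(self, channels=64, layers_per_module=4):
        super().__init__()
        # Convolutional layer for data pre-processing.
        self.pre = nn.Conv2d(1, channels, kernel_size=3, padding=1)

        # First module: conv + batch-norm + ReLU blocks.
        blocks = []
        for _ in range(layers_per_module):
            blocks += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        self.module1 = nn.Sequential(*blocks)

        # Two further convolutional modules without batch norm.
        def plain_module():
            seq = []
            for _ in range(layers_per_module):
                seq += [nn.Conv2d(channels, channels, 3, padding=1),
                        nn.ReLU(inplace=True)]
            return nn.Sequential(*seq)

        self.module2 = plain_module()
        self.module3 = plain_module()
        # Convolutional layer that generates the output image.
        self.out = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):
        feat = self.module1(self.pre(x))
        # Duplicate the input mean across channels and add it at the end of
        # the first module (a loose reading of the Fig. 5 caption).
        mean_map = x.mean(dim=(2, 3), keepdim=True).expand_as(feat)
        feat = feat + mean_map
        feat = self.module3(self.module2(feat))
        return self.out(feat)

# Example forward pass on a dummy single-channel patch.
net = ToyCESRNN()
y = net(torch.randn(1, 1, 48, 48))
print(y.shape)  # torch.Size([1, 1, 48, 48])
```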

Tables (3)

Table 1. The PSNR, SSIM, and computing time of the network at different output sizes

Table 2. Network performance when different light sources were used in the training and test processes

Table 3. Performance of different methods

Equations (1)

$$ Y_k = H_k X + V_k = I_k D_k B_k F_k X + V_k, \qquad k = 1, 2, \ldots, K, $$
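
As a worked illustration of this observation model, the NumPy sketch below simulates one low-resolution measurement Y_k from a high-resolution scene X. The interpretation of the operators (F_k as a sub-pixel shift, B_k as blur, D_k as decimation, I_k as an intensity/contrast change, and V_k as additive noise) is an assumption following the standard multi-frame super-resolution formulation [19,20], since the paper's own operator definitions are not reproduced on this page.

```python
import numpy as np

def simulate_observation(X, shift=(0.3, 0.7), blur_sigma=1.0, scale=2,
                         gain=0.8, noise_std=0.01, rng=None):
    """Toy forward model Y_k = I_k D_k B_k F_k X + V_k (symbol meanings assumed)."""
    rng = np.random.default_rng() if rng is None else rng

    # F_k: sub-pixel geometric shift, implemented in the Fourier domain.
    dy, dx = shift
    H, W = X.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    shifted = np.real(np.fft.ifft2(np.fft.fft2(X) *
                                   np.exp(-2j * np.pi * (fy * dy + fx * dx))))

    # B_k: Gaussian blur via a separable 1-D kernel.
    radius = int(3 * blur_sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * blur_sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, shifted)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

    # D_k: decimation to the detector sampling grid.
    decimated = blurred[::scale, ::scale]

    # I_k: intensity/contrast change (here a simple gain); V_k: additive noise.
    return gain * decimated + rng.normal(0.0, noise_std, decimated.shape)

# Two simulated sub-images of the same scene with different sub-pixel shifts.
X = np.random.default_rng(1).random((128, 128))   # stand-in high-resolution scene
Y1 = simulate_observation(X, shift=(0.0, 0.0))
Y2 = simulate_observation(X, shift=(0.5, 0.25))
print(Y1.shape, Y2.shape)                          # (64, 64) (64, 64)
```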
