Abstract

A coupled deep learning approach for coded aperture design and single-pixel measurement classification is proposed. An end-to-end neural network is trained to simultaneously optimize the binary sensing matrix of a single-pixel camera (SPC) and the parameters of a classification network, taking into account the constraints imposed by the compressive architecture. New single-pixel measurements can then be acquired and classified with the learned parameters. This method avoids the reconstruction process while maintaining classification reliability. In particular, two network architectures are proposed: one learns from measurements re-projected to the image size, and the other extracts small features directly from the compressive measurements. Both were evaluated in simulation on two image data sets and on a test-bed implementation. The first network improves the accuracy of state-of-the-art methods by around 10%, and the second achieves roughly a 2x speed-up in computing time.
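As a rough illustration of the coupled design described above, the sketch below pairs a trainable sensing layer (one row per coded-aperture shot) with a small classifier in PyTorch. The class name, layer sizes, and initialization are illustrative assumptions, not the authors' exact Binary-Pres-Net or Binary-NoP-Net architectures.

```python
import torch
import torch.nn as nn

class CoupledSPCClassifier(nn.Module):
    """Minimal sketch: trainable sensing matrix followed by a small classifier.

    The rows of `phi` play the role of the SPC coded apertures. During training
    they are pushed toward binary values by a regularizer (see the training-loss
    sketch near the equations below); after training they are thresholded and
    loaded onto the DMD.
    """
    def __init__(self, n_pixels=28 * 28, n_shots=78, n_classes=10):
        super().__init__()
        self.phi = nn.Parameter(0.01 * torch.randn(n_shots, n_pixels))
        self.classifier = nn.Sequential(
            nn.Linear(n_shots, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):              # x: (batch, n_pixels) vectorized images
        g = x @ self.phi.t()           # simulated single-pixel measurements
        return self.classifier(g)      # class scores directly from measurements
```

At test time only the classifier branch is needed: the real photodiode measurements acquired with the binarized patterns replace the simulated product `x @ phi.t()`.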

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Supplementary Material (1)

Visualization 1: Real measurement acquisition using the designed binary masks implemented on a DMD.
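
The binary masks displayed on the DMD are obtained from the learned sensing matrix, whose entries are pushed toward ±1 during training. A minimal NumPy sketch of that binarization step, assuming a simple sign threshold and an illustrative mask size:

```python
import numpy as np

def dmd_masks(phi, mask_shape=(28, 28)):
    """Threshold a trained, +/-1-regularized sensing matrix into 0/1 DMD frames.

    phi        : (K, M*N) learned real-valued sensing matrix
    mask_shape : spatial size (M, N) of each DMD pattern (assumed here)
    Returns a (K, M, N) stack of binary masks, one frame per shot.
    """
    binary = (phi >= 0).astype(np.uint8)     # map sign(phi) in {-1, +1} to {0, 1}
    return binary.reshape((-1, *mask_shape))
```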

Figures (8)

Fig. 1. Schematic of the single-pixel camera acquisition.
Fig. 2. Proposed deep learning scheme; the colors are only for illustrative purposes and represent different shots. In the training step, the binary sensing matrix and the CNN parameters for classification are learned. In the testing step, the learned sensing matrix is implemented in hardware on a real DMD to acquire SPC compressed measurements, which are then classified in software with the learned CNN.
Fig. 3. Layer description of the proposed neural networks used for the MNIST data set: a) Binary-Pres-Net and b) Binary-NoP-Net. conv denotes a convolutional layer (shown in green), fc a fully-connected layer (shown in orange), and st the stride of the max-pooling. The main proposed layers are shown in blue.
Fig. 4. Layer description of the proposed neural networks used for the CIFAR data set: a) Binary-Pres-Net and b) Binary-NoP-Net.
Fig. 5. Test-bed implementation of the single-pixel camera.
Fig. 6. Printed digits used to evaluate the proposed method.
Fig. 7. Designed coded apertures employed in the DMD for 5 and 10 shots with the two proposed schemes.
Fig. 8. Confusion matrices for all target digits. (Left) Binary-Pres-Net. (Right) Binary-NoP-Net.

Tables (5)

Table 1. Average classification accuracy for different sensing ratios - MNIST data set.
Table 2. Average training time in seconds per epoch - MNIST data set.
Table 3. Average classification accuracy for four sensing ratios - CIFAR-10 data set.
Table 4. Average training time in seconds per epoch - CIFAR data set.
Table 5. General accuracy of the proposed methods for the experimental results.

Equations (11)


$$ g_k(x, y) = \phi_k(x, y)\, f(x, y), \tag{1} $$

$$ \tilde{g}_k = \iint g_k(x, y)\, \mathrm{rect}\!\left(\frac{x}{\Delta_g} - 1, \frac{y}{\Delta_g} - 1\right) dx\, dy, \tag{2} $$

$$ \phi_k(x, y) = \sum_{m=1}^{M} \sum_{n=1}^{N} \phi_{m,n}^{k}\, \mathrm{rect}\!\left(\frac{x}{\Delta_t} - m, \frac{y}{\Delta_t} - n\right), \tag{3} $$

$$ \tilde{g}_k = \sum_{m=1}^{M} \sum_{n=1}^{N} \phi_{m,n}^{k}\, f_{m,n} + \omega_k, \tag{4} $$

$$ \mathbf{g} = \boldsymbol{\Phi} \mathbf{f} + \boldsymbol{\omega}, \tag{5} $$

$$ \gamma = \frac{K}{MN}. \tag{6} $$
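
As a concrete reading of the discretized model in Eqs. (4)-(6), the following NumPy sketch simulates K noisy single-pixel measurements of a vectorized M x N scene with a random binary matrix; the sizes and noise level are illustrative assumptions.

```python
import numpy as np

M, N, K = 32, 32, 256                        # image size and number of shots
f = np.random.rand(M * N)                    # vectorized scene f
Phi = np.random.randint(0, 2, (K, M * N))    # binary sensing matrix, one pattern per row
omega = 0.01 * np.random.randn(K)            # additive acquisition noise (assumed level)

g = Phi @ f + omega                          # compressive measurements, Eq. (5)
gamma = K / (M * N)                          # sensing ratio, Eq. (6): 0.25 here
```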
$$ \{\boldsymbol{\Phi}, \theta\} = \underset{\boldsymbol{\Phi}, \theta}{\arg\min}\; \frac{1}{L} \sum_{\ell=1}^{L} \mathcal{L}\!\left(\mathcal{M}_{\theta}(\boldsymbol{\Phi}\mathbf{x}_{\ell}), \mathbf{d}_{\ell}\right) \;\; \text{subject to} \;\; \boldsymbol{\Phi}_{k,n} \in \{0, 1\}, \; k = 1, \ldots, K, \;\text{and}\; n = 1, \ldots, MN, \tag{7} $$

$$ \{\boldsymbol{\Phi}, \theta\} = \underset{\boldsymbol{\Phi}, \theta}{\arg\min}\; \frac{1}{L} \sum_{\ell=1}^{L} \mathcal{L}\!\left(\mathcal{M}_{\theta}\big(f_1(\boldsymbol{\Phi}\mathbf{x}_{\ell} + \mathbf{b}_1)\big), \mathbf{d}_{\ell}\right) + \mu \sum_{k=1}^{K} \sum_{n=1}^{MN} (1 + \boldsymbol{\Phi}_{k,n})^2 (1 - \boldsymbol{\Phi}_{k,n})^2, \tag{8} $$

$$ \mathcal{L}(\mathbf{z}, \mathbf{d}) = -\left[\mathbf{d} \log(\mathbf{z}) + (1 - \mathbf{d}) \log(1 - \mathbf{z})\right], \tag{9} $$
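
Equation (8) relaxes the binary constraint of Eq. (7) with a penalty that vanishes only when every entry of the sensing matrix equals ±1. Below is a hedged PyTorch sketch of that training loss, reusing the CoupledSPCClassifier sketch given after the abstract; the weight mu and the use of the built-in cross-entropy in place of the element-wise form of Eq. (9) are assumptions.

```python
import torch
import torch.nn.functional as F

def binary_regularizer(phi):
    """Penalty from the relaxed objective: zero only when every entry of phi is +/-1."""
    return ((1 + phi) ** 2 * (1 - phi) ** 2).sum()

def training_loss(model, x, d, mu=1e-4):
    z = model(x)                          # class scores from the coupled network
    data_term = F.cross_entropy(z, d)     # classification loss
    return data_term + mu * binary_regularizer(model.phi)
```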
$$ \mathbf{z} = \mathcal{M}_{\theta}\!\Big(f_1\big(\underbrace{\boldsymbol{\Phi}\mathbf{x}}_{\mathbf{g}} + \mathbf{b}_1\big)\Big), \tag{10} $$

$$ \mathbf{g} = 2\tilde{\mathbf{g}} - \mathbf{g}_0 = 2(\tilde{\boldsymbol{\Phi}}\mathbf{f} + \tilde{\boldsymbol{\omega}}) - \underbrace{(\mathbf{1}_K \mathbf{d}^{T})}_{\mathbf{D}}\mathbf{f} + \underbrace{\mathbf{1}_K \omega_{0}}_{\tilde{\boldsymbol{\omega}}_0} = (2\tilde{\boldsymbol{\Phi}} - \mathbf{D})\mathbf{f} + 2\tilde{\boldsymbol{\omega}} + \tilde{\boldsymbol{\omega}}_0 = \boldsymbol{\Phi}\mathbf{f} + \boldsymbol{\omega}, \tag{11} $$
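
Equation (11) is what lets a DMD, which can only display 0/1 micromirror patterns, emulate the ±1 sensing matrix used during training: the measurements acquired with the binarized patterns are combined with an additional measurement taken with an all-ones pattern. A minimal NumPy sketch of that remapping, under the reading above:

```python
import numpy as np

def to_signed_measurements(g_tilde, g0):
    """Map measurements taken with {0,1} DMD patterns to the {-1,+1} sensing model.

    g_tilde : (K,) measurements acquired with the binarized patterns
    g0      : measurement(s) acquired with the all-ones pattern
    Returns g, approximately Phi f + noise with Phi = 2*Phi_tilde - D.
    """
    return 2.0 * np.asarray(g_tilde, dtype=float) - np.asarray(g0, dtype=float)
```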
