Abstract

We demonstrate that image reconstruction can be achieved via a convolutional neural network for a “see-through” computational camera composed of a transparent window and a CMOS image sensor. We further compared classification results obtained by applying a classifier network directly to the raw sensor data against those obtained from the reconstructed images. The results suggest that, with appropriate network optimization, similar classification accuracy is likely achievable in both cases. All networks were trained and tested on the MNIST (6 classes), EMNIST, and Kanji49 datasets.
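Before a neural network is introduced, the camera described above can be modeled as a linear multiplexing system: each scene pixel contributes to many sensor pixels through scattering at the window, so the measurement is a matrix–vector product, and the classical baseline for reconstruction is a regularized SVD pseudo-inverse of the calibrated transfer matrix. The sketch below illustrates only that forward model and baseline inversion; the matrix sizes, the random stand-in for the transfer matrix, and the regularization weight are all hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16x16 scene flattened to 256 unknowns,
# measured by a 512-pixel sensor through a multiplexing matrix A.
n_scene, n_sensor = 256, 512

# A stands in for the calibrated transfer matrix of the window/sensor
# system; in a real experiment it would be measured, not random.
A = rng.normal(size=(n_sensor, n_scene))

scene = rng.uniform(size=n_scene)   # unknown object (ground truth)
sensor = A @ scene                  # multiplexed, lens-free measurement

# Tikhonov-regularized inversion via the SVD of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
lam = 1e-3                          # regularization weight (illustrative)
filt = s / (s**2 + lam**2)          # damped inverse of the singular values
recon = Vt.T @ (filt * (U.T @ sensor))
```

In this noiseless toy the regularized pseudo-inverse recovers the scene almost exactly; with sensor noise the regularization weight trades resolution against noise amplification, which is the regime where a learned (CNN) reconstruction becomes attractive.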

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



2019 (3)

2018 (5)

2017 (8)

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017).
[Crossref]

G. Satat, M. Tancik, O. Gupta, B. Heshmat, and R. Raskar, “Object classification through scattering media with deep learning on time resolved measurement,” Opt. Express 25(15), 17466–17479 (2017).
[Crossref]

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

G. Kim and R. Menon, “Numerical analysis of computational cannula microscopy,” Appl. Opt. 56(9), D1–D7 (2017).
[Crossref]

P. Berto, H. Rigneault, and M. Guillon, “Wavefront sensing with a thin diffuser,” Opt. Lett. 42(24), 5117–5120 (2017).
[Crossref]

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction-limited resolution,” Sci. Rep. 7(1), 10687 (2017).
[Crossref]

G. Kim, K. Isaacson, R. Palmer, and R. Menon, “Lensless photography with only an image sensor,” Appl. Opt. 56(23), 6450–6456 (2017).
[Crossref]

2016 (1)

2015 (2)

P. Wang and R. Menon, “Ultra-high sensitivity color imaging via a transparent diffractive-filter array and computational optics,” Optica 2(11), 933–939 (2015).
[Crossref]

G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015).
[Crossref]

2014 (4)

G. Kim and R. Menon, “An ultra-small 3D computational microscope,” Appl. Phys. Lett. 105(6), 061114 (2014).
[Crossref]

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).
[Crossref]

P. Wang and R. Menon, “Computational spectroscopy via singular-value decomposition and regularization,” Opt. Express 22(18), 21541–21550 (2014).
[Crossref]

P. Wang and R. Menon, “Computational spectroscopy based on a broadband diffractive optic,” Opt. Express 22(12), 14575–14587 (2014).
[Crossref]

2008 (1)

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178(12), 2621–2638 (2008).
[Crossref]

Adams, J. K.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Adarsh, V. R.

S. S. Khan, V. R. Adarsh, V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, “Towards photorealistic reconstruction of highly multiplex lensless images,” Proc. of IEEE International Conf. Computer Vision (2019).

Afshar, S.

G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “Emnist: an extension of mnist to handwritten letters,” arXiv preprint arXiv:1702.05373 (2017).

Aharoni, D.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Antipa, N.

Asif, M. S.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” 2015 IEEE International Conference on computer vision workshop.

Avants, B. W.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Ayremlou, A.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” 2015 IEEE International Conference on computer vision workshop.

Ba, J.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, (2014).

Baek, S.-H.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, and W. Heidrich, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Baraniuk, R.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” 2015 IEEE International Conference on computer vision workshop.

Baraniuk, R. G.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Barbastathis, G.

Berto, P.

Bober-Irizar, M.

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” arXiv preprint arXiv: 1812.01718v1 (2018).

Boominathan, V.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

S. S. Khan, V. R. Adarsh, V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, “Towards photorealistic reconstruction of highly multiplex lensless images,” Proc. of IEEE International Conf. Computer Vision (2019).

Bostan, E.

Brox, T.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” In International Conference on Medical image computing and computer-assisted intervention, pages234–241. Springer, 2015.

Capecchi, M.

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015).
[Crossref]

Clanuwat, T.

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” arXiv preprint arXiv: 1812.01718v1 (2018).

Cohen, G.

G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “Emnist: an extension of mnist to handwritten letters,” arXiv preprint arXiv:1702.05373 (2017).

Cortes, C.

Y. LeCun and C. Cortes. MNIST handwritten digit database (2010).

Cox, D. D.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Deng, M.

Dun, X.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, and W. Heidrich, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Fayyaz, M.

S. H. Hasanpour, M. Rouhani, M. Fayyaz, and M. Sabokrou, “Lets keep it simple, using simple architectures to outperform deeper and more complex architectures,” arXiv preprint arXiv:1608.06037 (2016).

Fink, M.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).
[Crossref]

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” In International Conference on Medical image computing and computer-assisted intervention, pages234–241. Springer, 2015.

Fu, Q.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, and W. Heidrich, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Gal, Y.

A. Kendall and Y. Gal, “What uncertainties do we need in bayesian deep learning for computer vision?” In Advances in neural information processing systems, p. 5574–5584 (2017).

Gigan, S.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).
[Crossref]

Grama, A.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Guillon, M.

Gupta, O.

Ha, D.

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” arXiv preprint arXiv: 1812.01718v1 (2018).

Hasanpour, S. H.

S. H. Hasanpour, M. Rouhani, M. Fayyaz, and M. Sabokrou, “Lets keep it simple, using simple architectures to outperform deeper and more complex architectures,” arXiv preprint arXiv:1608.06037 (2016).

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” In Proceedings of the IEEE conference on computer vision and pattern recognition, p. 770–778 (2016).

Heckel, R.

Heidmann, P.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).
[Crossref]

Heidrich, W.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, and W. Heidrich, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Heshmat, B.

Horisaki, R.

Hoshizawa, T.

K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” 2017 International conference on computational photography.

Isaacson, K.

Jenks, K.

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

Jeon, D. S.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, and W. Heidrich, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Kapetanovic, S.

G. Kim, S. Kapetanovic, R. Palmer, and R. Menon, “Lensless-camera based machine learning for image classification,” arXiv preprint arXiv:1709.00408 [cs.CV] (2017).

Katz, O.

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).
[Crossref]

Kendall, A.

A. Kendall and Y. Gal, “What uncertainties do we need in bayesian deep learning for computer vision?” In Advances in neural information processing systems, p. 5574–5584 (2017).

Khan, S. S.

S. S. Khan, V. R. Adarsh, V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, “Towards photorealistic reconstruction of highly multiplex lensless images,” Proc. of IEEE International Conf. Computer Vision (2019).

Kim, G.

G. Kim and R. Menon, “Computational imaging enables a “see-through” lensless camera,” Opt. Express 26(18), 22826–22836 (2018).
[Crossref]

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

G. Kim, K. Isaacson, R. Palmer, and R. Menon, “Lensless photography with only an image sensor,” Appl. Opt. 56(23), 6450–6456 (2017).
[Crossref]

G. Kim and R. Menon, “Numerical analysis of computational cannula microscopy,” Appl. Opt. 56(9), D1–D7 (2017).
[Crossref]

G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015).
[Crossref]

G. Kim and R. Menon, “An ultra-small 3D computational microscope,” Appl. Phys. Lett. 105(6), 061114 (2014).
[Crossref]

G. Kim, S. Kapetanovic, R. Palmer, and R. Menon, “Lensless-camera based machine learning for image classification,” arXiv preprint arXiv:1709.00408 [cs.CV] (2017).

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, (2014).

Kitamoto, A.

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” arXiv preprint arXiv: 1812.01718v1 (2018).

Kuo, G.

Lamb, A.

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” arXiv preprint arXiv: 1812.01718v1 (2018).

LeCun, Y.

Y. LeCun and C. Cortes. MNIST handwritten digit database (2010).

Lee, J.

Li, S.

Menon, R.

P. Wang and R. Menon, “Computational multi-spectral video imaging,” J. Opt. Soc. Am. A 35(1), 189–199 (2018).
[Crossref]

G. Kim and R. Menon, “Computational imaging enables a “see-through” lensless camera,” Opt. Express 26(18), 22826–22836 (2018).
[Crossref]

G. Kim, K. Isaacson, R. Palmer, and R. Menon, “Lensless photography with only an image sensor,” Appl. Opt. 56(23), 6450–6456 (2017).
[Crossref]

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

G. Kim and R. Menon, “Numerical analysis of computational cannula microscopy,” Appl. Opt. 56(9), D1–D7 (2017).
[Crossref]

G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015).
[Crossref]

P. Wang and R. Menon, “Ultra-high sensitivity color imaging via a transparent diffractive-filter array and computational optics,” Optica 2(11), 933–939 (2015).
[Crossref]

P. Wang and R. Menon, “Computational spectroscopy via singular-value decomposition and regularization,” Opt. Express 22(18), 21541–21550 (2014).
[Crossref]

P. Wang and R. Menon, “Computational spectroscopy based on a broadband diffractive optic,” Opt. Express 22(12), 14575–14587 (2014).
[Crossref]

G. Kim and R. Menon, “An ultra-small 3D computational microscope,” Appl. Phys. Lett. 105(6), 061114 (2014).
[Crossref]

G. Kim, S. Kapetanovic, R. Palmer, and R. Menon, “Lensless-camera based machine learning for image classification,” arXiv preprint arXiv:1709.00408 [cs.CV] (2017).

P. Wang and R. Menon, “Computational snapshot angular-spectral lensless imaging,” arXiv preprint arXiv:1707.08104 [physics.optics] (2017).

Mildenhall, B.

Mitra, K.

S. S. Khan, V. R. Adarsh, V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, “Towards photorealistic reconstruction of highly multiplex lensless images,” Proc. of IEEE International Conf. Computer Vision (2019).

Molodtsov, M. I.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Monakhova, K.

Nagarajan, N.

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015).
[Crossref]

Nakamura, Y.

K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” 2017 International conference on computational photography.

Ng, R.

Nöbauer, T.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Osten, W.

A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction-limited resolution,” Sci. Rep. 7(1), 10687 (2017).
[Crossref]

Ozcan, A.

Palmer, R.

G. Kim, K. Isaacson, R. Palmer, and R. Menon, “Lensless photography with only an image sensor,” Appl. Opt. 56(23), 6450–6456 (2017).
[Crossref]

G. Kim, S. Kapetanovic, R. Palmer, and R. Menon, “Lensless-camera based machine learning for image classification,” arXiv preprint arXiv:1709.00408 [cs.CV] (2017).

Pastuzyn, E.

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

Pedrini, G.

A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction-limited resolution,” Sci. Rep. 7(1), 10687 (2017).
[Crossref]

Raskar, R.

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” In Proceedings of the IEEE conference on computer vision and pattern recognition, p. 770–778 (2016).

Rigneault, H.

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” In International Conference on Medical image computing and computer-assisted intervention, pages234–241. Springer, 2015.

Rouhani, M.

S. H. Hasanpour, M. Rouhani, M. Fayyaz, and M. Sabokrou, “Lets keep it simple, using simple architectures to outperform deeper and more complex architectures,” arXiv preprint arXiv:1608.06037 (2016).

Sabokrou, M.

S. H. Hasanpour, M. Rouhani, M. Fayyaz, and M. Sabokrou, “Lets keep it simple, using simple architectures to outperform deeper and more complex architectures,” arXiv preprint arXiv:1608.06037 (2016).

Sankaranarayanan, A.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” 2015 IEEE International Conference on computer vision workshop.

Sao, M.

K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” 2017 International conference on computational photography.

Saratchandran, P.

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178(12), 2621–2638 (2008).
[Crossref]

Satat, G.

Sheperd, J.

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).
[Crossref]

Shimano, T.

K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” 2017 International conference on computational photography.

Singh, A. K.

A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction-limited resolution,” Sci. Rep. 7(1), 10687 (2017).
[Crossref]

Sinha, A.

Situ, G.

Skocek, O.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” In Proceedings of the IEEE conference on computer vision and pattern recognition, p. 770–778 (2016).

Sundararajan, N.

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178(12), 2621–2638 (2008).
[Crossref]

Suresh, S.

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178(12), 2621–2638 (2008).
[Crossref]

Tajima, K.

K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” 2017 International conference on computational photography.

Takagi, R.

Takeda, M.

A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction-limited resolution,” Sci. Rep. 7(1), 10687 (2017).
[Crossref]

Tan, J.

S. S. Khan, V. R. Adarsh, V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, “Towards photorealistic reconstruction of highly multiplex lensless images,” Proc. of IEEE International Conf. Computer Vision (2019).

Tancik, M.

Tanida, J.

Tapson, J.

G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “Emnist: an extension of mnist to handwritten letters,” arXiv preprint arXiv:1702.05373 (2017).

Traub, F. M.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

van Schaik, A.

G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “Emnist: an extension of mnist to handwritten letters,” arXiv preprint arXiv:1702.05373 (2017).

Veeraraghavan, A.

S. S. Khan, V. R. Adarsh, V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, “Towards photorealistic reconstruction of highly multiplex lensless images,” Proc. of IEEE International Conf. Computer Vision (2019).

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” 2015 IEEE International Conference on computer vision workshop.

Vercosa, D. G.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Waller, L.

Wang, P.

Weilguny, L.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Xia, C. N.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Yamagata, M.

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).
[Crossref]

Yamamoto, K.

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” arXiv preprint arXiv: 1812.01718v1 (2018).

Yanny, K.

Ye, F.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Yi, S.

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, and W. Heidrich, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Yurtsever, J.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” In Proceedings of the IEEE conference on computer vision and pattern recognition, p. 770–778 (2016).

ACM Trans. Graph. (1)

D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, and W. Heidrich, “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Appl. Opt. (2)

Appl. Phys. Lett. (2)

G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015).
[Crossref]

G. Kim and R. Menon, “An ultra-small 3D computational microscope,” Appl. Phys. Lett. 105(6), 061114 (2014).
[Crossref]

Inf. Sci. (1)

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178(12), 2621–2638 (2008).
[Crossref]

J. Opt. Soc. Am. A (1)

Nat. Methods (1)

O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, and D. D. Cox, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018).

Nat. Photonics (1)

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).




Sci. Adv. (1)

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, and R. G. Baraniuk, “Single-frame 3D fluorescence microscopy with ultraminiature lensless Flatscope,” Sci. Adv. 3(12), e1701548 (2017).

Sci. Rep. (2)

A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction-limited resolution,” Sci. Rep. 7(1), 10687 (2017).

G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Sheperd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017).

Other (13)

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” In Proceedings of the 2015 IEEE International Conference on Computer Vision Workshops (2015).

S. S. Khan, V. R. Adarsh, V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, “Towards photorealistic reconstruction of highly multiplexed lensless images,” In Proceedings of the IEEE International Conference on Computer Vision (2019).

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

S. H. Hasanpour, M. Rouhani, M. Fayyaz, and M. Sabokrou, “Lets keep it simple, using simple architectures to outperform deeper and more complex architectures,” arXiv preprint arXiv:1608.06037 (2016).

Y. LeCun and C. Cortes, The MNIST handwritten digit database (2010).

G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “EMNIST: an extension of MNIST to handwritten letters,” arXiv preprint arXiv:1702.05373 (2017).

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” arXiv preprint arXiv:1812.01718 (2018).

P. Wang and R. Menon, “Computational snapshot angular-spectral lensless imaging,” arXiv preprint arXiv:1707.08104 [physics.optics] (2017).

K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” In Proceedings of the 2017 IEEE International Conference on Computational Photography (2017).

G. Kim, S. Kapetanovic, R. Palmer, and R. Menon, “Lensless-camera based machine learning for image classification,” arXiv preprint arXiv:1709.00408 [cs.CV] (2017).

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” In International Conference on Medical Image Computing and Computer-Assisted Intervention, p. 234–241, Springer (2015).

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” In Proceedings of the IEEE conference on computer vision and pattern recognition, p. 770–778 (2016).

A. Kendall and Y. Gal, “What uncertainties do we need in Bayesian deep learning for computer vision?” In Advances in Neural Information Processing Systems, p. 5574–5584 (2017).



Figures (7)

Fig. 1. (a) Schematic of our experimental setup. The object is an LCD display placed about 250 mm from a transparent plexiglass window; a color CMOS image sensor (with no optics) is placed at the edge of the window. (b) Photograph of our experimental setup. The letter “o” from the MNIST dataset is displayed on the LCD.
Fig. 2. CNN architecture for image reconstruction.
Fig. 3. Reconstruction results for MNIST data. Left: example images from the training set; right: example images from the test set.
Fig. 4. Reconstruction results for EMNIST data. Left: example images from the training set; right: example images from the test set.
Fig. 5. Reconstruction results for Kanji49 data. Left: example images from the training set; right: example images from the test set.
Fig. 6. (a) Schematic of the two classification methods. (b) Classification accuracy for the two methods and the three datasets.
Fig. 7. Confusion matrix of classification using (a) the raw sensor images and (b) the reconstructed images from the MNIST (6 classes) dataset.

Tables (2)

Table 1. Image size (in pixels) of the images used in each case.

Table 2. Details of datasets and summary of results.

Equations (1)


$$ L = -\frac{1}{N}\sum_{i} \left[\, g_i \log(p_i) + (1 - g_i)\log(1 - p_i) \,\right], $$
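This loss is the standard binary cross-entropy averaged over the N output pixels, with ground-truth value g_i and predicted probability p_i. A minimal NumPy sketch of the computation (the function name and the clipping constant are our own, not taken from the paper):

```python
import numpy as np

def bce_loss(g, p, eps=1e-12):
    """Binary cross-entropy between ground truth g and predictions p,
    averaged over all N elements."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return -np.mean(g * np.log(p) + (1.0 - g) * np.log(1.0 - p))
```

As a sanity check, a perfect prediction (p equal to g) gives a loss near zero, while a maximally uncertain prediction of p = 0.5 everywhere gives log 2 ≈ 0.693 regardless of the ground truth.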
