Abstract

This paper presents a depth-based defocus map estimation method using a single camera with multiple off-axis apertures. The proposed algorithm consists of two steps: (i) object distance estimation using the off-axis apertures and (ii) defocus map estimation based on the estimated object distance. The method accurately estimates the defocus map from object distances that are well characterized by a color shift model-based computational camera. Experimental results show that the proposed method outperforms state-of-the-art defocus estimation methods in terms of both accuracy and estimation range. The proposed defocus map estimation method is therefore suitable for multifocusing, refocusing, and extended depth of field (EDoF) systems.
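The two-step pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the constants are hypothetical, and the per-pixel relations assume the thin-lens and color-shift model developed in the paper, where a per-pixel color shift Δ maps to an object distance and a circle-of-confusion (defocus) value.

```python
import numpy as np

# Hypothetical camera constants for illustration only (not from the paper).
F = 0.05      # focal length [m]
S_IN = 2.0    # in-focus object distance [m]
V_IN = 1.0 / (1.0 / F - 1.0 / S_IN)  # in-focus image distance via the thin-lens law
D_C = 0.01    # off-axis displacement of the aperture from the optical axis [m]
D_L = 0.005   # aperture diameter [m]

def estimate_maps(shift_map):
    """Step (i): per-pixel color shift -> object distance map.
    Step (ii): color shift -> circle-of-confusion (defocus) map."""
    distance = S_IN * V_IN * D_C / (V_IN * D_C + S_IN * shift_map)
    defocus = D_L * np.abs(shift_map) / D_C
    return distance, defocus

# A zero color shift everywhere means the scene lies at the in-focus distance,
# so the distance map equals S_IN and the defocus map is zero.
dist, defocus = estimate_maps(np.zeros((4, 4)))
```

The design point is that, unlike blur-kernel-based single-image methods, the defocus value here is derived from a physically estimated distance, so the map is signed in distance (near vs. far of the focal plane) even though the blur magnitude itself is not.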

© 2015 Optical Society of America


References

  1. Y.-W. Wen, M. K. Ng, and Y.-M. Huang, “Efficient total variation minimization methods for color image restoration,” IEEE Trans. Image Process. 17(11), 2081–2088 (2008).
  2. V. Maik, J. Shin, and J. Paik, “Regularized image restoration by means of fusion for digital auto focusing,” in Proceedings of Computational Intelligence and Security (Springer, 2005), pp. 928–934.
  3. P. Favaro and S. Soatto, “A geometric approach to shape from defocus,” IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 406–417 (2005).
  4. P. Favaro, S. Soatto, M. Burger, and S. Osher, “Shape from defocus via diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 30(3), 518–531 (2008).
  5. S. Pertuz, D. Puig, M. Garcia, and A. Fusiello, “Generation of all-in-focus images by noise-robust selective fusion of limited depth-of-field images,” IEEE Trans. Image Process. 22(3), 1242–1251 (2013).
  6. Y. Tai and M. Brown, “Single image defocus map estimation using local contrast prior,” in Proceedings of International Conference on Image Processing (IEEE, 2009), pp. 1798–1800.
  7. S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recogn. 44(9), 1852–1858 (2011).
  8. C. Shen, W. Hwang, and S. Pei, “Spatially-varying out-of-focus image deblurring with L1-2 optimization and a guided blur map,” in Proceedings of International Conference on Acoustics, Speech and Signal Processing (IEEE, 2012), pp. 1069–1072.
  9. C. Willert and M. Gharib, “Three-dimensional particle imaging with a single camera,” Exp. Fluids 12(6), 353–358 (1992).
  10. H. Farid and E. P. Simoncelli, “Range estimation by optical differentiation,” J. Opt. Soc. Am. A 15(7), 1777–1786 (1998).
  11. S. Nayar, “Computational cameras: redefining the image,” Computer 39(8), 30–38 (2006).
  12. C. Zhou and S. Nayar, “Computational cameras: convergence of optics and processing,” IEEE Trans. Image Process. 20(12), 3322–3340 (2011).
  13. Y. Lim, J. Park, K. Kwon, and N. Kim, “Analysis on enhanced depth of field for integral imaging microscope,” Opt. Express 20(21), 23480–23488 (2012).
  14. T. Nakamura, R. Horisaki, and J. Tanida, “Computational phase modulation in light field imaging,” Opt. Express 21(29), 29523–29543 (2013).
  15. S. Kim, E. Lee, M. Hayes, and J. Paik, “Multifocusing and depth estimation using a color shift model-based computational camera,” IEEE Trans. Image Process. 21(9), 4152–4166 (2012).
  16. A. Mohan, X. Huang, J. Tumblin, and R. Raskar, “Sensing increased image resolution using aperture masks,” in Proceedings of Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.
  17. V. Maik, D. Cho, J. Shin, D. Har, and J. Paik, “Color shift model-based segmentation and fusion for digital autofocusing,” J. Imaging Sci. Technol. 51, 368–379 (2007).
  18. Y. Bando, B. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27(5), 134 (2008).
  19. S. Lee, M. H. Hayes, and J. Paik, “Distance estimation using a single computational camera with dual off-axis color filtered apertures,” Opt. Express 21(20), 23116–23129 (2013).
  20. H. Foroosh, J. Zerubia, and M. Berthod, “Extension of phase correlation to subpixel registration,” IEEE Trans. Image Process. 11(3), 188–200 (2002).
  21. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2007).
  22. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a ‘Completely Blind’ Image Quality Analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013).
  23. C. T. Vu, T. D. Phan, and D. M. Chandler, “S3: A Spectral and Spatial Measure of Local Perceived Sharpness in Natural Images,” IEEE Trans. Image Process. 21(3), 934–945 (2012).




Figures (11)

Fig. 1 Comparison of the convergence patterns with different aperture positions.
Fig. 2 A thin-lens model with a single off-axis aperture.
Fig. 3 The relationship between the object distance and the corresponding shifting vector in the single-aperture model.
Fig. 4 Different convergence patterns with multiple apertures and a sensor.
Fig. 5 The size of the circle of confusion varies according to the object location.
Fig. 6 The relationship between the color shifting vectors and the distances among the RGB channels in the image and on the sensor.
Fig. 7 Experimental setup: (a) far-focused image, (b) in-focus image, (c) near-focused image, and (d), (e) multiple object locations captured with the multiple color-filtered aperture camera using different values of s_in.
Fig. 8 Relationship between the object distance and the corresponding circle of confusion: (a,c) object distance according to the shifting vector and (b,d) circles of confusion for the object distances in the MCA.
Fig. 9 Comparison of different defocus map estimation methods: (a) the input image acquired by the MCA camera, (b) the defocus map estimated by Zhuo’s method [7], (c) the defocus map estimated by Shen’s method [8], and (d) the defocus map estimated by the proposed method.
Fig. 10 Comparison of different defocus map estimation methods: (a)–(b) two input images acquired by the MCA camera, (c)–(d) the defocus maps estimated using Zhuo’s method [7], (e)–(f) the defocus maps estimated using Shen’s method [8], and (g)–(h) the defocus maps estimated using the proposed method.
Fig. 11 Comparison of the restored results using the estimated defocus maps: (a) an input image acquired by the MCA camera, (b) the experimental setup, (c) the defocus map estimated by Zhuo’s method [7], (d) the restored image using (c), (e) the defocus map estimated by Shen’s method [8], (f) the restored image using (e), (g) the defocus map estimated by the proposed method, and (h) the restored image using (g).

Tables (3)

Algorithm 1 Depth-based Defocus Map Estimation
Table 1 Hardware Specifications of the Off-axis Aperture Camera
Table 2 Comparison of the restoration performance using various defocus maps

Equations (12)

$$\frac{1}{f} = \frac{1}{s_j} + \frac{1}{v_j}, \quad \text{where } j \in \{\mathrm{far},\, \mathrm{in},\, \mathrm{near}\},$$
$$\frac{1}{s_{\mathrm{in}}} + \frac{1}{v_{\mathrm{in}}} = \frac{1}{s} + \frac{1}{v},$$
$$s = \frac{s_{\mathrm{in}} v_{\mathrm{in}} v}{v_{\mathrm{in}} v + s_{\mathrm{in}} v - s_{\mathrm{in}} v_{\mathrm{in}}}.$$
$$v = \frac{v_{\mathrm{in}} d_c}{d_c - \Delta},$$
$$s = \frac{s_{\mathrm{in}} v_{\mathrm{in}} d_c}{v_{\mathrm{in}} d_c + s_{\mathrm{in}} \Delta}.$$
$$c_{\mathrm{near}} = \frac{d_l \left( v_{\mathrm{near}} - v_{\mathrm{in}} \right)}{v_{\mathrm{near}}},$$
$$c_{\mathrm{far}} = \frac{d_l \left( v_{\mathrm{in}} - v_{\mathrm{far}} \right)}{v_{\mathrm{far}}},$$
$$c = \frac{d_l \left| v - v_{\mathrm{in}} \right|}{v}.$$
$$c = \frac{d_l \left| \Delta \right|}{d_c}.$$
$$\Delta = \frac{p}{m}\,\delta = \frac{q}{n}\,\delta,$$
$$c = \frac{d_l v_{\mathrm{in}} \left( s_{\mathrm{in}} - s \right)}{s\, s_{\mathrm{in}}}.$$
$$\delta = \frac{1}{3\sqrt{3}}\, a.$$
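As a numerical sanity check on these relations, the sketch below (with hypothetical constants: a 50 mm lens focused at 2 m, an assumed aperture offset d_c and diameter d_l) computes the object distance s from a shifting vector Δ and verifies that the circle of confusion obtained directly from Δ, c = d_l|Δ|/d_c, agrees with the one obtained from the recovered distance, c = d_l v_in (s_in − s)/(s s_in).

```python
def object_distance(s_in, v_in, d_c, delta):
    # s = s_in * v_in * d_c / (v_in * d_c + s_in * Δ)
    return s_in * v_in * d_c / (v_in * d_c + s_in * delta)

def coc_from_shift(d_l, d_c, delta):
    # c = d_l * |Δ| / d_c
    return d_l * abs(delta) / d_c

def coc_from_distance(d_l, v_in, s_in, s):
    # c = d_l * v_in * (s_in - s) / (s * s_in)
    return d_l * v_in * (s_in - s) / (s * s_in)

# Hypothetical numbers: f = 50 mm lens focused at s_in = 2 m.
f, s_in = 0.05, 2.0
v_in = 1.0 / (1.0 / f - 1.0 / s_in)   # thin-lens law
d_c, d_l = 0.01, 0.005                # assumed aperture offset and diameter [m]
delta = 1e-4                          # 0.1 mm color shift on the sensor
s = object_distance(s_in, v_in, d_c, delta)
c1 = coc_from_shift(d_l, d_c, delta)
c2 = coc_from_distance(d_l, v_in, s_in, s)
# c1 and c2 agree (up to floating-point rounding), confirming the two CoC
# expressions are consistent under the shift-to-distance mapping.
```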
