Abstract

Three-dimensional (3D) reconstruction based on optical diffusion offers significant advantages: high-precision depth estimation with a small lens, depth estimation for distant objects, a monocular-vision basis, and no need for camera or scene adjustment. However, few mathematical models have been proposed that relate the depth information acquired with this technique to the underlying intensity distribution during optical diffusion. In this paper, the heat diffusion equation from physics is applied to construct a mathematical model of the intensity distribution during optical diffusion, and a high-precision 3D reconstruction method based on the heat diffusion equation is proposed. First, the heat diffusion equation is analyzed and an optical diffusion model is introduced to explain the basic principles of the diffusion imaging process. Second, a novel 3D reconstruction method based on global heat diffusion is proposed, which incorporates the relationship between depth and the degree of diffusion. Finally, a simulation on synthetic images and an experiment using five playing cards are conducted, and the results confirm the effectiveness and feasibility of the proposed method.
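The link the abstract draws between optical diffusion and the heat equation can be made concrete: evolving an image under the isotropic heat equation \( \dot{u} = a\,\nabla^2 u \) for a time \( t \) is equivalent to convolving it with a Gaussian of variance \( \sigma^2 = 2ta \) (one of the relations listed under Equations below). The following is a minimal NumPy/SciPy sketch of this equivalence; the values of a and t are illustrative, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

# Illustrative parameters (not taken from the paper): diffusion
# coefficient a and total diffusion time t.
a, t = 0.5, 4.0
dt = 0.1                      # explicit scheme is stable for dt <= 1/(4a) on a unit grid

rng = np.random.default_rng(0)
u = rng.random((128, 128))    # stand-in for the focused image r(x, y)

# Explicit finite-difference integration of du/dt = a * laplacian(u).
v = u.copy()
for _ in range(round(t / dt)):
    v = v + dt * a * laplace(v)

# The heat kernel at time t is a Gaussian with variance sigma^2 = 2*t*a,
# so Gaussian filtering of u should match the diffused result.
w = gaussian_filter(u, sigma=np.sqrt(2 * t * a))

print(np.max(np.abs(v - w)))  # small, up to discretization error
```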

© 2015 Optical Society of America



Figures (18)

Fig. 1 Diffusion theory and a diffused image.
Fig. 2 Geometry of a diffusion process and its energy distribution.
Fig. 3 Different diffusion forms (uniform, batwing, and Gaussian).
Fig. 4 Geometry of diffusion in a pinhole camera.
Fig. 5 Flow graph of our algorithm.
Fig. 6 The synthesized images of a cosine plane.
Fig. 7 The grayscale depth maps.
Fig. 8 The reconstructed 3D surface and the true 3D surface.
Fig. 9 The error map between the estimated surface and the true surface.
Fig. 10 The synthesized images of a rectangle plane.
Fig. 11 The grayscale depth maps.
Fig. 12 The reconstructed 3D surface and the true surface.
Fig. 13 The error map between the estimated surface and the true surface.
Fig. 14 Focused image and diffused image of the arranged cards.
Fig. 15 Reconstructed surface of our method.
Fig. 16 3D reconstructed surface of our method.
Fig. 17 True 3D surface of the cards.
Fig. 18 Error surface of our method.

Equations (28)


\( J(x,y,t) = -a\,\nabla u(x,y,t) \)
\( \dfrac{\partial u(x,y,t)}{\partial t} = -\nabla \cdot J(x,y,t) \)
\( \dfrac{\partial u(x,y,t)}{\partial t} = a\,\nabla^2 u(x,y,t) \)
\( \begin{cases} \dfrac{\partial u(x,y,t)}{\partial t} = a\,\nabla^2 u(x,y,t) \\ u(x,y,0) = u_0(x,y) \end{cases} \)
\( \begin{cases} \dfrac{\partial u(x,y,t)}{\partial t} = \nabla \cdot \big( a(x,y)\,\nabla u(x,y,t) \big) \\ u(x,y,0) = u_0(x,y) \end{cases} \)
\( b = \dfrac{v}{U} \cdot \dfrac{AB}{2} = \dfrac{v \tan\theta}{\cos^2\alpha} \cdot \dfrac{Z}{Z+U} \)
\( I(x_1,y_1) = \iint h(x_1,y_1,x,y,b)\, r(x,y)\, dx\, dy \)
\( I = h(\cdot\,,\cdot\,,b) * r \)
\( h(x,y,x_1,y_1) = \dfrac{1}{2\pi\sigma^2} \exp\!\left( -\dfrac{(x-x_1)^2 + (y-y_1)^2}{2\sigma^2} \right) \)
\( \iint h(x_1,y_1,x,y,b)\, dx_1\, dy_1 = 1 \)
\( \begin{cases} \dot{u}(x,y,t) = a\,\Delta u(x,y,t), & a \in [0,\infty),\ t \in (0,\infty) \\ u(x,y,0) = r(x,y) \end{cases} \)
\( \sigma^2 = 2ta \)
\( h(x,y,x_1,y_1,t) = \dfrac{1}{4\pi t a} \exp\!\left( -\dfrac{(x-x_1)^2 + (y-y_1)^2}{4ta} \right) \)
\( I(x_1,y_1,t) = \left( \dfrac{1}{4\pi t a} \exp\!\left( -\dfrac{(x-x_1)^2 + (y-y_1)^2}{4ta} \right) \right) * r(x,y) \)
\( \begin{cases} \dot{u}(x,y,t) = \nabla \cdot \big( a(x,y)\,\nabla u(x,y,t) \big), & t \in (0,\infty) \\ u(x,y,0) = r(x,y) \end{cases} \)
\( \sigma^2(x,y) = 2t\,a(x,y) \)
\( a(x,y) = \dfrac{\sigma^2(x,y)}{2t} = \dfrac{\gamma^2 v^2 \tan^2\theta}{2t \cos^4\alpha} \left( \dfrac{Z}{Z+U} \right)^2 \)
\( \begin{cases} \dot{u}(x,y,t) = \nabla \cdot \big( a(x,y)\,\nabla u(x,y,t) \big), & t \in (0,\infty) \\ u(x,y,0) = E_1(x,y) \\ u(x,y,t_2) = E_2(x,y) \end{cases} \)
\( a(x,y) - \dfrac{\gamma^2 v^2 \tan^2\theta}{2t \cos^4\alpha} \left( \dfrac{Z}{Z+U} \right)^2 = 0 \)
\( \begin{cases} \dot{u}(x,y,t) = \nabla \cdot \big( a(x,y)\,\nabla u(x,y,t) \big), & t \in (0,\infty) \\ u(x,y,0) = E_1(x,y) \\ u(x,y,\Delta t) = E_2(x,y) \\ 0 = a(x,y)\,\nabla u(x,y,t) \cdot n(x,y) \end{cases} \)
\( Z = \dfrac{\sigma U \cos^2\alpha}{\gamma v \tan\theta - \sigma \cos^2\alpha} \)
\( \tilde{Z} = \underset{Z(x,y)}{\arg\min} \iint \big( u(x,y,\Delta t) - E_2(x,y) \big)^2\, dx\, dy \)
\( \tilde{Z} = \underset{Z(x,y)}{\arg\min} \iint \big( u(x,y,\Delta t) - E_2(x,y) \big)^2\, dx\, dy + \eta \left\| \nabla Z(x,y) \right\|^2 + \eta k \left\| Z(x,y) \right\|^2 \)
\( F(Z) = \iint \big( u(x,y,\Delta t) - E_2(x,y) \big)^2\, dx\, dy + \eta \left\| \nabla Z \right\|^2 + \eta k \left\| Z \right\|^2 \)
\( \tilde{Z} = \arg\min_{Z} F(Z) \)
\( \dfrac{\partial Z}{\partial t} = -F'(Z) \)
\( \zeta = Z_e / Z - 1 \)
\( \varphi = E\!\left[ \left( Z_e / Z - 1 \right)^2 \right] \)
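The last group of equations defines the reconstruction as an inverse diffusion problem: simulate diffusion of the focused image E1 under a depth-dependent coefficient a(x,y), compare the result against the diffused image E2 through the regularized functional F(Z), and descend the gradient flow \( \partial Z/\partial t = -F'(Z) \); ζ and φ are the relative-error metrics used in the evaluation. As a simplified illustration of the same idea, the sketch below replaces the gradient flow with a brute-force per-pixel search over candidate σ values (exploiting the Gaussian-kernel form of the diffusion) and then converts σ to depth with the closed-form relation \( Z = \sigma U \cos^2\alpha / (\gamma v \tan\theta - \sigma \cos^2\alpha) \). The optical constants are hypothetical stand-ins, not calibrated values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

# Hypothetical optical constants standing in for gamma, v, theta, alpha, U;
# none of these values come from the paper.
GAMMA, V, THETA, ALPHA, U = 1.0, 10.0, np.deg2rad(20.0), 0.0, 50.0

def depth_from_sigma(sigma):
    """Invert sigma = gamma*v*tan(theta)/cos^2(alpha) * Z/(Z+U) for Z, i.e.
    Z = sigma*U*cos^2(alpha) / (gamma*v*tan(theta) - sigma*cos^2(alpha))."""
    c2 = np.cos(ALPHA) ** 2
    return sigma * U * c2 / (GAMMA * V * np.tan(THETA) - sigma * c2)

def estimate_sigma(E1, E2, sigmas=np.linspace(0.2, 3.0, 29), win=9):
    """Per-pixel brute-force search: diffuse E1 with each candidate sigma and
    keep the sigma whose locally averaged squared error against E2 is smallest."""
    errs = np.stack([
        uniform_filter((gaussian_filter(E1, s) - E2) ** 2, size=win)
        for s in sigmas
    ])
    return sigmas[np.argmin(errs, axis=0)]

# Usage on synthetic data: diffuse a random texture with a known sigma,
# recover the sigma map, and convert it to depth.
rng = np.random.default_rng(1)
E1 = rng.random((64, 64))            # weakly diffused (here: focused) image
E2 = gaussian_filter(E1, 1.2)        # more strongly diffused image
sigma_map = estimate_sigma(E1, E2)   # should cluster around 1.2
Z_map = depth_from_sigma(sigma_map)  # per-pixel depth via the closed form
```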
