Abstract

Recovering the real light field, including the light field intensity distributions and continuous volumetric data in the object space, is an attractive and important topic given the development of light-field imaging. In this paper, a blind light field reconstruction method is proposed to recover the intensity distributions and continuous volumetric data without the assistance of prior geometric information. Based on an analysis of image formation, the light field reconstruction problem is approximated as a summation of localized reconstructions. Blind derivation of volumetric information is proposed based on backward image formation modeling, which exploits the correspondence among the deconvolved results. Finally, the light field is blindly reconstructed via the proposed inverse image formation approximation and wave propagation. We demonstrate that the method can blindly recover the light field intensity together with continuous volumetric data. It can be further extended to other light field imaging systems whenever the backward image formation model can be derived.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992).
    [Crossref]
  2. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Technical Report, Stanford University (2005).
  3. R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).
  4. N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in 2016 IEEE International Conference on Computational Photography (ICCP), Evanston, IL, pp. 1–11 (2016).
  5. Y. Zhang, Z. Li, W. Yang, P. Yu, H. Lin, and J. Yu, “The light field 3D scanner,” in 2017 IEEE International Conference on Computational Photography (ICCP), Stanford, CA, pp. 1–9 (2017).
  6. S. Shroff and K. Berkner, “High resolution image reconstruction for plenoptic imaging systems using system response,” in Imaging and Applied Optics Technical Papers, OSA Technical Digest (online) (Optical Society of America, 2012), paper CM2B.2.
    [Crossref]
  7. S. Shroff and K. Berkner, “Plenoptic system response and image formation,” in Imaging and Applied Optics, OSA Technical Digest (online) (Optical Society of America, 2013), paper JW3B.1.
  8. S. Shroff and K. Berkner, “Wave analysis of a plenoptic system and its applications,” Proc. SPIE 8667, 86671L (2013).
    [Crossref]
  9. L. Liu, X. Jin, and Q. Dai, “Image formation analysis and light field information reconstruction for plenoptic camera 2.0,” in Pacific-Rim Conference on Multimedia (PCM), Harbin, China, Sept. 28–29, 2017.
    [Crossref]
  10. C. Guo, H. Li, I. Muniraj, B. Schroeder, J. Sheridan, and S. Jia, “Volumetric light-field encryption at the microscopic scale,” in Frontiers in Optics 2017, OSA Technical Digest (online) (Optical Society of America, 2017), paper JTu2A.94.
  11. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013).
    [Crossref] [PubMed]
  12. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015).
    [Crossref] [PubMed]
  13. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (ICCP, 2009), pp. 1–8.
  14. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 1–28 (2010).
  15. X. Jin, L. Liu, Y. Chen, and Q. Dai, “Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0,” Opt. Express 25(9), 9947–9962 (2017).
    [Crossref] [PubMed]
  16. T. Georgiev and A. Lumsdaine, “Superresolution with plenoptic 2.0 cameras,” in Frontiers in Optics 2009/Laser Science XXV/Fall 2009, OSA Technical Digest (CD) (Optical Society of America, 2009), paper STuA6.
  17. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).
    [Crossref] [PubMed]
  18. C. C. Paige and M. A. Saunders, “LSQR: an algorithm for sparse linear equations and sparse least squares,” ACM Trans. Math. Softw. 8(1), 43–71 (1982).
    [Crossref]
  19. D. C. L. Fong and M. Saunders, “LSMR: an iterative algorithm for sparse least-squares problems,” SIAM J. Sci. Comput. 33(5), 2950–2971 (2011).
    [Crossref]
  20. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001).
    [Crossref]
  21. R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, Vol. I (Addison-Wesley Longman Publishing Co., 1992), pp. 28–48.


Figures (12)

Fig. 1 Optical structure of plenoptic camera 2.0.
Fig. 2 (a) Imaging targets “P,” “S,” and “F” placed at 65 mm, 67 mm, and 69 mm, respectively; (b) the simulated sensor data using the plenoptic camera 2.0 with a main lens (f1 = 40 mm, 4 mm radius) and a 3 × 3 microlens array (f2 = 4 mm, 160 μm radius).
Fig. 3 Imaging results under each microlens. The microlens coordinate is (mx, my). The region circled by the dotted line corresponds to the imaging area of one microlens. The regions outlined in red are the segmented imaging results of “P,” “F,” and “S” under each microlens. The segmented imaging results of “P” under microlenses (1,1), (1,2), and (2,1) are magnified on the right.
Fig. 4 (a) Segmented imaging results of “P” under microlenses (1,1), (1,2), and (2,1); (b)–(k) the reconstructed intensity at $d_{1n}$ from 62 mm to 71 mm using the images in (a).
Fig. 5 (a) Segmented imaging results under microlenses (2,2) and (3,2) for object “S” and under microlenses (2,3) and (3,3) for object “F,” respectively; (b)–(k) the reconstructed intensity at each $d_{1n}$ from 62 mm to 71 mm using the images in (a). The image in red represents $I_{s,\Omega_k}^{m_1,d_{1n}}$ and its reconstruction $I_0^{d_{1n},\Omega_k^{m_1}}$; the image in green represents $I_{s,\Omega_k}^{m_2,d_{1n}}$ and its reconstruction $I_0^{d_{1n},\Omega_k^{m_2}}$.
Fig. 6 Sensor data with noise. The brightness of the image is adjusted by 20% to show the noise clearly.
Fig. 7 (a) Segmented imaging results with 50 dB noise under microlenses (1,1) and (1,2) for object “P,” under (2,2) and (3,2) for object “S,” and under (2,3) and (3,3) for object “F,” respectively; (b)–(k) the reconstructed intensity at each $d_{1n}$ from 62 mm to 71 mm using the images in (a). The image in red represents $I_{s,\Omega_k}^{m_1,d_{1n}}$ and its reconstruction $I_0^{d_{1n},\Omega_k^{m_1}}$; the image in green represents $I_{s,\Omega_k}^{m_2,d_{1n}}$ and its reconstruction $I_0^{d_{1n},\Omega_k^{m_2}}$.
Fig. 8 The reconstructed volumetric information of “P,” “S,” and “F” in the object space, in which the recovered “P,” “S,” and “F” are located at 65 mm, 67 mm, and 69 mm, respectively. The images on the right of each subimage are the recovered spatial discrete intensity at the specific distance. (a) Reconstructed from the noise-free imaging results in Fig. 2(b); (b) reconstructed from the noisy imaging results in Fig. 6.
Fig. 9 Recovered light field intensity using light field repropagation at object distances of (a) 65 mm, (b) 67 mm, and (c) 69 mm.
Fig. 10 (a) Imaging target “A” placed at 66 mm; (b) the simulated sensor data using the plenoptic camera 2.0 with a 7 × 7 microlens array.
Fig. 11 (a) The first pair of imaging responses; (b) the second pair of imaging responses; (c) the third pair of imaging responses.
Fig. 12 The reconstructed volumetric information at 66 mm: (a) reconstructed from the first pair; (b) reconstructed from the second pair; (c) reconstructed from the third pair; (d) the recovered light field intensity at 66 mm.

Tables (3)

Table 1 Spatial similarity measurement between the reconstructed intensity images at different depths
Table 2 Spatial similarity measurement between the reconstructed intensity images for “S” and “F”
Table 3 Spatial similarity measurement between images of “P,” “S,” and “F” reconstructed from noisy imaging result

Equations (31)

Equations on this page are rendered with MathJax.

$$h(x,y,x_0,y_0)=\frac{\exp[ik(d_1+d_2+d_3+l)]}{\lambda^4 d_1 d_2 d_3 l}\sum_m\sum_n\iint_{-\infty}^{+\infty} t_{\mathrm{micro}}(x_m-mD,\,y_m-nD)\exp\!\left\{\frac{ik}{2d_3}\left[(x_1-x_m)^2+(y_1-y_m)^2\right]\right\}\exp\!\left\{\frac{ik}{2l}\left[(x-x_m)^2+(y-y_m)^2\right]\right\}dx_m\,dy_m\times\iint t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})\exp\!\left\{\frac{ik}{2d_1}\left[(x_0-x_{\mathrm{main}})^2+(y_0-y_{\mathrm{main}})^2\right]\right\}\exp\!\left\{\frac{ik}{2d_2}\left[(x_1-x_{\mathrm{main}})^2+(y_1-y_{\mathrm{main}})^2\right]\right\}dx_{\mathrm{main}}\,dy_{\mathrm{main}}\,dx_1\,dy_1,$$
$$t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})=P_1(x_{\mathrm{main}},y_{\mathrm{main}})\exp\!\left[-\frac{ik}{2f_1}\left(x_{\mathrm{main}}^2+y_{\mathrm{main}}^2\right)\right],$$
$$t_{\mathrm{micro}}(x_m,y_m)=P_2(x_m,y_m)\exp\!\left[-\frac{ik}{2f_2}\left(x_m^2+y_m^2\right)\right],$$
$$I(x,y)=\iint I(x_0,y_0)\left|h(x,y,x_0,y_0)\right|^2 dx_0\,dy_0,$$
$$I_s^{d_{1n}}=H^{d_{1n}}\,I_0^{d_{1n}}.$$
$$\begin{bmatrix} I_s^{d_{1n}}(1,1)\\ I_s^{d_{1n}}(1,2)\\ I_s^{d_{1n}}(2,1)\\ I_s^{d_{1n}}(2,2)\\ \vdots\\ I_s^{d_{1n}}(P_s,Q_s)\end{bmatrix}=\begin{bmatrix} H^{d_{1n}}(1,1,1,1) & H^{d_{1n}}(1,1,1,2) & \cdots & H^{d_{1n}}(1,1,P_0,Q_0)\\ H^{d_{1n}}(1,2,1,1) & H^{d_{1n}}(1,2,1,2) & \cdots & H^{d_{1n}}(1,2,P_0,Q_0)\\ H^{d_{1n}}(2,1,1,1) & H^{d_{1n}}(2,1,1,2) & \cdots & H^{d_{1n}}(2,1,P_0,Q_0)\\ H^{d_{1n}}(2,2,1,1) & H^{d_{1n}}(2,2,1,2) & \cdots & H^{d_{1n}}(2,2,P_0,Q_0)\\ \vdots & \vdots & \ddots & \vdots\\ H^{d_{1n}}(P_s,Q_s,1,1) & H^{d_{1n}}(P_s,Q_s,1,2) & \cdots & H^{d_{1n}}(P_s,Q_s,P_0,Q_0)\end{bmatrix}\begin{bmatrix} I_0^{d_{1n}}(1,1)\\ I_0^{d_{1n}}(1,2)\\ I_0^{d_{1n}}(2,1)\\ I_0^{d_{1n}}(2,2)\\ \vdots\\ I_0^{d_{1n}}(P_0,Q_0)\end{bmatrix},$$
$$I_0^{d_{1n}}=\arg\min\left\|H^{d_{1n}}I_0^{d_{1n}}-I_s^{d_{1n}}\right\|_2^2+\tau\left\|I_0^{d_{1n}}\right\|_2^2,$$
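The regularized inversion above can be sketched numerically. The tridiagonal $H$, the 8-element object vector, and the value of $\tau$ below are hypothetical stand-ins for the paper's calibrated system matrix and sensor data; this is a minimal Tikhonov solve via the normal equations $(H^{\top}H+\tau I)\,I_0=H^{\top}I_s$:

```python
import numpy as np

def tikhonov_solve(H, I_s, tau):
    """Solve argmin ||H I_0 - I_s||^2 + tau ||I_0||^2
    via the normal equations (H^T H + tau I) I_0 = H^T I_s."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + tau * np.eye(n), H.T @ I_s)

# Hypothetical toy system: a small blur acting on a 1-D "object" vector.
rng = np.random.default_rng(0)
H = np.eye(8) + 0.3 * np.eye(8, k=1) + 0.3 * np.eye(8, k=-1)  # toy response
I_0_true = rng.random(8)
I_s = H @ I_0_true          # simulated sensor data
I_0 = tikhonov_solve(H, I_s, tau=1e-6)
print(np.allclose(I_0, I_0_true, atol=1e-3))
```

For the large sparse systems of a real plenoptic sensor, iterative solvers such as LSQR/LSMR (refs. 18 and 19 above) replace the dense normal-equation solve.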
$$I_s^{d_{1n}}=\sum_{m=(1,1)}^{(M,N)} I_s^{m,d_{1n}}=\sum_{m=(1,1)}^{(M,N)} H_m^{d_{1n}}\,I_0^{d_{1n}},\qquad m=(m_x,m_y),\; m_x\in[1,M],\; m_y\in[1,N],$$
$$x_1=-\frac{d_2}{d_1}x_0,\qquad y_1=-\frac{d_2}{d_1}y_0.$$
$$\frac{1}{d_1}+\frac{1}{d_2}=\frac{1}{f_1}.$$
$$y_{m1}=\frac{y_1-R}{d_2}L+R,$$
$$y_{m2}=\frac{y_1+R}{d_2}L-R.$$
$$y_{m1}-r\le 2m_y r\le y_{m2}+r,$$
$$\frac{y_1-R}{d_2}L+R-r\le 2m_y r\le \frac{y_1+R}{d_2}L-(R-r).$$
$$-\frac{y_0}{d_1}L-\frac{R(d_1-f_1)}{d_1 f_1}L+R-r\le 2m_y r\le -\frac{y_0}{d_1}L+\frac{R(d_1-f_1)}{d_1 f_1}L-R+r.$$
$$-\frac{y_0}{d_1}L+\frac{R(d_1-f_1)}{d_1 f_1}L-R+r\le 2m_y r\le -\frac{y_0}{d_1}L-\frac{R(d_1-f_1)}{d_1 f_1}L+R-r.$$
$$x_2=-\frac{d_4}{L-d_2}\left(x_1-m_x D\right)+m_x D,\qquad y_2=-\frac{d_4}{L-d_2}\left(y_1-m_y D\right)+m_y D,$$
$$\frac{1}{L-d_2}+\frac{1}{d_4}=\frac{1}{f_2}.$$
$$x=-\frac{l}{L-d_2}\left(x_1-m_x D\right)+m_x D,\qquad y=-\frac{l}{L-d_2}\left(y_1-m_y D\right)+m_y D.$$
$$x=\frac{l d_2}{(L-d_2)d_1}\left(x_0+m_x D\left(\frac{(L-d_2)d_1}{l d_2}+\frac{d_1}{d_2}\right)\right),\qquad y=\frac{l d_2}{(L-d_2)d_1}\left(y_0+m_y D\left(\frac{(L-d_2)d_1}{l d_2}+\frac{d_1}{d_2}\right)\right),$$
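As a consistency check on this mapping, the closed form can be compared against the two-step form: main-lens magnification $x_1=-(d_2/d_1)x_0$ followed by projection through the microlens center. Here $f_1$ and $d_1$ loosely follow the Fig. 2 setup, while $L$, $l$, and $D$ are arbitrary illustrative values, not the paper's parameters:

```python
# Cross-check: closed-form object-to-sensor mapping vs. the two-step form
# x1 = -(d2/d1) x0, then x = -(l/(L-d2)) (x1 - mx*D) + mx*D.
f1, d1 = 40.0, 65.0          # main-lens focal length / object distance (mm)
d2 = d1 * f1 / (d1 - f1)     # Gaussian lens equation: 1/d1 + 1/d2 = 1/f1
L, l, D = 110.0, 4.0, 0.32   # illustrative: lens-to-MLA, MLA-to-sensor, pitch (mm)

def x_sensor_closed(x0, mx):
    c = l * d2 / ((L - d2) * d1)
    return c * (x0 + mx * D * ((L - d2) * d1 / (l * d2) + d1 / d2))

def x_sensor_two_step(x0, mx):
    x1 = -(d2 / d1) * x0
    return -(l / (L - d2)) * (x1 - mx * D) + mx * D

for x0, mx in [(0.0, 0), (0.5, 1), (-1.2, 2), (2.0, -3)]:
    assert abs(x_sensor_closed(x0, mx) - x_sensor_two_step(x0, mx)) < 1e-9
print("mappings agree")
```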
$$I_s^{d_{1n}}=\sum_{m=(1,1)}^{(M,N)} I_s^{m,d_{1n}}=\sum_{m=(1,1)}^{(M,N)} H_m^{d_{1n}}\,I_0^{d_{1n},m}.$$
$$I_s^{d_{1n}}=\sum_{m=(1,1)}^{(M,N)} I_s^{m,d_{1n}}=\sum_{m=(1,1)}^{(M,N)}\sum_{k=1}^{O} I_{s,\Omega_k}^{m,d_{1n}}=\sum_{m=(1,1)}^{(M,N)} H_m^{d_{1n}}\sum_{k=1}^{O} I_0^{d_{1n},\Omega_k^m},$$
$$I_0^{d_{1n}}=\sum_{k=1}^{O}\sum_{m=(1,1)}^{(M,N)}\arg\min\left\|H_m^{d_{1n}} I_0^{d_{1n},\Omega_k^m}-I_{s,\Omega_k}^{m,d_{1n}}\right\|_2^2+\tau\left\|I_0^{d_{1n},\Omega_k^m}\right\|_2^2.$$
$$x_{m_x}^{d_{1n}}=\frac{l(d_{1n}-f_1)}{L(d_{1n}-f_1)-d_{1n}f_1}\left(\frac{f_1}{d_{1n}-f_1}\,x_0^{d_{1n},\Omega_k^{m_x}}+m_x D\right)+m_x D,$$
$$x_0^{d_{1n},\Omega_k^{m_x}}=\frac{d_{1n}-f_1}{f_1}\left(\frac{L(d_{1n}-f_1)-d_{1n}f_1}{l(d_{1n}-f_1)}\left(x_{m_x}^{d_{1n}}-m_x D\right)-m_x D\right).$$
$$x_0^{d_{1n'},\Omega_k^{m_x}}=\frac{L(d_{1n'}-f_1)-d_{1n'}f_1}{L(d_{1n}-f_1)-d_{1n}f_1}\,x_0^{d_{1n},\Omega_k^{m_x}}+\left(\frac{d_{1n}-f_1}{f_1}\cdot\frac{L(d_{1n'}-f_1)-d_{1n'}f_1}{L(d_{1n}-f_1)-d_{1n}f_1}-\frac{d_{1n'}-f_1}{f_1}\right)m_x D.$$
$$x_0^{d_{1n'},\Omega_k^{m_{x1}}}-x_0^{d_{1n'},\Omega_k^{m_{x2}}}=\left(\frac{d_{1n}-f_1}{f_1}\cdot\frac{L(d_{1n'}-f_1)-d_{1n'}f_1}{L(d_{1n}-f_1)-d_{1n}f_1}-\frac{d_{1n'}-f_1}{f_1}\right)D\left(m_{x1}-m_{x2}\right),$$
$$d_{1n}=\arg\min_{d_{1n}}\,\mathrm{Dis}\!\left(I_0^{d_{1n},\Omega_k^{m_1}},\,I_0^{d_{1n},\Omega_k^{m_2}}\right),$$
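This depth-selection rule can be sketched on synthetic data: reconstructions of the same segment from different microlenses agree only at the correct depth. The integer-disparity model, the random patch, and the MSE choice for $\mathrm{Dis}(\cdot,\cdot)$ below are hypothetical illustrations, not the paper's actual reconstruction offset:

```python
import numpy as np

# Toy depth search: pick the candidate depth whose two "reconstructions"
# of the same segment are most similar.
rng = np.random.default_rng(1)
patch = rng.random(64)
depths = np.arange(62, 72)             # candidate d_1n (mm), as in Fig. 4
true_depth = 67
disp = {d: d - 62 for d in depths}     # hypothetical integer disparity model

# Two views of the same patch, offset by the disparity at the true depth.
view1 = patch
view2 = np.roll(patch, disp[true_depth])

def dis(a, b):
    return np.mean((a - b) ** 2)       # Dis(., .) taken as mean squared error

# Undo each candidate disparity in view2 and compare with view1.
scores = {d: dis(view1, np.roll(view2, -disp[d])) for d in depths}
best = min(scores, key=scores.get)
print(best)  # the depth at which the two reconstructions coincide
```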
$$I_0^{d_{1n},\Omega_k^{m_1}}=\arg\min\left\|H_{m_1}^{d_{1n}} I_0^{d_{1n},\Omega_k^{m_1}}-I_{s,\Omega_k}^{m_1,d_{1n}}\right\|_2^2+\tau\left\|I_0^{d_{1n},\Omega_k^{m_1}}\right\|_2^2.$$
$$U^{d_{1m},d_{1n}}(x',y')=\frac{\exp\{ik|d_{1n}-d_{1m}|\}}{i\lambda|d_{1n}-d_{1m}|}\exp\!\left[\frac{ik}{2|d_{1n}-d_{1m}|}\left(x'^2+y'^2\right)\right]\iint_{-\infty}^{+\infty}\sqrt{I_0^{d_{1n},\Omega_k}(x_0,y_0)}\,\exp\{i\theta(x_0,y_0)\}\exp\!\left[\frac{ik}{2|d_{1n}-d_{1m}|}\left(x_0^2+y_0^2\right)\right]\exp\!\left[-\frac{ik}{|d_{1n}-d_{1m}|}\left(x_0 x'+y_0 y'\right)\right]dx_0\,dy_0,$$
$$\left|U^{d_{1m}}\right|=\sqrt{I_0^{d_{1m}}}+\sum_{d_{1n}\neq d_{1m}}\left|U^{d_{1m},d_{1n}}\right|,$$
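The repropagation integral can be sketched by direct summation on a small grid; the grid size, pixel pitch, wavelength, and distance below are arbitrary illustrative values, not the paper's parameters. For a point source the Fresnel kernel has constant magnitude across the output plane, which serves as a quick sanity check:

```python
import numpy as np

def fresnel_propagate(amp, phase, wavelength, dz, dx):
    """Direct-sum Fresnel diffraction of the field sqrt(I0)*exp(i*theta)
    over distance dz; amp and phase are n-by-n arrays with pixel pitch dx."""
    k = 2 * np.pi / wavelength
    n = amp.shape[0]
    coords = (np.arange(n) - n // 2) * dx
    x0, y0 = np.meshgrid(coords, coords)            # source-plane coordinates
    field = amp * np.exp(1j * phase) * np.exp(1j * k / (2 * dz) * (x0**2 + y0**2))
    out = np.zeros((n, n), dtype=complex)
    for i, xp in enumerate(coords):                 # output-plane x'
        for j, yp in enumerate(coords):             # output-plane y'
            kernel = np.exp(-1j * k / dz * (x0 * xp + y0 * yp))
            out[j, i] = np.sum(field * kernel) * dx * dx
    pref = np.exp(1j * k * dz) / (1j * wavelength * dz)
    xp, yp = np.meshgrid(coords, coords)
    return pref * np.exp(1j * k / (2 * dz) * (xp**2 + yp**2)) * out

# Sanity check: a point source propagates to a constant-magnitude field.
n, dx, wl, dz = 16, 1e-5, 5e-7, 2e-3
amp = np.zeros((n, n)); amp[n // 2, n // 2] = 1.0
U = fresnel_propagate(amp, np.zeros((n, n)), wl, dz, dx)
print(np.allclose(np.abs(U), np.abs(U)[0, 0]))
```

In practice the double loop would be replaced by a single FFT (the quadratic phase times a Fourier transform), but the direct sum mirrors the integral term by term.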
