Abstract

The deep learning wavefront sensor (DLWFS) allows the direct estimation of the Zernike coefficients of aberrated wavefronts from intensity images. The main drawback of this approach is its reliance on massive convolutional neural networks (CNNs) that are slow to train and to evaluate. In this paper, we explore several options for reducing both the training and estimation time. First, we develop a CNN that can be trained rapidly without compromising accuracy. Second, we explore the effects of smaller input image sizes and of different numbers of Zernike modes to be estimated. Our simulation results demonstrate that the proposed network, using images of $8 \times 8$, $16 \times 16$, or $32 \times 32$ pixels, dramatically reduces training time and can even improve the estimation accuracy of the Zernike coefficients. Our experimental results confirm that a $16 \times 16$ DLWFS can be trained quickly and can estimate the first 12 Zernike coefficients (skipping piston, tip, and tilt) without sacrificing accuracy, while significantly reducing prediction time so as to facilitate low-cost, real-time adaptive optics systems.

© 2021 Optical Society of America
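
As a rough illustration of the direct mapping that a DLWFS learns, the following is a minimal Keras sketch of a small CNN that regresses 12 Zernike coefficients from a $16 \times 16$ intensity image. The function name, layer widths, kernel sizes, and optimizer choice are assumptions made for illustration; this is not the WFNet architecture described in the paper.

```python
import tensorflow as tf

def build_small_regressor(input_size=16, n_modes=12):
    """Regress n_modes Zernike coefficients from a single-channel intensity image."""
    inputs = tf.keras.Input(shape=(input_size, input_size, 1))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    outputs = tf.keras.layers.Dense(n_modes)(x)   # linear outputs: one per Zernike mode
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")   # coefficients fit with mean-squared error
    return model

model = build_small_regressor()
# model.fit(psf_images, zernike_coefficients, ...)  # hypothetical training arrays
```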

Figures (9)

Fig. 1. WFNet convolutional neural network scheme.
Fig. 2. Neuron activation of the first convolutional layer for each of the three convolutional branches in the WFNet: (a) input image; (b) upper branch with vertical convolutions; (c) central branch of $3 \times 3$ convolutions; and (d) lower branch with horizontal convolutions. (A minimal code sketch of this three-branch layout follows the figure list.)
Fig. 3. PSFs from random aberrations (saturated to reveal detail). (a), (b) In-focus PSFs for two randomly aberrated wavefronts; (c), (d) the corresponding defocused PSFs from (a) and (b), using a defocus of ${a_4} = 8$.
Fig. 4. Comparison of interaction matrices for the Xception and WFNet networks using 12 Zernike modes (each purely stimulated at ${a_n} = 2$) at different input image sizes.
Fig. 5. Comparison of networks trained for 12, 18, 25, and 33 Zernike modes with an input size of $32 \times 32$, with and without residual phase. (a) Red area plot: noiseless results extracted from Table 2. (b) Blue area plot: results with added phase noise, using Zernike coefficients from the 37th to the 53rd mode drawn from a uniform distribution on [${-}$0.05, 0.05].
Fig. 6. Experimental setup for training and testing the DLWFS using the proposed WFNet.
Fig. 7. Experimental median RMSE per Zernike mode for the DLWFS using WFNet trained with 200,000 samples, for different image sizes.
Fig. 8. Experimental interaction matrices for the DLWFS using WFNet trained with 200,000 samples, for different input sizes.
Fig. 9. Experimental estimation comparison for the DLWFS using WFNet at different resolutions, trained with 200,000 samples. (a) Best-case and (b) worst-case input images (top), with the associated phase reconstructions (middle) from the estimated Zernike modes (bottom).
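
The branch structure highlighted in Fig. 2 can be expressed as a single building block with three parallel convolutions (vertical, $3 \times 3$, and horizontal) whose outputs are concatenated along the channel axis. The sketch below is a hedged illustration: the kernel extents, filter counts, and function name are assumed values, not the published WFNet parameters.

```python
import tensorflow as tf

def three_branch_block(x, filters=16, k=7):
    """Parallel vertical (k x 1), square (3 x 3), and horizontal (1 x k) convolutions."""
    vertical   = tf.keras.layers.Conv2D(filters, (k, 1), padding="same", activation="relu")(x)
    square     = tf.keras.layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    horizontal = tf.keras.layers.Conv2D(filters, (1, k), padding="same", activation="relu")(x)
    return tf.keras.layers.Concatenate(axis=-1)([vertical, square, horizontal])

# Example: apply the block to a 16 x 16 single-channel input.
inputs = tf.keras.Input(shape=(16, 16, 1))
features = three_branch_block(inputs)
```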

Tables (3)

Table 1. Comparison of the Simulated Results for the DLWFS Using Xception and WFNet Trained for 12 Zernike Modes for Different Input Sizes Using 50,000 Training Samples and 20,000 Test Samples

Table 2. Simulation Results for the DLWFS Using WFNet for Different Input Sizes, Different Zernike Modes, 200,000 Training Samples and 20,000 Testing Samples

Table 3. Experimental Results of the DLWFS Using WFNet Trained for 12 Zernike Modes and Using a Test Dataset of 10,000 Samples

Equations (2)

$h = |\mathcal{F}[P]|^{2},$
$w = \sum_{n=1}^{N} a_n Z_n,$
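
Read together, the two equations define the forward model: the wavefront $w$ is a weighted sum of Zernike modes $Z_n$ with coefficients $a_n$, and the intensity PSF $h$ is the squared magnitude of the Fourier transform of the pupil function $P$. The NumPy sketch below follows that reading; it assumes $P$ is a circular aperture multiplied by $\exp(i 2\pi w)$ (phase in waves) and uses a single defocus-like placeholder mode rather than a properly normalized Zernike basis, so names and conventions here are illustrative only.

```python
import numpy as np

def psf_from_coefficients(coeffs, modes, n=64):
    """Simulate h = |F[P]|^2 for a wavefront w = sum_n a_n * Z_n (w in waves)."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    aperture = (x**2 + y**2) <= 1.0                     # binary circular pupil amplitude
    w = sum(a * Z for a, Z in zip(coeffs, modes))       # w = sum_n a_n Z_n
    pupil = aperture * np.exp(1j * 2 * np.pi * w)       # assumed generalized pupil P
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    return np.abs(field) ** 2                           # h = |F[P]|^2

# Usage with one placeholder "defocus-like" mode (not a normalized Zernike polynomial).
n = 64
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
defocus_like = 2.0 * (x**2 + y**2) - 1.0
psf = psf_from_coefficients([0.5], [defocus_like], n=n)
```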
