Abstract

Ultrashort laser pulses with femtosecond to attosecond pulse duration are the shortest systematic events humans can currently create. Characterization (amplitude and phase) of these pulses is a crucial ingredient in ultrafast science, e.g., exploring chemical reactions and electronic phase transitions. Here, we propose and demonstrate, numerically and experimentally, what is, to the best of our knowledge, the first deep neural network technique to reconstruct ultrashort optical pulses. Employing deep neural networks for reconstruction of ultrashort pulses enables diagnostics of very weak pulses and offers new possibilities, e.g., reconstruction of pulses using measurement devices without knowing in advance the relations between the pulses and the measured signals. Finally, we demonstrate the ability to reconstruct ultrashort pulses from their experimentally measured frequency-resolved optical gating traces via deep networks that have been trained on simulated data.
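The measurement the networks invert is a frequency-resolved optical gating (FROG) trace; for second-harmonic-generation (SHG) FROG it is the standard quantity I(ω, τ) = |∫ E(t)E(t−τ)e^{−iωt} dt|². Since the networks in the abstract are trained on simulated traces, a minimal NumPy sketch of how such training data could be generated is shown below (the function name, grid sizes, and the example chirped Gaussian pulse are our own illustrative choices, not the authors' code):

```python
import numpy as np

def shg_frog_trace(E):
    """Simulate an SHG-FROG trace: I(omega, tau) = |FT_t{ E(t) E(t - tau) }|^2.

    E : complex pulse field sampled on a uniform, periodic time grid of N points.
    Returns an (N x N) real array, frequency along axis 0, delay along axis 1,
    normalized to unit peak.
    """
    N = len(E)
    trace = np.zeros((N, N))
    for k in range(N):
        tau = k - N // 2                       # delay in grid steps, centered at zero
        gate = np.roll(E, tau)                 # E(t - tau), circular shift
        sig = E * gate                         # SHG signal field E(t) E(t - tau)
        spec = np.fft.fftshift(np.fft.fft(sig))  # spectrum, DC bin moved to center
        trace[:, k] = np.abs(spec) ** 2
    return trace / trace.max()

# Illustrative pulse: Gaussian envelope with a quadratic temporal phase (linear chirp)
N = 128
t = np.arange(N) - N // 2
E = np.exp(-(t / 10.0) ** 2) * np.exp(0.5j * 0.01 * t ** 2)
T = shg_frog_trace(E)
```

A network trained for this task would take `T` as input and regress the amplitude and phase of `E`; note that an SHG-FROG trace is symmetric under delay reversal, which is one source of the reconstruction ambiguities discussed in the FROG literature cited below.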

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. A. M. Weiner, Ultrafast Optics (Wiley, 2009).
  2. M. F. Kling and M. J. J. Vrakking, “Attosecond electron dynamics,” Annu. Rev. Phys. Chem. 59, 463–492 (2008).
  3. Y. Silberberg, “Quantum coherent control for nonlinear spectroscopy and microscopy,” Annu. Rev. Phys. Chem. 60, 277–292 (2009).
  4. F. Lépine, M. Y. Ivanov, and M. J. J. Vrakking, “Attosecond molecular dynamics: fact or fiction?” Nat. Photonics 8, 195–204 (2014).
  5. H. Mikami, L. Gao, and K. Goda, “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics 5, 441–453 (2016).
  6. R. Trebino, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, B. A. Richman, D. J. Kane, and M. A. Krumbügel, “Measuring ultrashort laser pulses in the time-frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68, 3277–3295 (1997).
  7. R. Trebino, Frequency-Resolved Optical Gating: The Measurement of Ultrashort Laser Pulses (Springer, 2012).
  8. T. Bendory, P. Sidorenko, and Y. C. Eldar, “On the uniqueness of FROG methods,” IEEE Signal Process. Lett. 24, 722–726 (2017).
  9. P. O’Shea, M. Kimmel, X. Gu, and R. Trebino, “Highly simplified device for ultrashort-pulse measurement,” Opt. Lett. 26, 932–934 (2001).
  10. X. Gu, S. Akturk, A. Shreenath, Q. Cao, and R. Trebino, “The measurement of ultrashort light—simple devices, complex pulses,” in Femtosecond Laser Spectroscopy (Springer, 2005), pp. 61–86.
  11. D. J. Kane, “Principal components generalized projections: a review [Invited],” J. Opt. Soc. Am. B 25, A120–A132 (2008).
  12. P. Sidorenko, O. Lahav, Z. Avnat, and O. Cohen, “Ptychographic reconstruction algorithm for frequency-resolved optical gating: super-resolution and supreme robustness,” Optica 3, 1320–1330 (2016).
  13. D. N. Fittinghoff, K. W. DeLong, R. Trebino, and C. L. Ladera, “Noise sensitivity in frequency-resolved optical-gating measurements of ultrashort pulses,” J. Opt. Soc. Am. B 12, 1955–1967 (1995).
  14. M. A. Krumbügel, C. L. Ladera, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, and R. Trebino, “Direct ultrashort-pulse intensity and phase retrieval by frequency-resolved optical gating and a computational neural network,” Opt. Lett. 21, 143–145 (1996).
  15. M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision – ECCV 2014, Lecture Notes in Computer Science (Springer, 2014), Vol. 8689, pp. 818–833.
  16. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2012), pp. 1–9.
  17. I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2014), pp. 3104–3112.
  18. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
  19. A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature 542, 115–118 (2017).
  20. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
  21. Y. D. Hezaveh, L. P. Levasseur, and P. J. Marshall, “Fast automated analysis of strong gravitational lenses with convolutional neural networks,” Nature 548, 555–557 (2017).
  22. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, “Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion,” J. Mach. Learn. Res. 11, 3371–3408 (2010).
  23. J. Read, B. Pfahringer, G. Holmes, and E. Frank, “Classifier chains for multi-label classification,” Mach. Learn. 85, 333–359 (2011).
  24. R. Collobert, S. Bengio, and J. Mariethoz, “Torch: a modular machine learning software library” (2002).
  25. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2015).
  26. C. M. Bishop, Pattern Recognition and Machine Learning (Springer-Verlag, 2006), Vol. 4.
  27. D. Erhan, A. Courville, and P. Vincent, “Why does unsupervised pre-training help deep learning?” J. Mach. Learn. Res. 11, 625–660 (2010).
  28. Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng, “Building high-level features using large scale unsupervised learning,” in 29th International Conference on Machine Learning (2012), p. 38115.
  29. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (NIPS, 1990), pp. 396–404.
  30. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
  31. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–9.
  32. G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv:1608.06993 (2017).
  33. “Code for retrieving a pulse intensity and phase from its FROG trace,” 2013, http://frog.gatech.edu/code.html .
  34. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NIPS, 2014), Vol. 27, pp. 2672–2680.
  35. G. I. Haham, P. Sidorenko, O. Lahav, and O. Cohen, “Multiplexed FROG,” Opt. Express 25, 33007–33017 (2017).
  36. G. Stibenz and G. Steinmeyer, “Interferometric frequency-resolved optical gating,” Opt. Express 13, 2617–2626 (2005).
  37. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).


Richman, B. A.

R. Trebino, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, B. A. Richman, D. J. Kane, and M. A. Krumbügel, “Measuring ultrashort laser pulses in the time-frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68, 3277–3295 (1997).
[Crossref]

Rivenson, Y.

Schrittwieser, J.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Segev, M.

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).
[Crossref]

Sermanet, P.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, C. Hill, and A. Arbor, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–9.

Shechtman, Y.

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).
[Crossref]

Shreenath, A.

X. Gu, S. Akturk, A. Shreenath, Q. Cao, and R. Trebino, “The measurement of ultrashort light—simple devices, complex pulses,” in Femtosecond Laser Spectroscopy (Springer, 2005), pp. 61–86.

Sidorenko, P.

Sifre, L.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Silberberg, Y.

Y. Silberberg, “Quantum coherent control for nonlinear spectroscopy and microscopy,” Ann. Rev. Phys. Chem. 60, 277–292 (2009).
[Crossref]

Silver, D.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Steinmeyer, G.

Stibenz, G.

Sutskever, I.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2012), pp. 1–9.

I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2014), pp. 3104–3112.

Sweetser, J. N.

R. Trebino, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, B. A. Richman, D. J. Kane, and M. A. Krumbügel, “Measuring ultrashort laser pulses in the time-frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68, 3277–3295 (1997).
[Crossref]

M. A. Krumbügel, C. L. Ladera, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, and R. Trebino, “Direct ultrashort-pulse intensity and phase retrieval by frequency-resolved optical gating and a computational neural network,” Opt. Lett. 21, 143–145 (1996).
[Crossref]

Swetter, S. M.

A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature 542, 115–118 (2017).
[Crossref]

Szegedy, C.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, C. Hill, and A. Arbor, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–9.

Thrun, S.

A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature 542, 115–118 (2017).
[Crossref]

Trebino, R.

P. O’shea, M. Kimmel, X. Gu, and R. Trebino, “Highly simplified device for ultrashort-pulse measurement,” Opt. Lett. 26, 932–934 (2001).
[Crossref]

R. Trebino, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, B. A. Richman, D. J. Kane, and M. A. Krumbügel, “Measuring ultrashort laser pulses in the time-frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68, 3277–3295 (1997).
[Crossref]

M. A. Krumbügel, C. L. Ladera, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, and R. Trebino, “Direct ultrashort-pulse intensity and phase retrieval by frequency-resolved optical gating and a computational neural network,” Opt. Lett. 21, 143–145 (1996).
[Crossref]

D. N. Fittinghoff, K. W. DeLong, R. Trebino, and C. L. Ladera, “Noise sensitivity in frequency-resolved optical-gating measurements of ultrashort pulses,” J. Opt. Soc. Am. B 12, 1955–1967 (1995).
[Crossref]

R. Trebino, Frequency-Resolved Optical Gating: The Measurement of Ultrashort Laser Pulses (Springer, 2012).

X. Gu, S. Akturk, A. Shreenath, Q. Cao, and R. Trebino, “The measurement of ultrashort light—simple devices, complex pulses,” in Femtosecond Laser Spectroscopy (Springer, 2005), pp. 61–86.

van den Driessche, G.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

van der Maaten, L.

G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv:1608.06993 (2017).

Vanhoucke, V.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, C. Hill, and A. Arbor, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–9.

Vincent, P.

P. Vincent and H. Larochelle, “Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion Pierre-Antoine Manzagol,” J. Mach. Learn. Res. 11, 3371–3408 (2010).

D. Erhan, A. Courville, and P. Vincent, “Why does unsupervised pre-training help deep learning ?” J. Mach. Learn. Res. 11, 625–660 (2010).

Vinyals, O.

I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2014), pp. 3104–3112.

Vrakking, M. J. J.

F. Lépine, M. Y. Ivanov, and M. J. J. Vrakking, “Attosecond molecular dynamics: fact or fiction?” Nat. Photonics 8, 195–204 (2014).
[Crossref]

M. F. Kling and M. J. J. Vrakking, “Attosecond electron dynamics,” Annu. Rev. Phys. Chem. 59, 463–492 (2008).
[Crossref]

Wang, H.

Warde-Farley, D.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NIPS, 2014), Vol. 27, pp. 2672–2680.

Weinberger, K. Q.

G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv:1608.06993 (2017).

Weiner, A. M.

A. M. Weiner, Ultrafast Optics (Wiley, 2009).

Xu, B.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NIPS, 2014), Vol. 27, pp. 2672–2680.

Zeiler, M. D.

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision - ECCV 2014, Lecture Notes in Computer Science (Springer, 2014), Vol. 8689, pp. 818–833.

Zhang, Y.

Ann. Rev. Phys. Chem. (1)

Y. Silberberg, “Quantum coherent control for nonlinear spectroscopy and microscopy,” Ann. Rev. Phys. Chem. 60, 277–292 (2009).
[Crossref]

Annu. Rev. Phys. Chem. (1)

M. F. Kling and M. J. J. Vrakking, “Attosecond electron dynamics,” Annu. Rev. Phys. Chem. 59, 463–492 (2008).
[Crossref]

IEEE Signal Process. Lett. (1)

T. Bendory, P. Sidorenko, and Y. C. Eldar, “On the uniqueness of FROG methods,” IEEE Signal Process. Lett. 24, 722–726 (2017).
[Crossref]

IEEE Signal Process. Mag. (1)

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).
[Crossref]

J. Mach. Learn. Res. (2)

D. Erhan, A. Courville, and P. Vincent, “Why does unsupervised pre-training help deep learning ?” J. Mach. Learn. Res. 11, 625–660 (2010).

P. Vincent and H. Larochelle, “Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion Pierre-Antoine Manzagol,” J. Mach. Learn. Res. 11, 3371–3408 (2010).

J. Opt. Soc. Am. B (2)

Mach. Learn. (1)

J. Read, B. Pfahringer, G. Holmes, and E. Frank, “Classifier chains for multi-label classification,” Mach. Learn. 85, 333–359 (2011).
[Crossref]

Nanophotonics (1)

H. Mikami, L. Gao, and K. Goda, “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics 5, 441–453 (2016).
[Crossref]

Nat. Photonics (1)

F. Lépine, M. Y. Ivanov, and M. J. J. Vrakking, “Attosecond molecular dynamics: fact or fiction?” Nat. Photonics 8, 195–204 (2014).
[Crossref]

Nature (4)

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature 542, 115–118 (2017).
[Crossref]

Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref]

Y. D. Hezaveh, L. P. Levasseur, and P. J. Marshall, “Fast automated analysis of strong gravitational lenses with convolutional neural networks,” Nature 548, 555–557 (2017).
[Crossref]

Opt. Express (2)

Opt. Lett. (2)

Optica (2)

Rev. Sci. Instrum. (1)

R. Trebino, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, B. A. Richman, D. J. Kane, and M. A. Krumbügel, “Measuring ultrashort laser pulses in the time-frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68, 3277–3295 (1997).
[Crossref]

Other (15)

R. Trebino, Frequency-Resolved Optical Gating: The Measurement of Ultrashort Laser Pulses (Springer, 2012).

X. Gu, S. Akturk, A. Shreenath, Q. Cao, and R. Trebino, “The measurement of ultrashort light—simple devices, complex pulses,” in Femtosecond Laser Spectroscopy (Springer, 2005), pp. 61–86.

A. M. Weiner, Ultrafast Optics (Wiley, 2009).

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision - ECCV 2014, Lecture Notes in Computer Science (Springer, 2014), Vol. 8689, pp. 818–833.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2012), pp. 1–9.

I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2014), pp. 3104–3112.

Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng, “Building high-level features using large scale unsupervised learning,” in 29th International Conference on Machine Learning (2012), p. 38115.

L. D. Le Cun Jackel, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, B. Le Cun, J. Denker, and D. Henderson, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (NIPS, 1990), pp. 396–404.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, C. Hill, and A. Arbor, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–9.

G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv:1608.06993 (2017).

“Code for retrieving a pulse intensity and phase from its FROG trace,” 2013, http://frog.gatech.edu/code.html .

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NIPS, 2014), Vol. 27, pp. 2672–2680.

R. Collobert, S. Bengio, and J. Mariethoz, “Torch: a modular machine learning software library,” (2002).

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2015).

C. M. Bishop, Pattern Recognition and Machine Learning (Springer-Verlag, 2006), Vol. 4.



Figures (8)

Fig. 1. (A) Experimental SHG FROG setup. (B)–(G) Simulated examples from the data set. (B), (D), (F) are pulse amplitudes (blue) and phases (red). (C), (E), (G) are the corresponding FROG traces.
Fig. 2. Four different pulses (amplitude and phase) that correspond to the same FROG trace. Each pulse corresponds to a different ambiguity in SHG FROG measurements.
Fig. 3. (A) DNN architecture for mapping FROG traces into complex pulses. A FROG trace, which serves as the input, is convolved with learned filters in three successive layers. The result feeds three fully connected layers with ReLU activations, after which the real and imaginary parts of the pulse are reconstructed. (B) Supervised training procedure. The generated pulses are used (after ambiguity removal) as labels for the supervised DNN training step. We create a FROG trace from these pulses with the FROGNet, add white Gaussian noise (WGN), and then forward propagate the traces through the DNN. The loss gradient from the label comparison is back-propagated through the DNN, and the weights are updated via a stochastic gradient descent (SGD) step. (C) Unsupervised training procedure. Similar to (B), but the reconstructed pulse (the output of the DNN) is also forward propagated through the FROGNet, so that the reconstructed FROG trace is compared with the measured one. The gradient is then computed and back-propagated through the FROGNet to update the weights of the DNN.
Fig. 4. Learning stage. Reconstruction error and standard deviation (STD) as a function of SNR for different methods: DeepFROG DNNs trained on noise-free (blue) or 0–30 dB SNR (red) training sets, ptychographic FROG (yellow), and PCGPA (dashed black). DeepFROG trained with noisy data reaches the lowest error at SNR values below 20 dB.
Fig. 5. Reconstruction of a simulated example pulse, not included in the training set, from its FROG trace with 10 dB SNR, using PCGPA, ptychography, and DeepFROG. (A) Noisy simulated FROG measurement at 10 dB SNR. (B), (C), (D) FROG traces reconstructed by PCGPA, ptychographic FROG, and DeepFROG, respectively. (E) FROG trace of the original simulated test pulse. (F), (G), (H) Pulses reconstructed by PCGPA, ptychographic FROG, and DeepFROG, respectively, compared to the original. Pulse and FROG-trace errors are denoted. DeepFROG reaches the lowest reconstruction error, almost three times lower than that of the other algorithms.
Fig. 6. Reconstruction of an experimentally generated reference pulse. (A) High-SNR SHG FROG measurement of the reference pulse. (B) Reconstruction of the pulse from (A) with the commonly used algorithms: ptychographic FROG, PCGPA, and the algorithm provided with the commercial FROG reconstruction program from the Trebino group.
Fig. 7. Reconstruction of an experimental pulse from low-SNR FROG measurements. (A) Measured FROG trace. (B), (C), (D) FROG traces constructed from the pulses reconstructed by PCGPA, ptychographic FROG, and DeepFROG, with their errors relative to the FROG trace of the reference pulse listed in white. (E) FROG trace of the reference pulse. (F), (G), (H) Corresponding reconstructed pulses and their reconstruction errors. Additionally, the δI errors between the reconstructed FROG traces and the low-SNR measured trace are 0.49, 0.44, and 0.32 for PCGPA, ptychographic FROG, and DeepFROG, respectively (not shown in the figure). The modified DeepFROG displays the lowest reconstruction error for both the pulse and the FROG trace.
Fig. 8. Experimental pulse reconstruction from a noisy but filtered FROG measurement. (A) Measured and filtered FROG trace. (B), (C), (D) FROG traces constructed from the pulses retrieved by PCGPA, ptychographic FROG, and DeepFROG, with their errors relative to the reference FROG trace. (E) FROG trace constructed from the reference pulse. (F), (G), (H) Corresponding pulses and pulse reconstruction errors δE, relative to the reference pulse. Additionally, the δI errors between the reconstructed FROG traces and the low-SNR measured trace are 0.102, 0.103, and 0.092 for PCGPA, ptychographic FROG, and DeepFROG, respectively (not shown in the figure). In this filtered-trace reconstruction, DeepFROG performs comparably to PCGPA and ptychography, as predicted in the simulations section.
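The combined supervised/unsupervised objective sketched in the Fig. 3 caption reduces to two L1 terms: a label comparison on the pulse and a FROGNet comparison on the trace. Below is a minimal NumPy sketch; `cnn` and `frognet` are illustrative stand-ins (assumed names and toy implementations, not the paper's code):

```python
import numpy as np

def deepfrog_loss(I_measured, E_label, cnn, frognet, lam=1.0):
    """Training loss: ||CNN(I; w) - E||_1 + lam * ||FROGNet(CNN(I; w)) - I||_1."""
    E_hat = cnn(I_measured)                                   # reconstructed pulse
    supervised = np.abs(E_hat - E_label).sum()                # label comparison term
    unsupervised = np.abs(frognet(E_hat) - I_measured).sum()  # trace comparison term
    return supervised + lam * unsupervised

# Toy stand-ins: a dummy "trace renderer" and a perfect / perturbed "network"
frognet = lambda E: np.outer(np.abs(E) ** 2, np.abs(E) ** 2)
E_ref = np.array([1.0, 0.5, 0.25])
I_ref = frognet(E_ref)
perfect = deepfrog_loss(I_ref, E_ref, cnn=lambda I: E_ref, frognet=frognet)
bad = deepfrog_loss(I_ref, E_ref, cnn=lambda I: E_ref + 0.1, frognet=frognet)
```

A perfect reconstruction drives both terms to zero, while any deviation in the recovered pulse is penalized through both the label and the re-rendered trace.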

Equations (4)

$$I_{\mathrm{measured}}(\omega_i,\tau_j)=\bigl|\mathcal{F}\{E(t)\,E(t-\tau_j)\}\bigr|^{2},$$
$$w^{*}=\arg\min_{w}\{\mathrm{loss}(I,E)\}=\arg\min_{w}\bigl\{\|\mathrm{CNN}(I;w)-E\|_{1}+\lambda\,\|\mathrm{FROGNet}(\mathrm{CNN}(I;w))-I\|_{1}\bigr\},$$
$$y_{\mathrm{out}}(p;\,n_{\mathrm{out},x},n_{\mathrm{out},y})=\sum_{n_{w,x}}\sum_{n_{w,y}}\sum_{m}y_{\mathrm{in}}(m;\,n_{\mathrm{out},x}-n_{w,x},\,n_{\mathrm{out},y}-n_{w,y})\,w(p,m;\,n_{w,x},n_{w,y}).$$
$$n_{\mathrm{out}}=(n_{\mathrm{in}}+2p)/s+(n_{w}-1).$$
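The SHG FROG trace of Eq. (1) is straightforward to evaluate numerically: gate the pulse with a delayed copy of itself, Fourier transform, and take the squared magnitude. A short NumPy sketch follows; the sampling grid, circular delay convention, and function name are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def shg_frog_trace(E, delays):
    """SHG FROG trace I(w_i, tau_j) = |F{E(t) E(t - tau_j)}|^2 (cf. Eq. 1).

    E      : complex pulse envelope sampled on a uniform time grid (length N)
    delays : iterable of integer delays tau_j, in units of the sampling step
    Returns a real array of shape (N, number of delays); delays wrap circularly.
    """
    E = np.asarray(E, dtype=complex)
    delays = list(delays)
    trace = np.empty((len(E), len(delays)))
    for j, tau in enumerate(delays):
        gated = E * np.roll(E, tau)  # gated signal E(t) * E(t - tau_j)
        trace[:, j] = np.abs(np.fft.fftshift(np.fft.fft(gated))) ** 2
    return trace

# Example: a transform-limited Gaussian pulse and its trace
t = np.linspace(-10, 10, 64)
I = shg_frog_trace(np.exp(-t**2), delays=range(-8, 9))
```

Because the gated signal at delay −τ is just a time-shifted copy of the one at +τ, the resulting trace is symmetric in delay, which is the source of the direction-of-time ambiguity illustrated in Fig. 2.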
