Abstract

In 3D optical metrology, single-shot structured light profilometry techniques have inherent advantages over their multi-shot counterparts in terms of measurement speed, optical setup simplicity, and robustness to motion artifacts. In this paper, we present a new approach, based entirely on deep learning, to extract height information from single deformed fringe patterns. By training a fully convolutional neural network on a large set of simulated height maps with corresponding deformed fringe patterns, we demonstrate the network's ability to obtain full-field height information from previously unseen fringe patterns with high accuracy. As an added benefit, intermediate data processing steps such as background masking, noise reduction, and phase unwrapping, which are otherwise required in classic demodulation strategies, can be learned directly by the network as part of its mapping function.
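The simulated training pairs described above can be sketched with a simple phase-modulation forward model in the style of Fourier transform profilometry: a sinusoidal carrier whose phase is shifted in proportion to the local surface height. The carrier frequency, height-to-phase scale, and noise level below are illustrative assumptions, not the authors' actual simulation parameters.

```python
import numpy as np

def simulate_fringe_pattern(height_map, f0=0.1, scale=0.5, noise_std=0.02):
    """Render a deformed fringe pattern from a height map.

    Model: I(x, y) = A + B * cos(2*pi*f0*x + scale * h(x, y)) + noise.
    `scale` stands in for the geometry-dependent height-to-phase
    calibration factor (an assumption for this sketch).
    """
    rows, cols = height_map.shape
    x = np.arange(cols)[None, :]                 # carrier runs along columns
    phase = 2.0 * np.pi * f0 * x + scale * height_map
    pattern = 0.5 + 0.4 * np.cos(phase)          # A = 0.5, B = 0.4 keep I in [0, 1]
    pattern += np.random.default_rng(0).normal(0.0, noise_std, pattern.shape)
    return np.clip(pattern, 0.0, 1.0)

# Toy height map: a single Gaussian bump
yy, xx = np.mgrid[0:128, 0:128]
h = 10.0 * np.exp(-((xx - 64.0) ** 2 + (yy - 64.0) ** 2) / (2.0 * 20.0 ** 2))
fringes = simulate_fringe_pattern(h)             # network input; h is the target
```

A training set is then just many such (fringes, h) pairs, with the network learning the inverse mapping from pattern to height.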

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

References


  1. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010).
    [Crossref]
  2. S. Van der Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 1–14 (2016).
  3. N. Karpinsky, M. Hoke, V. Chen, and S. Zhang, “High-resolution, real-time three-dimensional shape measurement on graphics processing unit,” Opt. Eng. 53(2), 024105 (2014).
    [Crossref]
  4. Z. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency Selection,” Opt. Express 14(14), 6444–6455 (2006).
    [Crossref] [PubMed]
  5. J. Geng, “Rainbow three-dimensional camera: new concept of high-speed three-dimensional vision systems,” Opt. Eng. 35(2), 376–383 (1996).
    [Crossref]
  6. Y. Hu, J. Xi, E. Li, J. Chicharo, Z. Yang, and Y. Yu, “A calibration approach for decoupling colour cross-talk using nonlinear blind signal separation network,” in Conference on Optoelectronic and Microelectronic Materials and Devices, Proceedings, COMMAD (2005), pp. 265–268.
  7. W. Liu, Z. Wang, G. Mu, and Z. Fang, “Color-coded projection grating method for shape measurement with a single exposure,” Appl. Opt. 39(20), 3504–3508 (2000).
    [Crossref] [PubMed]
  8. P. M. Griffin, L. S. Narasimhan, and S. R. Yee, “Generation of uniquely encoded light patterns for range data acquisition,” Pattern Recognit. 25(6), 609–616 (1992).
    [Crossref]
  9. X. Su and W. Chen, “Fourier transform profilometry,” Opt. Lasers Eng. 35(5), 263–284 (2001).
    [Crossref]
  10. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, 1097–1105 (2012).
  11. L. J. Lancashire, C. Lemetre, and G. R. Ball, “An introduction to artificial neural networks in bioinformatics--application to complex microarray and mass spectrometry datasets in cancer studies,” Brief. Bioinform. 10(3), 315–329 (2009).
    [Crossref] [PubMed]
  12. A. Graves, A. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2013), pp. 6645–6649.
    [Crossref]
  13. R. Collobert and J. Weston, “A unified architecture for natural language processing,” in Proceedings of the 25th International Conference on Machine Learning - ICML ’08 (ACM Press, 2008), pp. 160–167.
    [Crossref]
  14. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529(7587), 484–489 (2016).
    [Crossref] [PubMed]
  15. W. Zhou, Y. Song, X. Qu, Z. Li, and A. He, “Fourier transform profilometry based on convolution neural network,” in Optical Metrology and Inspection for Industrial Applications V, S. Han, T. Yoshizawa, and S. Zhang, eds. (SPIE, 2018), 10819, p. 62.
  16. Y. H. Chan and D. P. K. Lun, “Deep learning based period order detection in fringe projection profilometry,” in Proceedings of the APSIPA Annual Summit and Conference (2018), pp. 108–113.
  17. S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 1–7 (2018).
  18. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.
  19. M. Sankaradas, V. Jakkula, S. Cadambi, S. Chakradhar, I. Durdanovic, E. Cosatto, and H. P. Graf, “A Massively Parallel Coprocessor for Convolutional Neural Networks,” in 2009 20th IEEE International Conference on Application-Specific Systems, Architectures and Processors (IEEE, 2009), pp. 53–60.
    [Crossref]
  20. M. Abadi et al., “TensorFlow: A system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016).
  21. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).
  22. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (2010).
  23. A. Botchkarev, “Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology,” arXiv preprint (2018).
  24. J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2016), 9906 LNCS, pp. 694–711.
  25. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and É. Duchesnay, “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res. 12(Oct), 2825–2830 (2011).
  26. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.
  27. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Advances in neural information processing systems 27, 2672–2680 (2014).
  28. H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proceedings of the IEEE International Conference on Computer Vision (2015), 2015 Inter, 1520–1528.
  29. D. Rolnick, A. Veit, S. Belongie, and N. Shavit, “Deep learning is robust to massive label noise,” arXiv preprint (2017).
  30. P. Upchurch, J. Gardner, G. Pleiss, R. Pless, N. Snavely, K. Bala, and K. Weinberger, “Deep feature interpolation for image content changes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017).
  31. J. Han, D. Zhang, X. Hu, L. Guo, J. Ren, and F. Wu, “Background Prior-Based Salient Object Detection via Deep Reconstruction Residual,” IEEE Trans. Circ. Syst. Video Tech. 25(8), 1309–1321 (2015).
    [Crossref]


Zhang, D.

J. Han, D. Zhang, X. Hu, L. Guo, J. Ren, and F. Wu, “Background Prior-Based Salient Object Detection via Deep Reconstruction Residual,” IEEE Trans. Circ. Syst. Video Tech. 25(8), 1309–1321 (2015).
[Crossref]

Zhang, L.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Phonetics 1(2), 1–7 (2018).

Zhang, S.

N. Karpinsky, M. Hoke, V. Chen, and S. Zhang, “High-resolution, real-time three-dimensional shape measurement on graphics processing unit,” Opt. Eng. 53(2), 024105 (2014).
[Crossref]

Zhang, Z.

Zuo, C.

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Phonetics 1(2), 1–7 (2018).

Adv. Photonics (1)

S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 1–7 (2018).

Advances in neural information processing systems (1)

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Advances in neural information processing systems 27, 2672–2680 (2014).

Appl. Opt. (1)

Brief. Bioinform. (1)

L. J. Lancashire, C. Lemetre, and G. R. Ball, “An introduction to artificial neural networks in bioinformatics – application to complex microarray and mass spectrometry datasets in cancer studies,” Brief. Bioinform. 10(3), 315–329 (2009).
[Crossref] [PubMed]

IEEE Trans. Circ. Syst. Video Tech. (1)

J. Han, D. Zhang, X. Hu, L. Guo, J. Ren, and F. Wu, “Background Prior-Based Salient Object Detection via Deep Reconstruction Residual,” IEEE Trans. Circ. Syst. Video Tech. 25(8), 1309–1321 (2015).
[Crossref]

J. Mach. Learn. Res. (1)

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and É. Duchesnay, “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res. 12(Oct), 2825–2830 (2011).

Nature (1)

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529(7587), 484–489 (2016).
[Crossref] [PubMed]

Opt. Eng. (2)

J. Geng, “Rainbow three-dimensional camera: new concept of high-speed three-dimensional vision systems,” Opt. Eng. 35(2), 376–383 (1996).
[Crossref]

N. Karpinsky, M. Hoke, V. Chen, and S. Zhang, “High-resolution, real-time three-dimensional shape measurement on graphics processing unit,” Opt. Eng. 53(2), 024105 (2014).
[Crossref]

Opt. Express (1)

Opt. Lasers Eng. (2)

S. Van der Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 1–14 (2016).

X. Su and W. Chen, “Fourier transform profilometry,” Opt. Lasers Eng. 35(5), 263–284 (2001).
[Crossref]

Pattern Recognit. (2)

J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010).
[Crossref]

P. M. Griffin, L. S. Narasimhan, and S. R. Yee, “Generation of uniquely encoded light patterns for range data acquisition,” Pattern Recognit. 25(6), 609–616 (1992).
[Crossref]

Other (17)

Y. Hu, J. Xi, E. Li, J. Chicharo, Z. Yang, and Y. Yu, “A calibration approach for decoupling colour cross-talk using nonlinear blind signal separation network,” in Conference on Optoelectronic and Microelectronic Materials and Devices, Proceedings, COMMAD (2005), pp. 265–268.

J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.

M. Sankaradas, V. Jakkula, S. Cadambi, S. Chakradhar, I. Durdanovic, E. Cosatto, and H. P. Graf, “A Massively Parallel Coprocessor for Convolutional Neural Networks,” in 2009 20th IEEE International Conference on Application-Specific Systems, Architectures and Processors (IEEE, 2009), pp. 53–60.
[Crossref]

M. Abadi, “TensorFlow: A system for large-scale machine learning,” Proc. 12th USENIX Symp. Oper. Syst. Des. Implement. (2016).

D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv preprint (2014).

X. Glorot and Y. Bengio, “Understanding the Difficulty of Training Deep Feedforward Neural Networks,” Proceedings of the thirteenth international conference on artificial intelligence and statistics (2010).

A. Botchkarev, “Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology,” arXiv preprint (2018).

J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2016), 9906 LNCS, pp. 694–711.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

A. Graves, A. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2013), pp. 6645–6649.
[Crossref]

R. Collobert and J. Weston, “A unified architecture for natural language processing,” in Proceedings of the 25th International Conference on Machine Learning - ICML ’08 (ACM Press, 2008), pp. 160–167.
[Crossref]

W. Zhou, Y. Song, X. Qu, Z. Li, and A. He, “Fourier transform profilometry based on convolution neural network,” in Optical Metrology and Inspection for Industrial Applications V, S. Han, T. Yoshizawa, and S. Zhang, eds. (SPIE, 2018), 10819, p. 62.

Y. H. Chan and D. P. K. Lun, “Deep learning based period order detection in fringe projection profilometry,” in Proceedings, APSIPA Annual Summit and Conference 2018, 108–113 (2018).

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015).

H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1520–1528.

D. Rolnick, A. Veit, S. Belongie, and N. Shavit, “Deep learning is robust to massive label noise,” arXiv preprint (2017).

P. Upchurch, J. Gardner, G. Pleiss, R. Pless, N. Snavely, K. Bala, and K. Weinberger, “Deep feature interpolation for image content changes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017).

Figures (7)

Fig. 1
Fig. 1 Standard projector-camera configuration used in structured light profilometry techniques. A structured light modulator projects the pattern onto the scene and an imaging sensor placed at a relative angle with the projection axis records the deformed pattern. The height (z)-sensitivity vector is oriented along the bisector between the projection and observation axes.
Fig. 2
Fig. 2 Random selection of simulated height maps from the training data set, ordered as a function of the number of peaks (Np) and sampled on grids of 128 × 128 pixels.
Fig. 3
Fig. 3 Fringe modulation process. A predefined fringe pattern (b) is projected onto a randomly generated height map (a), after which the surface texture (c) is observed at an angle α (here α = 30°) and the deformed fringe pattern (d) is sampled onto a Cartesian grid.
Fig. 4
Fig. 4 Our network structure. A total of 10 convolutional layers separate the input and output layer. Nonlinear activation (ReLU) and dropout layers are included after every convolutional layer but are not shown here.
Fig. 5
Fig. 5 Network inference on samples drawn randomly from the validation set. The first two columns show the deformed fringe pattern and its corresponding surface map, the third column shows the network prediction, and the fourth column shows the error map. The number of peaks Np present in the height maps is 4, 6 and 9 for the first, second and third samples, respectively. X- and Y-coordinates are displayed as pixel numbers on a 128 × 128 grid; Z-values are normalized to [0, 1].
Fig. 6
Fig. 6 Network inference on samples created independently from the random surface generator. The first two columns show the deformed fringe pattern and its corresponding surface map, the third column shows the network prediction, and the fourth column shows the error map. The upper part of a sphere, a triangular step function, and a mannequin doll head are shown in rows 1, 2 and 3, respectively. X- and Y-coordinates are displayed as pixel numbers on a 128 × 128 grid; Z-values are normalized to [0, 1].
Fig. 7
Fig. 7 Network inference on samples with varying levels of noise. Gaussian noise with a mean of µ = 0 and standard deviation ranging from σ = 0 (top row) to σ = 0.1 (bottom row) was added to a sample drawn from the validation set before network inference. X- and Y-coordinates are displayed as pixel numbers on a 128 × 128 grid; Z-values are normalized to [0, 1].
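The simulated training surfaces of Fig. 2 and the noise robustness test of Fig. 7 can be sketched in a few lines of NumPy. The peak position, amplitude, and width distributions below are illustrative assumptions, not the paper's exact surface generator:

```python
import numpy as np

def random_height_map(num_peaks, size=128, seed=None):
    """Sketch of a random surface generator: a sum of num_peaks Gaussian
    peaks with random positions, amplitudes and widths, normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    surface = np.zeros((size, size))
    for _ in range(num_peaks):
        cx, cy = rng.uniform(0, size, 2)          # peak center
        amp = rng.uniform(0.2, 1.0)               # peak height
        sigma = rng.uniform(size / 16, size / 4)  # peak width
        surface += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                                / (2 * sigma ** 2))
    surface -= surface.min()                      # shift Z-minimum to 0
    return surface / surface.max()                # scale Z-maximum to 1

def add_noise(image, sigma, seed=0):
    """Add zero-mean Gaussian noise, as in the robustness test of Fig. 7."""
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma, image.shape)
```

Smooth, peak-dominated surfaces like these keep the deformed fringes resolvable everywhere on the 128 × 128 grid, which is what allows the network to learn the full-field mapping.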

Equations (1)


\( I_R(x, y) = I_B(x, y) + I_M(x, y)\cos\left(\varphi(x, y)\right). \)
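A minimal numerical sketch of this intensity model: a sinusoidal carrier whose phase φ(x, y) is shifted in proportion to the local height. The phase model φ = 2πf·x + 2πf·tan(α)·h(x, y) is a common fringe-projection approximation assumed here for illustration, and the constant background I_B, modulation I_M, fringe count, and triangulation angle are illustrative choices:

```python
import numpy as np

def deformed_fringe(height_map, periods=16, angle_deg=30.0, i_b=0.5, i_m=0.5):
    """Evaluate I_R(x, y) = I_B(x, y) + I_M(x, y) * cos(phi(x, y)) for a
    sinusoidal pattern deformed by a height map observed at angle alpha."""
    n = height_map.shape[1]
    x = np.arange(n) / n                         # normalized x coordinate
    carrier = 2 * np.pi * periods * x            # undeformed carrier phase
    # height-induced phase shift, proportional to tan(alpha) * h(x, y)
    shift = 2 * np.pi * periods * np.tan(np.radians(angle_deg)) * height_map
    return i_b + i_m * np.cos(carrier[None, :] + shift)
```

With i_b = i_m = 0.5 the recorded intensity stays in [0, 1], matching the normalized Z-range used throughout the figures; the network learns the inverse of this forward model.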
