Abstract

This study addresses the general problem of correspondence retrieval for single-shot depth sensing when the coded features cannot be detected perfectly. The traditional correspondence retrieval technique can be regarded as maximum likelihood estimation under a uniform prior assumption, which may lead to mismatches for two types of insignificant features: 1) incomplete features that cannot be detected completely because of edges, tiny objects, large depth variations, and similar conditions; and 2) distorted features disturbed by environmental noise. To overcome the drawback of the uniform prior assumption, we propose a maximum a posteriori estimation-based correspondence retrieval method that uses the significant features as priors to estimate the weak or missing features. We also propose a novel monochromatic maze-like pattern, which is more robust to ambient illumination and scene colors than traditional patterns. Our experimental results demonstrate that the proposed system outperforms popular RGB-D cameras and traditional single-shot techniques in terms of accuracy and robustness, especially for challenging scenes.

© 2017 Optical Society of America
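
For intuition only, here is a minimal sketch (not the authors' implementation) of the difference between maximum likelihood and maximum a posteriori correspondence retrieval, written in Python under the assumption of a Gaussian-mixture prior centered on positions predicted from reliably matched neighboring features; every name, value, and function below is illustrative.

import numpy as np

def ml_correspondence(likelihood):
    # Maximum likelihood retrieval: pick the candidate pattern position with
    # the highest likelihood (equivalent to MAP under a uniform prior).
    return int(np.argmax(likelihood))

def map_correspondence(likelihood, positions, neighbor_positions, neighbor_weights, sigma):
    # MAP retrieval (illustrative): weight the likelihood by a prior built as
    # a mixture of Gaussians centered on the positions predicted from reliably
    # matched (significant) neighboring features.
    prior = np.zeros(len(positions))
    for mu_k, w_k in zip(neighbor_positions, neighbor_weights):
        prior += w_k * np.exp(-(positions - mu_k) ** 2 / (2.0 * sigma ** 2))
    prior /= prior.sum()                    # normalize the mixture prior
    posterior = likelihood * prior          # p(theta | x) is proportional to p(x | theta) p(theta)
    return int(np.argmax(posterior))

# Toy usage: a weak, ambiguous feature has two equally plausible matches; the
# prior from its significant neighbors resolves the ambiguity.
positions = np.arange(100, dtype=float)           # candidate pattern columns
likelihood = np.ones(100)
likelihood[[20, 70]] = 5.0                        # two equal likelihood peaks
print(ml_correspondence(likelihood))              # 20 (first peak, chosen arbitrarily)
print(map_correspondence(likelihood, positions, [72.0, 68.0], [0.6, 0.4], 3.0))  # 70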



Figures (19)

Fig. 1. Estimation of the location of the corresponding feature in a pattern for scenes with fixed depth.
Fig. 2. Estimation of the location of the corresponding feature in a pattern for scenes with non-uniform depth.
Fig. 3. By fixing $p_0$ (the shaded area), the variance $\sigma_k$ is positively related to the distance $\Delta\theta_k$.
Fig. 4. Critical condition for occlusion.
Fig. 5. (a) The maze pattern. (b) The crossing features appear as the intersections of the horizontal lines and epipolar lines (red lines) in the captured image.
Fig. 6. The epipolar geometry.
Fig. 7. Feature detection procedure.
Fig. 8. The experimental platform.
Fig. 9. Mean absolute errors obtained by the depth sensing techniques versus the distance to the planar board.
Fig. 10. Depth maps acquired for a statue by: (a) Kinect, (b) SR4000, (c) block matching, (d) phase matching, and (e) the proposed method.
Fig. 11. Depth maps acquired for a hand by: (a) Kinect, (b) SR4000, (c) block matching, (d) phase matching, and (e) the proposed method.
Fig. 12. (a) Captured image of a dodecahedron. (b) Depth map acquired by the MLE method. (c) Depth map acquired by the MAPECR method. (d) Dense depth map interpolated from (c).
Fig. 13. (a) Captured image of a simple geometry. (b) Depth map acquired by the MLE method. (c) Depth map acquired by the MAPECR method. (d) Dense depth map interpolated from (c).
Fig. 14. (a) Captured image of a pot. (b) Depth map acquired by the MLE method. (c) Depth map acquired by the MAPECR method. (d) Dense depth map interpolated from (c).
Fig. 15. (a) Captured image of a vase. (b) Depth map acquired by the MLE method. (c) Depth map acquired by the MAPECR method. (d) Dense depth map interpolated from (c).
Fig. 16. (a) Captured image of a pyramid. (b) Depth map acquired by the MLE method. (c) Depth map acquired by the MAPECR method. (d) Dense depth map interpolated from (c).
Fig. 17. (a) Captured image of a hand. (b) Depth map acquired by the MLE method. (c) Depth map acquired by the MAPECR method. (d) Dense depth map interpolated from (c).
Fig. 18. (a) Captured image of a complex scene. (b) Depth map acquired by the MLE method. (c) Depth map acquired by the MAPECR method. (d) Dense depth map interpolated from (c).
Fig. 19. Depth maps of a human face with different expressions.

Equations (34)


m(X_i, \Theta_i) = \| X_i - \Theta_i \|_2.
p(\theta_i \mid x_i) = p(x_i \mid \theta_i),
\theta = \arg\max_{\theta} p(\theta \mid x) = \arg\max_{\theta} \prod_i p(x_i \mid \theta_i),
\theta_i = \arg\max_{\theta_i} p(\theta_i \mid x_i) = \arg\max_{\theta_i} p(x_i \mid \theta_i)\, p(\theta_i).
\bar{\theta} = (f_p / f_c)(x - x_k) + \theta_k,
\Delta\theta / L = f_p / Z_k,
L / B = \Delta Z / (Z_k + \Delta Z),
\Delta\theta = f_p B / (Z_k^2 / \Delta Z + Z_k).
Z_k^2 / \Delta Z \gg Z_k.
\Delta\theta \approx f_p B \Delta Z / Z_k^2.
p(\theta \mid x_k) = \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\!\left( -\frac{(\theta - \mu_k)^2}{2\sigma_k^2} \right).
p_0 = \int_{\bar{\theta} - \Delta\theta_k}^{\bar{\theta} + \Delta\theta_k} \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\!\left( -\frac{(\theta - \bar{\theta})^2}{2\sigma_k^2} \right) d\theta,
p(\theta) = \sum_k \omega_k\, p(\theta \mid x_k),
\omega_k = \alpha_k t,
\sum_k \omega_k = 1.
N = V / W,
T \ge \log_2 N,
k = R / W,
M = \lfloor K / T \rfloor \times T.
T(f) = f - f_b,
y = \arg\min_y \lambda \| y - y_0 \|_2^2 + \| D y \|_2^2,
D = \begin{bmatrix} 1 & -1 & 0 & \cdots & 0 \\ 0 & 1 & -1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & -1 \\ 0 & \cdots & 0 & 0 & 0 \end{bmatrix}_{n \times n}.
y = (\lambda I + D^T D)^{-1} \lambda y_0.
r(x, y) = \sum_{i=-s}^{s} I(x + i, y - i) - \sum_{i=-s}^{s} I(x + i, y + i),
b_i = \begin{cases} 0, & r(i) > 0 \\ 1, & r(i) < 0 \end{cases},
\gamma_i = | r(i) | / r_{\max},
x = \frac{1}{N} \sum_{i=1}^{N} v_i,
c_i = b_{i + \hat{j} T},
\beta_i = \gamma_{i + \hat{j} T},
\hat{j} = \arg\max_j \gamma_{i + j T}.
\alpha = \beta_{\hat{i}},
\hat{i} = \arg\min_i \beta_i.
z = \arg\min_z \lambda \| H z - z_0 \|_2^2 + \| D_1 z \|_2^2 + \| D_2 z \|_2^2,
z = (\lambda H^T H + D_1^T D_1 + D_2^T D_2)^{-1} \lambda H^T z_0.
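
As a rough illustration of how the regularized least-squares refinements listed above can be evaluated, the following Python sketch solves the one-dimensional form y = argmin_y \lambda \| y - y_0 \|_2^2 + \| D y \|_2^2 through its closed form y = (\lambda I + D^T D)^{-1} \lambda y_0, taking D as a first-order difference operator (as suggested by the matrix listed above); the two-dimensional version with H, D_1, and D_2 is analogous. This is a sketch, not the authors' code, and the data and parameter values are hypothetical.

import numpy as np

def first_difference_matrix(n):
    # n x n first-order difference operator with rows of the form [..., 1, -1, ...];
    # the last row is left as zeros, matching the matrix shape listed above.
    D = np.zeros((n, n))
    idx = np.arange(n - 1)
    D[idx, idx] = 1.0
    D[idx, idx + 1] = -1.0
    return D

def smooth_profile(y0, lam):
    # Closed-form minimizer of lam * ||y - y0||_2^2 + ||D y||_2^2,
    # i.e. y = (lam * I + D^T D)^{-1} * lam * y0.
    n = len(y0)
    D = first_difference_matrix(n)
    return np.linalg.solve(lam * np.eye(n) + D.T @ D, lam * y0)

# Toy usage: smooth a noisy 1-D depth profile (arbitrary units).
rng = np.random.default_rng(0)
y0 = np.linspace(1.0, 2.0, 50) + 0.05 * rng.standard_normal(50)
y = smooth_profile(y0, lam=0.5)
print(y.shape)  # (50,)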
