Abstract

Stereoscopic vision is widely used to acquire depth information, measure distances between objects, and detect obstacles. In this work, a method that reduces the amount of data required to obtain depth information for a specific scene is proposed. The method reduces a 640×480 image to a 3×3 matrix, simplifying the instructions and decision making for an actuating device. Excellent results were obtained with a processing time of 3 seconds, using Python 3.7.2, OpenCV 4.0.1, and two Logitech C170 webcams.
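As an illustration of the data reduction described above, the sketch below collapses a 640×480 depth image into a 3×3 matrix by averaging nine equal blocks. This is a minimal NumPy sketch of the idea, not the authors' code; the synthetic input image is an assumption, and in the actual pipeline the depth image would first be produced by stereo matching.

```python
import numpy as np

def reduce_to_3x3(depth, grid=(3, 3)):
    """Average a depth image over a grid of sections.

    depth: 2-D array (e.g. a 480x640 depth/disparity map).
    Returns a grid[0] x grid[1] matrix of per-section means.
    """
    h, w = depth.shape
    gh, gw = grid
    out = np.empty(grid, dtype=float)
    for i in range(gh):
        for j in range(gw):
            # Integer block bounds cover the whole image without overlap.
            block = depth[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            out[i, j] = block.mean()
    return out

# Example: a synthetic 480x640 "depth" image with a near object on the left.
img = np.zeros((480, 640))
img[:, :213] = 255
m = reduce_to_3x3(img)
print(m)
```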

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. J. Zabalza, Z. Fei, C. Wong, Y. Yan, C. Mineo, E. Yang, T. Rodden, J. Mehnen, Q.-C. Pham, and J. Ren, “Smart sensing and adaptive reasoning for enabling industrial robots with interactive human-robot capabilities in dynamic environments - a case study,” Sensors 19(6), 1354 (2019).
    [Crossref]
  2. S. Emani, K. Soman, V. S. Variyar, and S. Adarsh, “Obstacle detection and distance estimation for autonomous electric vehicle using stereo vision and DNN,” in Soft Computing and Signal Processing, (Springer, 2019), pp. 639–648.
  3. U. B. Himmelsbach, T. M. Wendt, N. Hangst, and P. Gawron, “Single pixel time-of-flight sensors for object detection and self-detection in three-sectional single-arm robot manipulators,” in 2019 Third IEEE International Conference on Robotic Computing (IRC), (IEEE, 2019), pp. 250–253.
  4. M. Sun, P. Ding, J. Song, M. Song, and L. Wang, “Watch your step: Precise obstacle detection and navigation for mobile users through their mobile service,” IEEE Access 7, 66731–66738 (2019).
    [Crossref]
  5. M. Meenakshi and P. Shubha, “Design and implementation of healthcare assistive robot,” in 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS), (IEEE, 2019), pp. 61–65.
  6. Y. Tange, T. Konishi, and H. Katayama, “Development of vertical obstacle detection system for visually impaired individuals,” in Proceedings of the 7th ACIS International Conference on Applied Computing and Information Technology, (ACM, 2019), p. 17.
  7. M. Okutomi and T. Kanade, “A multiple-baseline stereo,” in Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (IEEE, 1991), pp. 63–69.
  8. T. Kanade, H. Kano, S. Kimura, A. Yoshida, and K. Oda, “Development of a video-rate stereo machine,” in Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, vol. 3 (IEEE, 1995), pp. 95–100.
  9. Logitech, “Logitech C170 webcam specifications,” https://support.logitech.com/en_us/product/webcam-c170/specs.
  10. D. Malacara-Hernández and Z. Malacara-Hernández, Handbook of optical design (CRC Press, 2016).
  11. H. Hirschmüller, “Stereo processing by semiglobal matching and mutual information,” IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 328–341 (2008).
    [Crossref]
  12. B. Jahne, Practical handbook on image processing for scientific and technical applications (CRC Press, 2004).
  13. G. Cristóbal, P. Schelkens, and H. Thienpont, Optical and digital image processing: fundamentals and applications (John Wiley & Sons, 2013).
  14. W. K. Pratt, Digital Image Processing (Wiley-Interscience, 1978).
  15. D. Scharstein, R. Szeliski, and H. Hirschmüller, “Middlebury stereo vision page,” http://vision.middlebury.edu/stereo/.
  16. M. A. G. Ramon, “Segmentación de imágenes obtenidas a través de un sensor Kinect con criterios morfológicos y atributos visuales de profundidad,” Ph.D. thesis, Universidad Autónoma de Querétaro (2018).
  17. C. Castedo Hernández, R. Estop Remacha, and L. Santos de la Fuente, “Sistema de visión estereoscópico para el guiado de un robot quirúrgico en operaciones de cirugía laparosócopica hals,” Actas de las XXXVIII Jornadas de Automática (2017).
  18. J. E. L. Delgado, R. A. Cantu, J. E. M. Cruz, and N. I. G. Morales, “Desarrollo de un sistema de reconstrucción 3d estereoscópica basado en la disparidad,” in Determinacion del grado de estres en docentes universitarios con actividad, (2018), p. 6548.
  19. Y. Sun, X. Liang, H. Fan, M. Imran, and H. Heidari, “Visual hand tracking on depth image using 2-d matched filter,” in 2019 UK/China Emerging Technologies (UCET), (IEEE, 2019), pp. 1–4.
  20. P. Wozniak, A. Capobianco, N. Javahiraly, and D. Curticapean, “Depth sensor based detection of obstacles and notification for virtual reality systems,” in International Conference on Applied Human Factors and Ergonomics, (Springer, 2019), pp. 271–282.
  21. Y. Wei, J. Yang, C. Gong, S. Chen, and J. Qian, “Obstacle detection by fusing point clouds and monocular image,” Neural Process. Lett. 49(3), 1007–1019 (2019).
    [Crossref]
  22. A. Ali and M. A. Ali, “Blind navigation system for visually impaired using windowing-based mean on Microsoft Kinect camera,” in 2017 Fourth International Conference on Advances in Biomedical Engineering (ICABME), (IEEE, 2017), pp. 1–4.
  23. R. Ribani and M. Marengoni, “Vision substitution with object detection and vibrotactile stimulus,” in Proc. 14th Int. Joint Conf. Comput. Vis., Imag. Comput. Graph. Theory Appl., (2019), pp. 584–590.
  24. M. P. Cervellini, E. Gonzalez, J. C. Tulli, A. Uriz, P. D. Agüero, and M. G. Kuzman, “Sistema de sustitución sensorial visual - táctil para no videntes empleando sensores infrarrojos,” XVIII Congreso Argentino de Bioingeniería SABI 2011 - VII Jornadas de Ingeniería Clínica (2011).
  25. C. Feltner, J. Guilbe, S. Zehtabian, S. Khodadadeh, L. Boloni, and D. Turgut, “Smart walker for the visually impaired,” in ICC 2019-2019 IEEE International Conference on Communications (ICC), (IEEE, 2019), pp. 1–6.




Figures (14)

Fig. 1.
Fig. 1. Stereoscopic vision system graphical representation, where B is the baseline, f is the focal length, X is a specific position of the object, x and x’ are the object’s projections on the image planes at the cameras’ focal length, Z is the distance between the object and the cameras, and O and O’ are the principal points of the cameras.
Fig. 2.
Fig. 2. Processing flowchart, where ①, ②, ③, ④, ⑤ and ⑥ are the six ranges of gray levels considered.
Fig. 3.
Fig. 3. a) Scene with objects, b) depth image, and c) segmentation and binarization of the four generated images.
Fig. 4.
Fig. 4. Division of images 3, 4, 5, and 6 into nine sections.
Fig. 5.
Fig. 5. Evaluation of the sections in a) image 3, b) image 4, c) image 5, and d) image 6.
Fig. 6.
Fig. 6. 3×3 output matrix generation, assigning values of 0, 0.2, 0.5, 0.8, or 1 (0 indicates the absence of an object; 1 indicates the closest objects).
Fig. 7.
Fig. 7. Experimental setup in a controlled environment: a) side view and b) webcam image. Here Cs are the cameras, and Ob1, Ob2, and Ob3 are objects placed at different distances.
Fig. 8.
Fig. 8. a) Depth image, b) 3×3 matrix displayed in gray levels.
Fig. 9.
Fig. 9. a) Scene with objects, b) 3×3 matrix displayed in gray levels.
Fig. 10.
Fig. 10. a) Scene with objects, b) 3×3 matrix displayed in gray levels.
Fig. 11.
Fig. 11. a) Depth image, b) 3×3 matrix displayed in gray levels.
Fig. 12.
Fig. 12. a) Depth image, b) 3×3 matrix displayed in gray levels.
Fig. 13.
Fig. 13. a) Depth image, b) 3×3 matrix displayed in gray levels.
Fig. 14.
Fig. 14. a) Depth image, b) 3×3 matrix displayed in gray levels.
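The assignment of 0, 0.2, 0.5, 0.8, or 1 to each section (Fig. 6) can be sketched as nearest-level quantization of the per-section gray averages. The normalization to a 0–255 range and the sample input below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# The five output levels of Fig. 6 (0 = no object, 1 = closest objects).
LEVELS = np.array([0.0, 0.2, 0.5, 0.8, 1.0])

def quantize_sections(avg_3x3, max_val=255.0):
    """Snap each normalized section mean to the nearest allowed level."""
    norm = np.asarray(avg_3x3, dtype=float) / max_val
    # Distance from every section to every level; pick the closest level.
    idx = np.abs(norm[..., None] - LEVELS).argmin(axis=-1)
    return LEVELS[idx]

# Hypothetical per-section gray averages for a 3x3-reduced depth image.
m = quantize_sections([[255, 0, 30],
                       [120, 200, 0],
                       [0, 60, 255]])
print(m)
```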

Tables (3)

Table 1. Intel processor characteristics.

Table 2. Comparison of depth image works.

Table 3. Sensory substitution devices and the application of the proposed method to this area.

Equations (2)


$$Z = \frac{Bf}{d}$$

$$d = x - x'$$
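A quick numeric check of the two relations above (depth from disparity). The baseline, focal length, and pixel coordinates are illustrative assumptions, not the system's calibration values.

```python
# Depth from disparity: Z = B*f/d, with d = x - x' the disparity.
B = 0.06           # baseline between the two cameras, in meters (assumed)
f = 700.0          # focal length, in pixels (assumed)
x, x_prime = 420.0, 385.0   # matched pixel columns in the two images
d = x - x_prime    # disparity in pixels
Z = B * f / d      # depth in meters
print(round(Z, 2))  # prints 1.2
```

Note that depth is inversely proportional to disparity: nearby objects produce large disparities, which is what lets the 3×3 matrix rank sections by closeness.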
