Abstract

Consumer-grade red-green-blue and depth (RGB-D) sensors, such as the Microsoft Kinect and the Asus Xtion, are attractive devices because of their low cost and their robust, real-time sensing of depth information. These devices obtain depth by detecting correspondences between the captured infrared (IR) image and a reference image of the pattern emitted by the IR projector, but their essential limitation is the low accuracy of the resulting 3D shape reconstruction. In this paper, an effective technique that employs the Kinect sensors for accurate 3D shape, deformation, and vibration measurements is introduced. The technique combines the RGB-D sensors with an accurate camera calibration scheme and area- and feature-based image-matching algorithms. The IR speckle pattern projected by the Kinect considerably facilitates the digital image correlation analysis in the regions of interest and enhances its accuracy. A number of experiments have been carried out to demonstrate the validity and effectiveness of the proposed technique. The results show that the technique can yield measurement accuracy at the 10 μm level for a typical field of view. The real-time capture speed of 30 frames per second also makes the proposed technique suitable for certain motion and vibration measurements, such as non-contact monitoring of respiration and heartbeat rates.

© 2017 Optical Society of America
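The area-based matching step mentioned in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a subset of the left IR speckle image is located in the right IR image by zero-mean normalized cross-correlation, giving the integer-pixel disparity from which a subpixel DIC refinement would start. The file names, subset size, and search range below are hypothetical placeholders.

```python
import cv2

# Hypothetical inputs: rectified left/right IR images of the projected speckle pattern.
left = cv2.imread("ir_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("ir_right.png", cv2.IMREAD_GRAYSCALE)

def coarse_match(x, y, subset=31, search=80):
    """Locate the left-image subset centered at (x, y) in the right image along the
    same rows, using zero-mean normalized cross-correlation (cv2.TM_CCOEFF_NORMED)."""
    h = subset // 2
    template = left[y - h:y + h + 1, x - h:x + h + 1]
    x0 = max(x - search, 0)                      # search band toward smaller x
    strip = right[y - h:y + h + 1, x0:x + h + 1]
    score = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, peak, _, loc = cv2.minMaxLoc(score)
    return x0 + loc[0] + h, peak                 # matched column, correlation peak

x_right, confidence = coarse_match(320, 240)
print("integer-pixel disparity:", 320 - x_right, "peak:", confidence)
```

In the actual technique the coarse match would be refined to subpixel accuracy with the correlation criterion and second-order shape function of Eqs. (6) and (7) listed below.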



Supplementary Material (3)

Visualization 1: Vibration test
Visualization 2: Respiration test
Visualization 3: Pulse test



Figures (8)

Fig. 1. Example of the proposed system and the images captured by the left and right IR cameras: (a) experimental setup; (b) and (c) images captured with IR illumination; and (d) and (e) images captured without IR illumination.
Fig. 2. Schematic of the stereo-vision imaging.
Fig. 3. Example of feature-point matching with the SIFT method.
Fig. 4. Out-of-plane coordinate plot of the gage block over the left image.
Fig. 5. 3D shape and deformation measurements of a variety of objects.
Fig. 6. Vibration test: (a) experimental setup; (b) color illustration of the out-of-plane coordinates (see Visualization 1); (c) and (d) displacements and vibration frequency acquired from the accelerometer; and (e) and (f) displacements and vibration frequency measured by the proposed technique.
Fig. 7. Measurement of respiration rate: (a) and (b) out-of-plane coordinate maps at the lowest and highest positions; (c) displacement distribution; and (d) frequency spectrum (see Visualization 2).
Fig. 8. Measurement of pulse wave: (a) and (b) out-of-plane coordinate maps at the lowest and highest positions; (c) displacement distribution; and (d) frequency spectrum (see Visualization 3).
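Figures 6-8 derive the vibration, respiration, and pulse rates from the frequency spectrum of the measured out-of-plane displacement. Below is a minimal sketch of that final step, assuming a displacement trace has already been extracted from the 30 fps measurements; the synthetic signal is only a stand-in for illustration.

```python
import numpy as np

fps = 30.0                              # capture rate of the sensor (frames per second)
t = np.arange(0, 20.0, 1.0 / fps)       # 20 s record

# Stand-in displacement trace (mm): a ~1.2 Hz component plus noise; in practice this
# comes from the out-of-plane DIC measurement at the region of interest.
w = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(t.size)

# One-sided amplitude spectrum; the dominant peak gives the rate.
amplitude = np.abs(np.fft.rfft(w - w.mean()))
freqs = np.fft.rfftfreq(w.size, d=1.0 / fps)
peak_hz = freqs[np.argmax(amplitude[1:]) + 1]    # skip the DC bin

print(f"dominant frequency: {peak_hz:.2f} Hz ({peak_hz * 60:.0f} per minute)")
```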

Tables (1)


Table 1. Actual and Measured Displacements of the Gage Block a

Equations (7)


$$\begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix} = \begin{bmatrix} R_{11}^{l} & R_{12}^{l} & R_{13}^{l} & T_{1}^{l} \\ R_{21}^{l} & R_{22}^{l} & R_{23}^{l} & T_{2}^{l} \\ R_{31}^{l} & R_{32}^{l} & R_{33}^{l} & T_{3}^{l} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{R}^{l} & \mathbf{T}^{l} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \tag{1}$$

$$\begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} = \frac{1}{z_l} \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix} = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{ln} \\ y_{ln} \\ 1 \end{bmatrix}, \tag{2}$$

$$\begin{bmatrix} \tilde{u}_l \\ \tilde{v}_l \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \tilde{x}_{ln} \\ \tilde{y}_{ln} \\ 1 \end{bmatrix}, \tag{3}$$

$$\begin{aligned} \tilde{x}_{ln} &= \left(1 + k_0 r^2 + k_1 r^4 + k_2 r^6 + k_3 r^8 + k_4 r^{10}\right) x_{ln} + \left(k_5 + k_7 r^2\right) r^2 + \left(k_9 + k_{11} r^2\right)\left(r^2 + 2 x_{ln}^2\right), \\ \tilde{y}_{ln} &= \left(1 + k_0 r^2 + k_1 r^4 + k_2 r^6 + k_3 r^8 + k_4 r^{10}\right) y_{ln} + \left(k_6 + k_8 r^2\right) r^2 + \left(k_{10} + k_{12} r^2\right)\left(r^2 + 2 y_{ln}^2\right), \\ r^2 &= x_{ln}^2 + y_{ln}^2, \end{aligned} \tag{4}$$

$$\begin{bmatrix} x_{ln} z_l \\ y_{ln} z_l \\ z_l \end{bmatrix} = \begin{bmatrix} R_{11}^{l} & R_{12}^{l} & R_{13}^{l} & T_{1}^{l} \\ R_{21}^{l} & R_{22}^{l} & R_{23}^{l} & T_{2}^{l} \\ R_{31}^{l} & R_{32}^{l} & R_{33}^{l} & T_{3}^{l} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} x_{rn} z_r \\ y_{rn} z_r \\ z_r \end{bmatrix} = \begin{bmatrix} R_{11}^{r} & R_{12}^{r} & R_{13}^{r} & T_{1}^{r} \\ R_{21}^{r} & R_{22}^{r} & R_{23}^{r} & T_{2}^{r} \\ R_{31}^{r} & R_{32}^{r} & R_{33}^{r} & T_{3}^{r} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \tag{5}$$

$$C = \frac{1}{N^2} \sum_{i=1}^{N} \left[ a\, f\!\left(\tilde{u}_{l_i}, \tilde{v}_{l_i}\right) + b\, g\!\left(\tilde{u}_{r_i}, \tilde{v}_{r_i}\right) \right]^2, \tag{6}$$

$$\begin{aligned} \tilde{u}_{r_i} &= \tilde{u}_{l_i} + \xi + \xi_u \Delta u + \xi_v \Delta v + \xi_{uu} \Delta u^2 + \xi_{vv} \Delta v^2 + \xi_{uv} \Delta u \Delta v, \\ \tilde{v}_{r_i} &= \tilde{v}_{l_i} + \eta + \eta_u \Delta u + \eta_v \Delta v + \eta_{uu} \Delta u^2 + \eta_{vv} \Delta v^2 + \eta_{uv} \Delta u \Delta v. \end{aligned} \tag{7}$$
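To make Eqs. (1)-(5) concrete, the sketch below (an illustration under the stated conventions, not the authors' code) projects a world point into a distorted pixel location and, going the other way, recovers world coordinates from a pair of matched, undistorted normalized coordinates by stacking the four linear equations of Eq. (5) and solving them in the least-squares sense. The intrinsic and distortion values are placeholders supplied by the caller; the vector k follows the 13-coefficient ordering k0 through k12 of Eq. (4).

```python
import numpy as np

def project(Xw, R, T, K, k):
    """Eqs. (1)-(4): world point -> distorted pixel coordinates.
    R (3x3), T (3,): extrinsic parameters; K (3x3): intrinsic matrix with
    alpha, gamma, u0, beta, v0; k (13,): distortion coefficients k0..k12."""
    xc = R @ Xw + T                               # Eq. (1): camera-frame coordinates
    xn, yn = xc[0] / xc[2], xc[1] / xc[2]         # Eq. (2): normalized coordinates
    r2 = xn**2 + yn**2
    radial = 1 + k[0]*r2 + k[1]*r2**2 + k[2]*r2**3 + k[3]*r2**4 + k[4]*r2**5
    xd = radial*xn + (k[5] + k[7]*r2)*r2 + (k[9]  + k[11]*r2)*(r2 + 2*xn**2)
    yd = radial*yn + (k[6] + k[8]*r2)*r2 + (k[10] + k[12]*r2)*(r2 + 2*yn**2)
    u, v, _ = K @ np.array([xd, yd, 1.0])         # Eq. (3): apply intrinsic matrix
    return u, v

def triangulate(xl, yl, xr, yr, Pl, Pr):
    """Eq. (5): recover (xw, yw, zw) from matched normalized coordinates and the
    3x4 extrinsic matrices Pl = [R^l T^l], Pr = [R^r T^r] by linear least squares."""
    A, b = [], []
    for (x, y), P in (((xl, yl), Pl), ((xr, yr), Pr)):
        A.append(x * P[2, :3] - P[0, :3]); b.append(P[0, 3] - x * P[2, 3])
        A.append(y * P[2, :3] - P[1, :3]); b.append(P[1, 3] - y * P[2, 3])
    Xw, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return Xw
```

In practice the calibrated parameters themselves would be refined by nonlinear optimization over this same projection model before triangulation is applied to the DIC-matched point pairs.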
