Abstract
In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). From a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur (i.e., depth) information. Depth is then estimated from a single image patch using a maximum likelihood criterion defined by the learned covariance. We apply this method within a new active DFD setup that combines a dense textured projection with a chromatic lens for image acquisition. The projector adds texture to low-textured objects, whose absence is usually a limitation of DFD, and the chromatic aberration extends the estimated depth range compared with conventional DFD. We provide quantitative evaluations of the depth estimation performance of our method on simulated and real data of fronto-parallel untextured scenes, followed by a qualitative experimental evaluation on a 3D printed benchmark.
© 2021 Optical Society of America
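The depth selection step described in the abstract can be sketched as follows: for each candidate depth, a patch covariance has been learned from calibration images, and the depth reported for a new patch is the one whose covariance maximizes a zero-mean Gaussian log-likelihood. This is a minimal illustrative sketch, not the paper's implementation; the function name, the toy covariances, and the candidate depth values are all assumptions made here for demonstration.

```python
import numpy as np

def estimate_depth_ml(patch, covariances):
    """Select the candidate depth whose learned patch covariance
    maximizes the zero-mean Gaussian log-likelihood of the patch.

    covariances: dict mapping candidate depth -> (n x n) covariance
    learned offline from calibration images (toy stand-ins below).
    """
    y = patch.ravel()
    best_depth, best_ll = None, -np.inf
    for depth, cov in covariances.items():
        # Gaussian log-likelihood up to a constant:
        # -0.5 * (log det(cov) + y^T cov^{-1} y)
        _, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (logdet + y @ np.linalg.solve(cov, y))
        if ll > best_ll:
            best_depth, best_ll = depth, ll
    return best_depth

# Toy stand-ins for covariances learned at two candidate depths
# (4x4 patches): a sharp, near-white model and a heavily blurred,
# high-variance one. Real covariances would come from calibration data.
covs = {0.5: np.eye(16), 2.0: 100.0 * np.eye(16)}
patch = np.ones((4, 4))
depth = estimate_depth_ml(patch, covs)  # the sharp model fits this patch better
```

In practice one covariance per discretized depth (and, for the chromatic setup, per color channel) would be learned, and the likelihood evaluated on each local patch of the input image.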