This paper describes a depth from defocus (DFD) method to recover scene depth from two images taken by a liquid crystal (LC) lens imaging system. The system is composed of a camera module and an LC lens, and its focal length is adjusted electrically by the voltages applied to the LC lens. We use two images taken at the maximum positive and maximum negative optical powers of the LC lens, respectively, to obtain the depth information via DFD. The principle is described and experimental results are successfully obtained. The method is simple in that it involves no mechanical lens movements in the imaging system.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
With the development of computer vision and image sensor technology, more and more methods have been developed for depth measurement, which is helpful for three-dimensional (3D) reconstruction of a scene. As an active depth measurement, time-of-flight (TOF) [1–3] measures the phase difference between emitted and reflected infrared waves to estimate the depth from each sensor pixel to objects. It has been applied to many fields such as robot navigation [4,5], gesture recognition, etc. Another active method is the structured light technique [6–8]. A modulated light pattern is projected onto the object, and the scene depth can be obtained accurately by decoding the reflected image. The technique is mainly used in games, 3D scanning, and, in particular, industrial automation [9,10]. Both methods are sensitive to environmental light and are not suitable for outdoor applications. Passive depth measurements, on the other hand, use natural light images acquired with imaging systems and derive the scene depth from image characteristics such as disparity, blur, etc. Binocular stereo vision acquires depth via position differences between images [11–13]. The binocular imaging method, however, requires a camera calibration procedure and complicated image matching algorithms, so a long execution time is necessary to obtain dense depth information. The depth from focus (DFF) method reconstructs a depth map from a sequence of differently focused images [14]. The depth from defocus (DFD) method, on the other hand, obtains depth from two images based on the blur radius [15,16].
The traditional DFD method derives depth from differently defocused images taken by an imaging system with different lens positions, which results in a change in image magnification that has to be compensated in the calculation. Furthermore, changing the lens position requires accurate mechanical moving equipment, and long-term friction loss leads to a short lifecycle of the system. The problem of magnification change can be avoided using a telecentric optical system [17]. However, telecentric optics are of large size and heavy weight.
Recently, research on depth measurement using lenses with electrically controllable focal lengths has been reported [18–20]. No mechanical lens movements are necessary in these approaches. It has been shown that depth can be measured via DFF using an LC lens [18]. However, one has to take a stack of images for the calculation, which is a time-consuming process. Generally, DFD approaches need only two images for the calculation and require much shorter processing time. Recently, a depth measuring method via DFD using a liquid lens has been reported [19,20]. The system projects Ronchi fringes onto the object. The contrast of the fringes on the object varies with the change of the focal length of the liquid lens. By comparing the contrast curve with the calibration result, a high precision depth map is obtained. Because the system includes a light source, an LCD, a beam splitter, etc., it is rather complicated.
In this paper, we propose a DFD measuring method using an LC lens to recover the scene depth. Only two images are acquired and analyzed, and the system is simple and compact. Because no lens movements are involved, the system has a long lifecycle and low power consumption. The principle is discussed, and the proposal is successfully demonstrated experimentally.
2. DFD optic model
2.1 LC lens
The structure of the LC lens in this work is shown in Fig. 1 [21,23]. Indium tin oxide (ITO) transparent electrodes are coated on the inner sides of glass substrates 1 and 2, respectively. The upper patterned electrode is composed of two parts: one includes a hole of 2.00 mm diameter and is applied with square-wave voltage V1, and the other is a circle of 1.98 mm diameter with square-wave voltage V2. The resistive film is made from ZnO, and its surface resistivity is on the order of 10^8 Ω/□. The LC (MLC-6080 from Merck) layer has a thickness of 30 μm. The electric field resulting from V1 and V2 rotates the LC director by an angle θ, and the effective refractive index experienced by the e-wave is:

n_eff(θ) = n_o n_e / (n_o² cos²θ + n_e² sin²θ)^(1/2),

where n_o and n_e are the ordinary and extraordinary refractive indices of the LC, respectively.
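The tilt dependence of the effective index can be sketched numerically. The formula below is the standard uniaxial effective-index expression with θ measured from the substrate plane; the values of n_o and n_e are placeholder values for a high-birefringence nematic, not the measured parameters of MLC-6080.

```python
import math

# Effective refractive index seen by the e-wave as the LC director tilts.
# Standard uniaxial formula; n_o, n_e are placeholder values, NOT the
# measured indices of MLC-6080.
n_o, n_e = 1.50, 1.71

def n_eff(theta_deg):
    """theta_deg: director tilt angle from the substrate plane, in degrees."""
    t = math.radians(theta_deg)
    return n_o * n_e / math.sqrt(n_o**2 * math.cos(t)**2 + n_e**2 * math.sin(t)**2)

# n_eff sweeps from n_e (no tilt) down to n_o (fully tilted); the spatial
# variation of the tilt across the aperture is what forms the lens.
print(n_eff(0.0), n_eff(90.0))
```

The voltage pair (V1, V2) shapes the tilt profile across the aperture, and hence the phase profile, which is why the optical power is electrically tunable.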
The properties of the LC lens are measured using a Mach–Zehnder interferometer with a laser beam of 532 nm wavelength. The aberrations and the optical powers of the LC lens are obtained by analyzing the interference fringes. The results are shown in Table 1. The frequency of both voltages is 800 Hz. In the positive lens state, V1 is kept constant at 3.5 Vrms and V2 tunes the optical power from 0.0 to 4.4 m−1; in the negative lens state, V2 is kept constant at 2.2 Vrms and V1 tunes the power from 0.0 to −3.9 m−1. The power range of the LC lens is thus PLC ∈ [PLCmin, PLCmax] = [−3.9 m−1, 4.4 m−1], where PLCmin and PLCmax are, respectively, the maximum powers of the LC lens working in the negative and positive lens states. The rms aberration remains below 0.07 wave over the whole range, which is usually considered a tolerable value for an optical lens [22].
The LC lens is placed in front of a lens module (the main lens), and its aperture is used as the aperture stop of the compound lens. The magnification of the imaging system for an object point is then constant even when the optical power of the LC lens is changed.
2.2 DFD optic model
Suppose the focal length of the main lens is fg; its optical power is Pg = 1/fg. In this study, the aperture of the main lens is 2.0 mm and fg = 8.0 mm. The power of the compound lens is then P = Pg + PLC − dPgPLC, where d is the distance between the two lenses. Since d is negligible, P ≈ Pg + PLC, so the power range of the compound lens is P ∈ [Pmin, Pmax] = [Pg + PLCmin, Pg + PLCmax].
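The compound-lens power range can be checked numerically. The sketch below uses the values stated in the text (fg = 8.0 mm, PLC ∈ [−3.9, 4.4] m−1) and treats the lens separation d as negligible, as in the experiment.

```python
# Compound power P = Pg + P_LC - d*Pg*P_LC for the two-lens system,
# with fg = 8.0 mm and P_LC in [-3.9, 4.4] m^-1; the separation d is
# treated as negligible.
fg = 8.0e-3            # focal length of the main lens [m]
Pg = 1.0 / fg          # power of the main lens: 125 m^-1

def compound_power(P_LC, d=0.0):
    """Power of the LC lens + main lens combination, separation d [m]."""
    return Pg + P_LC - d * Pg * P_LC

P_min = compound_power(-3.9)   # LC lens at maximum negative power
P_max = compound_power(4.4)    # LC lens at maximum positive power
print(P_min, P_max)            # 121.1 and 129.4 m^-1
```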
We define ∇P as half the width of the power range:

∇P = (Pmax − Pmin)/2.
Let the image on the sensor of one object O at object distance u be I when the optical power of the system is P. We introduce a parameter ∂ ∈ (−1, 1) to describe P, as shown in Fig. 2:

P = (Pmax + Pmin)/2 + ∂∇P.
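The mapping between system power and ∂ can be sketched as follows; the symmetric parameterization P = (Pmax + Pmin)/2 + ∂∇P with ∇P = (Pmax − Pmin)/2 is an assumption reconstructed from how ∂ and ∇P are used later in the text.

```python
# Defocus parameter ∂ for a given system power P, assuming the symmetric
# parameterization P = (Pmax + Pmin)/2 + ∂*∇P with ∇P = (Pmax - Pmin)/2.
Pg = 125.0               # main-lens power [m^-1] (fg = 8 mm)
Pmax = Pg + 4.4          # compound power, LC lens at +4.4 m^-1
Pmin = Pg - 3.9          # compound power, LC lens at -3.9 m^-1
dP = (Pmax - Pmin) / 2.0 # ∇P

def alpha(P):
    """Normalized defocus parameter ∂ in [-1, 1] for system power P."""
    return (P - (Pmax + Pmin) / 2.0) / dP

print(alpha(Pmax), alpha(Pmin))   # 1.0 and -1.0 at the two extremes
```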
Watanabe et al. [16] proposed an effective method to compute depth from two images taken at two sensor positions with a telecentric imaging system. Here, we adopt a similar optical model.
Let the radius of the aperture of the system be a. In the experiment, the object at distance u0 is first adjusted to be exactly focused when the LC lens is in the non-lens state, that is, PLC = 0. The image distance is then v0 = fgu0/(u0 − fg). When the object O is imaged on plane v1, from Eqs. (2)–(4), the radius of the defocused spot on the sensor plane is:

r1 = av0∇P(1 − ∂).
Similarly, when O is imaged on plane v2, the radius of the defocused spot on the sensor plane is:

r2 = av0∇P(1 + ∂).
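The two blur radii can be evaluated numerically. The expressions r1 = av0∇P(1 − ∂) and r2 = av0∇P(1 + ∂) are the reconstructed forms used here, and the setup values (a = 1.0 mm, fg = 8.0 mm, u0 = 250 mm) are assumptions drawn from the experimental section.

```python
# Defocus-spot radii in the two images for an object with parameter ∂,
# assuming r1 = a*v0*∇P*(1-∂) and r2 = a*v0*∇P*(1+∂).
a = 1.0e-3                  # aperture radius [m] (2.0 mm diameter stop)
fg = 8.0e-3                 # main-lens focal length [m]
u0 = 0.25                   # focused object distance [m]
v0 = fg * u0 / (u0 - fg)    # image distance [m], thin-lens equation

def blur_radii(alpha, dP):
    """Radii of the defocused spots in the Pmax and Pmin images."""
    r1 = a * v0 * dP * (1.0 - alpha)   # image taken at maximum power
    r2 = a * v0 * dP * (1.0 + alpha)   # image taken at minimum power
    return r1, r2

r1, r2 = blur_radii(0.0, 2.0)   # object at mid-focus, ∇P = 2 m^-1
print(r1 * 1e6, r2 * 1e6)       # both ≈ 16.5 μm
```

An object focused midway between the two power extremes (∂ = 0) blurs equally in both images, which is what makes the two-image ratio informative about ∂.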
For the blur function, we adopt the pillbox function:

h(x, y) = 1/(πr²) for x² + y² ≤ r², and 0 otherwise.
The Fourier transform of the pillbox is:

H(fr) = 2J1(2πrfr)/(2πrfr),

where J1 is the first-order Bessel function of the first kind and fr is the radial spatial frequency.
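The main-lobe behavior of this transform can be checked numerically. The sketch below evaluates H(fr) = 2J1(2πrfr)/(2πrfr) with J1 computed from its power series; the blur radius used is an illustrative value, not a measured one.

```python
import math

# Pillbox OTF H(fr) = 2*J1(2π*r*fr)/(2π*r*fr), with J1 evaluated from
# its power series (accurate for the small arguments used here).
def j1(x, terms=30):
    s, half = 0.0, x / 2.0
    for k in range(terms):
        s += (-1) ** k / (math.factorial(k) * math.factorial(k + 1)) * half ** (2 * k + 1)
    return s

def pillbox_otf(fr, r):
    x = 2.0 * math.pi * r * fr
    return 1.0 if x == 0.0 else 2.0 * j1(x) / x

r = 16.5e-6                  # illustrative blur radius [m]
fr0 = 0.61 / r               # predicted first zero (2π*r*fr0 ≈ 3.83)
print(pillbox_otf(fr0, r))   # ≈ 0: the main lobe ends here
```

The first zero of J1 at 2πrfr ≈ 3.83 is what yields the 0.61/r frequency used in the frequency-bound discussion below.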
The defocus functions of the object O when the system is in the maximum power and minimum power states are, respectively:

H1(fr) = 2J1(2πr1fr)/(2πr1fr),  H2(fr) = 2J1(2πr2fr)/(2πr2fr).
According to Watanabe et al. [16], the normalized ratio is M/P = (H1 − H2)/(H1 + H2), where M and P are given by Eqs. (9) and (10), respectively. It can be seen that M/P is a monotonic function of ∂ for −1 ≤ ∂ ≤ 1, provided the radial frequency is not too large. We now determine how large a radial frequency is feasible. When the focal power of the imaging system changes from Pmin to Pmax, as a rule of thumb, the usable frequency range equals the width of the main lobe of the defocus function H when it is maximally defocused. Obviously, the blur of I1 is larger than that of I2. If the first zero crossing of H2 occurs at the radial frequency fr0, then 2πr2fr0 = 3.83, i.e., fr0 = 0.61/r2. Therefore, the usable spatial frequency of the image I2 is bounded by the first zero of H2.
We find that when the spatial frequency satisfies fr ≤ 0.61(u0 − fg)/(2afgu0∇P), the normalized image ratio M/P is monotonic.
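This monotonicity claim can be verified numerically with the pillbox transform. Everything below (the blur-radius expressions r1 = av0∇P(1 − ∂), r2 = av0∇P(1 + ∂) and the setup values) follows the assumptions stated earlier rather than measured data.

```python
import math

# Checks that M/P = (H1 - H2)/(H1 + H2) increases monotonically with ∂
# when fr is held at the bound 0.61(u0 - fg)/(2a*fg*u0*∇P).
# Assumes r1 = a*v0*∇P*(1-∂), r2 = a*v0*∇P*(1+∂) and a pillbox blur.
def j1(x, terms=30):
    s, half = 0.0, x / 2.0
    for k in range(terms):
        s += (-1) ** k / (math.factorial(k) * math.factorial(k + 1)) * half ** (2 * k + 1)
    return s

def H(fr, r):
    x = 2.0 * math.pi * r * fr
    return 1.0 if x == 0.0 else 2.0 * j1(x) / x

a, fg, u0, dP = 1.0e-3, 8.0e-3, 0.25, 2.0
v0 = fg * u0 / (u0 - fg)
fr = 0.61 * (u0 - fg) / (2.0 * a * fg * u0 * dP)   # the frequency bound

def ratio(alpha):
    r1 = a * v0 * dP * (1.0 - alpha)
    r2 = a * v0 * dP * (1.0 + alpha)
    h1, h2 = H(fr, r1), H(fr, r2)
    return (h1 - h2) / (h1 + h2)

vals = [ratio(-1.0 + 0.05 * i) for i in range(41)]    # ∂ from -1 to 1
print(all(b > c for c, b in zip(vals, vals[1:])))     # True: monotonic
```

Because the ratio is monotonic (and antisymmetric about ∂ = 0), it can be inverted to recover ∂, and hence the object's focus power and distance.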
Efficient DFD computation methods have been proposed in previous works [25–27]. In this paper, we select an approach based on rational filters [16], which is well suited to real-time computation. Rational filters are broadband linear filters; they can be implemented with small convolution kernels, low computational cost, and high spatial resolution. In our system, we also adopt the simple linear model of the normalized ratio used in [16]. The generation and implementation of rational filters have been clarified and detailed by Watanabe et al. [16].
The final design issue pertains to the maximum frequency frmax. Since the discrete Fourier transform of a kernel of size ks has a minimum discrete frequency period of 1/ks, the maximum frequency must be above 1/ks. According to the Nyquist theorem, we can specify the condition as frmax ≥ 2/ks, where frmax = 0.73(u0 − fg)/(2afgu0∇P). This condition can be interpreted as follows: the maximum blur circle diameter 2afgu0∇P/(u0 − fg) must be smaller than 73% of the kernel size ks.
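The kernel-size condition can be applied to this setup as a sketch. The conversion to cycles per pixel uses the 2.2 μm pitch of the camera described in the experimental section, and the values a = 1.0 mm and ∇P = 2 m−1 are assumptions.

```python
# Minimum rational-filter kernel size implied by frmax >= 2/ks, with
# frmax = 0.73(u0 - fg)/(2a*fg*u0*∇P) converted to cycles/pixel.
# Setup values are assumptions drawn from the experimental section.
pixel = 2.2e-6                 # pixel pitch [m]
a, fg, u0, dP = 1.0e-3, 8.0e-3, 0.25, 2.0

frmax = 0.73 * (u0 - fg) / (2.0 * a * fg * u0 * dP)   # [cycles/m]
frmax_pix = frmax * pixel      # [cycles/pixel]
ks_min = 2.0 / frmax_pix       # smallest kernel size satisfying the bound
print(ks_min)                  # ≈ 41 pixels
```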
For our imaging system with the LC lens, if the aperture diameter is 2.0 mm (a = 1.0 mm), the focal length fg = 8 mm, and the focal power of the LC lens is PLC ∈ [−2, 2] m−1, one can easily get a maximum defocused-spot radius of rmax = 32 μm. Given a pixel size of 2.2 μm, rmax ≈ 14.5 pixels. From this computation we can clearly determine how large a rational filter is needed and how to set up the experiment.
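The numbers in this paragraph can be reproduced directly; the image distance is approximated by fg, an assumption that holds for distant objects.

```python
# Maximum defocus-spot radius for P_LC in [-2, 2] m^-1 (∇P = 2 m^-1),
# approximating the image distance v0 ≈ fg for distant objects.
a = 1.0e-3        # aperture radius [m] (2.0 mm diameter stop)
fg = 8.0e-3       # focal length [m]
dP = 2.0          # ∇P [m^-1]
pixel = 2.2e-6    # pixel pitch [m]

r_max = 2.0 * a * fg * dP     # maximum blur radius [m]
r_max_px = r_max / pixel
print(r_max * 1e6, r_max_px)  # 32.0 μm, ≈ 14.5 pixels
```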
3. Experimental results
The experimental setup is composed of a camera, a main lens, and an LC lens; the LC lens is attached to the main lens of 8.0 mm focal length. The distance d between the LC lens and the main lens is negligible. The camera is an MD50-T from MINGMEI with a resolution of 2592 × 1944 (1280 × 960 is actually used in the experiment) and a pixel size of 2.2 μm. First, the LC lens is tuned to the non-lens state, and the object distance is adjusted to u0, corresponding to the image distance v0. The image distance then remains unchanged as the optical power is adjusted.
There are seven objects, including an oil painting, placed at different distances in the scene. The camera module is first adjusted to focus on the object at distance u0 = 250 mm, and two images are taken with LC lens optical powers of 4.4 and −3.9 m−1, as shown in Figs. 3(a) and 3(b), respectively. The depth map, shown in Fig. 4, is derived from the two images, with the depth value set to 128 when the object distance is u0 = 250 mm. The depth values are summarized in Table 2. Figure 5 shows the linear relationship between the depth value and the reciprocal of the object distance.
A DFD measurement method using an LC lens with a constant-magnification structure is proposed. We introduce the focal power variation ∇P and parameterize the defocus function relative to the focal plane. The defocus function is defined so that the normalized image ratio is monotonic below the designed spatial frequency. The depth map can be recovered from the near-focused and far-focused images. In the experiment, the depth values of objects ranging from 150 to 2250 mm are obtained.
Sichuan Science and Technology Program (2017JY0022); Fundamental Research Funds for the Central Universities (ZYGX2016J076).
1. M. Hansard, S. Lee, O. Choi, and R. Horaud, Time of Flight Cameras: Principles, Methods, and Applications (Springer Publishing Company, 2012).
2. S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: A survey,” IEEE Sensors Journal (IEEE, 2011), pp. 1917–1926.
3. B. Kang, S. Kim, S. Lee, K. Lee, J. Kim, and C. Kim, “Harmonic distortion free distance estimation in ToF camera,” Proc. SPIE 7864, 786403 (2011). [CrossRef]
4. S. May, B. Werner, H. Surmann, and K. Pervolz, “3D time-of-flight cameras for mobile robotics,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2007), pp. 790–795.
5. T. Schamm, M. Strand, T. Gumpp, R. Kohlhaas, J. Zollner, and R. Dillmann, “Vision and ToF-based driving assistance for a personal transporter,” in Proceedings of IEEE Conference on Advanced Robotics (IEEE, 2009), pp. 1–6.
6. P. J. Besl, Active Optical Range Imaging Sensors (Advances in Machine Vision, 1989).
7. C. H. Lin, R. A. Powell, L. Jiang, H. Xiao, S. J. Chen, and H. L. Tsai, “Real-time depth measurement for micro-holes drilled by lasers,” Meas. Sci. Technol. 21(2), 025307 (2010). [CrossRef]
8. E. Horn and N. Kiryati, “Toward optimal structured light patterns,” in Proceedings of IEEE International Conference on Recent Advances in 3-D Digital Imaging and Modeling (IEEE, 1997), pp. 28–35.
9. C. H. Wu, Y. N. Sun, and C. C. Chang, “Three-dimensional modeling from endoscopic video using geometric constraints via feature positioning,” IEEE Transactions on Bio-medical Engineering (IEEE, 2007), pp. 1199–1211.
10. J. F. Li, Y. K. Guo, J. H. Zhu, X. Lin, Y. Xin, K. Duan, and Q. Tang, “Large depth-of-view portable three-dimensional laser scanner and its segmental calibration for robot vision,” Opt. Lasers Eng. 45(11), 1077–1087 (2007). [CrossRef]
11. N. Short, “3-D point cloud generation from rigid and flexible stereo vision systems,” M.S. thesis, Virginia Polytechnic Institute and State University (2009).
12. S. Mattoccia, “Stereo Vision Algorithms and Applications,” lecture notes, Department of Computer Science (DISI), University of Bologna (2011).
13. M. Gerrits and P. Bekaert, “Local stereo matching with segmentation-based outlier rejection,” in Proceedings of IEEE Canadian Conference on Computer and Robot Vision (IEEE, 2006), pp. 66–72. [CrossRef]
14. P. Grossmann, “Depth from focus,” Pattern Recognit. Lett. 5(1), 63–69 (1987). [CrossRef]
15. A. S. Malik and T. S. Choi, “A novel algorithm for estimation of depth map using image focus for 3D shape recovery in the presence of noise,” Pattern Recognit. 41(7), 2200–2225 (2008). [CrossRef]
16. M. Watanabe and S. K. Nayar, “Rational filters for passive depth from defocus,” Int. J. Comput. Vis. 27(3), 203–225 (1998). [CrossRef]
17. M. Watanabe and S. K. Nayar, “Telecentric optics for constant-magnification imaging,” Technical Report CUCS-026–95, Dept. of Computer Science, Columbia University, New York, NY, USA, (1995).
18. M. Kawamura and S. Ishikuro, “Feature extraction from multiply focal images by using a liquid crystal lens,” Mol. Cryst. Liq. Cryst. (Phila. Pa.) 613(1), 51–58 (2015). [CrossRef]
19. S. Pasinetti, I. Bodini, M. Lancini, F. Docchio, and G. Sansoni, “A depth from defocus measurement system using a liquid lens objective for extended depth range,” IEEE Transactions on Instrumentation and Measurement (IEEE, 2017), pp. 441–450. [CrossRef]
20. S. Pasinetti, I. Bodini, M. Lancini, F. Docchio, and G. Sansoni, “Automatic selection of focal lengths in a depth from defocus measurement system based on liquid lenses,” Opt. Lasers Eng. 96, 68–74 (2017). [CrossRef]
22. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, 1999).
23. M. Ye, B. Wang, M. Uchida, S. Yanase, S. Takahashi, M. Yamaguchi, and S. Sato, “Low-voltage-driving liquid crystal lens,” Jpn. J. Appl. Phys. 49(10), 100204 (2010). [CrossRef]
24. A. N. Joseph Raj and R. C. Staunton, “Rational filters design for depth from defocus,” Pattern Recognit. 45(1), 198–207 (2012). [CrossRef]
25. A. P. Pentland, “A new sense for depth of field,” IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE, 1987), pp. 523–531. [CrossRef]
26. M. Gokstorp, “Computing depth from out-of-focus blur using a local frequency representation,” in Proceedings of 12th International Conference on Pattern Recognition (IEEE, 1994), pp. 153–158. [CrossRef]
27. Y. Xiong and S. A. Shafer, “Depth from focusing and defocusing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1993), pp. 68–73. [CrossRef]