Abstract

An auto-focus method for digital imaging systems is proposed that combines depth from focus (DFF) with an improved depth from defocus (DFD). The traditional DFD method is improved so that it becomes more rapid, which allows a fast initial focus. The defocus distance is first calculated by the improved DFD method, and the result is then used as the search step in the searching stage of the DFF method. A dynamic focusing scheme is designed for the control software, which is able to suppress environmental disturbances and other noise so that a fast and accurate focus can be achieved. An experiment is designed to verify the proposed focusing method, and the results show that its efficiency is at least 3-5 times that of the traditional DFF method.

© 2014 Optical Society of America

1. Introduction

Auto-focus technology plays an important role in optical vision imaging systems. It has been widely used in a variety of optical imaging systems, such as consumer cameras, industrial inspection tools, microscopes, and scanners [1, 2]. Many kinds of auto-focus methods have been studied since the 1990s. Generally, these methods can be categorized into active auto-focusing methods and passive auto-focusing methods [3].

Active auto-focusing methods use auxiliary optical devices to measure the position of a reference point on the sample. For example, Liu et al. introduced a laser beam, a splitter, and an extra CCD into the normal optical system so that the sample position could be measured by detecting the centroid of the light spot reflected from the sample [4, 5]. Additionally, Hsu et al. embedded an astigmatic lens in the optical path to produce a focus error signal, which could be converted to the sample's defocus distance [6, 7]. However, timing and geometrical fluctuations of the light source and mechanical errors reduce the positioning accuracy of such auto-focus systems, and these methods are expensive and complicated. Alternatively, passive methods are based on a number of images taken at varying focus lens positions. The advantage of passive methods over active methods is that they are simpler and less expensive. Therefore, passive auto-focusing methods are widely used in vision applications.

The depth from focus (DFF) method and the depth from defocus (DFD) method are two typical passive auto-focusing methods that are currently studied. The DFF-type methods are based on the fact that the image formed by an optical system is focused at a particular distance whereas objects at other distances are blurred or defocused [3]. DFF includes two stages: the first stage is to determine the focus function that describes the degree of focus at different positions, and the second stage is to search for the best focus position according to the focus function. Researchers have proposed various focus functions, including the Sobel gradient method [8], band pass filter-based techniques [9], the energy of Laplacian [10, 11], and the sum-modified Laplacian [12]. Focus measures based on the wavelet transform and the discrete cosine transform have also been developed in recent years [13–15]. Additionally, searching algorithms have been studied, including the mountain climbing servo [16], the fast hill-climbing search [17], and the modified fast climbing search with adaptive step size [18]. Very high accuracy can be achieved by DFF methods. However, since all these methods need to acquire a large number of images at different distances and calculate their focus values, they require long scanning times and high power consumption, which limits their application.

The DFD method is popularly used in depth estimation and scene reconstruction, and it can measure the position of a sample from just a few images. The DFD method directly estimates the focus location from a measurement of the level of defocus. Therefore, the efficiency of the method is high, which makes it suitable for real-time auto-focusing. However, the accuracy of the DFD method is relatively low because it relies on an approximate model of the optical imaging system, which introduces theoretical errors.

In this paper, we propose a new auto-focusing method with a low computational cost and an accuracy suitable for real applications. Firstly, the traditional DFD methods are improved with a rapid calculation. Then, the improved DFD method is combined with the DFF method to form a new fast and accurate auto-focus method. The combined method is verified by experiments, and the results show that the proposed auto-focusing method decreases the computational cost while achieving high accuracy for real applications.

2. The improved DFD method

2.1 Conventional DFD methods

The conventional DFD methods are mainly based on the power spectrum in the frequency domain or the point spread function (PSF) of the image in the spatial domain. Subbarao and Surya proposed a general method in which an S-transform was applied to conduct deconvolution in the frequency domain on two defocused images taken with different camera settings [19, 20]. Favaro and Soatto applied a functional singular value decomposition to compute the PSF [21]. Zhou et al. used a coded aperture [22, 23] that customized the PSF by modifying the aperture shape with a complex statistical model for depth estimation. Hong et al. analyzed the power spectrum from a novel aspect, namely, oriented heat-flow diffusion [24], based on the fact that its strength and direction correspond to the amount of blur.

Generally, complex modeling or heavy computation loads are required when these conventional DFD methods are used, which may lead to computation times comparable with or longer than those of the DFF methods.

2.2 The improved DFD method

2.2.1 Defocus blurring analysis

Figure 1 shows the imaging situation of three adjacent points A, B, and C on three different planes FP, IP1, and IP2. FP is the focal plane, and IP1 and IP2 are two defocused planes. According to geometrical optics, A, B, and C spread along the propagation direction of the light path, so on the defocused planes they become three separate blurred spots of a certain shape and size. When IP1 and IP2 are far from FP, the three blurred spots overlap within a certain area on IP1 and IP2; this produces a small region containing information from multiple imaging points, which is the reason that the images on IP1 and IP2 are fuzzy.

 

Fig. 1 Spreading process of the image points.


Suppose that the spread angles of A, B, and C on FP are the same and that there are no energy losses in the spreading process, and denote the radii of the blurred spots on IP1 and IP2 by R1 and R2, respectively. Then all the points within the area of radius R1 (R2) around the coordinate (x, y) on FP spread towards and overlap at the same coordinate (x, y) on IP1 (IP2). The pixel value at any point on an imaging plane corresponds to the light intensity at that point, and the light intensity is generally assumed to be evenly distributed within a blurred spot. Let the area of the pixel (x, y) be S0; then S0/(πR1²) of the light intensity at point A on FP spreads to the position of pixel (x, y) on IP1. The contributions of point B, point C, and all other points within the area of radius R1 around (x, y) on FP can be deduced in the same way. Therefore, the pixel value at (x, y) on IP1 (IP2) is S0/(πR1²) (S0/(πR2²)) times the sum of all the pixel values in the area of radius R1 (R2) around (x, y) on FP [25]. This qualitative relationship between the radius of the blurred spots and the corresponding pixel values is the basis of the calculation method presented in the next section.
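
To make this relationship concrete, the following minimal sketch (our own illustration, not code from the paper) models the defocused pixel value at (x, y) as S0/(πR²) times the sum of the focal-plane values within radius R around (x, y); the image size and intensities are arbitrary.

```python
import numpy as np

def defocus_pixel(fp_image, x, y, R, S0=1.0):
    """Modeled pixel value at (x, y) on a plane whose blur-spot radius is R."""
    h, w = fp_image.shape
    ys, xs = np.ogrid[:h, :w]
    disk = (xs - x) ** 2 + (ys - y) ** 2 <= R ** 2   # FP points that spread to (x, y)
    return S0 / (np.pi * R ** 2) * fp_image[disk].sum()

# Toy focal-plane image: one bright point on a darker background.
fp = np.full((101, 101), 10.0)
fp[50, 50] = 1000.0

V1 = defocus_pixel(fp, 50, 50, R=5)    # smaller blur radius (closer to focus)
V2 = defocus_pixel(fp, 50, 50, R=10)   # larger blur radius (farther from focus)
print(V1, V2)  # here V1 > V2, while V1*pi*5**2 < V2*pi*10**2, as in Eq. (4) below
```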

2.2.2 The improved DFD calculation method

The two defocused images can be obtained in two ways. One is by changing the image distance; in this case, the defocus distance in the image space (the distance between the current imaging plane and the focal plane) can be calculated from the two defocused images. The other is by changing the object distance; in this case, the defocus distance in the object space (the distance between the current object plane and the best imaging object plane) can be calculated. The improved DFD method is analyzed separately for these two cases below; which one to use is determined by the hardware structure of the auto-focus system.

a) Improved DFD method by changing image distance

Figure 2 shows a scheme of the improved DFD method by changing image distance. P is an object point on the object plane FP, which is blurred within a radius of R1 (R2) on the imaging plane IP1 (IP2).

 

Fig. 2 The optical imaging model.


The radii of the blurred spots can be calculated using the similar-triangle principle:

$$R_1 = d\,\frac{D}{2}\left(\frac{1}{f}-\frac{1}{u}\right). \tag{1}$$

where D is the lens diameter, and u, f, and v denote the object distance, the focal length, and the image distance of the optical imaging system, respectively. The parameter d is the distance between the imaging plane and the focal plane. In the automatic focusing imaging system, the imaging plane corresponds to IP1 or IP2 and the corresponding defocus distance is d or d + Δd (Fig. 2). Therefore, the aim is to calculate d.

Suppose that u, f, and D are constants; the relationship between R1, R2, and d can be expressed as:

$$R_1 = kd. \tag{2}$$
$$R_2 = k(d+\Delta d). \tag{3}$$

where k = (D/2)(1/f − 1/u) and Δd is the distance between IP1 and IP2. Figure 3 shows the blurred spots on IP1 and IP2.

 

Fig. 3 Blurred image of P on different planes.


In Fig. 3(b), let the pixel value at (x, y) on IP1 be V1 and the area of the blurred spot be S1; in Fig. 3(c) the corresponding parameters on IP2 are V2 and S2; the area of the pixel (x, y) is S0. According to the previous description, the pixel (x, y) on IP1 (IP2) includes the information of all the points contained within the area of radius R1 (R2) around the coordinate (x, y) on the focal plane FP; furthermore, V1 is S0/S1 times the sum of the pixel values in the area S1 on FP, and V2 is S0/S2 times the sum of the pixel values in the area S2 on FP. Since S2 includes S1, then

$$V_1 S_1 < V_2 S_2. \tag{4}$$

Substituting $S_1=\pi R_1^2$ and $S_2=\pi R_2^2$ into Eq. (4) gives

$$\frac{R_1}{R_2} < \sqrt{\frac{V_2}{V_1}}. \tag{5}$$

When V1 > V2, from Eqs. (2), (3), and (5) we obtain

$$d < \Delta d\,\frac{\sqrt{V_2/V_1}}{1-\sqrt{V_2/V_1}}. \tag{6}$$

Set

$$d_{max} = \Delta d\,\frac{\sqrt{V_2/V_1}}{1-\sqrt{V_2/V_1}}. \tag{7}$$

Although a specific formula for d has not been given, its maximum dmax can be estimated from Eq. (7). The calculation is simple and quick, which makes a fast auto-focus algorithm possible.
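
As a quick numerical check of Eq. (7), the short sketch below (our own, with arbitrary example values for V1, V2, and Δd) computes the upper bound dmax from the two pixel values and the known spacing of the two imaging planes.

```python
import math

def estimate_dmax(V1, V2, delta_d):
    """Upper bound on the image-space defocus distance d, following Eq. (7).

    V1, V2  : pixel values of the same (x, y) on IP1 and IP2 (V1 > V2 assumed)
    delta_d : known spacing between IP1 and IP2
    """
    s = math.sqrt(V2 / V1)
    if s >= 1.0:
        raise ValueError("Eq. (7) assumes V1 > V2")
    return delta_d * s / (1.0 - s)

# Arbitrary example values, e.g. delta_d in micrometers:
print(estimate_dmax(22.9, 13.2, 50.0))  # upper bound on d, same units as delta_d
```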

In the theoretical deduction, the (x, y) location is assumed not to change between FP, IP1, and IP2. In fact, however, it shifts according to the slope of the chief ray, as shown in Fig. 2, and the error grows as the slope increases. In this paper, V1 and V2 are taken in the central area of the images, where the slope of the chief ray is small, so the proposed method remains efficient with only a small error.

b) Improved DFD method by changing object distance

When the DFD method is applied by changing the object distance u, the calculation of dmax in Eq. (7) must be converted into a calculation of the defocus distance of the object plane in the object space. The conversion scheme and process are shown in Figs. 4(a) and 4(b), respectively.

 

Fig. 4 (a). The relevant parameters in the modeling change. ∆u denotes the variation of the object distance u, corresponding to two images taken at two positions in the focusing process, ∆v is the variation of the image distance v caused by ∆u, dmax is the defocus distance in image space, and uref denotes the defocus distance in the object space. (b) The process of the modeling change.


When the object distance changes by a small amount ∆u, the corresponding variation ∆v of the image distance can be calculated according to the Gaussian imaging formula,

$$\frac{1}{f}=\frac{1}{u+\Delta u}+\frac{1}{v+\Delta v}. \tag{8}$$

In this case, Δv plays the role of Δd in Eq. (7). Then

$$d_{max} = \Delta v\,\frac{\sqrt{V_2/V_1}}{1-\sqrt{V_2/V_1}}. \tag{9}$$
According to geometrical optics, ∆u and ∆v can be expressed as
$$\frac{\Delta v}{\Delta u}=\beta_1\beta_2=\frac{f}{u_1-f}\cdot\frac{f}{u_2-f}. \tag{10}$$
Similarly,

$$\frac{d_{max}}{u_{ref}}=\beta_1\beta_0=\frac{f}{u_1-f}\cdot\frac{f}{u_0-f}. \tag{11}$$

where u1 and u2 are the two object distances corresponding to two different positions in the object space, u0 is the best imaging object distance, and β0, β1, and β2 denote the paraxial magnifications at the object distances u0, u1, and u2, respectively.

From Eqs. (9)-(11), uref can be calculated as in Eq. (12):

$$u_{ref}(K)=K\,C\,\Delta u. \tag{12}$$

where

$$C=\frac{\sqrt{V_2/V_1}}{1-\sqrt{V_2/V_1}},\qquad K=\frac{u_0-f}{u_2-f}. \tag{13}$$

In Eq. (13), the factor C is calculated in the same way as in the previous case. In imaging systems that need to be focused, u2 is normally close to u0, so K is approximately equal to 1. Even when u2 is relatively far from u0, K can still be set to 1: uref (K = 1) remains a useful estimate because only 1/n of it is used in the focusing scheme proposed in Section 3. So,

$$u_{ref}=\Delta u\,\frac{\sqrt{V_2/V_1}}{1-\sqrt{V_2/V_1}}. \tag{14}$$

The real defocus distance satisfies ureal < uref. Set

$$u_{real(max)}=u_{ref}=\Delta u\,\frac{\sqrt{V_2/V_1}}{1-\sqrt{V_2/V_1}}. \tag{15}$$

Similarly, the defocus distance in the object space cannot be calculated precisely, but its maximum can be estimated from Eq. (15). Again, V1 and V2 are taken in the central area of the images to reduce the error.
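
The object-space computation of Eq. (15) is identical to the sketch given after Eq. (7), with Δu in place of Δd. The short check below (our own, with made-up focal length and object distances) instead illustrates why the approximation K ≈ 1 in Eq. (13) is usually acceptable when the system is already reasonably close to focus.

```python
# Illustrative check of K = (u0 - f)/(u2 - f) from Eq. (13) with assumed numbers.
f = 10.0          # focal length (arbitrary units, assumed)
u0 = 200.0        # best imaging object distance (assumed)
for u2 in (190.0, 180.0, 150.0):          # current object distances (assumed)
    K = (u0 - f) / (u2 - f)
    print(u2, round(K, 3))                # 1.056, 1.118, 1.357 -> of order 1
```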

3. The combination of DFF and the improved DFD

A focusing range can be acquired by the improved DFD method with a rapid calculation. Furthermore, accurate automatic focusing can be achieved by combining the improved DFD with the DFF method. The combined method can be divided into two stages, the rough focusing stage and the fine focusing stage, as shown in Fig. 5 (a schematic code sketch is given below). In the rough focusing stage, at a certain position, two images of the object are taken with a certain interval distance to estimate the current defocus distance by the improved DFD method. The stage then moves by a step of 1/n times the estimated defocus distance (since the estimated defocus distance does not equal the real defocus distance, 1/n of it is used to preserve the final accuracy, and n is usually set to 5-10). At the new position, the two images are sampled and the current defocus distance is estimated again. The process is repeated until the estimated defocus distance is smaller than a default threshold, which depends on the optical imaging system. Next, the fine focusing stage is conducted. The DFF method is used to search for the peak position of the focus function with a fixed step length that is less than the depth of focus (DOF). The peak position of the focus function is the focusing position.
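
The sketch below (our own schematic Python, not the authors' control software) outlines this two-stage scheme. Here capture_image(), move_stage(), pixel_value(), and focus_measure() are hypothetical stand-ins for the camera, translation stage, sampling, and focus-function routines of a concrete system, and the direction of travel towards focus is assumed to be known.

```python
import math

def estimate_defocus(delta_u, capture_image, move_stage, pixel_value):
    """Rough-stage estimate of the defocus distance, following Eq. (15)."""
    V1 = pixel_value(capture_image())
    move_stage(delta_u)                      # take the second image delta_u away
    V2 = pixel_value(capture_image())
    move_stage(-delta_u)                     # return to the sampling position
    hi, lo = max(V1, V2), min(V1, V2)        # Eq. (15) assumes V1 > V2; values differ
    s = math.sqrt(lo / hi)
    return delta_u * s / (1.0 - s)

def auto_focus(delta_u, n, threshold, fine_step,
               capture_image, move_stage, pixel_value, focus_measure):
    # Rough stage: move by 1/n of the estimated defocus distance until the
    # estimate falls below the threshold (several times the DOF).
    while True:
        d_est = estimate_defocus(delta_u, capture_image, move_stage, pixel_value)
        if d_est < threshold:
            break
        move_stage(d_est / n)                # n is usually 5-10

    # Fine stage: DFF search with a fixed step smaller than the DOF; stop when
    # the focus measure starts to decrease, i.e. the peak has been passed.
    best = focus_measure(capture_image())
    while True:
        move_stage(fine_step)
        current = focus_measure(capture_image())
        if current < best:
            move_stage(-fine_step)           # step back to the peak position
            return
        best = current
```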

 

Fig. 5 The combined focusing method.


The conventional automatic focusing methods perform the search with the same step along the whole process. Additionally, the step value, usually a fraction of the DOF, is very small in order to ensure accuracy. Thus, the searching efficiency is low, and a local peak of the focus function may be mistaken for the true peak owing to the small step length, which further lowers the efficiency and may even cause focusing failure. In contrast to the conventional methods, the proposed searching strategy divides the whole searching process into two stages. Large steps are used in the rough searching stage and small steps in the fine searching stage, which removes the influence of local peaks while achieving a high searching efficiency.

4. Experiments and Results

4.1 Experimental implementation

An experiment was conducted to verify the auto-focusing algorithm with a microscopic system controlled by a PC. The structure diagram is shown in Fig. 6. It is composed of an optical microscope (Olympus IX71, 10X zoom, working distance 18.5 mm, DOF 50 μm with maximum magnification), a CCD camera, an image acquisition card, an invisible LED light source, an electric translation stage/motion controller (displacement accuracy of 0.36 μm corresponding to one pulse), and a computer (2.50 GHz × 2 CPU, 2.0 GB RAM).

 

Fig. 6 The schematic of the experimental setup.


In this experiment, auto-focusing was achieved by changing the object distance. The entire auto-focus process was as follows: the sample on the translation stage was imaged by the microscope and the images were analyzed by the computer, which then gave the corresponding command to the motion controller; then, the motion controller adjusted the translation stage’s vertical motion to change the object distance until it was at the best imaging position.

Glass slides were used as the focusing targets. In order to test the accuracy of the proposed method for estimating the defocus distance, the translation stage was driven to 18 known positions (the corresponding real defocus distances ureal were 720 × 1, 720 × 2, 720 × 3, …, 720 × 18 μm; 0.36 μm/pulse), and at every position, two images were sampled with a certain interval distance ∆u (36, 72, and 108 μm) to estimate the defocus distance.

4.2 Experimental data analysis

We tested 10 glass slides and acquired 10 groups of data. The groups were very similar, so one representative group is presented below.

4.2.1 Convergence analysis

Figure 7 shows the estimated defocus distance at the 18 positions, calculated by the improved DFD method with a sampling interval distance ∆u of 36, 72, and 108 μm.

 

Fig. 7 Curve of the estimated defocus distance. The horizontal axis corresponds to the 18 sampling positions from small to large distances from the focal position, and the value is the real corresponding defocused distance. The vertical axis denotes the corresponding estimated defocus distance.


It can be seen that the estimated defocus distances increase with the sampling position, and the estimated values are approximately linearly proportional to the real defocus distances, which verifies that the proposed method is qualitatively correct. Furthermore, the estimated defocus distances increase with ∆u, also in an approximately linear relationship.

It should be noted that for the ∆u = 36 μm case, the estimated defocus distance is less than the real defocus distance, which seems to contradict the theoretical deduction above. The probable reasons for this error are as follows. In order to reduce random error, multiple sets of V1 and V2 are taken in the central zone of the images to calculate the defocus distances, and the average of these defocus distances is used; this differs from the theoretical deduction and may cause the error. Furthermore, the optical imaging system is assumed to be incoherent, so that light intensities can be added directly. However, it is actually partially coherent, and this may also lead to some error.

In order to further verify the proposed method, experiments were conducted on another three objects, all of which are more complex focusing targets with complex depth variations: a coin, a printed circuit board (PCB) with soldered chip pins, and a piece of iron with an irregular fracture surface, referred to as Obj 1, Obj 2, and Obj 3, respectively. The results are listed in Table 1.


Table 1. Estimated defocus distance for the three objects.

It can be seen that the results for each object are similar to those diagrammed above, which shows that the proposed method has good applicability to different focusing targets.

4.2.2 Error analysis

In order to reduce the computation load and increase the efficiency, the improved DFD method proposed in this paper makes some approximations in the optical imaging system model, which introduce errors. For the data in Fig. 7, the relative errors between the estimated and the true values of the defocus distance are shown in Fig. 8.

 

Fig. 8 Curve of the relative error.


As shown in Fig. 8, the relative errors in the three conditions (∆u = 36, 72, 108 μm) vary widely and change with the sampling position. It is evident that the relative error increases with ∆u, so a reasonable ∆u can be chosen to control the magnitude of the relative error. Additionally, according to ∆u, the estimated defocus distance can be expanded or contracted by a certain proportion to compensate for the inadequacy of the method. From Fig. 8, the relative error of the improved DFD is still significant (the maximum is 175.4%); however, it is used only in the rough focusing stage, and fine focusing is implemented thereafter to ensure accuracy. The benefit of applying rough focusing is that high efficiency can be achieved.

4.2.3 Efficiency analysis of the combination of DFF and the improved DFD

In the searching stage of auto-focusing, the searching step is determined by the defocus distance calculated by the improved DFD method. Therefore, the number of search steps is much smaller than in the DFF method, which is why the proposed combination method has a higher efficiency. Next, some specific cases are analyzed.

In the range from ureal = 12960 to ureal = 720 μm, when ∆u = 36 μm, n = 5, and Pth = 720 μm (about 10 DOF, where Pth is the default threshold of the defocus distance stated before, often set as several times the value of DOF), 10 images were taken, 5 calculations were required, and the translation stage needed to be driven 5 times (the 5 step values were 4287, 3162, 2510, 1430, and 727 μm). Similarly, when n = 10 and Pth = 720 μm, 20 images were taken and 10 calculations were required, corresponding to 10 moves of the translation stage (the step values were 2143, 1780, 1661, 1349, 1313, 1036, 959, 715, 573, and 364 μm).

For conventional DFF methods, under the same accuracy conditions, at least 17 images need to be taken and 17 repetitions of the calculation are required (in this case, the searching step is set to 720 μm, and the translation stage is driven with this step 17 times). Furthermore, the computation load of a single step (evaluating the focus function, Grey Level Variance [11]) is greater than that of the improved DFD (calculating the defocus distance). Table 2 shows the comparison between the different methods.
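
For reference, the grey-level-variance focus measure used in the DFF comparison [11] can be evaluated as in the short sketch below (our own implementation of the standard definition; the input is assumed to be a grayscale image stored as a NumPy array).

```python
import numpy as np

def grey_level_variance(image):
    """Grey Level Variance focus measure: the variance of the pixel intensities.

    Larger values indicate a sharper (better focused) image; the DFF search
    looks for the stage position that maximizes this value.
    """
    img = np.asarray(image, dtype=np.float64)
    return float(np.mean((img - img.mean()) ** 2))
```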


Table 2. Comparison between the proposed combination method and the DFF method.

It can be seen that the efficiency of the proposed method is at least 3-5 times that of the traditional DFF method at a rough estimate. If we choose more suitable values of ∆u, n, and Pth, then the efficiency will be further improved. Actually, the searching step in conventional DFF methods is very small (much less than 720 μm in this experiment), thus the efficiency of the proposed combination method is much more than 3-5 times that of the conventional DFF method.

5. Conclusions

In this paper, a combined DFF and improved DFD auto-focusing algorithm is proposed and experimentally demonstrated to be efficient and significantly more effective than the traditional methods. The proposed DFD auto-focus method can be applied to auto-focus systems that change either the image distance or the object distance, which provides much more flexibility when designing the structure of the auto-focusing system. However, in real applications, accurate estimation of the defocus distance is not easy, and an inaccurate estimation will directly affect the searching efficiency or even lead to focusing failure. Thus, the combined method is still worthy of continued study.

Acknowledgments

The authors acknowledge financial support from the National Science and Technology Major Project of the Ministry of Science and Technology of China (Grant No. 2014ZX07104), the National Key Foundation for Exploring Scientific Instruments of China (2013YQ03065104), and the National Science and Technology Support Program of China (2012BAI23B00).

References and links

1. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16(12), 8670–8677 (2008).

2. J. W. Han, J. H. Kim, H. T. Lee, and S. J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57(1), 232–238 (2011).

3. B. K. Park, S. S. Kim, D. S. Chung, S. D. Lee, and C. Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).

4. C. S. Liu, P. H. Hu, and Y. C. Lin, “Design and experimental validation of novel optics-based autofocusing microscope,” Appl. Phys. B 109(2), 259–268 (2012).

5. C. S. Liu, Y. C. Lin, and P. H. Hu, “Design and characterization of precise laser-based autofocusing microscope with reduced geometrical fluctuations,” Microsyst. Technol. 19(11), 1717–1724 (2013).

6. W. Y. Hsu, C. S. Lee, P. J. Chen, N. T. Chen, F. Z. Chen, Z. R. Yu, C. H. Kuo, and C. H. Hwang, “Development of the fast astigmatic auto-focus microscope system,” Meas. Sci. Technol. 20(4), 045902 (2009).

7. D. K. Cohen, W. H. Gee, M. Ludeke, and J. Lewkowicz, “Automatic focus control: the astigmatic lens approach,” Appl. Opt. 23(4), 565–570 (1984).

8. C. Mo and B. Liu, “An auto-focus algorithm based on maximum gradient and threshold,” in 5th International Congress on Image and Signal Processing (CISP) (IEEE, 2012), pp. 1191–1194.

9. M. Subbarao, T. Choi, and A. Nikzad, “Focusing techniques,” J. Opt. Eng. 32(11), 2824–2836 (1993).

10. M. Subbarao and J. K. Tyan, “The optimal focus measure for passive autofocusing and depth-from-focus,” Proc. SPIE 2598, 89–99 (1995).

11. M. Subbarao and J. K. Tyan, “Selecting the optimal focus measure for autofocusing and depth-from-focus,” IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 864–870 (1998).

12. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16(8), 824–831 (1994).

13. J. Kautsky, J. Flusser, B. Zitova, and S. Simberova, “A new wavelet-based measure of image focus,” Pattern Recognit. Lett. 23(14), 1785–1794 (2002).

14. C. H. Lee and T. P. Huang, “Comparison of two auto focus measurements DCT-STD and DWT-STD,” in Proceedings of the International MultiConference of Engineers and Computer Scientists (Academic, 2012), pp. 746–750.

15. G. Yang and B. J. Nelson, “Wavelet-based autofocusing and unsupervised segmentation of microscopic images,” in Proceedings of IEEE Conference on Intelligent Robots and Systems (IEEE, 2003), pp. 2143–2148.

16. K. Ooi, K. Izumi, M. Nozaki, and I. Takeda, “An advanced autofocus system for video camera using quasi condition reasoning,” IEEE Trans. Consum. Electron. 36(3), 526–530 (1990).

17. K. S. Choi, J. S. Lee, and S. J. Ko, “New autofocus technique using the frequency selective weighted median filter for video cameras,” IEEE Trans. Consum. Electron. 45(3), 820–827 (1999).

18. J. He, R. Zhou, and Z. Hong, “Modified fast climbing search autofocus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consum. Electron. 49(2), 257–262 (2003).

19. M. Subbarao, “Parallel depth recovery by changing camera aperture,” in Proceedings of International Conference on Computer Vision (Academic, 1988), pp. 149–155.

20. M. Subbarao and G. Surya, “Depth from defocus: a spatial domain approach,” Int. J. Comput. Vis. 13(3), 271–294 (1994).

21. P. Favaro and S. Soatto, “A geometric approach to shape from defocus,” IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 406–417 (2005).

22. A. Levin, R. Fergus, and F. Durand, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70 (2007).

23. C. Zhou, S. Lin, and S. Nayar, “Coded aperture pairs for depth from defocus and defocus deblurring,” Int. J. Comput. Vis. 93(1), 53–72 (2011).

24. L. Hong, J. Yu, and C. Hong, “Depth estimation from defocus images based on oriented heat-flows,” in Proceedings of IEEE 2nd International Conference on Machine Vision (IEEE, 2009), pp. 212–215.

25. D. T. Huang, Z. Y. Wu, X. C. Liu, and H. S. Zhang, “A depth from defocus fast auto-focusing technology for any target,” Journal of Optoelectronics Laser 24(4), 799–804 (2013) (in Chinese).
