Abstract

Spot centroid detection has been central to Shack-Hartmann wavefront sensing since the technique was first proposed. In the standard Shack-Hartmann wavefront sensor, a camera is placed behind a lenslet array to record the image of the spots. We propose a new Shack-Hartmann wavefront sensing technique that requires no spot centroid detection. Based on the principle of binary-aberration-mode filtering, only one light-detecting unit per subaperture is used to measure the local wavefront slopes, so single detectors can be adopted in a Shack-Hartmann wavefront sensor. The method thereby gains noise benefits from using single detectors behind each subaperture when sensing rapidly varying wavefronts in weak light. Moreover, because no discrete pixel imaging is involved, this method is a potential route to high measurement precision with fewer detecting units. Our simulations demonstrate the validity of the theoretical model, and the results also indicate an advantage in measurement accuracy.

© 2015 Optical Society of America

1. Introduction

Hartmann wavefront sensing was first proposed in 1904 [1]. Decades later, Platt and Shack modified the classical Hartmann test by using a lenticular screen instead of a screen with holes [2], which became the classical setup of Shack-Hartmann (SH) wavefront sensors. Nowadays, the SH wavefront sensor is one of the most ubiquitous wavefront sensors, used in a wide range of areas including adaptive optics [3–5], ophthalmology [6], laser beam characterization [7], and so on. Modern SH wavefront sensors still employ this classical setup: incoming light is split into separate subapertures by each lenslet in a lenslet array and focused onto a camera. Each lenslet intercepts a small portion of the incoming wavefront, whose tilt from a planar wavefront deviates the focused beam behind that lenslet [8,9]. The wavefront is reconstructed from the measured centroid shifts of those focused spots. For spot detection, array photodetectors (e.g., CCD/CMOS cameras) are indispensable in conventional SH wavefront sensors.

As a rule of thumb, the wavefront measurement precision of a SH wavefront sensor is limited by the noise of the camera. Among the many factors that contribute to centroid measurement error, camera dynamic range and the number of pixels affect the spot position determination directly. For a given camera, allotting more pixels to detect each spot can improve the measurement accuracy. However, because of discrete sampling, some error is always inevitable when measuring spot centroids. Besides, if the intensity of the incident beam is very weak, or the camera exposure time is very short for a high frame rate, the signal-to-noise ratio (SNR) will be very low when the spot's intensity is spread over many pixels. In short, centroid detection and the use of array photodetectors bring problems to SH wavefront sensors, such as signal noise and the loss of accuracy caused by discrete pixel sampling, especially when sensing fast-varying wavefronts in weak light.

A wavefront sensing technique based on binary-aberration-mode filtering and detection was proposed several years ago [10,11]. The wavefront is reconstructed from measured light intensities rather than intensity distributions, and its sensing precision is largely independent of the number of light-detecting units: there is no array photodetector, just a single detector recording light intensities. However, two major problems remain. First, the wavefront measurement precision strongly constrains the measurement rate. The number of times the single detector must operate in one measurement equals the number of binary-aberration modes used to reconstruct the wavefront, and acceptable precision requires hundreds of binary-aberration modes, so the speed advantage is weakened sharply. The other problem is the calculation of the mode coefficients. The number of binary-aberration modes determines the dimension of the coefficient-equation set, but mode aliasing makes an analytical solution very difficult; if hundreds of mode coefficients need to be calculated, solving for them all becomes a formidable problem. This method is therefore still far from practical application.

In this paper, we propose a SH wavefront sensing technique without spot detection. The basic idea is to apply the above-mentioned image-free wavefront measurement to SH wavefront sensing. The wavefront tilt over each subaperture can be measured with one light-detecting unit based on the principle of binary-aberration-mode filtering and detection. Moreover, exploiting the characteristics of the wavefront in each subaperture (the sub-wavefront), an analytical solution for the sub-wavefront slopes has been found: for each subaperture, detecting intensities is sufficient for sub-wavefront gradient measurement. As a result, single detectors such as photodiodes, photomultipliers, avalanche photodiodes, and even photon counters can be adopted as the light-detecting devices in SH wavefront sensors. There is an obvious SNR benefit in using single detectors instead of an array detector when sensing rapidly varying wavefronts in weak light. Meanwhile, the problems of discrete imaging and centroid detection no longer exist. The method can focus all the light energy in a subaperture onto one detecting unit and enable precise wavefront measurement in weak light with fewer detecting units. The principle of sub-wavefront slope measurement and the simulation results are discussed below.

2. Sub-wavefront slope measurement based on binary-phase modulation

2.1 Wavefront sensing through binary phase modulation

A comprehensive description of wavefront sensing through binary phase modulation is presented in [10,11]. However, we have made several essential adjustments to apply this technique conveniently to SH wavefront sensing. The aperture shape becomes square, because the lenslet shape is square. Accordingly, the binary aberration modes we use are not the modified Walsh functions over a circular aperture but the original Walsh functions on a square domain, as shown in Fig. 1(a) [10–13]. The symbol W_{m,n} represents a specific Walsh function (m and n are nonnegative integers). To avoid the energy loss of fiber coupling and transmission that may occur when implementing this technique, the aberration-mode filter, a single-mode fiber, has been replaced with a pinhole, as Fig. 1(b) illustrates. As discussed in [14,15], Fourier diffraction theory shows that the signal measured by the photodetector is given by the following formula:

$$I = D_0\left|\int_A \Phi(x,y)\,dx\,dy\right|^2, \qquad (1)$$
where Φ(x,y) is the incident light field and D_0 is proportional to the incident light power. The integral domain A is the square aperture. The filtering action of pinholes on the Walsh functions can be expressed by the overlap integral below:

 

Fig. 1 Principle of wavefront sensing through binary phase modulation. (a) The original square Walsh functions (Wm,n) with only two values, 1 and −1, indicated by the bright and dark areas and (b) the optical arrangement.


$$\int_A W_{m,n}\,dx\,dy. \qquad (2)$$

In every Walsh-function pattern (except the lowest order), the two values (1 and −1) occupy equal areas. Hence, only the basic mode (W_{0,0}) gives a nonzero value for the overlap integral in Eq. (2). This means a pinhole passes only the lowest order and can conveniently serve as a binary-aberration-mode filter. Expanding the incident wavefront with the original Walsh functions, we have:
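The pinhole's mode-selection property can be checked numerically. The sketch below (our illustrative code, not from the paper) builds 1-D Walsh functions as rows of a Sylvester-ordered Hadamard matrix and evaluates the normalized overlap integral of Eq. (2) for a few 2-D patterns:

```python
import numpy as np

# 1-D Walsh functions built as rows of a Sylvester-ordered Hadamard matrix;
# every entry is +1 or -1.
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])   # 8 x 8 after three doublings

def walsh2d(m, n):
    """2-D Walsh pattern W_{m,n} sampled over the square aperture."""
    return np.outer(H[m], H[n])

# Normalized overlap integral of Eq. (2): only W_{0,0} is nonzero, because
# every other pattern balances its +1 and -1 areas exactly.
for m in range(3):
    for n in range(3):
        print(f"W_{m},{n}: {walsh2d(m, n).mean():+.2f}")
```

Only W_{0,0} yields a nonzero mean, which is why a pinhole centered on the zero diffraction order can act as a binary-aberration-mode filter.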

$$E(x,y) = E_0\exp\!\Big(j\sum_{m,n} a_{m,n}W_{m,n}\Big). \qquad (3)$$
According to [10], the intensity detected by the photodetector has the following form:
$$I_0 = f\,[\cos(a_{m,n})]^2, \qquad (4)$$
where a_{m,n} is the coefficient of W_{m,n} and f is a function of all the mode coefficients except a_{m,n}. We then have the phase-only spatial light modulator (SLM) add an amount ϕ of the W_{m,n} mode to the incident wavefront and take another intensity measurement with the photodetector:
$$I_{m,n} = f\,[\cos(a_{m,n}-\phi)]^2. \qquad (5)$$
The coefficient of Wm,n can be solved from the following equation:

$$I_0\,[\cos(a_{m,n}-\phi)]^2 = I_{m,n}\,[\cos(a_{m,n})]^2. \qquad (6)$$

After modulating the incident wavefront with every Walsh-function-pattern phase mask, we can calculate the binary-aberration-mode coefficients. The incident wavefront can be reconstructed with these binary aberration modes.

2.2 Sub-wavefront expansion with binary aberration modes

Measuring a wavefront with a general distribution seems to exceed the ability of the technique discussed in Section 2.1. To retrieve a wavefront with a complicated continuous distribution, many Walsh functions must be adopted. The result in Fig. 2 shows that even a simple wavefront tilt needs hundreds of modes for an acceptable reconstruction. Meanwhile, using more modes brings higher wavefront sensing precision but a lower sensing rate. Given the limited speed of phase-only SLMs, the method's measurement rate can hardly meet the needs of sensing rapidly varying wavefronts. It is therefore rather difficult for this technique to achieve acceptable wavefront sensing precision and speed simultaneously.

 

Fig. 2 Wavefront tilt reconstruction with Walsh functions. (a) Standard wavefront tilt; (b) reconstructed wavefront with 16 Walsh functions; (c) residual wavefront for the 16-function reconstruction; (d) reconstructed wavefront with 64 Walsh functions; (e) residual wavefront for the 64-function reconstruction; (f) reconstructed wavefront with 256 Walsh functions; (g) residual wavefront for the 256-function reconstruction.


However, we have noticed that the wavefront over each subaperture of a SH wavefront sensor has a simple distribution containing only local tilt. In addition, SH wavefront sensing is a kind of indirect measurement: after measuring the sub-wavefront tilt along the two directions of the Cartesian coordinates, the wavefront over the entire aperture can be obtained by SH wavefront reconstruction algorithms [16,17]. Thus, we expanded the two forms of wavefront tilt (along the x axis and the y axis) with Walsh functions separately and obtained the result shown in Fig. 3. When expanding a wavefront tilt in the x direction, most Walsh-function coefficients remain zero except for several specific modes whose values change only along the x axis and that have an integer number of change cycles within the domain. Expanding a tilt in the y direction gives a similar result. More exactly, among the first 2n Walsh functions (n is a positive integer), only n modes contribute to the wavefront tilt. In other words, for retrieving the sub-wavefront tilt in a SH wavefront sensor, the number of Walsh functions to be detected is actually very small. However, as mentioned above, reconstructing a continuous wavefront directly with binary aberration modes is not a suitable solution, even though we have made clear which Walsh functions have nothing to do with the wavefront tilt. It is necessary to find a way to overcome this barrier and measure wavefront tilt precisely with fewer Walsh functions.
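This sparsity is easy to reproduce numerically. The following sketch (illustrative code with an assumed 8 × 8 sampling, not the authors' implementation) projects a pure x-tilt onto the first 64 two-dimensional Walsh modes; only the few modes that vary along x pick up nonzero coefficients:

```python
import numpy as np

# 1-D Walsh functions as rows of a Sylvester-ordered Hadamard matrix
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = H.shape[0]  # 8 samples per axis

x = (np.arange(N) + 0.5) / N      # sample points across the subaperture
tilt = x - x.mean()               # pure x-tilt with zero mean
T = np.tile(tilt, (N, 1))         # 2-D field: varies along x, constant in y

# Normalized overlap of the tilt with every 2-D Walsh mode h_m(y) h_n(x)
coeffs = np.array([[(T * np.outer(H[m], H[n])).mean() for n in range(N)]
                   for m in range(N)])

nonzero = sorted((int(m), int(n)) for m, n in np.argwhere(np.abs(coeffs) > 1e-12))
print(nonzero)   # only 3 of the 64 modes carry the tilt
```

With this sampling, only the three Rademacher-type modes (0, 1), (0, 2), and (0, 4) survive, consistent with the observation that very few Walsh modes are needed to represent a local tilt.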

 

Fig. 3 Wavefront tilt expansion with Walsh functions.


2.3 Analytical solution of sub-wavefront-slope coefficients

For SH wavefront reconstruction algorithms, the directly measurable quantities are the wavefront slopes over every subaperture. Therefore, we need not focus on the details of the sub-wavefront but only obtain the slope values of each subaperture. Since the Walsh functions form an orthogonal and complete set [12,13], the wavefront-tilt expansion discussed previously can be regarded as linear. Each expansion coefficient is proportional to the wavefront slope value:

$$S_x = p_x a_{0,1}, \qquad S_y = p_y a_{1,0}, \qquad (7)$$
where S_x and S_y are the slope values in the two directions, and p_x and p_y are the corresponding proportionality coefficients for the expansion coefficients of W_{0,1} and W_{1,0} (a_{0,1} and a_{1,0}). These two modes are selected because each is the main component of the wavefront-tilt expansion in its direction. According to Eq. (7), the slope in the x direction can be determined once the amount of the particular binary-aberration mode W_{0,1} is obtained, and likewise for the slope in the y direction. In other words, if the wavefront distribution contains only local tilt in two directions, the slope values along the two axes can be acquired by measuring only two Walsh functions. The wavefront over a subaperture happens to meet this condition. On this premise, we established a SH wavefront sensing scheme based on binary-aberration-mode filtering, whose optical arrangement is shown in Fig. 4. The incident light first passes through a phase mask generated by a phase-only SLM; the phase of the small beam over each subaperture is modulated by the chosen patterns separately. The lenslet array focuses the modulated light onto the corresponding pinholes, behind which photodetectors measure the total light intensities. The photodetectors correspond one-to-one with the subapertures, as do the pinholes. Using the patterns of W_{0,1} and W_{1,0}, the phase-only SLM changes the additional phase mask twice for every subaperture. Each photodetector thus records three different light intensities (no modulation, modulation by the W_{0,1} pattern, and modulation by the W_{1,0} pattern), from which the coefficients of W_{0,1} and W_{1,0} can be calculated. Using the proportionality relationships in Eq. (7), we can determine the wavefront slopes over the subapertures; the incident wavefront over the entire aperture is then retrieved by SH wavefront reconstruction algorithms. The mathematical model follows.

 

Fig. 4 Optical arrangement for Shack-Hartmann wavefront sensing without centroid detection.


As discussed in Section 2.1, we represent the sub-wavefront with Walsh functions:

$$E_{\mathrm{sub}}(x,y) = A_{\mathrm{sub}}\exp\!\Big(j\sum_{m,n} a_{m,n}W_{m,n}\Big). \qquad (8)$$
When the SLM adds no phase to the subapertures, as shown in Fig. 5(a), the measured intensity for the ith subaperture, I_0^{(i)}, has a form similar to Eq. (4):
$$I_0^{(i)} = f_{0,1}\,[\cos(a_{0,1}^{(i)})]^2 \quad\text{or}\quad I_0^{(i)} = f_{1,0}\,[\cos(a_{1,0}^{(i)})]^2. \qquad (9)$$
In the formula above, a_{0,1}^{(i)} and a_{1,0}^{(i)} represent the amounts of W_{0,1} and W_{1,0} that the sub-wavefront contains. If the SLM generates the phase patterns (W_{0,1} and W_{1,0}) for every subaperture with the same amplitude −β, as Figs. 5(b) and 5(c) show, the two intensities measured by the ith photodetector behind the ith pinhole, I_{0,1}^{(i)} and I_{1,0}^{(i)}, corresponding to phase modulation with W_{0,1} and W_{1,0} respectively, can be described as:
$$I_{0,1}^{(i)} = f_{0,1}\,[\cos(a_{0,1}^{(i)}-\beta)]^2 \quad\text{or}\quad I_{1,0}^{(i)} = f_{1,0}\,[\cos(a_{1,0}^{(i)}-\beta)]^2. \qquad (10)$$
From the expressions for I_0^{(i)}, I_{0,1}^{(i)}, and I_{1,0}^{(i)}, we have
$$a_{0,1}^{(i)} = \tan^{-1}\!\left(\frac{\sqrt{I_{0,1}^{(i)}/I_0^{(i)}}-\cos\beta}{\sin\beta}\right), \qquad a_{1,0}^{(i)} = \tan^{-1}\!\left(\frac{\sqrt{I_{1,0}^{(i)}/I_0^{(i)}}-\cos\beta}{\sin\beta}\right). \qquad (11)$$
The coefficients above indicate the magnitudes of W_{0,1} and W_{1,0} contained in the sub-wavefront of the ith subaperture. Considering that the major constituent of the wavefront over a subaperture is the local tilt, it is reasonable to attribute the whole W_{0,1} component to the tilt along the x axis and the whole W_{1,0} component to the tilt along the y axis. With the two obtained Walsh-function coefficients, a_{0,1}^{(i)} and a_{1,0}^{(i)}, we can get the wavefront slopes over the ith subaperture through Eq. (7). Of course, the proportionality coefficients p_x and p_y must be determined first. Expanding a normalized wavefront tilt with Walsh functions is a direct numerical way, but in fact they have theoretical values following from their physical meaning: the magnitudes of W_{0,1} and W_{1,0} give the ranges of the sub-wavefront values along the two directions. Assuming the radius of the entire aperture is r and the aperture has an N × N segmentation, the width of each subaperture is 2r/N. According to the definition of the gradient, the sub-wavefront gradients are:
$$G_x^{(i)} = \frac{2a_{0,1}^{(i)}}{2r/N} = \frac{N}{r}\,a_{0,1}^{(i)}, \qquad G_y^{(i)} = \frac{2a_{1,0}^{(i)}}{2r/N} = \frac{N}{r}\,a_{1,0}^{(i)}. \qquad (12)$$
Substituting Eq. (11) into Eq. (12), wavefront slopes over the ith subaperture can be calculated:
$$G_x^{(i)} = \frac{N}{r}\tan^{-1}\!\left(\frac{\sqrt{I_{0,1}^{(i)}/I_0^{(i)}}-\cos\beta}{\sin\beta}\right), \qquad G_y^{(i)} = \frac{N}{r}\tan^{-1}\!\left(\frac{\sqrt{I_{1,0}^{(i)}/I_0^{(i)}}-\cos\beta}{\sin\beta}\right). \qquad (13)$$
In conclusion, after modulating the phase twice and using Eq. (13), we can get the wavefront slopes over the subapertures without spot-centroid detection.
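The three-intensity recovery above can be sketched as follows, assuming the cos² detection model of Eqs. (9) and (10) with f normalized to 1 and example values N = 8, r = 1, β = π/4 (all illustrative choices, not taken from an experiment):

```python
import numpy as np

# Forward model of Eqs. (9)-(10): detected intensity ~ f * cos^2(coefficient),
# with f normalized to 1 for this illustrative check.
beta = np.pi / 4          # modulation amplitude
N, r = 8, 1.0             # example subaperture grid and aperture radius

a01_true = 0.3            # assumed W_{0,1} coefficient of the sub-wavefront
I0  = np.cos(a01_true)**2             # reading with no modulation
I01 = np.cos(a01_true - beta)**2      # reading after adding -beta * W_{0,1}

# Eq. (11): recover the coefficient from the two intensity readings
a01 = np.arctan((np.sqrt(I01 / I0) - np.cos(beta)) / np.sin(beta))
# Eq. (13): convert the coefficient to the x-slope over the subaperture
Gx = (N / r) * a01

print(a01, Gx)   # a01 recovers 0.3, so Gx = (N/r) * 0.3 = 2.4
```

The y-slope is obtained identically from the third reading, the intensity after W_{1,0} modulation.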

 

Fig. 5 Phase mask generated by phase-only SLM in centroid detection-less SH wavefront sensor.


3. Numerical example

3.1 Random wavefront reconstruction

To validate the proposed approach, a series of numerical simulations was carried out. The aperture of the SH wavefront sensor was circular, with the radius normalized to 1. The light aperture was divided into 8 × 8 subapertures, as illustrated in Fig. 6(a), where the circle shows the aperture (i.e., the definition domain) of the light field and the squares show the subaperture division. The subapertures in gray are fully illuminated and valid for wavefront reconstruction, giving thirty-two valid subapertures in total. The task of the simulation was to measure the wavefront slopes over these valid subapertures with our method and retrieve the distribution of the incident wavefront.
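The count of valid subapertures follows directly from the geometry: a subaperture is fully illuminated when its farthest corner lies inside the unit circle. A minimal sketch of this bookkeeping (our illustrative code):

```python
import numpy as np

N = 8                      # 8 x 8 subaperture grid over the unit-radius aperture
w = 2.0 / N                # subaperture width
valid = 0
for i in range(N):
    for j in range(N):
        # corner coordinates of subaperture (i, j)
        xs = -1.0 + w * np.array([i, i + 1])
        ys = -1.0 + w * np.array([j, j + 1])
        # distance of the farthest corner from the aperture center
        r_max = max(np.hypot(x, y) for x in xs for y in ys)
        if r_max <= 1.0:   # fully inside the circle -> valid for reconstruction
            valid += 1
print(valid)  # 32
```

The result matches the thirty-two valid subapertures used in the simulation.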

 

Fig. 6 The schematic diagram of (a) the aperture and subaperture division (subapertures in gray are valid) and (b) the incident wavefront distribution (unit in wavelength).


A random incident wavefront, shown in Fig. 6(b), was composed of the first twenty-three Zernike polynomials with stochastically generated coefficients. The phase-modulation amplitude β was π/4. Plugging these parameters into Eq. (13), we obtain the expressions for the subaperture slopes in terms of the measured intensities:

$$G_x^{(i)} = 8\tan^{-1}\!\left(\sqrt{2I_{0,1}^{(i)}/I_0^{(i)}}-1\right), \qquad G_y^{(i)} = 8\tan^{-1}\!\left(\sqrt{2I_{1,0}^{(i)}/I_0^{(i)}}-1\right). \qquad (14)$$
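As a quick numerical consistency check (a sketch assuming the cos² model with f normalized to 1), the general slope expression of Eq. (13) evaluated at β = π/4 and N/r = 8 agrees with this simplified form, since √2·√(I/I₀) = √(2I/I₀):

```python
import numpy as np

beta = np.pi / 4
N_over_r = 8.0                       # N = 8 subapertures, unit aperture radius

a = 0.2                              # example W_{0,1} coefficient
I0  = np.cos(a)**2                   # unmodulated intensity (f normalized to 1)
I01 = np.cos(a - beta)**2            # intensity after -beta modulation

# General expression of Eq. (13) and its beta = pi/4 simplification
g_general = N_over_r * np.arctan((np.sqrt(I01 / I0) - np.cos(beta)) / np.sin(beta))
g_special = N_over_r * np.arctan(np.sqrt(2 * I01 / I0) - 1)

print(np.isclose(g_general, g_special))  # True
```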

Based on the principle of binary-aberration-mode filtering, we calculated the wavefront slope values over every subaperture (shown in Fig. 7). Comparing the measured slope values with the initial values, we found the measured values accurate, with only very small differences between them, which demonstrates that the proposed sub-wavefront-slope measurement method is effective. With the measured slope values, we retrieved the incident wavefront by the Zernike modal wavefront reconstruction algorithm [16]. The reconstructed wavefront, shown in Fig. 8(a), is very consistent with the incident one and presents almost every detail of the wavefront distribution exactly. The reconstruction error, shown in Fig. 8(b), is suppressed significantly: the rms (0.0176λ) and pv (0.1670λ) values are both two orders of magnitude smaller than the initial ones. An obvious phase error appears in the top-left corner of Fig. 8(b). Given the accurately retrieved slopes and coefficients, this partial phase error is attributable to the particular distribution of the incident wavefront in Fig. 6(b); with a different input distribution, the phenomenon disappears. Figures 8(c) and 8(d) also present the accuracy of the coefficient reconstruction. These results demonstrate the wavefront sensing ability of the non-spot-detection SH wavefront sensor as well as the validity of its principle.

 

Fig. 7 Slope measurements over subapertures. (a) and (c) are comparisons of measured two-axes slope values with input values. (b) and (d) are slope measurement errors.


 

Fig. 8 Wavefront reconstruction through SH wavefront sensing based on binary-aberration-mode filtering for (a) reconstructed wavefront, (b) reconstruction error, (c) reconstructed coefficients and (d) coefficient reconstruction error (unit in wavelength).


3.2 Comparison with traditional SH wavefront sensing method

To better reflect the wavefront sensing capability of the proposed technique, reconstructing the same wavefront distribution with the traditional SH wavefront sensing method was also simulated. The aperture and the subaperture division remained the same. There were 128 × 128 pixels for each subaperture and 1024 × 1024 pixels in total. In view of the above-mentioned valid subapertures, only the central thirty-two focused spots were involved in the wavefront slope computation and the wavefront reconstruction algorithm.

The slope values calculated with the traditional method are given in Fig. 9. With a different principle of sub-wavefront-tilt measurement, the measured slope values over every subaperture differ slightly from the results in Section 3.1. The sub-wavefront slope measurements are still accurate but clearly have larger errors.

 

Fig. 9 Slope measurements over subapertures with traditional method. (a) is the comparison of measured two-axes slope values with input values and (b) is slope measurement error.


The reconstructed wavefront and coefficients are shown in Fig. 10. As expected, the traditional method also retrieves the Zernike-polynomial coefficients and the wavefront distribution very well. The pv and rms values of the wavefront reconstruction error are only about two percent of the input values, and the coefficient error is kept within 0.01 wavelength. However, as in the slope measurement comparison, the proposed approach performs better in controlling both the coefficient measurement error and the wavefront reconstruction error. The main reason for this disparity is that discrete pixel sampling is inevitable when imaging spots with cameras; it distorts the measured slopes and affects the wavefront reconstruction result to a significant degree. Furthermore, the proposed approach achieved these results using less than one ten-thousandth of the detecting units that the traditional method used.

 

Fig. 10 Wavefront reconstruction with traditional SH wavefront sensing method. (a) Reconstructed wavefront, (b) reconstruction error, (c) reconstructed coefficients and (d) coefficient reconstruction error (unit in wavelength).


For a SH wavefront sensor, a high-quality camera helps enhance the wavefront sensing performance. At present, with the performance improvement of cameras (e.g., sCMOS, EMCCD), a SH wavefront sensor can easily work at high frame rates (on the order of kHz) with high sensitivity. However, if a higher wavefront sensing rate is required, the SH wavefront sensor must rely on a faster camera. For our method, another scheme can be adopted to achieve an extremely high measurement rate. Essentially, three measurements are required for each photodetector. To remove the phase-only modulator, we could divide the light into three parts, each modulated by a different fixed phase mask corresponding to one of the patterns in Fig. 5. Since the phase masks are fixed and there is no movable or modulating device, the photodetectors can realize their full potential in working frequency, so our method has good potential for high-speed measurement. On the other hand, sCMOS and EMCCD cameras also have remarkable sensitivity and SNR, roughly at the same level as single photodetectors. However, our method can focus all the light in a subaperture onto one detecting unit, whereas a conventional SH wavefront sensor has to spread the light over several pixels to form a spot; if a SH wavefront sensor tried to detect the spot position with just one pixel, its measurement precision would decline unavoidably. By comparison, the measurement precision of our method is largely independent of the number of detecting units, whose quantity is fixed and related only to the subaperture division. Furthermore, single photodetectors can achieve high sensitivity and SNR more easily and at lower cost.

3.3 Wavefront reconstruction with measurement noise

As mentioned previously, an important application of this technique is to sense wavefronts in weak light, where measurements are usually affected by noise. To be more practical, the data in our modelling should be inherently noisy. We therefore added Gaussian noise (zero mean, standard deviation 0.01) to the data in Section 3.1 and reconstructed the input wavefront of Fig. 6(b). The measurement results are given in Fig. 11.
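The sensitivity to intensity noise can be sketched with a small Monte Carlo run (illustrative code; the coefficient value and the f := 1 normalization are assumptions): zero-mean Gaussian noise with σ = 0.01 is added to the two intensity readings before the coefficient is recovered via Eq. (11):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.pi / 4
a_true = 0.3                          # assumed W_{0,1} coefficient (radians)
I0_clean  = np.cos(a_true)**2         # cos^2 model, f normalized to 1
I01_clean = np.cos(a_true - beta)**2  # reading after -beta modulation

def recover(I0, I01):
    """Eq. (11): coefficient from the unmodulated/modulated intensity pair."""
    return np.arctan((np.sqrt(I01 / I0) - np.cos(beta)) / np.sin(beta))

sigma = 0.01   # zero-mean Gaussian detector noise, as in this section
errs = np.abs([recover(I0_clean + rng.normal(0, sigma),
                       I01_clean + rng.normal(0, sigma)) - a_true
               for _ in range(10_000)])
print(errs.mean())   # intensity noise maps directly into coefficient error
```

Because the slope is inferred from an intensity ratio rather than a centroid shift, any intensity noise propagates straight into the recovered coefficient, which is why high-SNR photodetectors are desired.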

 

Fig. 11 Wavefront reconstruction with measurement noise. (a) Coefficient reconstructed error, (b) wavefront reconstruction error (unit in wavelength).


From Fig. 11, it is obvious that the measurement error increases, but the pv and rms values still prove the effectiveness of the reconstruction, though it is not as accurate as the results in Fig. 8. These results are consistent with our expectation, because the basic principle of measuring slopes in our method is to perceive changes in light energy, not spot centroid shifts. Therefore, as every method has its limitations, the measurement result of our method is relatively sensitive to noise, and photodetectors with high SNR are desired; different SNR performance will bring different measurement precision. From another point of view, the above-mentioned advantages of our method also derive from detecting the amount of light energy: since almost all the energy in a subaperture is focused on just one detector, our method can measure signals that are too weak to be sensed by a conventional SH wavefront sensor.

4. Conclusion

We have presented a SH wavefront sensing technique based on binary-aberration-mode filtering and detection. A new approach for measuring sub-wavefront slopes without centroid detection was proposed. The results show that the slope measurement can be achieved by simple phase modulation and intensity detection. Moreover, the measured slopes and the reconstructed wavefront indicate better accuracy than the conventional method under identical conditions. This approach obtains high precision with only one light-detecting unit per subaperture, and the wavefront sensing rate can be improved further by splitting the light beam. Therefore, the new SH wavefront sensing technique should be a useful scheme for weak-light, high-precision wavefront sensing with a small number of single photodetectors.

The measurement speed is very important for practical applications. Photodetectors can reach very high measurement speeds, but the SLM used in this method ultimately limits the wavefront sensing speed. Unlike the method in [10], our method needs the SLM to change its phase mask only three times, with three measurements per photodetector, so the wavefront sensing rate is one-third of the SLM's modulation rate. At present, there are two main types of phase-only SLMs: liquid-crystal spatial light modulators and deformable mirrors. Their highest modulation frequencies are on the order of kHz, so the fastest wavefront sensing speed of this scheme could reach the kHz level. Since the polarization characteristics of liquid crystals cause energy loss, the deformable mirror is the better choice.

Further study will focus on analyzing the dynamic range of this method and the factors that could affect the wavefront measurement precision, including the detection noise and accuracy of the photodetectors, the practical filtering efficiency of the pinholes, the misalignment between the phase masks and the lenslet array, and so on. Moreover, the light efficiency and sensitivity should be characterized. Most importantly, we will design a SH wavefront sensor based on this principle and carry out a principle-verification experiment.

Acknowledgments

We thank Lanxuan Gao for special support. This work is supported by the National Natural Science Foundation of China (Grant No. 11173008), the National Key Scientific and Research Equipment Development Project of China (ZDYZ2013-2), the Preeminent Youth Fund of Sichuan Province (No. 2012JQ0012), and the Outstanding Youth Science Fund of CAS.

References and links

1. J. Hartmann, “Objektivuntersuchungen,” Zt. Instrumentenkd. 24(1), 1 (1904).

2. B. C. Platt and R. V. Shack, “Lenticular Hartmann Screen,” Opt. Sci. Newsl. 5, 15–16 (1971).

3. B. R. Masters, “Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo,” Opt. Express 3(9), 332–338 (1998).

4. J. Gourlay, G. D. Love, P. M. Birch, R. M. Sharples, and A. Purvis, “A real-time closed-loop liquid crystal adaptive optics system: first results,” Opt. Commun. 137(1–3), 17–21 (1997).

5. J. T. Salmon, E. S. Bliss, T. W. Long, E. L. Orham, R. W. Presta, C. D. Swift, and R. L. Ward, “Real time wavefront correction system using a zonal deformable mirror and a Hartmann sensor,” Proc. SPIE 1542, 2–17 (1991).

6. J. Liang, B. Grimm, S. Goelz, and J. F. Bille, “Objective measurement of wave aberrations of the human eye with the use of a Hartmann-Shack wave-front sensor,” J. Opt. Soc. Am. A 11(7), 1949–1957 (1994).

7. T. Li, L. Huang, and M. Gong, “Wavefront sensing for a nonuniform intensity laser beam by Shack–Hartmann sensor with modified Fourier domain centroiding,” Opt. Eng. 53(4), 044101 (2014).

8. B. C. Platt and R. V. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17(5), S573–S577 (2001).

9. Y. Hayano, H. Takami, W. Gaessler, N. Takato, M. Goto, Y. Kamata, Y. Minowa, N. Kobayashi, and M. Iye, “Upgrade plans for Subaru AO system,” Proc. SPIE 4839, 32–43 (2003).

10. F. Wang, “Wavefront sensing through measurements of binary aberration modes,” Appl. Opt. 48(15), 2865–2870 (2009).

11. F. Wang, “Control of deformable mirror with light-intensity measurements through single-mode fiber,” Appl. Opt. 49(31), G60–G66 (2010).

12. K. G. Beauchamp, Walsh Functions and Their Applications (Academic, 1975).

13. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd Edition (Pearson Education, 2010).

14. M. J. Booth, “Wave front sensor-less adaptive optics: a model-based approach using sphere packings,” Opt. Express 14(4), 1339–1352 (2006).

15. T. Wilson and C. J. R. Sheppard, Theory and Practice of Scanning Optical Microscopy (Academic, 1984).

16. W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70(8), 998–1006 (1980).

17. R. G. Lane and M. Tallon, “Wave-front reconstruction using a Shack-Hartmann sensor,” Appl. Opt. 31(32), 6902–6908 (1992).

References

  1. J. Hartmann, “Objektivuntersuchungen,” Zt. Instrumentenkd. 24(1), 1 (1904).
  2. B. C. Platt and R. V. Shack, “Lenticular Hartmann Screen,” Opt. Sci. Newsl. 5, 15–16 (1971).
  3. B. R. Masters, “Three-dimensional microscopic tomographic imaging of the cataract in a human lens in vivo,” Opt. Express 3(9), 332–338 (1998).
  4. J. Gourlay, G. D. Love, P. M. Birch, R. M. Sharples, and A. Purvis, “A real-time closed-loop liquid crystal adaptive optics system: first results,” Opt. Commun. 137(1–3), 17–21 (1997).
  5. J. T. Salmon, E. S. Bliss, T. W. Long, E. L. Orham, R. W. Presta, C. D. Swift, and R. L. Ward, “Real time wavefront correction system using a zonal deformable mirror and a Hartmann sensor,” Proc. SPIE 1542, 2–17 (1991).
  6. J. Liang, B. Grimm, S. Goelz, and J. F. Bille, “Objective measurement of wave aberrations of the human eye with the use of a Hartmann-Shack wave-front sensor,” J. Opt. Soc. Am. A 11(7), 1949–1957 (1994).
  7. T. Li, L. Huang, and M. Gong, “Wavefront sensing for a nonuniform intensity laser beam by Shack–Hartmann sensor with modified Fourier domain centroiding,” Opt. Eng. 53(4), 044101 (2014).
  8. B. C. Platt and R. V. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17(5), S573–S577 (2001).
  9. Y. Hayano, H. Takami, W. Gaessler, N. Takato, M. Goto, Y. Kamata, Y. Minowa, N. Kobayashi, and M. Iye, “Upgrade plans for Subaru AO system,” Proc. SPIE 4839, 32–43 (2003).
  10. F. Wang, “Wavefront sensing through measurements of binary aberration modes,” Appl. Opt. 48(15), 2865–2870 (2009).
  11. F. Wang, “Control of deformable mirror with light-intensity measurements through single-mode fiber,” Appl. Opt. 49(31), G60–G66 (2010).
  12. K. G. Beauchamp, Walsh Functions and Their Applications (Academic, 1975).
  13. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Pearson Education, 2010).
  14. M. J. Booth, “Wave front sensor-less adaptive optics: a model-based approach using sphere packings,” Opt. Express 14(4), 1339–1352 (2006).
  15. T. Wilson and C. J. R. Sheppard, Theory and Practice of Scanning Optical Microscopy (Academic, 1984).
  16. W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70(8), 998–1006 (1980).
  17. R. G. Lane and M. Tallon, “Wave-front reconstruction using a Shack-Hartmann sensor,” Appl. Opt. 31(32), 6902–6908 (1992).



Figures (11)

Fig. 1
Fig. 1 Principle of wavefront sensing through binary phase modulation. (a) The original square Walsh functions (Wm,n) with only two values, 1 and −1, indicated by the bright and dark areas and (b) the optical arrangement.
Fig. 2
Fig. 2 Wavefront tilt reconstruction with Walsh functions. (a) Standard wavefront tilt; (b) reconstructed wavefront with 16 Walsh functions; (c) residual wavefront for the 16-Walsh-function reconstruction; (d) reconstructed wavefront with 64 Walsh functions; (e) residual wavefront for the 64-Walsh-function reconstruction; (f) reconstructed wavefront with 256 Walsh functions; (g) residual wavefront for the 256-Walsh-function reconstruction.
Fig. 3
Fig. 3 Wavefront tilt expansion with Walsh functions.
Fig. 4
Fig. 4 Optical arrangement for Shack-Hartmann wavefront sensing without centroid detection.
Fig. 5
Fig. 5 Phase mask generated by the phase-only SLM in the centroid-detection-less SH wavefront sensor.
Fig. 6
Fig. 6 (a) Schematic diagram of the aperture and subaperture division (subapertures in gray are valid); (b) incident wavefront distribution (unit in wavelength).
Fig. 7
Fig. 7 Slope measurements over subapertures. (a) and (c) are comparisons of measured two-axes slope values with input values. (b) and (d) are slope measurement errors.
Fig. 8
Fig. 8 Wavefront reconstruction through SH wavefront sensing based on binary-aberration-mode filtering for (a) reconstructed wavefront, (b) reconstruction error, (c) reconstructed coefficients and (d) coefficient reconstruction error (unit in wavelength).
Fig. 9
Fig. 9 Slope measurements over subapertures with traditional method. (a) is the comparison of measured two-axes slope values with input values and (b) is slope measurement error.
Fig. 10
Fig. 10 Wavefront reconstruction with traditional SH wavefront sensing method. (a) Reconstructed wavefront, (b) reconstruction error, (c) reconstructed coefficients and (d) coefficient reconstruction error (unit in wavelength).
Fig. 11
Fig. 11 Wavefront reconstruction with measurement noise. (a) Coefficient reconstructed error, (b) wavefront reconstruction error (unit in wavelength).

Equations (14)

$$I = D_0 \left| \int_A \Phi(x,y)\,dx\,dy \right|^2,$$

$$\int_A W_{m,n}\,dx\,dy,$$

$$E(x,y) = E_0 \exp\!\Big(j \sum_{m,n} a_{m,n} W_{m,n}\Big).$$

$$I_0 = f\,[\cos(a_{m,n})]^2,$$

$$I_{m,n} = f\,[\cos(a_{m,n} - \phi)]^2.$$

$$I_0\,[\cos(a_{m,n} - \phi)]^2 = I_{m,n}\,[\cos(a_{m,n})]^2,$$

$$S_x = p_x a_{0,1}, \qquad S_y = p_y a_{1,0},$$

$$E_{\mathrm{sub}}(x,y) = A_{\mathrm{sub}} \exp\!\Big(j \sum_{m,n} a_{m,n} W_{m,n}\Big).$$

$$I_0^{(i)} = f_{0,1}\,\big[\cos\big(a_{0,1}^{(i)}\big)\big]^2 \quad \text{or} \quad I_0^{(i)} = f_{1,0}\,\big[\cos\big(a_{1,0}^{(i)}\big)\big]^2.$$

$$I_{0,1}^{(i)} = f_{0,1}\,\big[\cos\big(a_{0,1}^{(i)} + \beta\big)\big]^2 \quad \text{or} \quad I_{1,0}^{(i)} = f_{1,0}\,\big[\cos\big(a_{1,0}^{(i)} + \beta\big)\big]^2.$$

$$a_{0,1}^{(i)} = \tan^{-1}\!\left(\frac{I_{0,1}^{(i)}/I_0^{(i)} - \cos\beta}{\sin\beta}\right), \qquad a_{1,0}^{(i)} = \tan^{-1}\!\left(\frac{I_{1,0}^{(i)}/I_0^{(i)} - \cos\beta}{\sin\beta}\right).$$

$$G_x^{(i)} = \frac{2 a_{0,1}^{(i)}}{2r/N} = \frac{N}{r}\, a_{0,1}^{(i)}, \qquad G_y^{(i)} = \frac{2 a_{1,0}^{(i)}}{2r/N} = \frac{N}{r}\, a_{1,0}^{(i)}.$$

$$G_x^{(i)} = \frac{N}{r} \tan^{-1}\!\left(\frac{I_{0,1}^{(i)}/I_0^{(i)} - \cos\beta}{\sin\beta}\right), \qquad G_y^{(i)} = \frac{N}{r} \tan^{-1}\!\left(\frac{I_{1,0}^{(i)}/I_0^{(i)} - \cos\beta}{\sin\beta}\right).$$

$$G_x^{(i)} = 8 \tan^{-1}\!\left(1 - 2\, I_{0,1}^{(i)}/I_0^{(i)}\right), \qquad G_y^{(i)} = 8 \tan^{-1}\!\left(1 - 2\, I_{1,0}^{(i)}/I_0^{(i)}\right).$$
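As a minimal sketch, the per-subaperture slope recovery defined by the equations above can be written in a few lines of Python. The intensity values, the bias β, and the geometry parameters N and r used below are hypothetical illustration values, not figures from the paper:

```python
import numpy as np

def local_slopes(I0, I01, I10, beta, N, r):
    """Recover local wavefront slopes of one subaperture from three
    intensity measurements, following the mode-coefficient and slope
    equations above.

    I0       : intensity with no bias applied to the Walsh modes
    I01, I10 : intensities with bias beta added to W_{0,1} and W_{1,0}
    beta     : known phase bias (radians)
    N, r     : Walsh-grid order and aperture half-width (slope scaling N/r)
    """
    # mode coefficients a_{0,1}, a_{1,0} from the intensity ratios
    a01 = np.arctan((I01 / I0 - np.cos(beta)) / np.sin(beta))
    a10 = np.arctan((I10 / I0 - np.cos(beta)) / np.sin(beta))
    # local slopes: G = (N / r) * a
    return (N / r) * a01, (N / r) * a10

# hypothetical measurement values, for illustration only
Gx, Gy = local_slopes(I0=1.0, I01=0.6, I10=0.8, beta=np.pi / 4, N=8, r=1.0)
```

With these illustrative inputs, I01 below I0·cosβ yields a negative x-slope and I10 above it a positive y-slope, showing how the sign of the intensity change encodes the tilt direction.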
