
Extended source pyramid wave-front sensor for the human eye


Abstract

We describe a new wave-front sensor based on the previously proposed pyramid sensor. The new sensor uses an extended source instead of a point-like source, thereby avoiding the oscillation of the pyramid. After introductory background, the operation of the sensor is described. Among other possible optical-testing uses, we apply the sensor to measure the wave-front aberration of the human eye. An experimental system built to test this specific application is described. Results obtained both in an artificial eye and in a real eye are presented. A discussion of the sensor characteristics, the experimental results, and prospects for future work is also included.

©2002 Optical Society of America

1. Introduction

Several promising devices for the human eye using adaptive optics techniques have been proposed [1, 2], all requiring wave-front sensors with fast acquisition rates. To date, the only sensor used in the eye for adaptive optics applications has been the Hartmann-Shack sensor [3, 4]. The oscillating pyramid proposed by Ragazzoni [5], not yet investigated for the eye, is a technique based on a different concept with characteristics that can be advantageous. Like the Hartmann-Shack, this sensor provides instantaneous local information on the first derivative of the wave-front aberration. Unlike the Hartmann-Shack, in the oscillating prism sensor both the sampling and the dynamic range are adjustable, properties that could be useful for practical applications of adaptive optics in the eye. In this paper we investigate the use of a modified version of the oscillating prism sensor as a fast measuring device for the wave-front aberration of the living human eye and other optical systems.

2. Sensor description

2.1 Background: The oscillating prism sensor

Figure 1 shows the basic elements of the oscillating prism sensor. As depicted in Fig. 1(a), a lens, L1, is used to form the Fourier transform of the pupil function of the optical system under study. As drawn in Fig. 1(b), at this plane, one focal length from the lens, a four-faceted glass pyramid with a large vertex angle is placed. By introducing four different tilts, the pyramid splits and angularly separates the field into four parts. A second lens, L2 in Fig. 1(a), conjugates the exit pupil plane of the optical system with a new plane where an intensity sensor, such as a CCD, is placed. As shown in Fig. 1(c), if the system is aberration-free and diffraction effects are neglected, the sensor acquires four copies of the aperture with binary intensity. If the system suffers aberrations, the four pupil images are no longer equal, and from the relative point-to-point intensity differences the local gradient can be computed.

Fig. 1. (a) Schematic of the wave-front sensor, where L1 denotes the lens used to form the Fourier transform of the probed field at its focal plane, where the pyramid is placed. A second lens, L2, is placed behind the pyramid to focus the exit pupil of the optical system under study onto an intensity detector. (b) The glass pyramid. (c) A different view of the sensor; A, B, C and D label the four re-imaged pupils.

The gradient computation from the intensity can be introduced using the ray-aberration concept instead of the wave aberration. If a ray leaving a particular exit pupil location suffers no aberration, it reaches the origin of the Fourier plane. In this situation, it can be assumed that the pyramid splits the ray into exactly four equal rays, each arriving at the sensor plane with the same relative location, as depicted in Fig. 1(c). However, if the same ray suffers aberration, it reaches the Fourier plane at coordinates given by

$$(\xi,\zeta)=f\left(\frac{\partial w}{\partial x},\frac{\partial w}{\partial y}\right)=f\,\vec{\nabla}w,\qquad(1)$$

where f is the focal length of the focusing lens, L1 in Fig. 1(a), and w is the wave aberration function at the exit pupil coordinates (x, y). Conceptually, if it were possible to use a multiplicity of quad-cell sensors centered at the Fourier-plane origin, each providing independent signals for each ray, then from these signals the coordinates (ξ, ζ) could be obtained for each ray. Thus, as Eq. (1) states, the wave-aberration gradient at each set of pupil coordinates could be calculated. This hypothetical system is not realizable in practice; however, the prism deviating the rays together with the pupil-relay optical setup plays a similar role. It can be understood as an indirect means of obtaining a set of independent signals for each ray, the pixel intensities registered by the CCD sensor, closely related to the signals of a quad-cell. To better understand this, consider a particular ray with a non-zero wave-aberration gradient. The ray reaches only one of the four facets. Consequently, just one of the four associated pixels for the given pupil position has non-zero intensity. From this information it is possible to know the quadrant in which the tip of the vector ∇⃗w lies. However, this signal is independent of the gradient modulus; in other words, the sensor is saturated whatever the aberration, provided it is non-zero.
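To make the saturation argument concrete, here is a minimal sketch (in Python) of the static, centered pyramid acting as a pure quadrant detector. The mapping of gradient signs to the facet labels A-D is an arbitrary choice for illustration; the text does not fix this convention.

```python
def facet_hit(wx, wy):
    """Facet reached by a ray with local wave-aberration gradient
    (wx, wy) for a static pyramid centered on the Fourier-plane origin.
    The quadrant-to-label assignment (A-D, cf. Fig. 1(c)) is arbitrary."""
    if wx >= 0 and wy >= 0:
        return "A"
    if wx < 0 and wy >= 0:
        return "B"
    if wx >= 0 and wy < 0:
        return "C"
    return "D"

# Rays with very different gradient moduli give the same signal,
# illustrating the saturation described above:
print(facet_hit(0.001, 0.002), facet_hit(10.0, 20.0))  # -> A A
```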

Fig. 2. (a) A simple linear oscillation path in the Fourier plane. The pyramid vertex, marked with a crosshair, follows the dotted line around the origin. (b) A simple extended emitter with binary intensity (white means light, black means no light), modeled (see text) as an infinite collection of oscillation paths like the one in panel (a), with different amplitudes. (c) The dotted line represents the normalized sensor response for the path of panel (a); the solid line represents the response for the extended source of panel (b).

One solution to overcome the saturation is to oscillate the pyramid [5]. The oscillation is performed so that the pyramid vertex follows a point-symmetric closed path, without rotation, around given Fourier-plane coordinates, not necessarily the axis coordinates. While the pyramid moves, the CCD integrates over at least one complete oscillation cycle. With this new sensor concept, the values of the four related pixels on the four re-imaged pupils are balanced according to the relative gradient modulus, i.e., to the distance between the ray intersection with the Fourier plane and the oscillation center. At this stage, the signal from a particular set of four associated pixels is fully equivalent to the quad-cell signals needed to compute the (ξ, ζ) coordinates for each ray. In particular, the gradient components are obtained from the pupil intensities using the standard quad-cell computation, i.e., using the expressions

$$\left.\frac{\partial w}{\partial x}\right|_{i,j}\propto\frac{c_{i,j}+d_{i,j}-(a_{i,j}+b_{i,j})}{a_{i,j}+b_{i,j}+c_{i,j}+d_{i,j}},\qquad\left.\frac{\partial w}{\partial y}\right|_{i,j}\propto\frac{a_{i,j}+c_{i,j}-(b_{i,j}+d_{i,j})}{a_{i,j}+b_{i,j}+c_{i,j}+d_{i,j}},\qquad(2)$$

where a_{i,j}, b_{i,j}, c_{i,j}, and d_{i,j} represent the values of the pixels indexed by i and j in the four pupils labeled A, B, C and D, as shown in Fig. 1(c).
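As an illustration, the quad-cell computation of Eq. (2) maps directly onto array operations over the four cropped pupil images. A minimal sketch, assuming the four images have already been registered pixel-to-pixel:

```python
import numpy as np

def quadcell_gradients(a, b, c, d, eps=1e-12):
    """Per-pixel wave-front slopes from the four re-imaged pupils,
    following Eq. (2). a, b, c, d are 2-D arrays of pixel intensities
    cropped from the CCD frame at the pupil positions A, B, C, D of
    Fig. 1(c). The result is proportional to the gradient; an external
    calibration factor converts it to physical units."""
    s = a + b + c + d
    wx = (c + d - (a + b)) / (s + eps)  # eps guards pixels with no light
    wy = (a + c - (b + d)) / (s + eps)
    return wx, wy
```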

The sampling can be controlled by changing the scaling of the re-imaged exit pupil on the CCD plane. On the other hand, as Eq. (1) states, the gain of the sensor depends on the focal length f. Moreover, the sensor response curve depends strongly on the particular path the pyramid vertex follows. As an example, Fig. 2(a) shows a simple linear path of amplitude Δ. The pyramid vertex, represented with a crosshair symbol, oscillates around the center of the Fourier plane. Then, without loss of generality, consider the sensor response for a given ray when the gradient changes only in the x direction, equivalently a variation in ξ. Over one oscillation cycle, the intensity of the pixel (i, j) on the four re-imaged pupils is

$$a_{i,j}=b_{i,j}=\begin{cases}0 & \xi>\Delta\\ 2(\Delta-\xi) & -\Delta\le\xi\le\Delta\\ 2\cdot2\Delta & \xi<-\Delta\end{cases}\qquad c_{i,j}=d_{i,j}=\begin{cases}2\cdot2\Delta & \xi>\Delta\\ 2(\Delta+\xi) & -\Delta\le\xi\le\Delta\\ 0 & \xi<-\Delta\end{cases}\qquad(3)$$

Thus, using Eq. (2), the gradient is proportional to

$$\left.\frac{\partial w}{\partial x}\right|_{i,j}\propto\begin{cases}1 & \xi>\Delta\\ \xi/\Delta & -\Delta\le\xi\le\Delta\\ -1 & \xi<-\Delta\end{cases}\qquad(4)$$

The dotted line of Fig. 2(c) shows the normalized sensor response computed using Eq. (4). A change in the oscillation amplitude changes the gain and also limits the range: the sensor becomes saturated when an aberrated ray reaches the Fourier plane outside the path (ξ outside the [-Δ, Δ] interval). It must be noticed that the path need not be centered on the optical axis; de-centering simply means that the sensor measures a signal corresponding to a global tilt that is not present in the optical system under study. The linear path used here is not the only possibility; other less simple but more practical paths, such as a circular one [5], are possible, with the primary effect of de-linearizing the response.
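A quick numerical check of this behavior can be made by integrating the signal of a single ray over one linear oscillation cycle. The sketch below is our own illustration, with an arbitrary sign convention for which facet pair receives the ray; it reproduces the response of Eq. (4):

```python
import numpy as np

# One ray displaced by xi in the Fourier plane; the vertex sweeps a
# linear path of amplitude delta at uniform speed (Fig. 2(a)).
delta = 1.0
vertex = np.linspace(-delta, delta, 100001)   # vertex positions, one sweep
for xi in (-1.5, -0.5, 0.25, 1.5):
    cd = np.mean(vertex < xi)                 # fraction of cycle on C/D
    ab = 1.0 - cd                             # remainder on A/B
    response = (2*cd - 2*ab) / (2*ab + 2*cd)  # Eq. (2) with a=b, c=d
    print(f"xi = {xi:+.2f} -> response = {response:+.3f}")
# prints -1.000, -0.500, +0.250, +1.000: xi/delta inside the path and
# saturation outside it, as in Eq. (4)
```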

2.2 The extended source pyramid sensor

Conceptually, one can imagine an alternative to oscillating the pyramid: to leave the pyramid static and oscillate the field over the pyramid. A closely related situation occurs in the incoherent image formation of an extended object. Consequently, let us consider the same optical system of Fig. 1, substituting an extended incoherent source for the point-like source and leaving the pyramid static at some coordinates of the Fourier plane. In that case, assuming isoplanatism, the field in the Fourier plane can be thought of as an ensemble of infinitely many copies of the field, incoherent with each other, each transporting a distinct energy according to the source intensity distribution. Therefore, in one exposure time, this new system "probes" simultaneously infinitely many energy-balanced, equally aberrated fields against the static pyramid; a process similar to the combination of a point-like source, an oscillating pyramid, and a long enough integration time. In fact, if the object is an ideal one-dimensional, point-symmetric, incoherently emitting curve, the two systems are exactly equivalent.

The response of this new system can be calculated by modeling the object as an infinite collection of oscillation curves filling the object extension. As a result, together with the incoherent emission, an additional restriction on the source arises: in order to preserve the symmetry of the sensor response with ξ and ζ, the source must be point-symmetric.

Fig. 2(b) shows an example of a simple uniform extended source based on the linear oscillation path of Fig. 2(a), consisting of a flat square of extension Δ. In this case, when only the gradient in the x direction changes its magnitude, the values of a particular set of associated pixels of the four re-imaged pupils can be computed by integrating the single-path intensities of Eq. (3) over path amplitudes up to Δ:

$$a_{i,j}=b_{i,j}=\begin{cases}0 & \xi>\Delta\\ \tfrac{1}{2}(\Delta-\xi)^{2} & -\Delta\le\xi\le\Delta\\ 2\Delta^{2} & \xi<-\Delta\end{cases}\qquad c_{i,j}=d_{i,j}=\begin{cases}2\Delta^{2} & \xi>\Delta\\ \tfrac{1}{2}(\Delta^{2}+2\xi\Delta-\xi^{2}) & -\Delta\le\xi\le\Delta\\ 0 & \xi<-\Delta\end{cases}\qquad(5)$$

thus producing the following sensor response,

$$\left.\frac{\partial w}{\partial x}\right|_{i,j}\propto\begin{cases}1 & \xi>\Delta\\ (2\Delta-\xi)\,\xi/\Delta^{2} & -\Delta\le\xi\le\Delta\\ -1 & \xi<-\Delta\end{cases}\qquad(6)$$

In the region where |ξ| is significantly smaller than Δ, Eq. (6) can be approximated by

$$\left.\frac{\partial w}{\partial x}\right|_{i,j}\propto\begin{cases}1 & \xi>\Delta\\ 2\xi/\Delta & -\Delta\le\xi\le\Delta\\ -1 & \xi<-\Delta\end{cases}\qquad(7)$$

a linear response with double the gain of the oscillating-pyramid case of Eq. (4).

The solid line of Fig. 2(c) represents the normalized response, computed using Eq. (6) for this particular source geometry. Although there is still a region with linear behavior, the linear range is reduced compared with the oscillating prism. Adjusting the focal length f, or changing the object extension Δ, allows the gradient range to be matched to the linear regime. In any case, if the signal were used to drive a wave-aberration compensation device, it would not be compulsory to use the sensor only in the linear regime.
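The same construction can be verified numerically by summing the single-path intensities of Eq. (3) over path amplitudes before forming the quad-cell ratio; each path carries an energy proportional to its length, which is the weighting implicit in Eq. (5). A sketch under those assumptions:

```python
import numpy as np

# Static pyramid with the extended source of Fig. 2(b), modeled as a
# collection of linear paths with amplitudes from 0 to delta.
delta = 1.0
amps = np.linspace(1e-4, delta, 20000)            # path amplitudes

def response(xi):
    # per-path intensities from Eq. (3), with each amplitude in amps
    # playing the role of delta; clipped to the physical range [0, 4*amp]
    ab = 2 * np.clip(amps - xi, 0, 2 * amps)      # a = b
    cd = 2 * np.clip(amps + xi, 0, 2 * amps)      # c = d
    return (cd.sum() - ab.sum()) / (cd.sum() + ab.sum())

for xi in (0.05, 0.2, 0.5, 1.0):
    exact = (2 * delta - xi) * xi / delta**2      # Eq. (6), 0 <= xi <= delta
    print(f"xi={xi:.2f}: simulated {response(xi):+.4f}, "
          f"Eq.(6) {exact:+.4f}, Eq.(7) approx {2*xi/delta:+.4f}")
```

For small ξ the simulated values match the 2ξ/Δ slope of Eq. (7), while near ξ = Δ the quadratic term of Eq. (6) becomes visible.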

This especially simple extended source has been chosen for the sake of clarity; a similar response can be expected for other point-symmetric objects whose intensity distribution is not necessarily constant.

Fig. 3. Schematic of the wave-front measuring apparatus. S, electronic shutter; SF, spatial filter; RD, rotating diffusers; A1, A2, apertures; L1-L6, lenses; M1-M4, mirrors (M2 and M3 on a translation stage); BS, beam splitter; CCD, charge-coupled device.

3. Experimental results

3.1 Artificial eye

We implemented the described sensor by building the electro-optical system depicted in Fig. 3. A He-Ne laser (543 nm) was used as the illumination source. A pair of independent rotating diffusers (RD) of frosted glass was used to produce a time-varying random phase. A lens system, L1-L2, and the beam splitter, BS, collect the light emerging from the diffusers to illuminate an extended area on the retina. The effect of the rotating diffusers is to rapidly change the speckle pattern on the retina. Then, for a long enough exposure time, one can assume that the optical system of the eye is measured using an extended incoherent source with a Gaussian intensity profile emitting from the retina.

The light reflected back from the eye passes through a Badal optometer (mirrors M1, M2, M3 and M4 and lenses L2 and L3), in which mirrors M2 and M3 are mounted on a translation stage, allowing us to introduce or correct defocus by displacing them. After the Badal setup, the light passes through lens L4, which plays the role of lens L1 in Fig. 1. The glass pyramid then splits the beam, and lens L5, equivalent to the relay lens L2 in Fig. 1, re-images the pupil onto the CCD detector. In order to have light in the system only while the sensor integrates, the CCD operation is synchronized with an electronic shutter (S in Fig. 3) placed immediately after the laser aperture. Finally, to control the pupil size, an aperture, A2, is placed in a plane conjugate with the natural pupil of the eye.

Software was developed to process the acquired images. First, an image-processing module finds the central coordinates of the four images of the exit pupil. This calculation needs to be done only once, at the beginning of the wave-front measurements. The obtained pupil coordinates are used to crop four images, corresponding to the matrices A, B, C and D (see Fig. 1(c)). After the computation of Eq. (2), the computer integrates the phase from the gradient data [6,7], using an implementation of the Singular Value Decomposition (SVD) [8] to calculate the matrix pseudo-inverse. Finally, the program outputs the wave aberration expressed as coefficients of the Zernike expansion [9], excluding the piston, since the sensor is insensitive to this term, and the tip and tilt, given that we have no precise control of the transversal position of the pyramid vertex. Note that after the first image, the integration of subsequent data involves only a matrix multiplication, so very little computing time is spent in the processing.
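A minimal sketch of this modal reconstruction step is given below. It assumes a helper zernike_gradients(j, x, y) that returns the x and y derivatives of the j-th Noll-indexed Zernike polynomial at the pupil sample points; this helper and the mode range are hypothetical illustrations, not part of the paper's software.

```python
import numpy as np

def build_reconstructor(zernike_gradients, x, y, modes=range(4, 13)):
    """Least-squares modal reconstructor: slopes ~= A @ coeffs, solved
    via the SVD-based pseudo-inverse [8]. Piston (j=1) and tip/tilt
    (j=2,3) are excluded, as in the text."""
    cols = []
    for j in modes:
        dzdx, dzdy = zernike_gradients(j, x, y)   # hypothetical helper
        cols.append(np.concatenate([dzdx, dzdy]))
    A = np.stack(cols, axis=1)                    # one column per mode
    return np.linalg.pinv(A)                      # computed once via SVD

# Per frame, only a matrix multiplication is then needed:
#   coeffs = R @ np.concatenate([wx_pixels, wy_pixels])
```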

Since we have no precise control over the extension of the source on the retina, we cannot assume a gain slope a priori. Therefore, we calibrated the system using a parameter obtained by comparing the measured defocus coefficient with the value that a given displacement of the translation stage should produce.

Fig. 4. (1.8 MB) Movie of the acquired data (a), the gradient in both orthogonal directions (b), and the computed phase of the pupil function represented modulo 2π (c), obtained by moving the translation stage.

We tested the system on an artificial eye constructed using a short-focal-length lens as the eye optics and a static diffuser as the retina. As we had no means of measuring the wave aberration consistently by other methods using the same illumination (a convergent beam generating an extended spot on the retina), we used an indirect method to check the performance of the sensor. We moved the translation stage, measuring the wave aberration at intervals. Once carefully aligned, the translation stage displacement basically introduces an amount of defocus proportional to the mirror displacement. Figure 4 shows a movie of the data obtained as the translation stage moves. Panel (a) displays the images acquired by the CCD; (b) shows the computed gradients in the two orthogonal directions; and (c) shows the computed phase of the pupil function, represented wrapped (modulo 2π). The software has no limitation on the number of Zernike coefficients it can use; we arbitrarily limited the fit to the first twelve coefficients to speed up the calculation.


Fig. 5. Variation of the different Zernike coefficients moving the translation stage in the artificial eye experiment. One centimeter of displacement introduces 0.97 diopters of refractive defocus (0.52 μm of Z4).


Figure 5 shows the behavior of the different Zernike coefficients, using the ordering and normalization given by Noll [9] and expressed in microns, as a function of the translation stage position. The sensor produces a linear response in the measured defocus for controlled values of defocus induced in the system. We used this result to find the sensor response to the local gradient. The signals the sensor measures at the different pixel locations within the pupil (Fig. 4(b)) are proportional to the corresponding local variation of the wave-front gradient, i.e., the gradients are approximately in the linear regime of the sensor response (solid line in Fig. 2). It is important to note that this behavior does not involve any particular mode but the sensor response to the local gradient. Therefore, it can be assumed that the sensor is able to provide the correct coefficient for whatever Zernike mode is present in the wave-front, provided there is enough pupil sampling. The same analysis could be performed using a controlled change in any other mode; defocus was simply the aberration most easily introduced in a controlled way in our experiment. The results of Fig. 5 show some variation in higher coefficients that is not correlated with the defocus. Given that the sensor response to defocus deviates very little from linearity, we assume that the values of the other coefficients were not artifacts; they were also present in the system, and changed slightly at each translation stage position due to some misalignment of the Badal lenses.
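As a side check, the calibration figure quoted in Fig. 5 can be compared against the standard relation between refractive defocus D (in diopters) and the Noll-normalized Zernike defocus coefficient for a pupil of radius r, c4 = D r²/(4√3). This relation is standard optics rather than something stated in the paper:

```python
import math

D = 0.97                           # diopters per centimeter of stage travel
r = 2.0e-3                         # pupil radius in meters (4 mm diameter)
c4 = D * r**2 / (4 * math.sqrt(3)) # Noll-normalized defocus coefficient
print(f"c4 = {c4 * 1e6:.2f} um")   # ~0.56 um, close to the 0.52 um quoted
```

The small residual difference with the 0.52 μm quoted in the caption plausibly reflects the experimental calibration rather than the relation itself.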

It must be noticed that if the sensor is to be used primarily for measuring aberrations, and not to drive an adaptive optics system, the local gradients must lie in the linear response range. For higher gradient values the sensor signals must be corrected using not just a constant but the complete response curve. Alternatively, as mentioned before, the extension of the source on the retina can be modified to increase the linear range. In any case, in this particular optical setup the sensor shows a linear response over a gradient range convenient for applications in the human eye, with dense sampling: approximately eight thousand wave-front slope data points for a 4 mm pupil diameter.
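The quoted figure can be turned into a rough pupil-sampling estimate; the arithmetic below is our own illustration, assuming two slopes (x and y) per pupil pixel:

```python
import math

n_slopes = 8000                               # quoted slope data points
n_pixels = n_slopes / 2                       # two slopes per pupil pixel
diameter_px = math.sqrt(4 * n_pixels / math.pi)
spacing_um = 4000.0 / diameter_px             # 4 mm pupil, in microns
print(f"~{diameter_px:.0f} pixels across the pupil, "
      f"~{spacing_um:.0f} um per sample")     # ~71 px, ~56 um
```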

3.2 Human eye results

To probe the sensor in the living eye, we added a bite bar to the system used previously in order to fix the head of a human subject (one of the authors, I.I., served as subject). We collected images with 200 ms exposure and a 4 mm pupil diameter. The illuminated area on the retina subtended roughly one degree. The subject observed a target to stabilize fixation between exposures. Accommodation was not paralyzed. As with the artificial eye, the experiment consisted of sequentially acquiring images for different translation stage displacements and observing the behavior of the Zernike terms. Figure 6 shows a movie of the pupil-plane images recorded as the translation stage moves (panel (a)); panel (b) shows the gradients computed from the images of panel (a); and panel (c) shows the wave aberrations obtained.

Fig. 6. (518 KB) Movie of the acquired data (a), the gradient in both orthogonal directions (b), and the phase of the pupil function represented modulo 2π (c), obtained by moving the translation stage.

Comparing the images in Fig. 6(a) with those in Fig. 4(a), one can observe that new features appear (see arrows in Fig. 6). These signals correspond to spurious reflections: on the first surface of the cornea, the small structure (arrow 1), and on the crystalline lens, the larger one (arrow 2). These are the first and third Purkinje images. Their effect on the measurements is clearly deleterious but difficult to quantify. The light coming from the first corneal surface is intense enough to leave a small number of pixels with unreliable gradient information. The number of affected pixels is minimized when the beam enters the eye converging towards the cornea, as it does in our system (see Fig. 3). In any case, these wrong measurements involve a reduced data set compared with the correct ones; therefore, no significant difference in the Zernike coefficients is obtained when they are removed from the fitting calculation, at least up to the twelfth coefficient. The light coming from the crystalline lens (arrow 2 in Fig. 6(a)) is less intense and more spread out, since it is not focused on the detector plane as the light from the first reflection is. The gradient values are distorted in the affected region but, as with the first reflection, removing these pixels from the computation does not significantly modify the Zernike coefficients.
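A hedged sketch of this pixel-removal step follows. The detection criterion (robust intensity outliers in the summed pupil image) is our own illustrative choice, since the paper does not specify how the contaminated pixels are identified:

```python
import numpy as np

def valid_mask(a, b, c, d, nsigma=3.0):
    """Flag pupil pixels whose total intensity is a statistical outlier,
    e.g. those corrupted by Purkinje reflections, using a robust
    (median/MAD) z-score. Returns True for pixels to keep."""
    total = a + b + c + d
    med = np.median(total)
    mad = np.median(np.abs(total - med)) + 1e-12
    return np.abs(total - med) < nsigma * 1.4826 * mad

# Slope samples (and the corresponding reconstructor rows) where
# valid_mask is False are simply dropped from the least-squares fit.
```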

Figure 7 shows the variation of the first twelve Zernike coefficients vs. the translation stage displacement. The behavior closely resembles that of the artificial eye: a linear response to changes in defocus, while the other terms remain stable. In this figure, the defocus line appears displaced about half a centimeter to the right, accounting for the subject's refraction (approximately half a diopter). Several experimental factors account for the larger noise found in the living eye (Fig. 7) compared to the artificial eye (Fig. 5): accommodation was not paralyzed, and the position and alignment of the subject's eye could vary slightly during the series of measurements. It can also be observed that the eye presents some astigmatism (Z6), mainly because the fixation point was not aligned with the optical system axis.


Fig. 7. Variation of the different Zernike coefficients moving the translation stage in the living eye experiment.


4. Conclusion

In this paper we have described a new sensor to measure the wave-front aberration of a general optical system. It is based on the previously reported idea of the oscillating pyramid wave-front sensor by Ragazzoni [5], but it uses an extended incoherent source together with a static four-facet pyramid. This version of the sensor retains the same potential advantages, adjustability of range, gain and sampling, but without moving parts. The extended source pyramid wave-front sensor has been successfully used to measure aberrations in the living human eye. An important advantage of this sensor for the eye is its easy adaptability to the range of aberrations one can expect in human eye optics: from barely aberrated normal eyes to extremely aberrated eyes in patients with pathological corneas. The dynamic range of the sensor can be modified simply by changing the extension of the source on the retina. These capabilities can be useful in practical implementations of devices based on human eye wave-aberration measurements. A drawback is that the sensor gathers light from spurious reflections at the ocular surfaces. Although this is not a problem for a low-order modal estimation, given the density of the uncontaminated data, the corrupted pixels must be detected and removed in order to obtain reliable high-order estimations. Further investigation is required to address this issue. In summary, we have demonstrated the successful application of a new wave-front sensor, the extended source pyramid sensor, to estimate the aberrations of the living human eye.

5. Acknowledgments

This research was supported by DGES-Spain grant PB97-1056.

References and Links

1. E. J. Fernandez, I. Iglesias, and P. Artal, "Closed-loop adaptive optics in the human eye," Opt. Lett. 26, 746–748 (2001).

2. H. Hofer, L. Chen, G. Yoon, B. Singer, Y. Yamauchi, and D. R. Williams, "Improvement in retinal image quality with dynamic correction of the eye's aberration," Opt. Express 8, 631–643 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-8-11-631

3. J. Liang, B. Grimm, S. Goelz, and J. F. Bille, "Objective measurement of wave aberrations of the human eye with the use of a Hartmann-Shack wave-front sensor," J. Opt. Soc. Am. A 11, 1949–1957 (1994).

4. P. M. Prieto, F. Vargas-Martin, S. Goelz, and P. Artal, "Analysis of the performance of the Hartmann-Shack sensor in the human eye," J. Opt. Soc. Am. A 17, 1388–1398 (2000).

5. R. Ragazzoni, "Pupil plane wavefront sensing with an oscillating prism," J. Mod. Opt. 43, 289–293 (1996).

6. W. H. Southwell, "Wavefront estimation from wavefront slope measurements," J. Opt. Soc. Am. 70, 998–1006 (1980).

7. R. Cubalchini, "Modal wavefront estimation from phase derivative measurements," J. Opt. Soc. Am. 69, 972–977 (1979).

8. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, 2nd ed. (Cambridge University Press, Cambridge, 1992).

9. R. J. Noll, "Zernike polynomials and atmospheric turbulence," J. Opt. Soc. Am. 66, 207–211 (1976).

Supplementary Material (2)

Media 1: GIF (1827 KB)     
Media 2: GIF (518 KB)     
