Light field displays with computational vision correction for astigmatism and high-order aberrations with real-time implementation

Open Access

Abstract

Vision-correcting near-eye displays are needed for the large population with refractive errors. However, varifocal optics cannot effectively address astigmatism (AST) and high-order aberrations (HOAs), and freeform optics offers little prescription flexibility. A computational solution is therefore desired that corrects AST and HOAs with high prescription flexibility and no increase in volume or hardware complexity, and whose computational cost allows real-time rendering. We propose that the light field display can achieve such computational vision correction by manipulating sampling rays so that the rays forming a voxel are re-focused on the retina. The ray manipulation merely requires updating the elemental image array (EIA), making it a fully computational solution. The correction is first calculated from an eye’s wavefront map and then refined by a simulator performing iterative optimization with a schematic eye model. Using examples of HOA and AST, we demonstrate that the corrected EIAs confine the sampling rays to within ±1 arcmin on the retina. Correspondingly, the synthesized image is recovered to be nearly as clear as in normal vision. Considering the computational complexity, we also propose a new voxel-based EIA generation method. All voxel positions and the mapping between voxels and their homogeneous pixels are acquired in advance and stored as a lookup table, yielding an ultra-fast rendering speed of 10 ms per frame with no cost in computing hardware or rendering accuracy. Finally, experimental verification is carried out by introducing the HOA and AST with customized lenses in front of a camera; significantly recovered images are reported.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Near-eye displays (NEDs) providing virtual reality (VR), augmented reality (AR), and mixed reality (MR) are expected to become ubiquitous, as the emerging concept of the Metaverse suggests [1]. According to the World Health Organization (WHO) [2], at least 2.2 billion people (more than a quarter of the world’s population) suffer from vision impairments, dominantly caused by refractive errors such as myopia, hyperopia, presbyopia, astigmatism (AST), and higher-order aberrations (HOAs). Thus, NEDs must be adapted to refractive errors. Although it is straightforward to wear spectacles while using a NED, doing so is uncomfortable, and the room needed to accommodate the spectacles extends the eye relief, unacceptably increasing the system volume. Therefore, there is a great demand for vision-correcting NEDs. In addition, it is particularly valuable to endow true-3D NEDs (e.g., holographic displays and light field displays) with vision correction because realistic 3D content is necessary for the Metaverse [1].

Adding a zoom eyepiece is a practical way of vision correction [3,4]. However, the zoom lens occupies considerable space. Furthermore, adjusting the focal length can only compensate for the diopter variation caused by myopia, hyperopia, and presbyopia but can hardly address AST and HOAs. Therefore, many studies have been dedicated to integrating vision-correcting optics into NEDs, mainly following two strategies.

  • (i) Integrated focus-tunable component. For example, Chakravarthula et al. [5] used a semi-reflective deformable membrane, extended from Dunn’s work [6], as the varifocal unit on a liquid crystal (LC) display. The electrically controlled LC lens with a gradient index is another good choice for vision correction [7], as in the work by Lin et al. [8]. Jamali et al. [9] proposed a segmented LC lens with spatially varying focal lengths, mainly used for presbyopia correction like progressive glasses. The Alvarez lens with a sliding-induced tunable focal length [10,11] and the liquid lens actuated by the piezoelectric effect [12] are also potential candidates. However, in addition to the hardware complexity, AST and HOAs still cannot be well corrected.
  • (ii) Prescription-customized lens or mirror. The refractive errors can be compensated by modifying an optical surface of a NED according to the ophthalmic prescription. In particular, the customized lens or mirror will have a freeform profile if used for AST or HOAs. For example, Wu et al. [13] added a freeform mirror to an AR-NED to correct AST. Cheng et al. [14] modified a freeform prism-based optical see-through NED so that the surface closest to the eye is curved to provide a customized optical power. Compared with the varifocal approach, customized optics can address AST and HOAs by precisely modulating light, benefiting from modern freeform optics fabrication. However, a different NED must be fabricated for each prescription, meaning little flexibility and high cost.

The above methods must modify a NED’s hardware and potentially increase its volume; hence, computational correction is preferred by some studies because it incurs no cost in hardware or volume. Computational correction requires computational displays, of which there are two types: the wavefront-based holographic display [15] and the ray-based light field display [16–18], the two most promising true-3D display technologies.

  • (i) Modern holographic displays use a digital spatial light modulator (SLM) to reconstruct a wavefront, so the complex amplitude encoded on the SLM can be tailored to correct refractive errors [19–21]. In the holographic Maxwellian-view system that projects the spatial frequency spectrum of an SLM onto an eye pupil, computational correction can be similarly performed by modifying the SLM, as proposed by Takaki et al. [22]. However, the holographic display itself still faces significant challenges in computational complexity, field of view, speckle, and space-bandwidth product, making vision correction built on it somewhat impractical.
  • (ii) The light field display (LFD) provides true-3D content by sampling light with vectorial rays. In the typical integral imaging light field display (InIm-LFD), the rays are modulated by an image source and a microlens array. In addition to compact volume and feasible hardware, a fascinating characteristic of LFDs is the computational focus cue, which alleviates the well-known vergence-accommodation conflict to support true-3D.

An InIm-LFD can correct myopia, hyperopia, and presbyopia by directly shifting the image depth. In addition, complex aberrations can also be computationally addressed by modifying the elemental image array (EIA). For example, Pamplona et al. [23] regarded an eye’s refractive errors as spatially varying focal lengths over the pupil; thus, they calculated the EIA by applying different image depths according to the positions where light bundles penetrate the pupil. However, this solution requires the focal length variation over the eye pupil to be smooth so that a light bundle expanded by a lenslet can be processed with a constant focal length. Huang et al. [24] improved the above work by prefiltering elemental images to minimize the difference between the synthesized retinal image and the ideal image. Although it is impossible to correct an aberrated optical system by prefiltering a single input image [25], the multiple elemental images synthesizing a retinal image provide a much higher degree of freedom (DoF) for correction. Their work achieved significant recovery of contrast and resolution for myopia and HOAs. However, the minimization problem must solve a complicated matrix equation, severely increasing the computational complexity.

Although the above works demonstrate the feasibility of computational vision-correcting LFDs, several limitations remain. First, the computational correction aggravates the already high computational complexity of LFDs. Secondly, the eye optics was assumed to be a simple focal length variation or a wavefront map applied to an ideal pupil; thus, non-paraxial effects that may play a non-negligible role were not incorporated. Furthermore, the existing computational correction was performed based on viewpoints rather than sampling rays, which means the sampling rays that essentially form 3D images were potentially not manipulated with the highest DoF.

To advance this area, this study proposes a computational vision-correcting LFD in which every sampling ray is manipulated. We precisely incorporate the eye optics based on a schematic eye model with a wavefront map. AST and HOAs can be addressed owing to the high DoF in manipulating rays. The corrected retinal image is verified through simulation and experiment. More importantly, we propose a new voxel-based EIA generation method, fundamentally different from popular viewpoint-based methods. The new approach is hundreds of times faster regardless of whether vision correction is performed.

2. Correction method

We will first review the working principle of LFDs from optics and algorithm perspectives in Sec. 2.1. Next, Sec. 2.2 will introduce the concept of our vision correction method that separately manipulates every sampling ray by modifying the EIA. For the implementation of the concept, a preliminary correction derives a corrected EIA based on a wavefront map under a paraxial assumption, as Sec. 2.3 discusses. Then, a precise correction optimizes the EIA using an in-house built simulator incorporating non-ideal conditions, as discussed in Sec. 2.4.

2.1 Introduction of light field displays

The optical foundation of LFDs is sampling light with rays, as Fig. 1(a) shows. Under sufficiently dense sampling, the focus cue induced by the rays is equivalent to that of a real object [16–18]. For an adjustable focus cue, the sampling rays must be manipulable so as to mimic an object at a designated depth. To this end, as shown in Fig. 1(b), two light-modulating components, P1 and P2, provide coordinates (u,v) and (s,t), respectively. A vectorial ray is thus fully manipulated by configuring its 4D light field (u,v,s,t). The most typical LFD, the InIm-LFD, adopts a microdisplay [26] and a microlens array (MLA) as the two light-modulating components, as shown in Fig. 1(c). In an InIm-LFD, the coordinates (u,v) denote microdisplay pixel positions, and (s,t) are lenslet centers. Because the MLA is not configurable, ray manipulation is achieved by changing pixel positions (u,v). Such ray manipulation is the basis of the proposed vision correction.

Fig. 1. LFDs: from the general concept to implementation. (a) The fundamental concept: sampling light with rays; (b) manipulating the rays’ light fields using two light-modulating components, P1 and P2; (c) implementing using a microdisplay and a MLA as the light-modulating components. (d) Viewpoint-based understanding of the LFD and the derived EIA.

For an object containing many voxels, appropriately rendering the microdisplay to reconstruct the sampling rays of all voxels poses the problem of EIA generation. The early point raytracing rendering (PRR) method [27] adopts a straightforward approach: it regards an object as a point cloud, traces from each point to multiple lenslets to initiate sampling rays, and then finds the intersections between the rays and the microdisplay to acquire the point’s homogeneous pixels. However, because many points need to be processed and every point requires complex raytracing, the computational complexity is very high. Modern InIm-LFDs adopt an equivalent but faster approach: viewpoint-based rendering. The InIm-LFD can be understood as the reversed light path of InIm photography, which uses a camera array to take an array of photographs of an object from different viewpoints [18]. The photograph array with small parallaxes is called the EIA. The reversibility of light implies that displaying the EIA acquired in the pickup stage reconstructs the object. Therefore, the EIA can be generated by virtually performing InIm photography. As Fig. 1(d) shows, each lenslet performs a geometric projection from the object to the microdisplay from its own viewpoint, and the projected images form the desired EIA. Note that a voxel, e.g., the red rectangle in Fig. 1(d), appears in multiple elemental images through its sampling rays, which explains the equivalence of voxel- and viewpoint-based rendering.
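For concreteness, the point-to-pixel tracing at the heart of the PRR approach can be sketched in a few lines. The following 1D cross-section is our own illustration, not the authors’ code: the voxel is assumed to lie on a virtual depth plane a distance L behind the MLA, and the numeric values for the gap, eye relief, and pupil size are merely plausible examples.

```python
import numpy as np

def homogeneous_pixels_1d(x_vox, L, lens_centers, g, d_eye, pupil_radius, x_pupil=0.0):
    """One-dimensional sketch of PRR-style raytracing for a single voxel.

    x_vox        : lateral voxel position on a depth plane located L behind the MLA
    L            : voxel depth measured from the MLA plane
    lens_centers : lenslet-center coordinates s on the MLA plane
    g            : MLA-to-microdisplay gap
    d_eye        : eye relief (MLA plane to eye pupil)
    Returns (u, s): homogeneous-pixel coordinates and the lenslets whose rays enter the pupil.
    """
    s = np.asarray(lens_centers, dtype=float)
    # Sampling ray: voxel -> lenslet center -> eye pupil; keep only rays entering the pupil.
    x_at_pupil = s + (d_eye / L) * (s - x_vox)
    keep = np.abs(x_at_pupil - x_pupil) <= pupil_radius
    # The same ray, extended back to the microdisplay plane, defines the homogeneous pixel.
    u = s - (g / L) * (s - x_vox)
    return u[keep], s[keep]

# Example: an on-axis voxel, 1-mm lenslet pitch, 4-mm pupil (lengths in mm, values illustrative).
u, s = homogeneous_pixels_1d(x_vox=0.0, L=500.0, lens_centers=np.arange(-5, 6) * 1.0,
                             g=3.0, d_eye=15.0, pupil_radius=2.0)
```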

Though viewpoint-based EIA generation is more efficient, it still struggles to satisfy real-time rendering, which has attracted many studies [28–31] on acceleration. Worse still, computational vision correction puts an even heavier computational load on the EIA generation. In particular, the irregular aberrations induced by HOAs invalidate popular acceleration methods; for example, the acceleration strategies of parallel rendering [28,29] and sparse viewpoints [30,31] both require the variation among elemental images to be regular and smooth. Therefore, we consider the viewpoint-based practice unacceptable, and in Sec. 3 we will reform the original voxel-based concept for real-time computational vision correction.

2.2 Proposed concept

Our core concept is to manipulate the sampling rays in an InIm-LFD so that they re-focus on the retina of an ametropic eye, forming a sharp retinal image just as in normal vision, as Fig. 2(a) shows. Consider Figs. 2(b) and 2(c). Following conventions in visual and ophthalmic optics [32], myopia and hyperopia, caused by elongated and shortened eyeballs, produce defocus; presbyopia, caused by insufficient refractive power of the crystalline lens, similarly produces defocus. According to the Gaussian lens formula, defocus due to variations in image distance or focal length can be directly addressed by changing the object distance, which is an intrinsic ability of LFDs realized by applying the desired depth when generating the EIA. In other words, ray manipulation with a single depth change, i.e., a low DoF, can correct myopia, hyperopia, and presbyopia. We will ignore these simple cases in this paper.
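As a minimal worked example (ours, not part of the paper, which skips these simple cases), the required depth shift follows from standard vergence bookkeeping, with S the spherical refractive error in diopters and V the vergence of the virtual image at the eye:

$$V_{\text{corrected}} = V_{\text{design}} + S, \qquad \text{e.g.,}\ S = -2\ \mathrm{D},\ V_{\text{design}} = 0\ \mathrm{D}\ (\text{infinity}) \;\Rightarrow\; V_{\text{corrected}} = -2\ \mathrm{D},$$

i.e., the EIA is generated as if the central depth plane sat 0.5 m in front of the eye instead of at infinity.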

Fig. 2. Retinal images formed in an InIm-LFD: (a) normal vision; (b) myopia with an elongated eyeball; (c) hyperopia with a shortened eyeball; (d) HOA caused by a distorted cornea; (e) HOA corrected by generating sampling rays from new pixels (blue rectangles with solid blue lines) instead of original pixels (red rectangles with dashed red lines). The ray manipulation through a new pixel is magnified in the inset.

AST produces two different focal planes, and HOAs are even more irregular. For example, Fig. 2(d) shows an HOA caused by a distorted cornea, where the sampling rays are dispersed on the retina with no explicit regularity. Here, coarse ray manipulation through a diopter shift is insufficient. Instead, recall that the essential feature of LFDs is vectorial sampling rays modulated through pixel positions. We therefore propose to manipulate every ray separately by emanating each sampling ray from a new pixel instead of the original pixel, i.e., by modifying the EIA, as Fig. 2(e) illustrates. A new pixel means a rotated ray, as shown in the inset of Fig. 2(e), and the angular precision of the ray manipulation is p/g (p: pixel size; g: MLA-microdisplay gap), usually a fine angular step. Because a pixel is the smallest controllable unit in an InIm-LFD, such ray manipulation has the theoretically highest DoF.

2.3 Preliminary correction

The wavefront map is the standardized description of an eyeball’s refractive errors and can be acquired with common instruments such as a Hartmann-Shack sensor or corneal topography [33,34]; this study follows that standard practice. As Fig. 3(a) shows, for a pixel at (u,v) that originally emits a sampling ray passing through a lenslet at (s,t), we need to determine a new pixel at (u+Δu, v+Δv). Correspondingly, the original sampling ray is rotated by (Δθx, Δθy) in the x- and y-directions, respectively. The MLA-microdisplay gap g links the pixel offset to the rotation, as given by Eq. (1). For near-eye InIm-LFDs, the gap is usually slightly smaller than the MLA’s focal length to create a native virtual image plane (i.e., the central depth plane), and the distance from the MLA to the eyeball should provide sufficient eye relief, usually at the centimeter level. Sec. 2.4 will carefully determine the parameters following mainstream InIm-LFDs [16–18].

$$\Delta u = \Delta {\theta _x} \cdot g, \qquad \Delta v = \Delta {\theta _y} \cdot g$$

Now assume a reversed light path, i.e., a spherical wave diverging from a point on the retina, penetrating the pupil and the MLA, and converging on the microdisplay. Under a given refractive error, the wavefront on the pupil deviates from a spherical wave. This wave aberration causes a ray initially directed to a particular pixel to rotate by a specific angle, which is the opposite of the ray rotation required for correction, namely (Δθx, Δθy) in Eq. (1).

Fig. 3. (a) Pixel offset, (Δu, Δv), and the corresponding rotation of the sampling ray regarding the eyeball’s wavefront map. (b) The geometry to derive ray aberration from wave aberration.

Deriving the ray aberration from the wave aberration is a classical problem [35]. As Fig. 3(b) illustrates, a spherical wavefront implies that a ray from a point P(x,y) on the wavefront is directed to the sphere center O, whereas an aberrated wavefront directs the ray to another point T. Along the ray PO, the optical path difference (OPD) PQ between the two wavefronts is the wave aberration W(x,y). Under a numerical aperture that is not too large, the deviation angle of the ray is the opposite of the wave aberration’s slope, as Eq. (2) gives.

$$\nabla W(x,y) = \frac{{\partial W(x,y)}}{{\partial x}}\overrightarrow {\mathbf i} + \frac{{\partial W(x,y)}}{{\partial y}}\overrightarrow {\mathbf j} = \Delta {\theta _x}\overrightarrow {\mathbf i} + \Delta {\theta _y}\overrightarrow {\mathbf j} = \frac{1}{R}\overrightarrow {{\mathbf OT}}, $$
where the spatial ray aberration, the planar vector OT, is incidentally given.

Note that we calculate the aberration in the eyeball’s object space with a reversed light path because the ray manipulation is performed in the object space, conjugate to the image-space formulation common in textbooks.

Substituting Eq. (2) into Eq. (1) yields the new pixel position (u+Δu, v+Δv). However, comparing Figs. 3(a) and 3(b), the wave aberration is defined on the eyeball’s pupil, whereas the ray rotation is performed through the lenslet’s pupil. The non-coincidence of the two pupils inevitably introduces errors, which, in fact, were implicitly present in previous vision-correcting LFDs [23,24]. In addition, the MLA is treated with the thin-lens model, which yields Eq. (1); this oversimplified lens model may also introduce errors. Therefore, the wavefront map-based preliminary correction must be refined.
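The preliminary correction can be condensed into a short routine. The sketch below is our own illustration rather than the authors’ implementation: W is any callable wavefront map in the same length units as g, the slope is estimated by central differences, and the sign of the resulting rotation (correction versus aberration, per the reversed-path convention above) may need to be flipped depending on how W is defined.

```python
import numpy as np

def preliminary_pixel_offset(W, x_p, y_p, g, h=1e-3):
    """Pixel offset (du, dv) for a sampling ray crossing the pupil at (x_p, y_p).

    W : wavefront map W(x, y) on the pupil (same length unit as x_p, y_p, g)
    g : MLA-to-microdisplay gap
    h : step for the central-difference estimate of the wavefront slope
    """
    # Eq. (2): the wavefront slope gives the ray rotation (small-angle approximation).
    d_theta_x = (W(x_p + h, y_p) - W(x_p - h, y_p)) / (2 * h)
    d_theta_y = (W(x_p, y_p + h) - W(x_p, y_p - h)) / (2 * h)
    # Eq. (1): the gap g converts the rotation into a pixel offset on the microdisplay.
    return d_theta_x * g, d_theta_y * g
```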

2.4 Precise correction

To refine the above preliminary correction, we must comprehensively model the light propagation from the microdisplay to the retina, and the eye optics should be treated accurately rather than as a paraxial lens. To this end, we adopt a simulator that can accurately obtain the sampling rays and retinal images in an InIm-LFD. The simulator combines Zemax and Matlab: Zemax performs high-accuracy optical simulation, and Matlab handles flexible data processing. The preliminary correction can then be precisely optimized based on the simulator.

2.4.1 Simulator with a schematic eye model

The simulator was previously proposed [16] and utilized [36–38]; it models the Arizona eye model with adjustable accommodation [32], a microdisplay, and a MLA in Zemax. Matlab calls Zemax through the ZOS-API, sending a generated EIA to Zemax and receiving the returned optical simulation data. Thanks to this joint simulation [39], the simulator has two features: (i) highly flexible data processing, e.g., the EIA calculation can be performed seamlessly alongside the optical simulation; and (ii) high optical accuracy, because no paraxial assumption is made and Zemax faithfully executes raytracing and the diffraction integral from the first surface (the microdisplay) to the last surface (the retina). The refractive errors are introduced by applying a wavefront map to the original eye model, i.e., attaching a dummy surface to the eye model’s pupil that records the phase difference data (usually in Zernike polynomials [32]).

Let us use an HOA example to introduce the simulator and the parameters used in the following analysis. The HOA contains oblique astigmatism and trefoil aberrations, whose wavefront map is expressed as a Zernike polynomial of Z5 plus Z9 (Noll notation) defined on a 4-mm pupil, as given by Eq. (3). Figure 4 provides the parameters of the InIm-LFD and the system modeled in Zemax with its wavefront map.

$$W(\rho ,\theta ) = \frac{A}{{\sqrt {\pi /6} }}{\rho ^2}\sin 2\theta + \frac{B}{{\sqrt {\pi /8} }}{\rho ^3}\sin 3\theta, $$
where ρ and θ are the polar coordinates on the 4-mm pupil, and A and B are constants controlling the aberration’s magnitude, both equal to 0.02.
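For reference, the wavefront map of Eq. (3) can be evaluated numerically with a few lines; the grid resolution below is an arbitrary choice of ours.

```python
import numpy as np

A, B = 0.02, 0.02                 # aberration magnitudes from Eq. (3)
R_pupil = 2.0                     # 4-mm pupil -> 2-mm radius (mm)

x = np.linspace(-R_pupil, R_pupil, 256)
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y) / R_pupil    # normalized radial coordinate
theta = np.arctan2(Y, X)

# Eq. (3): oblique astigmatism (Z5) plus trefoil (Z9), Noll indexing.
W = (A / np.sqrt(np.pi / 6)) * rho**2 * np.sin(2 * theta) \
    + (B / np.sqrt(np.pi / 8)) * rho**3 * np.sin(3 * theta)
W[rho > 1] = np.nan               # restrict the map to the pupil
```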

Fig. 4. An InIm-LFD with HOA for the simulator to analyze: parameters of the InIm-LFD (the left table) and the system modeled in Zemax. The lower inset shows a single lenslet, and the right inset shows the wavefront map.

We provide a voxel to be reconstructed to the simulator, which then outputs comprehensive image formation data, as Fig. 5 shows. In this example, the voxel is at infinity (namely, on the central depth plane). The voxel’s sampling rays are determined by raytracing from the voxel to the lenslets that can direct rays into the 4-mm pupil (i.e., the voxel-based PRR method [27]). The rays’ footprint on the pupil is recorded (the first plot in Fig. 5) to incorporate the Stiles-Crawford effect of the first kind (SCE-I). SCE-I assigns varying weightings to the rays contributing to the retinal image formation according to the positions where they penetrate the pupil [16]. The weighting is $10^{-0.05 r^2}$, where r (in millimeters) is the distance between where a ray hits the pupil and the pupil center.
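The SCE-I weighting quoted above is a one-liner; the short sketch below assumes, as stated, that r is given in millimeters.

```python
import numpy as np

def sce_weight(r_mm):
    """Stiles-Crawford (SCE-I) weight of a ray hitting the pupil at radius r (mm)."""
    return 10.0 ** (-0.05 * np.asarray(r_mm) ** 2)

# Example: a ray at the edge of a 4-mm pupil (r = 2 mm) keeps 10**(-0.2), about 0.63
# of the weight of a ray through the pupil center.
w_edge = sce_weight(2.0)
```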

Fig. 5. Comprehensive data of the image formation in the InIm-LFD with the HOA, provided by the simulator. The first plot is drawn on the pupil with positional coordinates in millimeter; all other plots are defined on the retina with angular positions (i.e., visual angles) in arcmin. Similar plots following are drawn in the same way.

Next, the most critical data, the rays’ retinal footprint, is acquired. As seen, the HOA causes the rays to be irregularly distributed on the retina over a range far beyond the human resolution limit. For each sampling ray emitted from a pixel center, the pixel’s four corners are also traced to obtain the retinal footprint of the pixel, which provides an area to be convolved with the ray’s retinal point spread function (PSF) to acquire the pixel’s retinal image. Here, the PSF is simulated by Zemax using the fundamental Rayleigh-Sommerfeld diffraction. Then, the retinal images of the pixels corresponding to all sampling rays are synthesized with the weightings provided by SCE-I. By convolving the synthesized retinal image with a picture, one can see how the refractive error blurs a clear picture (see the two images at the bottom of Fig. 5). More details about the simulator can be found in [16,36–38].
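To make the synthesis step concrete, a minimal sketch of the weighted summation is given below. It is our own abstraction: the per-pixel footprints and per-ray PSFs are assumed to be already rasterized on a common retinal grid (in the paper they come from Zemax), and the scipy convolution call and the normalization by the total weight are implementation choices of ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_retinal_image(footprints, psfs, weights):
    """Weighted synthesis of per-pixel retinal images (minimal sketch).

    footprints : list of 2-D arrays, the retinal footprint of each pixel
    psfs       : list of 2-D arrays, the retinal PSF of the corresponding sampling ray
    weights    : SCE-I weights of the rays
    All arrays are assumed to share the same retinal sampling grid.
    """
    image = None
    for fp, psf, w in zip(footprints, psfs, weights):
        contrib = w * fftconvolve(fp, psf, mode="same")  # pixel area blurred by the ray's PSF
        image = contrib if image is None else image + contrib
    return image / np.sum(weights)
```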

2.4.2 Iterative optimization based on the simulator

Modeling the InIm-LFD in this way opens the possibility of efficient EIA optimization toward a specific target, which, in this study, is re-focusing the sampling rays on the retina. The starting point of the optimization is the EIA acquired in the preliminary correction. Because the initial EIA, derived under paraxial optics, has already largely corrected the refractive error, the pixels in the optimal EIA should not depart far from the initial pixels. Therefore, the solution space of the EIA optimization can be limited to the neighborhoods of the initial pixels, as Fig. 6(a) shows, where the neighbor pixels around the initial pixels determine the maximum manipulation extent.

Fig. 6. (a) Initial pixels (dark red) acquired in the preliminary correction and their neighbor pixels (light red) as the solution space of the EIA optimization. The inset shows the angular manipulation range determined by the neighbor pixels. (b) The flow of the EIA optimization.

A merit function ε should be constructed, which, in this study, is the root mean square error (RMSE) of the sampling rays’ retinal footprint with respect to the ideal image position, as Eq. (4) gives. The merit function is evaluated in Matlab using the retinal footprint returned by Zemax. Merit functions other than the RMSE can also be used, as long as they reflect the extent to which the sampling rays are focused.

$$\varepsilon = \sqrt {\sum\limits_{i = 1}^N {\frac{{{{({x_i} - {x_0})}^2} + {{({y_i} - {y_0})}^2}}}{N}} }, $$
where (xi,yi) is the retinal position of a sampling ray out of N rays; (x0,y0) is the ideal image position, e.g., x0 = y0 = 0 for an on-axis voxel.

Because the solution space is limited here, we choose the simple but effective hill-climbing algorithm, as shown in Fig. 6(b): for each initial pixel, the neighbor pixel that produces the smallest merit function is chosen as the optimal pixel. Because the optimization is performed in the general-purpose Matlab environment with the merit function evaluated by Zemax, other optimization algorithms can be adopted by balancing convergence efficiency against the chance of finding the globally optimal solution, which is beyond the scope of this paper.
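A compact sketch of Eq. (4) and of the hill-climbing flow in Fig. 6(b) follows. It is illustrative only: trace_to_retina stands in for the Zemax raytrace that returns the retinal positions of all sampling rays for a candidate set of pixels, and the 3 x 3 neighborhood is just one example of the manipulation range.

```python
import numpy as np

def rmse_merit(retinal_xy, target_xy=(0.0, 0.0)):
    """Eq. (4): RMSE of the sampling rays' retinal footprint about the ideal image point."""
    d = np.asarray(retinal_xy) - np.asarray(target_xy)
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

def hill_climb_pixels(initial_pixels, trace_to_retina, neighborhood=1):
    """For each ray, pick the neighbor pixel that minimizes the merit function.

    initial_pixels  : list of (u, v) indices from the preliminary correction
    trace_to_retina : callable mapping a pixel list to an N x 2 array of retinal positions
                      (in the paper, this role is played by the Zemax simulator)
    """
    pixels = list(initial_pixels)
    for i, (u0, v0) in enumerate(initial_pixels):
        best_pixel, best_eps = pixels[i], np.inf
        for du in range(-neighborhood, neighborhood + 1):
            for dv in range(-neighborhood, neighborhood + 1):
                trial = list(pixels)
                trial[i] = (u0 + du, v0 + dv)
                eps = rmse_merit(trace_to_retina(trial))
                if eps < best_eps:
                    best_pixel, best_eps = trial[i], eps
        pixels[i] = best_pixel  # keep the neighbor giving the smallest merit value
    return pixels
```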

2.5 Two examples: HOA and AST

Using the HOA investigated in the previous section (see Eq. (3)), the iterative optimization is performed. Fig. 7 shows the image formation after the optimization. Compared with Fig. 5, which depicts the uncorrected system, the most significant difference is that the sampling rays are now well focused on the retina, within ±1 arcmin. Correspondingly, the synthesized retinal image of the voxel is much sharper. Fig. 8 compares normal vision, HOA without correction, and HOA with correction in terms of retinal footprints, synthesized retinal images of the voxel, and observed scenes. Besides subjective observation, the correction is also evaluated by the structural similarity index (SSIM). Both subjective and objective evaluations demonstrate that the correction achieves an image nearly as sharp as normal vision, despite a slight residual difference, which will be discussed in Sec. 5.

Fig. 7. Image formation data after computationally correcting for the HOA.

Fig. 8. Retinal footprints, synthesized retinal images of the voxel, and observed retinal images corresponding to normal vision, HOA without correction, and HOA with correction.

We then correct a severe AST (which will be experimentally verified along with the above HOA). The AST is +5 D, namely +65 D in the tangential direction and +60 D in the sagittal direction. The wave aberration is directly calculated as the OPD between an astigmatic wavefront and a spherical wavefront, as given and plotted in Fig. 9. Using the same correction method as above, Fig. 10 demonstrates the correction in the simulator. The AST distorts the retinal image in a distinct way: the sampling rays spread vertically but remain focused laterally. After correction, the sampling rays are re-focused (even better than in normal vision). Correspondingly, subjective and objective evaluations demonstrate a picture approximating normal vision with a slight but intrinsic difference, similar to the HOA case.

Fig. 9. Wave aberration and wavefront map of the AST.

Fig. 10. Retinal footprints, synthesized retinal images of the voxel, and observed retinal images corresponding to normal vision, AST without correction, and AST with correction. As the AST is severe, the retinal plot range is expanded.

3. Fast image rendering

The above method manipulates the sampling rays of each voxel, which is equivalent to replacing a voxel’s homogeneous pixels with a new group of pixels. Such voxel-based correction is difficult to make compatible with the mainstream viewpoint-based EIA generation. Moreover, the viewpoint-based method is itself computationally complex. Therefore, in this section we propose a real-time voxel-based generation method.

The early PRR method acquires the homogeneous pixels for every voxel. However, it ignores an important fact: an InIm-LFD contains a finite number of voxels with explicit and constant positions. The left plot of Fig. 11 illustrates this fact. Several homogeneous pixels on the display screen synthesize a voxel on the ith depth plane. The transverse magnification determines the voxel size Vi = p·Li/g, where p is the pixel size, g is the MLA-screen gap, and Li is the image depth. Thus, the voxel size is constant on a given depth plane, and the maximal extent of the depth plane is limited by the field of view of the MLA. Therefore, a finite number of voxels are explicitly defined on the depth plane.

Fig. 11. Left: A voxel Vi on the ith depth plane synthesized by several homogeneous pixels (marked green); right: a 3D voxel matrix formed by voxels on several depth planes.

Similarly, voxels on all depth planes can be defined, forming a 3D voxel matrix, as the right plot in Fig. 11 shows. In this manner, an input 3D scene is first resampled to the voxel matrix, which is static and determined solely by the hardware. The above discussion also reveals that the PRR method’s practice of dynamically acquiring all points on a 3D object and tracing rays for each of them is unnecessary, i.e., oversampling.

Furthermore, and interestingly, each voxel in the static voxel matrix is invariantly mapped to its homogeneous pixels [40,41]. For normal vision, the homogeneous pixels are determined through conventional raytracing; for vision correction, they are acquired using the method discussed in the previous section. Naturally, we move the invariant voxel-pixel mapping outside the EIA generation and store it as a lookup table (LUT). Fig. 12(a) shows the table’s data structure, where each voxel is linked to the indices of its homogeneous pixels. The LUT is stored in random access memory (RAM) for frequent reading, so the LUT size matters. Assuming a full-HD display screen, a pixel index can be recorded as an 11-bit integer (2^11 = 2048 > 1920). The LUT size S for one depth plane is estimated by Eq. (5). The estimate is preliminary and varies with the specific computing techniques used in practice (e.g., optimizing the data structure in Fig. 12(a) for a reduced size), which is beyond this study’s scope; a higher-resolution screen or more depth planes will also enlarge the LUT. Nevertheless, our estimate demonstrates that the LUT’s scale is affordable for modern electronic devices with increasing RAM.

$$S = \frac{1920 \times 1080}{N_{angular}} \cdot N_{angular} \cdot (11\ \textrm{bit} + 11\ \textrm{bit}) = 5.44\ \textrm{MB}, $$
where N_angular is the number of rays forming a voxel.
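Eq. (5) can be reproduced in a couple of lines; whether the indices are actually packed at 11 bits is an implementation detail, as noted above.

```python
# Eq. (5): every display pixel appears exactly once in the LUT, and each homogeneous
# pixel costs an 11-bit row index plus an 11-bit column index (the N_angular factors cancel).
n_pixels = 1920 * 1080
lut_bits = n_pixels * (11 + 11)
lut_megabytes = lut_bits / 8 / 1024**2   # about 5.44 MB for one depth plane
```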

Fig. 12. (a) Data structure of the LUT storing the mapping between voxels and homogeneous pixels. (b) Flow chart of the proposed EIA rendering method.

With the static voxel matrix and the LUT for the voxel-pixel mapping, our EIA generation first resamples an input 3D scene to the voxel matrix. The resampling is performed only once and is a negligible overhead of the whole rendering. Next, the rasterized data of every voxel (i.e., its RGB values) is assigned to its homogeneous pixels through ultra-fast lookup operations. In addition, black voxels can be skipped, so the rendering of a black-background scene can be further shortened. Figure 12(b) summarizes the rendering flow, highlighting that the voxel matrix acquisition and the voxel-pixel mapping with vision correction are handled offline. We emphasize that the vision correction does not affect the computational complexity as long as the corrected homogeneous pixels are recorded in the LUT.
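The online stage is then a pure lookup pass. The sketch below assumes the LUT is stored as a list mapping each voxel index to arrays of homogeneous-pixel row/column indices, which is one possible concrete form of the data structure in Fig. 12(a); it is not the authors’ implementation.

```python
import numpy as np

def render_eia(voxel_rgb, lut, eia_shape):
    """Assign every (non-black) voxel's color to its homogeneous pixels via the LUT.

    voxel_rgb : V x 3 array of colors after resampling the input scene to the voxel matrix
    lut       : list of (rows, cols) index arrays, one entry per voxel, built offline
                (with or without vision correction baked in)
    eia_shape : (height, width) of the microdisplay
    """
    eia = np.zeros(eia_shape + (3,), dtype=voxel_rgb.dtype)
    for rgb, (rows, cols) in zip(voxel_rgb, lut):
        if not rgb.any():        # black voxels are skipped entirely
            continue
        eia[rows, cols] = rgb    # pure lookup/assignment; no raytracing at run time
    return eia
```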

Using the Lily picture from Sec. 2, we implement the EIA rendering on an entry-level PC with no standalone GPU or other advanced computing hardware, as detailed in Table 1. Because the rendering merely requires LUT operations plus a negligible resampling step, the runtime is as fast as 10 ms per frame. Therefore, by exploiting the static information of an InIm-LFD, the proposed voxel-based method achieves real-time rendering with no cost in hardware or rendering accuracy compared with the viewpoint-based family. More importantly, the rendering method is compatible with the proposed voxel-based vision correction. We will report the rendering method together with its signal processing analysis in another study.

Table 1. The proposed voxel-based EIA rendering: implementation details

4. Experimental verification

Because an ophthalmic test on users with the desired refractive errors is not available in this study, and subjective test data would not intuitively demonstrate the correction, we perform a proof-of-concept verification with a camera imitating the human eye. The desired refractive errors are introduced by inserting a customized lens in front of the camera lens.

We build an InIm-LFD prototype containing a microdisplay (ECX335A 0.7’’ Si-OLED from SONY) and a MLA (#630 from Fresnel Technologies, 1 mm pitch), matching exactly the configuration used in the above simulation, as Fig. 13(a) shows. The camera is the main rear camera of an iPhone 13 Pro Max, which has an approximately 4-mm entrance pupil (5.6 mm focal length at f/1.5). Two lenses are fabricated to introduce the AST and the HOA. The lens for AST is cylindrical with a +5 D power in the vertical direction; the lens for HOA is a freeform lens whose profile obeys the Zernike polynomials in Eq. (3), acting as a phase plate. Figure 13(b) shows the two lenses.

Fig. 13. (a) Experimental setup. (b) Two customized lenses to introduce the desired aberrations. The left lens is cylindrical, producing AST; the right lens is freeform, creating HOA.

The experiment starts with taking photographs of the LFD, recorded as the normal vision case. Next, one of the lenses is attached in front of the camera to introduce aberrations, and distorted images are acquired. Finally, the computational correction is applied by replacing the original EIA with a new EIA generated using the fast method introduced in Sec. 3, and a recovered photograph is taken with the new EIA.

We are aware of unavoidable discrepancies between the experiment and the simulator. For example, the camera departs from the eye model used in the simulator, and the aberration is introduced by attaching an external lens rather than being natively generated. Thus, the corrected EIA obtained in the simulator cannot match the experiment perfectly, not to mention the alignment errors among the microdisplay, the MLA, the external lens, and the camera. However, as Fig. 14 shows, the photographs exhibit significantly recovered image quality, and the AST case performs better than the HOA case. We infer this is because the AST has a regular aberration distribution (i.e., only in the vertical direction), whereas the HOA is distributed very irregularly, making the HOA case much more sensitive to alignment errors. For instance, in the lower right plot of Fig. 14, the magnified region is so well recovered that even the delta subpixel layout of the micro-OLED display is visible, whereas other areas do not perform as well as the inset, typically the result of a slight rotation error between components. These uncontrollable factors make objective evaluation much less robust than the simulator-based results, so we only perform subjective evaluation here.

Fig. 14. Images formed by the InIm-LFD prototype, taken by a camera. In (b) and (c), the upper plot corresponds to aberrations without correction, and the lower plot denotes corrected images.

5. Discussion

In Sec. 2.5, we mentioned that the corrected image is always slightly more blurred than normal vision, even in the simulator where all conditions are ideal. For example, in Fig. 10 (the AST case), although our optimization pushed the sampling rays to be distributed within ±0.01 arcmin, the voxel’s corrected retinal image is still slightly expanded compared with normal vision. This comes from the fact that the retinal image is synthesized by light beams with a certain width rather than by ideal sampling rays, as Fig. 15(a) shows. Each beam is aberrated by the eye’s refractive errors, thus limiting the synthesized image’s quality.

Fig. 15. (a) Narrow beams through lenslets in an InIm-LFD; (b) a wide beam through the entire eye pupil in the normal view. Spot diagrams produced by the HOA are given.

Nevertheless, as long as the beams are not too wide, near-perfect correction can still be achieved, as explained in Fig. 15. The normal view in Fig. 15(b) uses a wide beam through the entire eye pupil, with a severely degraded HOA-induced spot diagram. In contrast, the unique principle of InIm-LFDs allows using multiple narrow beams with better spot diagrams, as the spot size is proportional to the beam width. Thus, the InIm-LFD’s image quality is dominated by the narrow beams’ width as long as the beams can be well concentrated on the retina. Note that this discussion implies that a MLA with a smaller aperture could improve the corrected image’s quality; nevertheless, as discussed in [42], a smaller aperture also produces stronger diffraction, which may instead deteriorate the image quality.

We would like to discuss aberration correction further. Generally, image formation means a spherical wavefront converging at an image point. When aberrations are encountered, light must be modulated to recover the spherical wavefront [43,44]. Since the LFD works by segmenting the wavefront through sampling rays (namely, pupil segmentation), we manipulate beamlets, i.e., rotate wavefront segments so that they are all directed to the image point, as Fig. 16(a) shows. Although each wavefront segment is still aberrated, the aberration of each beamlet is only local, keeping the overall aberration within a reasonable degree. Incidentally, the curvature of each wavefront segment is determined by the MLA and cannot be adjusted, which causes the intrinsically shallow depth of field of LFDs [45].

Fig. 16. Strategies of modulating light for aberration correction. (a) The LFD rotates wavefront segments by manipulating sampling rays; (b) the holographic display reconstructs a spherical wavefront using a discrete phase profile.

Recently, as the counterpart of LFDs, the light field microscope (LFM) has also achieved computational aberration correction [46,47], proposed as digital adaptive optics (DAO). The computational correction in LFMs and LFDs both stems from the fact that spatio-angular information is recorded/presented; thus, the angular information can be adjusted to recover a spherical wavefront. However, besides our study being aimed at display rather than imaging applications, we believe there are substantial differences between our research and the LFM works. First, the aberration-corrected LFMs place a MLA at the native image plane of the objective, so a lenslet collects information from different angles. In comparison, a LFD must place the MLA at its pupil; hence, an elemental image recording spatial information must be shown under each lenslet. This difference makes the computational correction flow (i.e., postprocessing in LFMs versus preprocessing in LFDs) quite different. Secondly, the LFDs in this study are placed near the eyes, so the eye pupil necessarily deviates from the display system’s pupil. Because the aberration in such near-eye systems is defined on the eye pupil, we use the simulator-based optimization to address the pupil non-coincidence, which does not need to be considered in LFMs.

In addition to DAO, LFMs widely perform PSF-based deconvolution [43,44,46,47]. In comparison, our study corresponds to the compensation stage of DAO. Therefore, once the PSF is derived, deconvolution can also be applied to our vision-correcting LFDs to further improve image quality [48], which is a future plan of this work.

On the other hand, the holographic display adopts a different strategy that directly tailors the wavefront through a SLM to recover a spherical wavefront. However, the pixel size of current SLMs (several microns) produces a discrete phase profile, as Fig. 16(b) shows. In contrast, the wavefront segment in LFDs has a fully continuous phase because it is created by a lens with a continuous profile. Thus, the ability of holographic displays to modulate light for aberration correction is notably limited by the pixel size of SLMs.

From an optics point of view, it is not easy to say whether the LFD or the holographic display is better at vision correction. Nevertheless, computational complexity is another practical concern. Our image rendering method for LFDs supports real-time operation on entry-level hardware and, more importantly, is unaffected by whether vision correction is performed. Therefore, our LFD-based solution should be the more practical one in this respect.

To conclude the Discussion, Table 2 compares the proposed vision-correcting LFD with existing solutions. As discussed above, its DoF in light modulation benefits from separately controlling every sampling ray, comparable with holographic displays but lower than continuous-profile freeform optics; AST and HOAs can accordingly be corrected. Its computational nature makes our solution surpass hardware-based ones in prescription flexibility, volume, and hardware complexity. Furthermore, our voxel-based EIA rendering method addresses the high computational complexity. Therefore, we believe the proposed solution could be a comprehensive option for next-generation true-3D VR/AR.

Table 2. Comparison of vision correction solutions for NEDs

6. Conclusions

Regarding the requirement for vision-correcting NEDs, varifocal optics and customized freeform optics cannot simultaneously achieve AST and HOA correction, high prescription flexibility, and low hardware complexity. Moreover, current LFD- and holographic-display-based solutions significantly increase the computational complexity.

This study proposed a new LFD-based solution that precisely manipulates sampling rays, since the sampling ray is the minimum controllable unit in an LFD. In addition to a preliminary calculation based on a wavefront map, we adopted a simulator to accurately investigate the sampling rays’ propagation from the display to the retina. With the aid of the simulator, the retinal distribution of the sampling rays was optimized for a high degree of focusing; as a result, images closely approximating normal vision were achieved. More importantly, a new EIA generation method compatible with our vision correction method was proposed, with a rendering speed as fast as 10 ms per frame on an entry-level PC. Finally, we performed a proof-of-concept experimental verification using a camera with a customized lens attached to imitate HOA or AST. Despite unavoidable discrepancies between the experiment and the ideal simulator, the captured images were significantly recovered toward normal vision.

Funding

National Key Research and Development Program of China (2021YFB2802300); Natural Science Foundation of Guangdong Province (2021A1515011449); General Project of Basic and Applied Foundation of Guangzhou City (202102080234).

Acknowledgments

Portions of this work were presented at the Frontiers in Optics in 2022, JW4B-51 [40] and JW5B-50 [41].

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Xiong, E.-L. Hsiang, Z. He, T. Zhan, and S.-T. Wu, “Augmented reality and virtual reality displays: emerging technologies and future perspectives,” Light: Sci. Appl. 10(1), 216 (2021). [CrossRef]  

2. World Health Organization, World report on vision: Executive Summary, No. WHO/NMH/NVI/19.12 (World Health Organization, 2019).

3. N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, and G. Wetzstein, “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays,” Proc. Natl. Acad. Sci. U. S. A. 114(9), 2183–2188 (2017). [CrossRef]  

4. X. Xia, Y. Guan, A. State, P. Chakravarthula, K. Rathinavel, T.-J. Cham, and H. Fuchs, “Towards a switchable AR/VR near-eye display with accommodation-vergence and eyeglass prescription support,” IEEE Trans. Visual. Comput. Graphics 25(11), 3114–3124 (2019). [CrossRef]  

5. P. Chakravarthula, D. Dunn, K. Akşit, and H. Fuchs, “FocusAR: Auto-focus augmented reality eyeglasses for both real world and virtual imagery,” IEEE Trans. Visual. Comput. Graphics 24(11), 2906–2916 (2018). [CrossRef]  

6. D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Akşit, P. Didyk, K. Myszkowski, D. Luebke, and H. Fuchs, “Wide field of view varifocal near-eye display using see-through deformable membrane mirrors,” IEEE Trans. Visual. Comput. Graphics 23(4), 1322–1331 (2017). [CrossRef]  

7. K. Yin, E.-L. Hsiang, J. Zou, Y. Li, Z. Yang, Q. Yang, P.-C. Lai, C.-L. Lin, and S.-T. Wu, “Advanced liquid crystal devices for augmented reality and virtual reality displays: principles and applications,” Light: Sci. Appl. 11(1), 161 (2022). [CrossRef]  

8. Y.-H. Lin, T.-W. Huang, H.-H. Huang, and Y.-J. Wang, “Liquid crystal lens set in augmented reality systems and virtual reality systems for rapidly varifocal images and vision correction,” Opt. Express 30(13), 22768–22778 (2022). [CrossRef]  

9. A. Jamali, D. Bryant, A. K. Bhowmick, and P. J. Bos, “Large area liquid crystal lenses for correction of presbyopia,” Opt. Express 28(23), 33982–33993 (2020). [CrossRef]  

10. A. Wilson and H. Hua, “Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses,” Opt. Express 27(11), 15627–15637 (2019). [CrossRef]  

11. S. Chen, J. Lin, Z. He, Y. Li, Y. Su, and S.-T. Wu, “Planar Alvarez tunable lens based on polymetric liquid crystal Pancharatnam-Berry optical elements,” Opt. Express 30(19), 34655–34664 (2022). [CrossRef]  

12. N. Hasan, A. Banerjee, H. Kim, and C. H. Mastrangelo, “Tunable-focus lens for adaptive eyeglasses,” Opt. Express 25(2), 1221–1233 (2017). [CrossRef]  

13. J.-Y. Wu and J. Kim, “Prescription AR: a fully-customized prescription-embedded augmented reality display,” Opt. Express 28(5), 6225–6241 (2020). [CrossRef]  

14. D. Cheng, J. Duan, H. Chen, H. Wang, D. Li, Q. Wang, Q. Hou, T. Yang, W. Hou, D. Wang, X. Chi, B. Jiang, and Y. Wang, “Freeform OST-HMD system with large exit pupil diameter and vision correction capability,” Photonics Res. 10(1), 21–32 (2022). [CrossRef]  

15. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, “Neural holography with camera-in-the-loop training,” ACM Trans. Graph. 39(6), 1–14 (2020). [CrossRef]  

16. Z. Qin, P.-Y. Chou, J.-Y. Wu, Y.-T. Chen, C.-T. Huang, N. Balram, and Y.-P. Huang, “Image formation modeling and analysis of near-eye light field displays,” J. Soc. Inf. Disp. 27(4), 238–250 (2019). [CrossRef]  

17. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

18. B. Javidi, A. Carnicer, J. Arai, T. Fujii, H. Hua, H. Liao, M. Martínez-Corral, F. Pla, A. Stern, L. Waller, Q.-H. Wang, G. Wetzstein, M. Yamaguchi, and H. Yamamoto, “Roadmap on 3D integral imaging: sensing, processing, and display,” Opt. Express 28(22), 32266–32293 (2020). [CrossRef]  

19. Y. Itoh, T. Langlotz, S. Zollmann, D. Iwai, K. Kiyoshi, and T. Amano, “Computational phase-modulated eyeglasses,” IEEE Trans. Visual. Comput. Graphics 27(3), 1916–1928 (2021). [CrossRef]  

20. D. Kim, S.-W. Nam, K. Bang, B. Lee, S. Lee, Y. Jeong, J.-M. Seo, and B. Lee, “Vision-correcting holographic display: evaluation of aberration correcting hologram,” Biomed. Opt. Express 12(8), 5179–5195 (2021). [CrossRef]  

21. L. Shi, B. Li, and W. Matusik, “End-to-end learning of 3D phase-only holograms for holographic display,” Light: Sci. Appl. 11(1), 247 (2022). [CrossRef]  

22. Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic Maxwellian-view display,” Opt. Express 26(18), 22985–22999 (2018). [CrossRef]  

23. V. F. Pamplona, M. M. Oliveira, D. G. Aliaga, and R. Raskar, “Tailored displays to compensate for visual aberrations,” ACM Trans. Graph. 31(4), 1–12 (2012). [CrossRef]  

24. F.-C. Huang, G. Wetzstein, B. A. Barsky, and R. Raskar, “Eyeglasses-free display: towards correcting visual aberrations with computational light field displays,” ACM Trans. Graph. 33(4), 1–12 (2014). [CrossRef]  

25. F.-C. Huang, D. Lanman, B. A. Barsky, and R. Raskar, “Correcting for optical aberrations using multilayer displays,” ACM Trans. Graph. 31(6), 1–12 (2012). [CrossRef]  

26. Z. Li, K. Zhu, X. Huang, J. Zhao, and K. Xu, “All silicon microdisplay fabricated utilizing 0.18 μm CMOS-IC with monolithic integration,” IEEE Photonics J. 14(2), 1–5 (2022). [CrossRef]  

27. Y. Igarishi, H. Murata, and M. Ueda, “3D display system using a computer-generated integral photograph,” Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978). [CrossRef]  

28. R. Yang, X. Huang, and S. Chen, “Efficient rendering of integral images,” SIGGRAPH ’05, Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques, 44 (2005).

29. G. Chen, C. Ma, Z. Fan, X. Cui, and H. Liao, “Real-time lens based rendering algorithm for super-multiview integral photography without image resampling,” IEEE Trans. Visual. Comput. Graphics 24(9), 2600–2609 (2018). [CrossRef]  

30. D. Chen, X. Sang, P. Wang, X. Yu, X. Gao, B. Yan, H. Wang, S. Qi, and X. Ye, “Virtual view synthesis for 3D light-field display based on scene tower blending,” Opt. Express 29(5), 7866–7884 (2021). [CrossRef]  

31. H. Li, S. Wang, Y. Zhao, J. Wei, and M. Piao, “Large-scale elemental image array generation in integral imaging based on scale invariant feature transform and discrete viewpoint acquisition,” Displays 69, 102025 (2021). [CrossRef]  

32. J. Schwiegerling, Field Guide to Visual and Ophthalmic Optics (SPIE, 2004), p. 16.

33. A. G. Leal-Junior, A. Theodosiou, R. Min, J. Casas, C. R. Díaz, W. M. Dos Santos, M. José Pontes, A. A. G. Siqueira, C. Marques, K. Kalli, and A. Frizera, “Quasi-distributed torque and displacement sensing on a series elastic actuator’s spring using FBG arrays inscribed in CYTOP fibers,” IEEE Sens. J. 19(11), 4054–4061 (2019). [CrossRef]  

34. P. M. Prieto, F. Vargas-Martín, S. Goelz, and P. Artal, “Analysis of the performance of the Hartmann-Shack sensor in the human eye,” J. Opt. Soc. Am. A 17(8), 1388–1398 (2000). [CrossRef]

35. J. Restrepo, P. J. Stoerck, and I. Ihrke, “Ray and wave aberrations revisited: a Huygens-like construction yields exact relations,” J. Opt. Soc. Am. A 33(2), 160–171 (2016). [CrossRef]  

36. Z. Qin, P.-Y. Chou, J.-Y. Wu, C.-T. Huang, and Y.-P. Huang, “Resolution-enhanced light field displays by recombining subpixels across elemental images,” Opt. Lett. 44(10), 2438–2441 (2019). [CrossRef]  

37. Z. Qin, J.-Y. Wu, P.-Y. Chou, Y.-T. Chen, C.-T. Huang, N. Balram, and Y.-P. Huang, “Revelation and addressing of accommodation shifts in microlens array-based 3D near-eye light field displays,” Opt. Lett. 45(1), 228–231 (2020). [CrossRef]  

38. Z. Qin, Y. Zhang, and B.-R. Yang, “Interaction between sampled rays’ defocusing and number on accommodative response in integral imaging near-eye light field displays,” Opt. Express 29(5), 7342–7360 (2021). [CrossRef]  

39. K. Xu, “Silicon electro-optic micro-modulator fabricated in standard CMOS technology as components for all silicon monolithic integrated optoelectronic systems,” J. Micromech. Microeng. 31(5), 054001 (2021). [CrossRef]  

40. Y. Cheng, J. Dong, B.-R. Yang, and Z. Qin, “Fast rendering method for computer-generated integral imaging light field displays,” in Frontiers in Optics (Optica Publishing Group, 2022), JW4B-51.

41. Y. Qiu, Y. Cheng, B.-R. Yang, and Z. Qin, “Computational vision-correcting light field displays with fast image generation,” in Frontiers in Optics (Optica Publishing Group, 2022), JW5B-50.

42. H. Huang and H. Hua, “Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays,” Opt. Express 27(18), 25154–25171 (2019). [CrossRef]  

43. K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014). [CrossRef]  

44. T. Li, T.-M. Fu, K. K. L. Wong, H. Li, Q. Xie, D. J. Luginbuhl, M. J. Wagner, E. Betzig, and L. Luo, “Cellular bases of olfactory circuit assembly revealed by systematic time-lapse imaging,” Cell 184(20), 5107–5121.e14 (2021). [CrossRef]  

45. M. Yamaguchi, “Light-field and holographic three-dimensional displays,” J. Opt. Soc. Am. A 33(12), 2348–2364 (2016). [CrossRef]  

46. J. Wu, Z. Lu, D. Jiang, et al., “Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale,” Cell 184(12), 3318–3332.e17 (2021). [CrossRef]  

47. J. Wu, Y. Guo, C. Deng, A. Zhang, H. Qiao, Z. Lu, J. Xie, L. Fang, and Q. Dai, “An integrated imaging sensor for aberration-corrected 3D photography,” Nature 612(7938), 62–71 (2022). [CrossRef]  

48. X. Yu, H. Li, X. Sang, X. Su, X. Gao, B. Liu, D. Chen, Y. Wang, and B. Yan, “Aberration correction based on a pre-correction convolutional neural network for light-field displays,” Opt. Express 29(7), 11009–11020 (2021). [CrossRef]  
