
Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning

Open Access

Abstract

To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Imaging of ocular circulation is of major importance not only for ophthalmic diagnosis but also for the study of eye diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration [1, 2]. The most commonly used angiography methods include fluorescein angiography (FA) [1] and indocyanine green angiography (ICGA) [2]. However, these angiography methods are invasive and require dye injection into the human body, which can occasionally have adverse effects [3–6].

In recent years, optical coherence tomography (OCT) has become a popular noninvasive imaging method in ophthalmic applications [7, 8]. As an extension of OCT, en face OCT angiography (OCT-A) [9–13] provides vasculature images noninvasively and can be used to partially replace conventional invasive angiography. However, OCT-A measurements take several seconds and are thus vulnerable to the effects of eye motion. Any eye motion corrupts the en face OCT-A imaging process to produce structural gaps and distortions, which appear as motion artifacts. The most straightforward approach that can be used to correct these artifacts is to add hardware to monitor and compensate for eye motion [14–17]. However, any additional hardware may increase both the cost and the complexity of the system. Another approach is to use a redundant scanning pattern in conjunction with image post-processing [18–22].

In our previous research, we developed a Lissajous scanning optical coherence tomography method [22, 23]. In this method, the probe beam scans the retina using a Lissajous pattern, whose trajectory frequently overlaps itself. This property of the Lissajous scan is considered to provide the following advantages for motion correction. First, eye motion estimation becomes more robust because it is based on registration of short segments that overlap each other in multiple regions. Second, the motion correction success rate is improved, owing to the flexibility of the registration order. Finally, even if some data must be discarded, blank sections are unlikely to appear in the image; if some strips cannot be registered because of large-scale distortion or blinking, these strips can simply be discarded. Additionally, the sinusoidal scanning pattern used for each mechanical beam scanner is suitable for high-speed scanning. These characteristic properties of Lissajous scanning were used to develop a post-processing method that corrects eye motion artifacts. However, the method was not compatible with OCT-A imaging, because the Lissajous scan is not suited to repeating scans at the same location within a short time period.

In this paper, we present a new Lissajous scanning pattern that is compatible with OCT-A imaging, and also present a motion correction algorithm that has been tailored for this scanning pattern. A motion-free OCT-A method based on this modified Lissajous scan pattern is demonstrated. To demonstrate the effectiveness of the proposed method, we compared en face Lissajous OCT-A images with and without motion correction. To validate the ability of the proposed method to retrieve true retinal vasculature, an en face Lissajous OCT-A image is compared with a scanning laser ophthalmoscopy (SLO) image, which is regarded as an appropriate reference standard. Additionally, to evaluate the repeatability of motion artifact correction, a checkerboard image was created using two Lissajous OCT-A images that were acquired from two independent measurements of the same subject. To demonstrate the clinical utility of the proposed method, slab-projection OCT-A images of three retinal plexuses and the motion-corrected Lissajous OCT-A image acquired from a blinking case are presented.

2. Method

The OCT data are acquired using the modified Lissajous trajectory. After OCT-A processing, a motion estimation algorithm is then applied. Then, motion-corrected en face OCT-A images are obtained by transforming the Lissajous data into a Cartesian grid. An overview of the motion-correction process is shown in Fig. 1. Step-by-step descriptions of the OCT system, the modified Lissajous scan, and the signal processing are given in the following. The notations used in the following sections are listed in Appendix A (Table 1).

Fig. 1 Process diagram of OCT-A image reconstruction from Lissajous-scanned OCT signal. The boxes and circular nodes represent the processes and the data, respectively. In the red region, the OCT and OCT-A data are represented by the acquisition time sequence, i.e., the data along the original Lissajous scan trajectory. In contrast, in the blue region, these data are presented as remapped data on a Cartesian grid. The notations used in the following sections are listed in Appendix A.

2.1. System

We used a 1-µm Jones-matrix OCT (JM-OCT) system to collect the data. JM-OCT is polarization sensitive; however, only the intensity information is used in this study. The system has a depth resolution of 6.2 µm in tissue and an A-line rate of 100,000 A-lines/s. The probe power on the cornea is 1.4 mW. The system details are described in the literature [24, 25]. The macula and optic nerve head (ONH) of healthy human subjects were measured. The study protocol adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review board of the University of Tsukuba.

2.2. Modified Lissajous scanning pattern

In our previous research, we used a standard Lissajous scan to obtain a motion-free OCT intensity volume [22]. The probe beam scanning trajectory in the laboratory coordinate system is described as follows:

$$x(t) = A_x \cos(2\pi t/T_x) \tag{1}$$
$$y(t) = A_y \cos(2\pi t/T_y), \tag{2}$$
where x and y are the orthogonal lateral positions of the probe beam, Ax and Ay are the scan amplitudes, and Tx and Ty are the periods of the x- and y-scans, respectively. From Ref. [26], the ratio of the coordinate periods is defined as Ty/Tx = 2N/(2N − 1), where the parameter N represents the number of y-scan cycles that are used to fill the scan area. Although it has been successfully applied to structural OCT imaging, this scanning pattern is not compatible with OCT-A. We have therefore modified it as follows.
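For reference, the following minimal Python sketch generates the unmodified trajectory of Eqs. (1) and (2); the parameter values are only illustrative, chosen to be consistent with the configuration described later (TA = 10 µs, Lx = 724, N = Lx/4 = 181), and this is not the authors' implementation.

```python
import numpy as np

def standard_lissajous(A_x, A_y, T_x, N, dt):
    """Standard Lissajous trajectory, Eqs. (1) and (2).

    The y-scan period follows from Ty/Tx = 2N/(2N - 1), so that N y-scan
    cycles fill the scan area.
    """
    T_y = T_x * 2 * N / (2 * N - 1)
    t = np.arange(0.0, N * T_y, dt)          # N y-scan cycles
    x = A_x * np.cos(2 * np.pi * t / T_x)
    y = A_y * np.cos(2 * np.pi * t / T_y)
    return x, y

# Illustrative values: 1.5 mm amplitude, Tx = 724 A-lines x 10 us = 7.24 ms, N = 181.
x, y = standard_lissajous(A_x=1.5, A_y=1.5, T_x=7.24e-3, N=181, dt=1e-5)
```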

OCT-A measurements require the OCT probe to scan the same location multiple times with an appropriate time separation. However, each cycle of x(t) and y(t) fails to form a closed scanning pattern because Tx ≠ Ty; specifically, the start and end positions of each cycle are not the same. If these cycles are repeated multiple times, this discontinuity leads to mechanical ringing of the scanner and finally causes decorrelation in the acquired OCT signals. To mitigate this effect, we add a time margin between each scan cycle and its subsequent repeat. Each scan cycle is delimited by the x-scan period Tx, as indicated by the green curve in Fig. 2. The margin is then filled using a smooth trajectory with low acceleration (red curve). Hereafter, we call such a set of scan cycles, which traces the green trajectory, a “repeat-cycle-set.”

Fig. 2 Example set of repeating scans in the modified Lissajous scanning pattern. The probe beam scans the green trajectory multiple times and the scan set is called the “repeat-cycle-set.” The red line indicates the trajectory that connects the repeating cycles, and is called the “margin.”

This modified scanning pattern is then described as

$$x'(t) = A_x \cos[2\pi \chi_x(t)/T_x] \tag{3}$$
$$y'(t) = A_y \cos[2\pi \chi_y(t)/T_y] \tag{4}$$
where the t parameters in Eqs. (1) and (2) are replaced by χx and χy, respectively, which are both functions of the time t and are defined as
$$\chi_x(t) = \begin{cases} t - (k - n)\Delta T, & \text{for } kT_c - n\Delta T \le t \le kT_c - n\Delta T + T_x \\ (k + 1)T_x, & \text{for } kT_c - n\Delta T + T_x < t < (k + 1)T_c - n\Delta T \text{ and } m < M - 1 \end{cases} \tag{5}$$
$$\chi_y(t) = t - (k - n)(T_c - T_y), \tag{6}$$
where Tc is the time lag between the repeated scans at the same location and ΔT is the time margin between adjacent repeated cycles; the two are related by Tc = Tx + ΔT. n is the index of the repeat-cycle-set and is given by
$$n = \left\lfloor \frac{t}{M T_c - \Delta T} \right\rfloor. \tag{7}$$
k = m + Mn is the total number of cycles counted from the beginning of the Lissajous scan and is related to t as follows:
$$k = \left\lfloor \frac{t + n \Delta T}{T_c} \right\rfloor. \tag{8}$$
M is the number of repeated cycles included in a repeat-cycle-set, while m is the repeated cycle count within the set, determined by m = k − Mn.
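This time warping can be summarized in code. The sketch below is a minimal Python illustration (not the authors' implementation) of the indices of Eqs. (7) and (8) and the warped times of Eqs. (5) and (6):

```python
def cycle_indices(t, T_x, dT, M):
    """Indices at time t: repeat-cycle-set index n (Eq. 7), total cycle count k
    (Eq. 8), and the repeat index m = k - M*n within the current set."""
    T_c = T_x + dT                        # time lag between repeated scans
    n = int(t // (M * T_c - dT))          # one set lasts M*Tc - dT (no margin after the last repeat)
    k = int((t + n * dT) // T_c)
    m = k - M * n
    return n, k, m

def chi_x(t, T_x, dT, M):
    """Warped time of Eq. (5): advances during a scan cycle, held constant in the margin."""
    T_c = T_x + dT
    n, k, m = cycle_indices(t, T_x, dT, M)
    in_cycle = t <= k * T_c - n * dT + T_x
    return t - (k - n) * dT if (in_cycle or m == M - 1) else (k + 1) * T_x

def chi_y(t, T_x, T_y, dT, M):
    """Warped time of Eq. (6)."""
    T_c = T_x + dT
    n, k, m = cycle_indices(t, T_x, dT, M)
    return t - (k - n) * (T_c - T_y)
```

Substituting these warped times into Eqs. (3) and (4) reproduces the repeat-cycle-sets illustrated in Fig. 2.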

The profiles of the phases of the cosine functions, 2πχx(t)/Tx and 2πχy(t)/Ty, are shown in Fig. 3. At the margin between the repeats (i.e., the red regions), χx has a small flat region at which the x-scan stops. The phase of x′(t) is a multiple of 2π at the margin and is thus in the zero-slope (flat) region of the cosine function. This ensures that the resulting x-scan is smooth.

Fig. 3 Temporal profiles of the phase of the modified Lissajous scanning pattern, where the number of repeats (M) is 4. Red regions indicate the scan margin.

Similar to our previous work [22], the x-scan period is set to be a multiple of the A-line period TA as Tx = LxTA, where Lx is the number of A-lines within a single x-scan period. To scan the same location, the time margin ΔT must be a multiple of the acquisition period as follows:

$$\Delta T = \delta T_A, \tag{9}$$
where δ is an arbitrary integer. A practical example of the selection of δ is described in the second-to-last paragraph of this section. The time margin should also be close to the difference between the periods of the x- and y-scans, i.e., ΔT ≈ Ty − Tx = Tx/(2N − 1). This is necessary to avoid abrupt changes in the scanning location of the y-scan.

Acquisition of the A-lines is performed during the modified Lissajous scan described above at an acquisition rate of 1/TA. However, because acquisition is not performed during the time margin, the duty cycle is MLx/[M(Lx + δ) − δ].

The location of the i-th A-line (i = 0, 1, 2, ⋯) is then given by

$$x'_i = A_x \cos[2\pi \chi_{x,i}/T_x] \tag{10}$$
$$y'_i = A_y \cos[2\pi \chi_{y,i}/T_y] \tag{11}$$
where
$$\chi_{x,i} = i T_A \tag{12}$$
$$\chi_{y,i} = i T_A + (k - n)(T_y - T_x). \tag{13}$$
By substituting i = l + Lx(m + Mn) into these equations, where l = 0, 1, 2, ⋯, Lx − 1, the location of the i-th line can be expressed using the indexes (l, m, n) as
$$x'_i = A_x \cos[2\pi(l/L_x + m + Mn)], \tag{14}$$
$$y'_i = A_y \cos[2\pi\{(l + L_x n) T_A/T_y + m + (M - 1)n\}]. \tag{15}$$
It is evident that both x′i and y′i have the same value for each of the repeats, i.e., for any value of m. Therefore, the line acquisitions are performed at the same laboratory coordinate positions for the repeats.

We set the same Lissajous scanning parameter that was used in our previous study, i.e., N = Lx/4 (see Eq. (8) of Ref. [22]). Therefore, the difference between the periods of the x- and y-scans is Ty − Tx = 2TA/(1 − 2/Lx). For large Lx, this difference is approximately two A-line periods. We therefore set the time margin to be the time required to scan two A-lines, i.e., δ ≡ 2. The discrepancy between the set time margin 2TA and the ideal time margin Ty − Tx is 4TA/(Lx − 2). Here, the maximum spatial gap in the y-scanning pattern becomes Δymax = Ay sin(8π/Lx²). Because Lx usually has a value of several hundred, the maximum spatial gap Δymax is approximately 8πAy/Lx². In this study, this gap is less than 1 µm and thus does not cause mechanical ringing. The duty cycle in the configuration presented here is 99.86% (M = 2, δ = 2, and Lx = 724).
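A short Python sketch (our illustration, not the authors' code) of the A-line positions of Eqs. (14) and (15) and of the duty-cycle value quoted above:

```python
import numpy as np

def aline_positions(L_x, M, n_sets, A_x, A_y):
    """A-line positions on the laboratory coordinate, Eqs. (14) and (15),
    for all (l, m, n), with N = L_x / 4 so that T_A / T_y = (2N - 1) / (2N * L_x)."""
    N = L_x / 4.0
    TA_over_Ty = (2 * N - 1) / (2 * N * L_x)
    l, m, n = np.meshgrid(np.arange(L_x), np.arange(M), np.arange(n_sets),
                          indexing="ij")
    x = A_x * np.cos(2 * np.pi * (l / L_x + m + M * n))               # Eq. (14)
    y = A_y * np.cos(2 * np.pi * ((l + L_x * n) * TA_over_Ty          # Eq. (15)
                                  + m + (M - 1) * n))
    return x, y   # independent of m: the repeats revisit the same positions

# Duty cycle for M = 2, delta = 2, L_x = 724 (the configuration of Section 2.5):
M, delta, L_x = 2, 2, 724
print(M * L_x / (M * (L_x + delta) - delta))   # 0.99862... i.e. ~99.86 %
```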

2.3. Motion-corrected OCT-A

2.3.1. OCT-A processing

Eye motion is detected and corrected using an en face OCT-A image. In our study, the OCT-A signal is obtained by computing the complex decorrelation among multiple repeated scans using Makita’s noise correction. The details of this OCT-A process are described in Ref. [10], so we simply summarize the essential aspects of the computation process here. To compute the temporal correlation for this OCT-A process, we first express a pair of OCT signals as a vertical vector gn as follows:

$$g_n(T_c; l, z, m, p) = \begin{bmatrix} g(l, n, z, m, p) \\ g(l, n, z, m+1, p) \end{bmatrix}, \tag{16}$$
where g(l, n, z, m, p) is the complex OCT signal at the z-th pixel along the depth of the i-th acquisition line (the l-th line of the m-th repeated cycle in the n-th repeat-cycle-set) of the p-th polarization channel of the JM-OCT. From Section 4.2 of Ref. [10], the correlation signal for OCT-A, rn(Tc; l, z), with time lag Tc is computed using gn(Tc; l, z, m, p). We first substitute gn(Tc; l, z, m, p) for g(τ; x, z, f, p) in Eq. (30) of Ref. [10] and then compute Eq. (34). The value of this equation (denoted r̄SM in the notation of Ref. [10]) is the noise-corrected correlation, which is denoted by rn(Tc; l, z) in this paper.

An en face OCT-A image is then used for the subsequent motion estimation and correction processes. For this purpose, an en face OCT-A image was created from a whole volumetric OCT-A in the manner described in Section 5.2 of Ref. [10]. In short, the en face OCT-A signal is given by

$$E_{l,n} = \sum_z \left\{1 - M[r_n(T_c; l, z)]\right\}, \tag{17}$$
where
$$M[a] = \begin{cases} 1, & \text{if } a > 1 \\ a, & \text{otherwise.} \end{cases} \tag{18}$$
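A minimal Python sketch of Eqs. (17) and (18), assuming that the correlation volume rn(Tc; l, z) of Ref. [10] has already been computed (its noise-corrected estimation is not reproduced here) and that M[·] simply caps the correlation at unity so that the decorrelation 1 − rn stays non-negative:

```python
import numpy as np

def enface_octa(r):
    """En face OCT-A signal E_{l,n} of Eq. (17).

    r : noise-corrected correlation r_n(T_c; l, z), array of shape
        (L_x, n_sets, n_depth); its computation follows Ref. [10].
    """
    r_clipped = np.minimum(r, 1.0)             # Eq. (18): M[a] caps the correlation at 1
    return np.sum(1.0 - r_clipped, axis=-1)    # sum the decorrelation over depth
```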

2.3.2. Pre-processing

Rapid eye motion can lead to decorrelation artifacts and structural discontinuities in en face OCT-A images. To discard the repeat-cycle-sets that exhibit these artifacts, we detect the repeat-cycle-sets that are dominated by high OCT-A signals (high decorrelation) and/or show low structural correlation with successive sets.

These high-decorrelation repeat-cycle-sets are detected using the mean OCT-A value of each repeat-cycle-set, given by Ēn = (1/Lx) Σl El,n, where n is the index of the repeat-cycle-set. The mean OCT-A value of the entire volume, Ē, is also computed. If the mean OCT-A value of a repeat-cycle-set is greater than 1.2 times the mean of the entire volume, i.e., if Ēn > 1.2 Ē, then the repeat-cycle-set is considered to be affected by rapid eye motion and is discarded.

The cross-correlation values of the en face OCT-A between adjacent repeat-cycle-sets are calculated as ρn = (1/2)(cor[El,n−1, El,n] + cor[El,n, El,n+1]), where cor denotes the correlation operation. If the cross-correlation value for a repeat-cycle-set is less than 0.8 times the mean of all cross-correlation values, i.e., if ρn < 0.8 ρ̄, then the repeat-cycle-set is considered to have low structural correlation and is discarded.
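The two rejection rules can be sketched as follows (a simple Python illustration; the handling of the first and last repeat-cycle-sets, which have only one neighbor, is our assumption):

```python
import numpy as np

def detect_corrupted_sets(E, decorr_factor=1.2, xcorr_factor=0.8):
    """Flag repeat-cycle-sets affected by rapid eye motion (Section 2.3.2).

    E : en face OCT-A signal of shape (L_x, n_sets), E[l, n] = E_{l,n}.
    Returns a boolean array (True = discard the set).
    """
    # Rule 1: the set mean exceeds 1.2x the whole-volume mean.
    high_decorr = E.mean(axis=0) > decorr_factor * E.mean()

    # Rule 2: structural correlation with the neighboring sets is low.
    n_sets = E.shape[1]
    rho = np.full(n_sets, np.nan)
    for n in range(n_sets):
        neighbors = []
        if n > 0:
            neighbors.append(np.corrcoef(E[:, n - 1], E[:, n])[0, 1])
        if n < n_sets - 1:
            neighbors.append(np.corrcoef(E[:, n], E[:, n + 1])[0, 1])
        rho[n] = np.mean(neighbors)
    low_corr = rho < xcorr_factor * np.nanmean(rho)

    return high_decorr | low_corr
```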

Figure 4 shows an example of en face OCT-A images before [Fig. 4(a)] and after [Fig. 4(b)] artifact removal. The vertical and horizontal directions in these images are used as the indexes of the repeat-cycle-set and the A-line in the repeat-cycle-set, respectively. While several horizontal white (highly decorrelated) lines are visible in Fig. 4(a), these lines can be removed using the method described above. In Fig. 4(b), these discarded lines are shown in black.

Fig. 4 Example of OCT-A map (a) before and (b) after rapid eye motion artifact removal. The vertical and horizontal directions in the images represent the indexes of the repeat-cycle-set and the A-line in the repeat-cycle-set, respectively.

After the artifacts caused by rapid eye motion are discarded, we divide the en face OCT-A image into strips using two rules. The first rule is that each strip must be free from rapid eye motion. The second rule is that the maximum acquisition time for each strip is set to be 0.2 s. Therefore, the en face projection is divided into strips that consist of continuous sets after the discarded sets have been removed. Strips that are longer than 0.2 s are then further divided. The resulting strips are then used to detect and correct the motion artifacts that are caused by eye motion, and this process will be described in the next section (Section 2.3.3).
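A sketch of this strip division is given below (our illustration; the duration of one repeat-cycle-set, roughly MTc − ΔT ≈ 14.5 ms for the configuration in Section 2.5, is passed in explicitly):

```python
def divide_into_strips(keep_mask, set_duration, max_strip_time=0.2):
    """Divide the surviving repeat-cycle-sets into strips (Section 2.3.2):
    each strip is free of rapid eye motion and spans at most 0.2 s.

    keep_mask    : boolean sequence, True where the set was not discarded.
    set_duration : acquisition time of one repeat-cycle-set in seconds.
    Returns a list of strips, each a list of repeat-cycle-set indices.
    """
    max_sets = max(1, int(max_strip_time // set_duration))
    strips, current = [], []
    for n, keep in enumerate(keep_mask):
        if not keep or len(current) == max_sets:
            if current:
                strips.append(current)   # close the current strip
            current = []
        if keep:
            current.append(n)
    if current:
        strips.append(current)
    return strips
```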

2.3.3. Motion estimation

We perform lateral motion estimation as previously described in Section 2.2.2 of Ref. [22]. In the first step of our motion estimation process, the strips created in the previous section (Section 2.3.2) are remapped onto a Cartesian grid with Lx/2 × Lx/2 grid points. The strips are then sorted by size from large to small and registered sequentially to a reference en face image. For the first strip, the reference image is the strip itself, so no registration is actually performed. For the second strip, the reference image is the first strip. The registered strip is then merged with the reference, and this merged image is used as the new reference for the next registration. This registration sequence is iterated several times.
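A simplified sketch of this registration sequence is shown below. It assumes each strip has already been remapped to the common Cartesian grid with NaN at blank grid points, and it uses a plain cross-correlation peak as a stand-in for the rigid registration of Ref. [22]:

```python
import numpy as np

def estimate_shift(strip, reference):
    """Lateral displacement of `strip` relative to `reference` from the
    cross-correlation peak; blank (NaN) pixels are treated as zeros."""
    a = np.nan_to_num(strip - np.nanmean(strip))
    b = np.nan_to_num(reference - np.nanmean(reference))
    xc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.array(np.unravel_index(np.argmax(xc), xc.shape))
    shape = np.array(strip.shape)
    return (peak + shape // 2) % shape - shape // 2      # wrap to a signed shift

def sequential_registration(strips):
    """First motion-estimation step (Section 2.3.3): register the strips one by
    one, from the largest to the smallest, merging each into the reference."""
    order = sorted(range(len(strips)),
                   key=lambda i: np.count_nonzero(~np.isnan(strips[i])),
                   reverse=True)
    shifts = [np.zeros(2, dtype=int) for _ in strips]
    reference = strips[order[0]].copy()                  # largest strip: its own reference
    for i in order[1:]:
        shifts[i] = estimate_shift(strips[i], reference)
        aligned = np.roll(strips[i], tuple(-shifts[i]), axis=(0, 1))
        blank = np.isnan(reference)
        reference[blank] = aligned[blank]                # merge into the new reference
    return shifts, reference
```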

The motion correction process is identical to that described in Ref. [22], with two exceptions. The first exception is that the method presented here uses the en face OCT-A signal, whereas the previous method used the en face OCT intensity. The second exception is that the resulting motion-corrected OCT-A is evaluated using the following criterion, whereas it was accepted without evaluation in the previous method. Here, if there are more than 2Lx blank grid points to which no OCT-A value has been assigned, the motion estimation result is discarded. In this case, a subordinate strip is selected as the initial reference strip and the lateral registration process is redone.

In the second step, a single strip is divided into four quadrants. Each of these quadrants is then registered to a reference image that is formed by all strips other than the target strip. After iteration of the registration process, the quantity of motion is obtained for each acquisition point. This lateral motion estimation process is described in more detail in Section 2.2.2 of Ref. [22].

Finally, the OCT-A signals are re-transformed into the Cartesian grid. The re-transformation process takes the estimated motion into account to cancel out the motion artifacts. A motion-corrected en face OCT-A image is then obtained.

2.4. Image formation

2.4.1. Slab en face OCT-A formation

The OCT intensity volume, I(l, d, n), is obtained using the sensitivity-enhanced scattering OCT method (Eq. (30), Section 3.8 of Ref. [24]), which combines a set of the scans from a repeat-cycle-set with phase correction. The retinal layers were identified from this intensity volume using the Iowa Reference Algorithms (Retinal Image Analysis Lab, Iowa Institute for Biomedical Imaging, Iowa City, IA) [27–29]. The segmentation process used here was then applied to the cross-sectional intensity image of each of the repeat-cycle-sets.

After layer segmentation, the en face OCT-A signal for each A-scan at a target layer is obtained by selecting the minimum correlation coefficient along the depth within the target layer. Motion-corrected slab en face OCT-A images are then obtained by correcting the motion using the quantities of motion that were estimated in Section 2.3.3.
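A brief sketch of this slab projection (our illustration; the layer boundaries are assumed to be available as per-A-scan depth indices from the segmentation):

```python
import numpy as np

def slab_enface_octa(r, top, bottom):
    """Slab en face OCT-A (Section 2.4.1): the minimum correlation along depth
    within the target slab is selected for each A-scan.

    r           : correlation volume r_n(T_c; l, z), shape (L_x, n_sets, n_depth).
    top, bottom : slab boundaries as depth indices per A-scan, shape (L_x, n_sets).
    """
    z = np.arange(r.shape[-1])[None, None, :]
    inside = (z >= top[..., None]) & (z < bottom[..., None])
    r_masked = np.where(inside, r, np.inf)      # ignore depths outside the slab
    min_corr = np.min(r_masked, axis=-1)        # minimum correlation within the slab
    return min_corr                             # the corresponding decorrelation is 1 - min_corr
```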

2.4.2. Image enhancement using Gabor filter

A Gabor filter is used to suppress noise and enhance the visibility of the retinal vascular network [30].

We used the real part of the Gabor filter. The kernel is formulated as follows [20]:

$$h(x, y; \theta, f, \sigma_x, \sigma_y) = \frac{1}{2\pi\sigma_x\sigma_y} e^{-\frac{1}{2}\left(\xi^2/\sigma_x^2 + \upsilon^2/\sigma_y^2\right)} \cos(2\pi f \xi), \tag{19}$$
where
$$\xi = x\cos\theta + y\sin\theta, \tag{20}$$
$$\upsilon = -x\sin\theta + y\cos\theta. \tag{21}$$
Here, x and y are the lateral spatial coordinates, while the other parameters determine the size (σx and σy) and the direction of the kernel (θ).

We set σx = σy = σ and the kernel size (s) to be 6 × σ. Because we only detect vessels across the kernel’s center, f is set as follows so that each kernel contains only a single cosine period:

$$f = 1/s. \tag{22}$$
The Gabor kernels thus become
$$h(x, y; \theta, s) = \frac{18}{\pi s^2} e^{-18\left(\xi^2 + \upsilon^2\right)/s^2} \cos(2\pi\xi/s). \tag{23}$$

To filter the en face image, we convolve the image with Gabor kernels that have several directions (θ) and a range of kernel sizes (s) based on the visible retinal vessel sizes. The final filtered image E′(x, y) is formed by taking the maximum over the filter outputs as follows:

$$E'(x, y) = \max_{\theta, s}\left[E(x, y) * h(x, y; \theta, s)\right], \tag{24}$$
where E(x, y) is the original en face OCT-A image and * denotes convolution. The function maxx[f(x)] returns the maximum value of f(x) over all values of x.
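The following Python sketch builds the kernels of Eq. (23) and applies Eq. (24); the kernel support (half of s on each side, i.e., three σ) and the example directions and sizes in the usage comment are our assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, s):
    """Real Gabor kernel of Eq. (23), with sigma_x = sigma_y = s/6 and f = 1/s."""
    half = int(np.ceil(s / 2))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xi = x * np.cos(theta) + y * np.sin(theta)          # Eq. (20)
    ups = -x * np.sin(theta) + y * np.cos(theta)        # Eq. (21)
    return (18 / (np.pi * s ** 2)
            * np.exp(-18 * (xi ** 2 + ups ** 2) / s ** 2)
            * np.cos(2 * np.pi * xi / s))

def gabor_enhance(E, thetas, sizes):
    """Eq. (24): filter with every (theta, s) kernel and keep the maximum response."""
    responses = [fftconvolve(E, gabor_kernel(th, s), mode="same")
                 for th in thetas for s in sizes]
    return np.max(responses, axis=0)

# Example usage with assumed parameters (directions every 15 deg, three kernel sizes):
# E_filtered = gabor_enhance(E, np.deg2rad(np.arange(0, 180, 15)), sizes=[7, 13, 25])
```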

2.4.3. Color-coded multiple plexus imaging

To show multiple vascular plexuses within a single image, we integrate the images of these plexuses in a manner similar to that of the volume rendering process and assign different colors to the different images. For the volume rendering procedure, we assume that each ray is only either reflected or transmitted. Then, the reflection coefficient (r) and the transparency (α) have the following relationship:

$$\alpha = 1 - r. \tag{25}$$
The percentage of reflection (R) from each vascular plexus can then be calculated as
$$R_j = r_j \prod_{b=1}^{j-1} \alpha_b, \tag{26}$$
where j is the index of a slab (plexus) and j = 1 is the shallowest slab.

For the volume rendered image formation, an en face OCT-A projection of a slab after background subtraction and min-max scaling is used as rj as follows:

$$r_j(x, y) \equiv \frac{E'_j(x, y) - B_j(x, y)}{\max_{x, y, j}\left[E'_j(x, y) - B_j(x, y)\right]}, \tag{27}$$
where Bj is the background of the image E′j, estimated using the rolling-ball method [31]. Subsequently, different colors are assigned to each retinal vascular plexus and a single image is created by taking the maximum red-green-blue (RGB) values of the three colored images at each pixel:
$$E(x, y) = \max_j\left[R_j(x, y)\, C_j\right], \tag{28}$$
where Cj is the color (RGB) vector assigned to the j-th slab.
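A compact Python sketch of Eqs. (25)–(28) (our illustration; clipping negative values after background subtraction to zero is an added assumption):

```python
import numpy as np

def color_coded_plexuses(E_slabs, backgrounds, colors):
    """Color-coded multi-plexus image (Section 2.4.3).

    E_slabs     : Gabor-filtered en face OCT-A images E'_j, shallowest first.
    backgrounds : background images B_j (e.g., rolling-ball estimates).
    colors      : RGB vectors C_j, e.g., [(1, 0, 0), (0, 1, 0), (0, 0, 1)].
    """
    # Eq. (27): background subtraction and scaling by the global maximum.
    diffs = [E - B for E, B in zip(E_slabs, backgrounds)]
    scale = max(np.max(d) for d in diffs)
    r = [np.clip(d / scale, 0.0, 1.0) for d in diffs]

    # Eqs. (25) and (26): alpha_j = 1 - r_j and R_j = r_j * prod_{b<j} alpha_b.
    R, transmitted = [], np.ones_like(r[0])
    for rj in r:
        R.append(rj * transmitted)
        transmitted = transmitted * (1.0 - rj)

    # Eq. (28): per-pixel maximum of the colored slab images.
    layers = [Rj[..., None] * np.asarray(Cj, float) for Rj, Cj in zip(R, colors)]
    return np.max(layers, axis=0)
```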

2.5. Scanning protocols

Transverse areas of 2 mm × 2 mm (Ax = Ay = 1 mm), 3 mm × 3 mm (Ax = Ay = 1.5 mm), and 6 mm × 6 mm (Ax = Ay = 3 mm) on the posterior segment of the eyes were scanned using the modified Lissajous scanning pattern that was described in Section 2. A single repeat-cycle-set consists of two repeated cycles (M = 2). Data are acquired during a single cycle of the modified Lissajous scan that consists of 362 repeat-cycle-sets × 2 repeats ×724 A-scans (Lx = 724), and the acquisition takes 5.25 s. The time lag between the repeated cycles, Tc, is approximately 7.26 ms. The maximum separations between the adjacent trajectories are approximately 12.3 µm, 18.4 µm, and 36.8 µm for the 2 × 2, 3 × 3, and 6 × 6 mm2 scanning areas, respectively.

3. Results

3.1. Motion correction

Figure 5 shows the whole depth en face OCT-A images without [Figs. 5(a) and (c)] and with [Figs. 5(b) and (d)] motion correction. Figures 5(a) and (b) show the ONH and Figs. 5(c) and (d) show the macula. It is evident from these figures that the blurring and the ghost vessels that are caused by sample motion are clearly resolved in the images with motion correction.

Fig. 5 Whole depth en face OCT-A images of (a), (b) the optic nerve head and (c), (d) the macula. (a) and (c) show images without motion correction, while (b) and (d) show the images with motion correction.

The estimated lateral shifts at the first step of the motion estimation (Section 2.3.3) are shown in Fig. 6. Figures 6(a) and 6(b) correspond to the ONH scan [Figs. 5(a) and 5(b)] and the macular scan [Figs. 5(c) and 5(d)], respectively. Note that the rapid eye motion that corrupted the OCT-A imaging is not shown in these plots, because it is not estimated by our method (Section 2.3.2). The plots show several fast motions and slow drifts. It is noteworthy that these slow drifts and few fast motions do not blur raster-scan images but warp them [see Fig. 13(a)]. In contrast, the Lissajous image is blurred by these motions, because our volumetric Lissajous scan samples the same region of the eye multiple times.

Fig. 6 Estimated lateral shifts at the first motion estimation step. (a) Estimated motion in ONH imaging [Figs. 5(a) and 5(b)]. (b) Estimated motion in macular imaging [Figs. 5(c) and 5(d)].

In Fig. 5(d), dot-like artifacts appear along part of the Lissajous scan trajectory. These artifacts may be caused by residual decorrelation noise. For example, if only some of the A-lines in a repeat-cycle-set show high decorrelation, that set may not be eliminated by the algorithm described in Section 2.3.2. A-lines with high decorrelation noise are then included in the final image, and artifacts such as those shown in Fig. 5(d) are considered to have occurred.

The typical processing times are approximately 240 s for lateral motion estimation and 8 s for motion correction (remapping) of a single en face OCT-A image. The processing time was studied using an Intel Core i7-6820HK CPU operating at 2.70 GHz with 16.0 GBytes of RAM, and the processing program was written in Python 2.7.13 (64-bit).

3.2. Validation of motion correction

The motion correction of the en face OCT-A image was validated using a comparison with an SLO image (Spectralis HRA+Blue Peak, Heidelberg Engineering Inc., Heidelberg, Germany). The SLO image can be used as a motion-free reference standard because of its short measurement time. The motion-corrected en face OCT-A image (internal limiting membrane (ILM) to outer plexiform layer (OPL) slab) is manually registered with the SLO image as shown in Fig. 7, in which the SLO and OCT-A images are displayed in cyan and yellow, respectively. The resulting images show good agreement between the OCT-A and SLO images.

Fig. 7 En face OCT-A image (yellow) overlaid on a scanning laser ophthalmoscope (SLO) image (cyan). Because the SLO image is regarded as a motion-free reference standard, this image demonstrates that our method provides a sufficient motion correction capability.

For quantitative analysis, SLO images and the corresponding en face OCT images were compared. Because the SLO image shows not only the vasculature but also static scattering structures, the OCT intensity image rather than the OCT-A image was used for this comparison. For three eyes, the en face OCT images were registered with the corresponding SLO images, and the root-mean-square (RMS) errors between the en face OCT and SLO images were then computed. Before the RMS errors were calculated, the images were normalized to have zero mean and unit variance. The mean RMS error decreased from 1.62 to 1.60 after motion correction. The reduction of the RMS error was also computed for each eye; the mean reduction is 0.0257 ± 0.0102 (mean ± standard deviation). The motion correction thus improved the agreement with the SLO.
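A small sketch of this metric, assuming the two images are already registered and cropped to the same grid:

```python
import numpy as np

def rms_error(a, b):
    """RMS error between two registered images after zero-mean, unit-variance
    normalization (Section 3.2)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.sqrt(np.mean((a - b) ** 2))
```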

The motion correction repeatability was also evaluated by forming a checkerboard image from two motion-corrected en face OCT-A images (ILM to OPL slab), as shown in Fig. 8. Figures 8(a) and 8(b) are both motion-corrected macular images of the same eye. Figure 8(c) shows a checkerboard image that was created from Figs. 8(a) and 8(b), and Fig. 8(d) shows magnified images of the regions that are indicated by the colored boxes in Fig. 8(c). To create the checkerboard image, the two motion-corrected en face OCT-A images were rigidly and manually registered and were then combined to form the checkerboard image. The brighter squares that are shown in Figs. 8(c) and 8(d) are from Fig. 8(a), while the darker squares are from Fig. 8(b). The checkerboard image shows the good connectivity of the blood vessels at the boundaries of these squares. This indicates the good repeatability of the motion correction method.
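A checkerboard composite of this kind can be formed as follows (a minimal sketch; the number of squares is an arbitrary choice):

```python
import numpy as np

def checkerboard(img_a, img_b, n_squares=8):
    """Alternate square tiles from two registered en face images (cf. Fig. 8)."""
    h, w = img_a.shape
    yy, xx = np.mgrid[0:h, 0:w]
    tiles = (yy * n_squares // h) + (xx * n_squares // w)
    return np.where(tiles % 2 == 0, img_a, img_b)
```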

Fig. 8 (c) Checkerboard image created from the two motion-corrected en face images that were shown in (a) and (b), where the brighter squares are from (a) and the darker squares are from (b). (d) shows a set of magnified images of (c), in which the colored boxes indicate the magnified region.

3.3. Imaging of three retinal vascular plexuses

Figure 9 shows three slab OCT-A images. Figure 9(a) shows the superficial plexus, which ranges from the ILM to 24.8 µm above the boundary between the inner plexiform layer (IPL) and the inner nuclear layer (INL). Figure 9(b) shows the intermediate capillary plexus, which is in the ± 24.8 µm range around the IPL-INL boundary. Figure 9(c) shows the deep capillary plexus, which ranges from 24.8 µm below the IPL-INL boundary to the OPL-Henle’s fiber layer (HFL) boundary. While projection artifacts do exist in the images, the fine vascular networks are visualized.

Fig. 9 En face OCT-A images of three retinal vascular plexuses. (a) Superficial plexus; from the ILM to 24.8 µm above the IPL-INL boundary. (b) Intermediate plexus; ±24.8 µm around the IPL-INL boundary. (c) Deep plexus; from 24.8 µm below the IPL-INL boundary to the OPL-HFL boundary.

In the color-coded multiple plexus image [Fig. 10(d)], we used three different colors (red, green, and blue) for the superficial [Fig. 10(a)], intermediate [Fig. 10(b)], and deep plexuses [Fig. 10(c)], respectively. The projection artifacts are not prominent when this visualization method is used and the vessel connectivity among the three plexuses can then be seen.

Fig. 10 (a)–(c) show slab OCT-A images at different depths and correspond to Figs. 9(a)–(c), respectively. These images are color-coded and are then combined as shown in (d).

3.4. Motion correction for the blinking case

One of the advantages of Lissajous OCT-A is its immunity to blinking. As with standard Lissajous OCT [22], one Lissajous OCT-A scan consists of four quadrants, and the retinal area is scanned sequentially four times. Even if a blink disturbs one of these quadrants and produces a blank region, the other quadrants fill that region.

Figure 11 shows OCT [Fig. 11(a)] and OCT-A [Fig. 11(b)] images of an example case that includes a blink, before Cartesian remapping. The vertical and horizontal directions in the images represent the indexes of the repeat-cycle-set and the A-line in that repeat-cycle-set, respectively. The blink appears as a dark horizontal region in the OCT image (indicated by red arrows) and as a white (highly decorrelated) region in the OCT-A image. This region is thus discarded at the same time as the rapid motion artifacts, as shown in Fig. 11(c).

Fig. 11 Examples of non-Cartesian remapped OCT (a) and OCT-A (b). The blink appears as the dark horizontal line in the OCT image and as the white (highly decorrelated) line in the OCT-A image. The blink region is removed from the OCT-A image at the same time as the rapid motion artifacts (c).

After motion correction, Cartesian remapping, and Gabor filtering, the blink is no longer obvious, as shown in the ILM-OPL slab OCT-A image [Fig. 12(a)]. Although part of the information was lost, the deficit was filled and the motion was corrected, as demonstrated by the close co-registration of the resulting image with the SLO image [Fig. 12(b)].

Fig. 12 (a) Motion-corrected en face OCT-A image in the blinking case. (b) Comparison with the corresponding SLO image.

4. Discussion

Several OCT-A devices scan the retina using a raster scanning pattern. Here, OCT-A images based on a raster scanning pattern and on the Lissajous scanning pattern are compared. We used a raster scan protocol with 300 × 300 transversal sampling points and four repetitions of each B-scan. Color-coded slab en face OCT-A images of the same three retinal plexuses as in Section 3.3 are shown in Figs. 13(a) and 13(b). Both the raster-scan and Lissajous-scan images were obtained using the same JM-OCT system over a 3 × 3 mm2 area at the macula of the same subject. The data acquisition took 5.14 s with the raster scan (70% duty cycle), and the time lag between the repeated B-scans is approximately 4.28 ms. In the Lissajous scan case (Section 2.5), the acquisition time is 5.25 s (99.86% duty cycle). The Lissajous OCT-A image shows the retinal capillary vessels well, while the raster-scan OCT-A image is disturbed by several motion artifacts.

Fig. 13 Color-coded slab en face OCT-A images with (a) raster scan and (b) modified Lissajous scan with motion correction. In both cases, the macular region was scanned over an area of 3 × 3 mm2. The same location was also scanned by a Cirrus HD-OCT Model 5000 (Carl Zeiss); the resulting color-coded AngioPlex image is shown in (c).

The main disadvantage of the Lissajous scan may be that the A-scan density is not uniform over the scanned region. The spatial sampling step at the center of the scan region will be larger than that of the raster scan if the data are acquired over similar scanning times and similar scanning ranges. In the configuration used for Fig. 13(b), the maximum distance between adjacent trajectories at the center of the scanning area is 18.4 µm, while the constant sampling step of the raster scan protocol is 10 µm. This probably contributes to a reduction in contrast at the center. This disadvantage is believed to be alleviated in practice because eye motion causes slightly displaced positions on the retina to be scanned. A longer measurement duration or multiple measurements may also overcome this drawback; since Lissajous OCT-A is immune to blinking, long measurements can be performed easily. In addition, employing high-speed light sources [32] or the spectral splitting method [33] may improve the imaging quality of Lissajous OCT-A by increasing the sampling density.

We used a commercial OCT-A device (Cirrus HD-OCT Model 5000 with AngioPlex, Carl Zeiss) [34] for further comparison. The 3 × 3 mm2 scanning mode was used; it has 245 × 245 sampling points and the B-scan is repeated four times at each scanning location. The axial and transversal resolutions are 5 µm and 15 µm, respectively. The color-coded OCT-A image provided by the Cirrus HD-OCT Model 5000 is shown in Fig. 13(c). Because different devices and different signal and image processing were used, it is hard to compare the imaging quality directly. However, the commercial device provides a high-contrast capillary image. One probable reason is the lower resolution of the JM-OCT compared with that of the commercial OCT device; the shorter center wavelength of the commercial device (840 nm) provides higher spatial resolution. Another significant factor affecting imaging quality is the number of repetitions, which is two for Lissajous OCT-A and four for the commercial device. Because the vasculature contrast is similar between Figs. 13(a) and 13(b), the lower contrast relative to the commercial device [Fig. 13(c)] is probably not due to the Lissajous scan but to the difference in OCT hardware and subsidiary image processing. In addition, the OCT-A image obtained with the Cirrus HD-OCT Model 5000 [Fig. 13(c)] exhibits residual motion artifacts (discontinuities along the horizontal direction, indicated by orange arrows) even though active (hardware) eye tracking is used [34]. In contrast, apparent motion artifacts are not evident in the Lissajous OCT-A image [Fig. 13(b)]. Hence, applying the presented Lissajous scan and motion correction algorithm to commercial devices could further improve OCT-A imaging quality.

5. Conclusion

We have demonstrated motion-free en face OCT-A imaging using a specialized Lissajous scan. A standard Lissajous scanning pattern was modified for compatibility with OCT-A measurements. A motion correction algorithm, which was tailored for the modified Lissajous scan, was designed to obtain motion-corrected en face OCT-A images. By validating both the motion correction ability and its repeatability, we conclude that this motion-free en face OCT-A method can provide accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.

Appendix A. List of symbols

The descriptions of the symbols used in Section 2 are listed in Table 1.

Table 1. Descriptions of symbols.

Funding

Japan Society for the Promotion of Science (JSPS) (KAKENHI 15K13371); Ministry of Education, Culture, Sports, Science and Technology (MEXT) through Local Innovation Ecosystem Development Program; Korea Evaluation Institute of Industrial Technology.

Acknowledgments

The research and project administrative work of Tomomi Nagasaka from the University of Tsukuba is gratefully acknowledged. We thank David MacDonald, MSc, from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript. We also thank Deepa Kasarago, PhD, for her help in editing the manuscript.

Disclosures

YC, YJH: Topcon (F), Tomey Corporation (F), Nidek (F), KAO (F). SM, YY: Topcon (F), Tomey Corporation (F, P), Nidek (F, P), KAO. YJH is currently employed by Koh Young Technology.

References and links

1. H. R. Novotny and D. L. Alvis, “A Method of Photographing Fluorescence in Circulating Blood in the Human Retina,” Circulation 24(1), 82–86 (1961). [CrossRef]   [PubMed]  

2. S. L. Owens, “Indocyanine green angiography,” Br. J. Ophthalmol. 80(3), 263–266 (1996). [CrossRef]   [PubMed]  

3. R. P. C. Lira, C. L. d. A. Oliveira, M. V. R. B. Marques, A. R. Silva, and C. d. C. Pessoa, “Adverse reactions of fluorescein angiography: a prospective study,” Arq. Bras. Oftalmol. 70(4), 615–618 (2007). [CrossRef]   [PubMed]  

4. U. Karhunen, C. Raitta, and R. Kala, “Adverse reactions to fluorescein angiography,” Acta Ophthalmol. (Copenh.) 64(3), 282–286 (1986). [CrossRef]  

5. M. Hope-Ross, L. A. Yannuzzi, E. S. Gragoudas, D. R. Guyer, J. S. Slakter, J. A. Sorenson, S. Krupsky, D. A. Orlock, and C. A. Puliafito, “Adverse Reactions due to Indocyanine Green,” Ophthalmology 101(3), 529–533 (1994). [CrossRef]   [PubMed]  

6. R. Benya, J. Quintana, and B. Brundage, “Adverse reactions to indocyanine green: A case report and a review of the literature,” Cathet. Cardiovasc. Diagn. 17(4), 231–233 (1989). [CrossRef]   [PubMed]  

7. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]

8. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications (Springer Science & Business Media, 2008). [CrossRef]  

9. S. Makita, Y. Hong, M. Yamanari, T. Yatagai, and Y. Yasuno, “Optical coherence angiography,” Opt. Express 14(17), 7821–7840 (2006). [CrossRef]   [PubMed]  

10. S. Makita, K. Kurokawa, Y.-J. Hong, M. Miura, and Y. Yasuno, “Noise-immune complex correlation for optical coherence angiography based on standard and Jones matrix optical coherence tomography,” Biomed. Opt. Express 7(4), 1525–1548 (2016). [CrossRef]   [PubMed]  

11. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]   [PubMed]  

12. G. Liu, L. Chou, W. Jia, W. Qi, B. Choi, and Z. Chen, “Intensity-based modified Doppler variance algorithm: application to phase instable and phase stable optical coherence tomography systems,” Opt. Express 19(12), 11429–11440 (2011). [CrossRef]   [PubMed]  

13. G. Liu, W. Qi, L. Yu, and Z. Chen, “Real-time bulk-motion-correction free Doppler variance optical coherence tomography for choroidal capillary vasculature imaging,” Opt. Express 19(4), 3657–3666 (2011). [CrossRef]   [PubMed]  

14. D. X. Hammer, R. D. Ferguson, N. V. Iftimia, T. Ustun, G. Wollstein, H. Ishikawa, M. L. Gabriele, W. D. Dilworth, L. Kagemann, and J. S. Schuman, “Advanced scanning methods with tracking optical coherence tomography,” Opt. Express 13(20), 7937–7947 (2005). [CrossRef]   [PubMed]  

15. M. Pircher, B. Baumann, E. Götzinger, H. Sattmann, and C. K. Hitzenberger, “Simultaneous SLO/OCT imaging of the human retina with axial eye motion correction,” Opt. Express 15(25), 16922–16932 (2007). [CrossRef]   [PubMed]  

16. K. V. Vienola, B. Braaf, C. K. Sheehy, Q. Yang, P. Tiruveedhula, D. W. Arathorn, J. F. d. Boer, and A. Roorda, “Real-time eye motion compensation for OCT imaging with tracking SLO,” Biomed. Opt. Express 3(11), 2950–2963 (2012). [CrossRef]   [PubMed]  

17. R. D. Ferguson, D. X. Hammer, L. A. Paunescu, S. Beaton, and J. S. Schuman, “Tracking optical coherence tomography,” Opt. Lett. 29(18), 2139–2141 (2004). [CrossRef]   [PubMed]  

18. M. F. Kraus, B. Potsaid, M. A. Mayer, R. Bock, B. Baumann, J. J. Liu, J. Hornegger, and J. G. Fujimoto, “Motion correction in optical coherence tomography volumes on a per A-scan basis using orthogonal scan patterns,” Biomed. Opt. Express 3(6), 1182–1199 (2012). [CrossRef]   [PubMed]  

19. M. F. Kraus, J. J. Liu, J. Schottenhamml, C.-L. Chen, A. Budai, L. Branchini, T. Ko, H. Ishikawa, G. Wollstein, J. Schuman, J. S. Duker, J. G. Fujimoto, and J. Hornegger, “Quantitative 3d-OCT motion correction with tilt and illumination correction, robust similarity measure and regularization,” Biomed. Opt. Express 5(8), 2591–2613 (2014). [CrossRef]   [PubMed]  

20. H. C. Hendargo, R. Estrada, S. J. Chiu, C. Tomasi, S. Farsiu, and J. A. Izatt, “Automated non-rigid registration and mosaicing for robust imaging of distinct retinal capillary beds using speckle variance optical coherence tomography,” Biomed. Opt. Express 4(6), 803–821 (2013). [CrossRef]   [PubMed]  

21. P. Zang, G. Liu, M. Zhang, C. Dongye, J. Wang, A. D. Pechauer, T. S. Hwang, D. J. Wilson, D. Huang, D. Li, and Y. Jia, “Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram,” Biomed. Opt. Express 7(7), 2823–2836 (2016). [CrossRef]   [PubMed]  

22. Y. Chen, Y.-J. Hong, S. Makita, and Y. Yasuno, “Three-dimensional eye motion correction by Lissajous scan optical coherence tomography,” Biomed. Opt. Express 8(3), 1783–1802 (2017). [CrossRef]   [PubMed]  

23. Y.-J. Hong, Y. Chen, E. Li, M. Miura, S. Makita, and Y. Yasuno, “Eye motion corrected OCT imaging with Lissajous scan pattern,” Proceedings of SPIE 9693, 96930 (2016). [CrossRef]  

24. M. J. Ju, Y.-J. Hong, S. Makita, Y. Lim, K. Kurokawa, L. Duan, M. Miura, S. Tang, and Y. Yasuno, “Advanced multi-contrast Jones matrix optical coherence tomography for Doppler and polarization sensitive imaging,” Opt. Express 21(16), 19412–19436 (2013). [CrossRef]   [PubMed]  

25. S. Sugiyama, Y.-J. Hong, D. Kasaragod, S. Makita, S. Uematsu, Y. Ikuno, M. Miura, and Y. Yasuno, “Birefringence imaging of posterior eye by multi-functional Jones matrix optical coherence tomography,” Biomed. Opt. Express 6(12), 4951–4974 (2015). [CrossRef]   [PubMed]  

26. A. Bazaei, Y. K. Yong, and S. O. R. Moheimani, “High-speed Lissajous-scan atomic force microscopy: Scan pattern planning and control design issues,” Rev. Sci. Instrum. 83(6), 063701 (2012). [CrossRef]   [PubMed]  

27. K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal Surface Segmentation in Volumetric Images-A Graph-Theoretic Approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006). [CrossRef]   [PubMed]  

28. M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D Intraretinal Layer Segmentation of Macular Spectral-Domain Optical Coherence Tomography Images,” IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009). [CrossRef]   [PubMed]  

29. X. Chen, M. Niemeijer, L. Zhang, K. Lee, M. D. Abramoff, and M. Sonka, “Three-Dimensional Segmentation of Fluid-Associated Abnormalities in Retinal OCT: Probability Constrained Graph-Search-Graph-Cut,” IEEE Trans. Med. Imaging 31(8), 1521–1531 (2012). [CrossRef]   [PubMed]

30. R. Estrada, C. Tomasi, M. T. Cabrera, D. K. Wallace, S. F. Freedman, and S. Farsiu, “Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing,” Biomed. Opt. Express 2(10), 2871–2887 (2011). [CrossRef]   [PubMed]  

31. S. R. Sternberg, “Biomedical Image Processing,” Computer 16(1), 22–34 (1983). [CrossRef]  

32. T. Klein, W. Wieser, L. Reznicek, A. Neubauer, A. Kampik, and R. Huber, “Multi-MHz retinal OCT,” Biomed. Opt. Express 4(10), 1890–1908 (2013). [CrossRef]   [PubMed]  

33. L. Ginner, C. Blatter, D. Fechtig, T. Schmoll, M. Gröschl, and R. A. Leitgeb, “Wide-Field OCT Angiography at 400 KHz Utilizing Spectral Splitting,” Photonics 1(4), 369–379 (2014). [CrossRef]  

34. P. J. Rosenfeld, M. K. Durbin, L. Roisman, F. Zheng, A. Miller, G. Robbins, K. B. Schaal, and G. Gregori, “ZEISS Angioplex™ Spectral Domain Optical Coherence Tomography Angiography: Technical Aspects,” in OCT Angiography in Retinal and Macular Diseases, vol. 56 of Developments in Ophthalmology, F. Bandello, E. Souied, and G. Querques, eds. (S. Karger AG, 2016), pp. 18–29. [CrossRef]
