
Comprehensive automatic processing and analysis of adaptive optics flood illumination retinal images on healthy subjects


Abstract

This work presents a novel fully automated method for retinal analysis in images acquired with a flood-illuminated adaptive optics retinal camera (AO-FIO). The proposed processing pipeline consists of several steps. First, we register single AO-FIO images into a montage image capturing a larger retinal area. The registration is performed by a combination of phase correlation and the scale-invariant feature transform method. A set of 200 AO-FIO images from 10 healthy subjects (10 images from the left eye and 10 from the right eye of each subject) is processed into 20 montage images and mutually aligned according to the automatically detected fovea center. As a second step, the photoreceptors in the montage images are detected using a method based on regional maxima localization, where the detector parameters were determined by Bayesian optimization according to photoreceptors manually labeled by three evaluators. The detection assessment, based on the Dice coefficient, ranges from 0.72 to 0.8. In the next step, the corresponding density maps are generated for each of the montage images. As a final step, representative averaged photoreceptor density maps are created for the left and right eyes, enabling comprehensive analysis across the montage images and a straightforward comparison with available histological data and other published studies. Our proposed method and software thus enable us to generate AO-based photoreceptor density maps for all measured locations fully automatically, making the approach suitable for large studies, which are in pressing need of automated approaches. In addition, the application MATADOR (MATlab ADaptive Optics Retinal Image Analysis) that implements the described pipeline and the dataset with photoreceptor labels are made publicly available.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Corrections

22 May 2023: A minor correction was made to the text.

1. Introduction

High-resolution imaging of the retina is impeded by optical aberrations in the eye. Adaptive optics (AO) compensates for monochromatic aberrations in the optical path between the retina and the detector using active optical elements, thus achieving a transverse optical resolution at the retina of around 2 µm [1]. Already the first combination of AO and a conventional fundus camera in 1997 [2] demonstrated that single-cell microscopic structures can be visualised at such resolution. Later research extended this by combining the AO technique with other modalities. AO has been integrated into multiple existing retinal imaging systems, including flood illumination ophthalmoscopy (AO-FIO), scanning laser ophthalmoscopy (AO-SLO) and, most recently, optical coherence tomography (AO-OCT) [1,3,4]. The latter two typically image the retina by scanning. In contrast, AO-FIO captures the light reflected from a flood-illuminated patch of the retina with a 2D detector and therefore avoids the distortion introduced by a sinusoidal scanning pattern [5]. AO-FIO thus delivers images that are not affected by the scanning process but only by eye motion during the exposure time. This allows simpler inter-frame alignment of follow-up images (cell to cell) and the monitoring of small pathological changes over time. The rtx1 AO-FIO has been designed with a 5 mm pupil to enhance usability in a clinical setting: no bite bar is necessary, and measurements of cataract patients are possible. Flood illumination ophthalmoscopy also offers a larger single field of view compared to AO-SLO (typically 4° versus 2°) [6], which makes it more appropriate in applications where a larger total field of view is required. Additionally, AO-FIO may provide contrast on additional features (melanin deposition) [6].

The AO-FIO technique enables examination of the retinal vasculature and thus measurement of the wall-to-lumen vessel ratio [7,8], imaging of the lamina cribrosa [3,9], observation of drusen [8,10] and, last but not least, of the photoreceptor distribution [11–14]. The density and morphology of photoreceptors are significantly related to the disease process and characterize the retinal condition. These properties are therefore valuable biomarkers for the clinical evaluation of therapeutic potential and efficacy [15–17]. Further applications of AO-FIO also offer the possibility of retinal observation that can lead to the detection of retinal conditions related to age-related macular degeneration [8,16], inherited retinal disease [1,15], diabetic retinopathy [17,18] or arterial hypertension [7].

Despite this expansion of applications, a fully automated and comprehensive analysis of the photoreceptor density distribution is still lacking, particularly for the flood illumination technique. This work aims to establish a retinal photoreceptor assessment pipeline that enables fully automatic investigation of larger areas of the retina. This concept faces two image processing challenges at once: (1) image montaging, which provides the basis for (2) photoreceptor analysis.

(1) AO-FIO retinal imaging has a limited field of view (FOV). Typically, a single image acquisition at a specific location captures around 0.1 % of the retinal surface. However, a larger area of the retina must be investigated for a comprehensive retinal examination. Thus, pairwise image registration is needed to determine the exact positions of the particular images and to create the final montage with a larger FOV.

There are many publications dealing with the registration of retinal AO images. Several authors have dealt with the registration of AO-FIO images from the same retinal position. These temporal sequences are mainly affected by minor eye movements, and different approaches have been used to cope with this problem, for example, cross-correlation [19,20] or a Bayesian framework with Maximum A Posteriori estimation [21]. Other approaches focus on spatial image montaging; however, all of them employ the AO-SLO modality. These approaches are mostly based on feature points determined by the principal component analysis scale-invariant feature transform (PCA-SIFT) [22], SIFT [23,24], or constellation features [25], followed by the random sample consensus algorithm (RANSAC) for outlier feature elimination [22,23,25]. However, AO-SLO images differ from AO-FIO images: AO-FIO images have a larger FOV, lower spatial resolution, and spatially variant blur (i.e., stronger blurring near the image borders). To our knowledge, only one research group [26] describes the registration of AO-FIO images, where the registration is based on overlapping common features.

(2) The second challenge is to measure retinal photoreceptor properties, which are derived mainly from the exact locations of the photoreceptors. Since manual photoreceptor location grading is subjective and, in particular, time-consuming, several automated methods for cone detection in AO images have been developed. Most often, local maxima-based detection is used [11–13,27–29]; other techniques are based on correlation with a cone model [30], the circular Hough transform [19,31], convolutional neural network (CNN) based detection [32], Hessian-LoG filtering [33], histogram analysis [26], graph theory and dynamic programming [34,35], or retinal mosaic estimation [36,37].

Several authors combined image montaging with photoreceptor density counting. However, as mentioned above, many of these approaches rely on manual effort and commercial graphics packages such as Photoshop (Adobe Systems, Mountain View, California) and ImageJ (National Institutes of Health, Bethesda, Maryland) for image montaging [14,27,38] and cone density measurements [14,38,39]. Both are highly time-consuming, subjective, and labor-intensive. The only automatic commercially available package for image montaging is i2k Retina (DualAlign, LLC, Clifton Park, NY, USA), and for photoreceptor counting in a locally restricted sub-area, AO detect (Imagine Eyes, France). However, the latter cannot be applied to the montage; therefore, an automatic extensive analysis of the entire montage image (or even a single image) is not possible [28,40,41].

This paper describes a novel automated processing pipeline for comprehensive processing and analysis of retinal images. The proposed approach enables construction of an overall montage image from individual images. In addition, individual photoreceptors are detected, and the photoreceptor density is estimated. Furthermore, 2D averaged density maps are created and compared with histologically obtained results [42] and with AO detect, an application for supervised segmentation of retinal images provided with the rtx1 AO retinal camera. We also compare our results with other published papers in the field of photoreceptor density measurement [28,40,43].

2. Methods

In this section, a description of the proposed image processing pipeline is provided. Processing and analysis consist of several steps, which are shown in Fig. 1. First, AO-FIO images are acquired (Section 2.1). The acquired single images are then registered by a combination of phase correlation and feature-based detection, and the overall image montage is created based on the registration results (Section 2.2). The montage image is analyzed using the photoreceptor detection algorithm, and the photoreceptor density is determined (Section 2.3).


Fig. 1. Flowchart depicting the proposed methodology. Image acquisition: initial center positions of acquired images (Section 2.1). Image registration and montage: descriptive schema of the montage creation (Section 2.2). Photoreceptor detection: an example of detected photoreceptors in an image cutout (Section 2.3). Photoreceptor density evaluation: photoreceptor density maps (Section 3.3). Fovea detection: an example of the detected fovea center in the processed central image (Section 2.4).


Independently of the previous steps, the fovea location is detected in the central single image of each montage. This knowledge is used to determine the horizontal and vertical retinal meridians, and the retinal montage images are also mutually aligned according to their fovea positions. Thus, a quantitative density estimate across multiple retinas is possible. The resulting density estimate can be further utilized for comparison with anatomical standards determined by histological measurement, for comparison with other detection methods, or, more practically, for retinal assessment and medical diagnosis of pathological retinal conditions.

The designed framework is implemented in Matlab 9.11 (The MathWorks, Inc., Natick, Massachusetts). In addition, a graphical user interface is provided, which enables easy usage by other researchers. The application MATADOR (MATlab ADaptive Optics Retinal Image Analysis) is publicly available at https://github.com/evavalterova/MATADOR.git.

2.1 Image acquisition and data description

AO-FIO images were obtained from both eyes of ten healthy subjects with an average age of 32 years (SD: 3.6; range: 26 to 37). Distance refraction was objectively established by wavefront-based autorefraction (iProfiler, Carl Zeiss Vision, Aalen, Germany). Refraction was averaged as the spherical equivalent (SE) in diopters (D), computed as SE = sph + 0.5 × cyl (e.g., sph −1.25 D with cyl −0.50 D yields SE = −1.50 D). For the right eye, the mean refraction was SE: −1.05 D (SD: 1.19 D); for the left eye, it was SE: −0.98 D (SD: 1.27 D). Vision was assessed using a computerized letter chart (iPolaTest, Carl Zeiss Vision, Aalen, Germany) with Landolt rings, with settings based on the European norm EN ISO 8596 as recommended by the German Ophthalmological Society, with logMAR acuity at 6 m. Participants presented with a best corrected visual acuity (BCVA) of −0.075 (SD: 0.11; range: −0.18 to 0.14) logMAR for the right eye and −0.097 (SD: 0.09; range: −0.22 to 0.06) logMAR for the left eye. For binocular viewing, BCVA was −0.128 logMAR (SD: 0.09; range: −0.24 to 0.04). The subjects' ocular biometry was measured with a Lenstar 900 (Haag Streit, Switzerland); these parameters are presented in the Appendix, Table 1. The study adhered to the tenets of the Declaration of Helsinki, and approval for the study was obtained from the Ethics Committee of the Medical Faculty of Leipzig University (209/18-ek).

AO-FIO images were captured by a flood illumination camera (rtx1, Imagine Eyes, Orsay, France) with a field of view of $4^{\circ }{\times }4^{\circ }$ sampled at 1500$\times$1500 pixels. Data are captured as a sequence of 40 raw images, which are compiled into a single image as part of the acquisition process. The imaging wavelength is 850 nm. Subjects were instructed to fixate on the center of a movable colored fixation cross, according to which the measurement location was shifted. Thus, the images correspond to different retinal locations, preliminarily defined by the target position. Nevertheless, fixation accuracy depends on the subject's fixation ability and is also affected by minor fixational eye movements (e.g., microsaccades). Therefore, subsequent image registration is necessary for an accurate image montage. Ten images of each eye were acquired. One image corresponds to the foveal location; three further images capture the retina in the temporal direction from the fovea with an angular step of $2^{\circ}$, and two further images are acquired away from the fovea in each of the nasal, inferior, and superior directions with the same step size. The schema of the default image center positions is shown in Fig. 2, labeled by filled circles. The Z-depth was adjusted during image acquisition to provide the sharpest focus at the photoreceptor level. In total, 200 images were acquired. Each image overlaps its direct neighbors by an area of at least around $2^{\circ} \times 1^{\circ} \simeq 750\times 375$ pixels. This generous overlap allows high alignment precision to be reached.


Fig. 2. The charts represent the center positions of the captured images for the right and left eye, respectively.


To enable evaluation of the photoreceptor detection, manual grading of extracted regions was carried out. Two groups of cropped regions of size $200 \times 200$ pixels were extracted from the set of 200 acquired images. The cropped regions were selected from the central image (40 foveal patches) and from areas further from the fovea (40 peripheral patches in the area 2°–8° from the fovea), where the image quality was not significantly decreased by blur. The photoreceptor centers were marked by three independent evaluators, who labeled all photoreceptors within the cropped regions; this manual labeling is considered the gold-standard data for photoreceptor detection. We will refer to these two groups as the foveal and the peripheral set of patches.

The entire dataset of 200 images from 10 healthy subjects, together with the 40 foveal and 40 peripheral patches with manually labeled data, is publicly available at [61].

2.2 Retinal image montaging

The goal of the montage algorithm is to find an accurate alignment of the multiple images within an eye (i.e., ten in this study) and merge them into the montage image. The image capturing the foveal location is used as the reference image, and the center of this image is set as the origin of the coordinate system. The remaining nine images are iteratively aligned into the montage one after the other according to their preliminary distance from the origin of the coordinate system, defined during measurement. Images with the same distance from the origin are selected clockwise, one after another. The parameters of the alignment transformation are estimated from the overlapping parts with the central image or with other adjacent images already aligned in previous iterations.

Image registration by the phase correlation scale-invariant feature transform (PC-SIFT) method for the alignment of single retinal images is based on the two-stage registration proposed in our previous work [44]. Phase correlation is used as the first, coarse registration stage, which has the advantage of not depending on the correct identification and matching of key points. Therefore, phase correlation is robust to noise and illumination variation, can handle large shifts and rotations, and is also computationally undemanding, which implies a low computational time (irrespective of the image content) and easy parallel processing. The images aligned in the first stage (phase correlation) proceed to the second stage (SIFT algorithm). In contrast to the first stage, the second stage estimates the remaining minor image transformations, including all affine transformations. Before this second, fine stage, the AO images are filtered by an averaging filter with window size 15 to suppress the effect of spatially variant image blur, which would complicate the key point determination. AO images contain structures of different properties, such as single photoreceptors, arterioles, venules, and lighter and darker areas corresponding to groups of photoreceptors [45–47]. We observed that these darker and lighter areas are barely affected by blur and mutually correspond in overlapping image parts. Therefore, these structures are a suitable source of key points and are enhanced by histogram equalization. After this preprocessing, the key points are detected and matched by SIFT. Incorrect pairs are eliminated according to the Euclidean distance between the matched key points [48]. The final alignment transformation is estimated from the correctly matched key points and applied to the images previously transformed in the first stage. See our previous work [44] for a more detailed description and alignment evaluation.
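For illustration, the two-stage alignment can be sketched in Matlab as follows. This is a minimal sketch rather than the exact MATADOR implementation: ref and mov are assumed to be grayscale images of equal size, vl_sift and vl_ubcmatch come from the open-source VLfeat library [60], the 30-pixel outlier-rejection threshold is an arbitrary assumption, and the sign convention of the recovered shift should be verified against the chosen reference image.

  % --- Stage 1: coarse shift estimate by phase correlation ---
  F1 = fft2(ref); F2 = fft2(mov);
  R  = F1 .* conj(F2) ./ (abs(F1 .* conj(F2)) + eps);  % cross-power spectrum
  c  = abs(ifft2(R));
  [~, idx] = max(c(:));
  [dy, dx] = ind2sub(size(c), idx);
  if dy > size(c,1)/2, dy = dy - size(c,1); end        % wrap-around correction
  if dx > size(c,2)/2, dx = dx - size(c,2); end
  movCoarse = imtranslate(mov, [dx-1, dy-1]);

  % --- Stage 2: fine affine refinement from SIFT key points ---
  refP = histeq(imfilter(ref,       fspecial('average', 15)));  % suppress blur variation,
  movP = histeq(imfilter(movCoarse, fspecial('average', 15)));  % enhance dark/light areas
  [f1, d1] = vl_sift(single(refP));
  [f2, d2] = vl_sift(single(movP));
  m  = vl_ubcmatch(d1, d2);                            % descriptor matching
  p1 = f1(1:2, m(1,:))';  p2 = f2(1:2, m(2,:))';
  keep  = vecnorm(p1 - p2, 2, 2) < 30;                 % reject distant matched pairs
  tform = fitgeotrans(p2(keep,:), p1(keep,:), 'affine');
  movFine = imwarp(movCoarse, tform, 'OutputView', imref2d(size(ref)));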

After alignment, the images are merged into the montage image. In areas where two or more single images ($n$) overlap (see Fig. 3), each montage pixel is taken from the image with the highest local image quality in the pixel neighborhood $L$. The local image quality $Q_n$ of each overlapping image is assessed by

$$Q_n = \frac{s_{L_n}}{\mu_{L_n}},$$
where $s_{L_n}$ is the spatial standard deviation in the pixel neighborhood $L$ and $\mu_{L_n}$ is the spatial average computed for each pixel neighborhood. Figure 3 illustrates the montage image created from single overlapping images (marked by colors). The borders between different image areas can cause visible artifacts in the montage image; hence, areas smaller than 5000 pixels (e.g., the orange area highlighted by an arrow in Fig. 3) were assigned to the surrounding area (green in Fig. 3).
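A minimal Matlab sketch of this fusion rule follows, assuming stack is an H-by-W-by-n array of already aligned images cropped to the common overlap area; the neighborhood size L is a free parameter here, and the merging of small regions is only indicated in a comment.

  L = 15;                                   % neighborhood size (assumption)
  Q = zeros(size(stack));
  for n = 1:size(stack, 3)
      s  = stdfilt(stack(:,:,n), true(L));  % local standard deviation s_{L_n}
      mu = imboxfilt(stack(:,:,n), L);      % local mean mu_{L_n}
      Q(:,:,n) = s ./ (mu + eps);           % Eq. (1)
  end
  [~, src] = max(Q, [], 3);                 % per-pixel source image index
  % ...regions of src smaller than 5000 px would be reassigned to their
  % surroundings here, e.g. via bwareafilt on each label mask...
  mosaic = zeros(size(src));
  for n = 1:size(stack, 3)
      layer = stack(:,:,n);
      mask  = (src == n);
      mosaic(mask) = layer(mask);           % copy the higher-quality pixels
  end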


Fig. 3. Left: schema displaying the selection of each individual pixel according to its quality in the neighborhood $L_n$ of the overlapping images $I_n$. Right: the montage image with pixels assigned to single images. The yellow arrow points to an area that is assigned to its surroundings due to its small size.


2.3 Photoreceptor analysis

Our photoreceptor detection focuses on cone detection (CD), since cones are highly reflective in AO-FIO images, whereas rods occur only sparsely near the fovea and, owing to their small size, fall below the AO-FIO resolution at the examined locations [13,42]. The principle of the CD method is based on the local maxima detection technique of Li & Roorda [11]. The original image is filtered by a Gaussian filter with a standard deviation specified by the parameter $\sigma$. All regional maxima in the filtered image whose height is less than $H$ are suppressed. Thereafter, the regional maxima are detected, and the value of the box-filtered original image (window size 3) is assigned to each of them. Maxima with an assigned value greater than $S$ are considered photoreceptor center locations. The optimal set of parameters within the ranges $\sigma \in \langle 0.2,15 \rangle$, $H \in \langle 0,0.5 \rangle$, and $S \in \langle 0,0.5 \rangle$ is found by Bayesian optimization seeking the maximum of the Dice coefficient. The Dice coefficient $D$ of two sets $A$ and $B$ is twice the cardinality of their intersection divided by the sum of their cardinalities. In this way, the Dice coefficient can be derived between different evaluators (see Fig. 4), between evaluators and the cone detection algorithm (see Fig. 5), and between evaluators, the cone detection algorithm, and the AO detect algorithm (see Fig. 8). In detail, the Dice coefficient is computed as follows:

$$D = \frac{2 |A \cap B|}{|A| + |B|}.$$
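To make both definitions concrete, a Matlab sketch of the detector and of the Dice computation for two point sets follows. The greedy nearest-neighbor matching with tolerance tol is our assumption for deciding which detected and labeled centers count toward the intersection; it is not prescribed by [11], and diceOfPointSets is a hypothetical helper name (pdist2 requires the Statistics and Machine Learning Toolbox).

  % Cone detection for an image I (double, [0,1]) with parameters sigma, H, S
  If   = imgaussfilt(I, sigma);         % Gaussian smoothing
  Ih   = imhmax(If, H);                 % suppress regional maxima shallower than H
  bw   = imregionalmax(Ih);             % candidate peaks (possibly small plateaus)
  Ibox = imboxfilt(I, 3);               % 3x3 box filter of the original image
  st   = regionprops(bw, Ibox, 'WeightedCentroid', 'MeanIntensity');
  keep = [st.MeanIntensity] > S;        % brightness criterion
  centers = cat(1, st(keep).WeightedCentroid);   % [x y] cone centers

  % Dice coefficient of Eq. (2) for point sets A (detected) and B (labeled)
  function d = diceOfPointSets(A, B, tol)
      D = pdist2(A, B);                 % pairwise Euclidean distances
      matched = 0;
      while ~isempty(D) && min(D(:)) <= tol
          [~, k] = min(D(:)); [i, j] = ind2sub(size(D), k);
          matched = matched + 1;        % one-to-one greedy match
          D(i, :) = []; D(:, j) = [];   % remove the matched pair
      end
      d = 2 * matched / (size(A, 1) + size(B, 1));
  end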


Fig. 4. A) The Dice coefficient for the three evaluators (E1, E2, and E3) after 4-fold cross-validation for the fovea and periphery. The resulting ranges of B) parameter $\sigma$, C) parameter $H$, and D) parameter $S$ for each evaluator and set of patches.



Fig. 5. A) Schema of agreement between the cone detection algorithm (CD) and the evaluators (E1, E2, E3), where green values correspond to the Dice coefficient for foveal areas and orange values to that for peripheral areas. B) An example of the evaluation in a selected cropped part of a foveal patch image. Arrows of the same color highlight photoreceptor location disagreements between evaluators in identical areas.


K-fold cross-validation was used to determine the optimal values of the parameters of the photoreceptor detection method ($\sigma, H, S$) for each evaluator and each set of patches, with K = 4 ($75\%$ of the data used for training and $25\%$ for testing). The Dice coefficients for the three evaluators and for the foveal and peripheral sets of patches are depicted in Fig. 4(A), and the optimized sets of parameters are depicted in Fig. 4(B-D).
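The parameter search itself can be expressed compactly with Matlab's bayesopt, as in the sketch below; meanTrainDice is a hypothetical helper that runs the detector over the training patches of one fold and averages the point-set Dice against one evaluator's labels, and the evaluation budget of 60 is an assumption.

  vars = [optimizableVariable('sigma', [0.2 15]), ...
          optimizableVariable('H',     [0 0.5]), ...
          optimizableVariable('S',     [0 0.5])];
  loss = @(p) 1 - meanTrainDice(p.sigma, p.H, p.S);    % maximize Dice
  res  = bayesopt(loss, vars, 'MaxObjectiveEvaluations', 60, ...
                  'AcquisitionFunctionName', 'expected-improvement-plus');
  best = res.XAtMinimum;     % table with the optimized sigma, H and S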

Cross-validation showed low variability of the parameters. The agreement between two sets of determined photoreceptor centers is expressed by the average Dice coefficient, which is in the range of 0.72-0.81 for the agreement between the CD algorithm and the evaluators (see Fig. 4). The mutual agreement among the evaluators is shown in Fig. 5(A) by the values on the links between the evaluators (green for the foveal area and orange for the peripheral area). The diagram shows a lower mutual agreement among the evaluators (0.64-0.75) compared to the agreement between the CD algorithm and each of the evaluators. The lower mutual agreement among the evaluators is caused by their subjectivity, which is documented in the examples in Fig. 5(B) by the blue and orange arrows.

2.4 Fovea detection

Quantitative analysis and comparison require determination of the fovea location. The detection is based on our previously suggested method [49], which utilizes the typical blur of the AO-FIO image in the foveal center. The blur is caused by the higher density of much smaller photoreceptors in the foveola compared to the rest of the retina, which are thus below the AO-FIO spatial resolution. The fovea location is then determined by the maximum ratio between the local standard deviation and the local average value in the middle of the central image, where the fovea is expected. The foveal center localizations estimated with this methodology differ from manually graded fovea locations by $32 \pm 27$ pixels in Euclidean distance [49]. This difference is below the size of the sampling window typically used for cone density estimation.
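A minimal sketch of this locator, under our reading of [49], computes the ratio image with standard IPT filters and takes its maximum inside a central search region; the window size and the search margin below are assumptions.

  w = 65;                                         % odd local window (assumption)
  ratio = stdfilt(I, true(w)) ./ (imboxfilt(I, w) + eps);
  [h, wd] = size(I);
  search = false(h, wd);                          % restrict to the image middle
  search(round(h/4):round(3*h/4), round(wd/4):round(3*wd/4)) = true;
  ratio(~search) = -Inf;
  [~, idx] = max(ratio(:));
  [fy, fx] = ind2sub(size(ratio), idx);           % estimated fovea center (px)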

The fovea location can be utilized for measurement along the meridians, which is exploitable for photoreceptor density evaluation, e.g., in longitudinal studies, or for inter-subject montage image alignment. It should also be noted that reliable fovea localization should be used in medical studies, because the fovea position can influence the cone density along the corresponding meridians and can have a direct impact on reproducibility [50].

3. Experiments and results

3.1 Retinal image montaging

A montage of ten images was created for each eye of the ten subjects. In total, 180 images (excluding the 20 foveal reference images at the center of each montage) were registered by the PC-SIFT method, compared with manual alignment, and evaluated using the normalized mutual information (NMI) metric. The NMI is given by

$$NMI = \frac{H(X) + H(Y) - H(X,Y)}{\sqrt{H(X)H(Y)}},$$
where $H(X)$ and $H(Y)$ are the marginal entropies and $H(X,Y)$ is the joint entropy of the overlapping parts of the images $X$ and $Y$.
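For reference, Eq. (3) can be evaluated from a joint histogram of the overlapping parts X and Y, as in the sketch below; the number of bins (64) is an assumption.

  nb = 64;
  jh = accumarray([discretize(X(:), nb), discretize(Y(:), nb)], 1, [nb nb]);
  p  = jh / sum(jh(:));                     % joint probability
  px = sum(p, 2); py = sum(p, 1);           % marginal probabilities
  Hx  = -sum(px(px > 0) .* log2(px(px > 0)));
  Hy  = -sum(py(py > 0) .* log2(py(py > 0)));
  Hxy = -sum(p(p > 0)   .* log2(p(p > 0)));
  nmi = (Hx + Hy - Hxy) / sqrt(Hx * Hy);    % Eq. (3)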

Manual alignment is based on transformation estimation from manually selected corresponding points in the overlapping areas of each image with the central image or with another adjacent image closer to the central image than the currently registered image. The average NMI over all manually registered images is $0.088 \pm 0.016$.

An example of the registration results is shown in Fig. 6, where A and B denote the overlapping parts of the reference and moving image determined according to their preliminarily defined positions. C shows the images after manual alignment and D after PC-SIFT alignment. Gray regions in the composite images represent areas where the two images have the same intensities; the magenta and green regions show where the intensities differ. This colored image representation thus enables visual evaluation of the registration results. The PC-SIFT method achieved a high level of image alignment accuracy according to visual assessment, and its average NMI of $0.089 \pm 0.022$ is higher than that of manual alignment.


Fig. 6. A) Cropped overlapping parts of the reference and B) moving image, and the registration results of C) manual and D) PC-SIFT registration of A and B. The alignment precision in C and D is highlighted by RGB visualisation, where gray regions represent areas where the two registered images have the same intensity, and magenta and green regions show differing image intensities. The example images shown here have 300 $\times$ 300 pixels (approximately 0.05 mm²). Histogram equalization was applied to the images to highlight image differences.


The images, accurately aligned by the PC-SIFT method, are merged into one montage image based on the quality assessment described above. For a smoother transition between the individual images, the average value of the surrounding areas was inserted at the borders between the individual image areas. One of the 20 created montages is presented in Fig. 7, with a red-framed zoomed area for visualization of details. This area also corresponds to the registration example shown in Fig. 6. The reddish and bluish regions of the zoomed area show where the boundary between two aligned images is located and thus demonstrate that the border between two aligned images in the final montage is visually imperceptible. This montage image undergoes further photoreceptor analysis.


Fig. 7. Final montage image created after single-image registration by PC-SIFT of 10 images from the right eye of a subject, with a zoomed area denoted by the red frame. The colored version of the same area shows the boundary location between two registered images, where one is presented by the reddish area and the other in bluish color.


3.2 Photoreceptor analysis

The proposed CD algorithm was optimized using Bayesian optimization according to the manually labeled data from the three evaluators. The parameters were optimized for the foveal area to $\sigma = 5.1145$, $H = 0.0006$, and $S = 0.1955$, and for the peripheral area to $\sigma = 4.7$, $H = 0.0147$, and $S = 0.2202$. The algorithm with these optimized values is compared with the AO detect software for automatic photoreceptor detection in Fig. 8. The Dice coefficient of agreement between the CD algorithm and AO detect is 0.82 for the foveal area and 0.91 for the peripheral area. The Dice coefficients in this figure also highlight the lower agreement between the AO detect results and the evaluators compared to that between the CD algorithm and the evaluators. The CD algorithm thus achieved a higher agreement with the evaluators than AO detect, which indicates its ability to achieve results at least comparable with this automatic method.


Fig. 8. A) Schema of agreement between the proposed cone detection algorithm (CD) and the evaluators (E1, E2, E3). B) Schema of agreement between the AO detect algorithm (AO) and the evaluators (E1, E2, E3), with the connection describing the agreement between CD and AO detect. Green values correspond to the Dice coefficient for foveal areas and orange values to that for peripheral areas.


The parameters were also jointly optimized for the foveal and peripheral areas together, for future use without reference to the foveal distance. The optimized parameter values are $\sigma = 6.2533$, $H = 0.0003$, and $S = 0.17863$. These values were used for cone detection in the subsequent cone density analysis.

3.3 Photoreceptor density estimation

An important result of the current study is the ability to compute the photoreceptor density over a large area (established by a large set of images in the montage, e.g., 10 in our example) and its applicability to any large group of examined eyes (e.g., 20 in our example). This comprehensive retinal analysis and its availability to the reader are made possible by the presented application (Fig. 12). The photoreceptor density is estimated in the following steps. The CD method was applied to the 20 montage images. Areas of the montage image with low image quality (e.g., burdened by blur) were omitted. The image quality was assessed by Eq. (1) with a threshold of 0.25. This threshold was selected based on the relation between image quality and the Dice coefficient, where we considered a Dice coefficient above 0.7 a sufficient cone detection result (see Fig. 16 in the Appendix). The photoreceptor locations were used to partition the montage image into Voronoi polygons, where the spacing of the individual photoreceptors corresponds to the individual Voronoi polygons. The photoreceptor density is then the inverse of the Voronoi polygon area. Figure 9 shows (A) an example of the detected photoreceptors in a cropped part of the image and (B) the corresponding Voronoi polygons, whose color encodes the photoreceptor spacing (left color bar) or the photoreceptor density (right color bar). In this example, photoreceptors detected near the edge of the cropped part were excluded from the Voronoi analysis.
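The density step can be sketched with Matlab's voronoin, as below; centers is the Nx2 array of detected cone positions in pixels, and pxPerMm (the pixel pitch) is an assumed conversion factor. Unbounded border cells, which reference the Inf vertex, are skipped, mirroring the exclusion of edge photoreceptors mentioned above.

  [V, C] = voronoin(centers);               % Voronoi vertices and cells
  density = nan(size(centers, 1), 1);
  for k = 1:numel(C)
      v = C{k};
      if all(v ~= 1)                        % vertex 1 is [Inf Inf]: unbounded cell
          aPx = polyarea(V(v,1), V(v,2));   % cell area in px^2
          density(k) = pxPerMm^2 / aPx;     % inverse area -> cones per mm^2
      end
  end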


Fig. 9. A) An example of photoreceptors detected by the proposed CD method. B) The corresponding Voronoi polygons, with color bars for photoreceptor spacing (left) in mm² and photoreceptor density (right) in photoreceptors per mm².


To assess the photoreceptor density, a map of the total examined area is considered. An example of a density map is shown in Fig. 12 (as part of the presented application). The map depicts the characteristic increase of the photoreceptor density toward the fovea. In addition, blood vessels are visible as dark tubular areas, owing to the lower number of photoreceptors detected underneath the blood vessels. Regions with low image quality are omitted from the analysis (black regions); therefore, the map includes inner parts that were not analyzed. The map demonstrates the ability to perform a straightforward and extensive retinal assessment based on the acquired single images.

Further analysis considered 10 density maps of the subjects' left eyes and 10 density maps of the right eyes. These density maps were mutually aligned per eye according to their fovea locations, estimated with the methodology described above. The aligned images were averaged, and the resulting averaged density maps are shown in Fig. 10 for the left and right eyes, respectively. The results are presented in degrees and also in millimeters, where each montage image was converted from degrees to millimeters according to Bennett's formula [51] and the axial length of the eye before averaging across the group. Concentric circles denote the distance from the fovea. The density maps clearly show a decrease in photoreceptor density with increasing foveal distance. Interestingly, the density map presented in Fig. 10 misleadingly suggests a lower photoreceptor density in the very center of the fovea, increasing up to the immediate vicinity of the fovea (almost up to 2$^{\circ }$ or 0.6 mm, respectively). However, this phenomenon is caused by the limited spatial resolution of the AO-FIO camera, resulting in a lower photoreceptor detection ability in the foveal area, where the images are blurred. Therefore, the very central area should be neglected in further retinal analysis. Notwithstanding, the maps clearly indicate the distribution of the photoreceptor density across the retina. The increase in photoreceptor density typically presents as an elliptical shape around the fovea, with the photoreceptor density ranging up to 40,000 photoreceptors per mm².
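The degree-to-millimeter conversion can be illustrated with the commonly cited abridged form of Bennett's formula; the constants below follow that abridged form and should be checked against [51] before reuse.

  AL = 23.44;                       % axial length in mm (study average)
  q  = 0.01306 * (AL - 1.82);       % retinal mm per degree of visual angle
  mmFromFovea = q * degFromFovea;   % e.g. 2 deg -> approx. 0.56 mm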


Fig. 10. A) Right eye density map scaled in degrees. B) Left eye density map scaled in degrees. C) Right eye density map scaled in millimeters. D) Left eye density map scaled in millimeters.


Figure 11 shows cross sections of the processed density maps for comparison with histological data. These cross sections were obtained from the resulting density maps after spatial averaging with a filter of size 65 pixels, which corresponds to 50 µm given the subjects' average axial length of $23.44$ mm. The size of the sampling window was chosen on the basis of previous studies [28,31]. The graphs present cross sections along the horizontal meridian ((A) right eye, (B) left eye) and the vertical meridian ((C) right eye, (D) left eye), where the blue curves represent the averaged densities with standard deviation (surrounding bluish shading). As mentioned above, the area around the very center of the fovea typically suffers from blur because of the resolution limit of AO-FIO images. Therefore, density data within 0.6 mm of the foveal center for the horizontal meridian and 0.5 mm for the vertical meridian are excluded from the comparison and shown by a red dotted line for reference only. The histological cone density data presented by Curcio et al. [42] are included in this figure and marked in orange. The density measured in our study follows the trend of a non-linear decrease with distance from the fovea. The AO-FIO photoreceptor densities are presented in the Appendix, Tables 2 and 3, in 0.5 mm increments.


Fig. 11. The graphs represent photoreceptor density as a function of foveal distance. The blue line represents the retinal cross section of processed density maps with its standard deviation. The dotted red line denotes the cropped foveal area. The orange dot markers correspond to histological data and are connected by linear interpolation. A) Right eye horizontal meridian. B) Left eye horizontal meridian. C) Right eye vertical meridian. D) Left eye vertical meridian.


The photoreceptor density based on the AO-FIO images is compared with the histology data in Fig. 11; as can be seen, the valuable histological data provided by Curcio et al. [42] generally present a lower cone density at each measured location. However, those data have a larger sampling distance compared to the AO-FIO data presented here. In both cases, the photoreceptor density decreases as a function of retinal eccentricity away from the fovea.

3.4 Application

Based on the results obtained, we present an application with a graphical user interface, MATADOR (MATlab ADaptive Optics Retinal Image Analysis). MATADOR enables automatic analysis of the acquired image data, including all the processing steps described above. The user can upload images from the rtx1 camera measured at arbitrary distances from the fovea. The information about the preliminary acquisition position is encoded in the filenames; thus, in the next step, the images are automatically registered and the montage image is created. Afterwards, it is possible to choose the type of analysis, i.e., photoreceptor detection, spacing, or density estimation, from a selected part of the image or from the entire montage image. A preview of the MATADOR interface is presented in Fig. 12, and the application is freely available at https://github.com/evavalterova/MATADOR.git.


Fig. 12. A preview of MATADOR. The density map corresponds to the montage image shown in Fig. 7.


4. Discussion

This work carried out a comprehensive analysis of adaptive optics images of the foveal region, which is highly needed in clinical research. We describe a comprehensive pipeline for AO image analysis, including image montaging, photoreceptor detection, and photoreceptor density analysis; additionally, we present fovea detection, which is needed to perform inter-subject comparisons. We present our results on a set of AO-FIO images acquired from subjects with normal healthy eyes. The implementation of our approach is made publicly available as the standalone application MATADOR. In addition, we also provide the complete set of our data, including manual annotations of photoreceptor positions by three evaluators.

First, the AO-FIO analysis tool uses automatic registration and image montaging. The registration results showed a high level of alignment accuracy in terms of NMI [44]. In comparison with the literature, the majority of existing registration approaches utilize flexible transformations because they are applied to AO-SLO images. Here, we consider the affine transformation, because flexible distortion is not expected due to the single-shot image acquisition. This simplifies the task (a lower number of transformation parameters must be determined); however, the AO-FIO images are typically blurred in the foveal region, which complicates the detection and matching of corresponding points. Furthermore, AO-FIO images can be slightly blurred near the borders, which also limits the accuracy of the registration. In contrast to published works, we solved this problem by a combination of the phase correlation and SIFT methods. The image is blurred by an averaging filter before the second registration stage; therefore, SIFT uses corresponding points that are not related to individual cones, although many works utilize single-cone detection for matching [23–26]. This blurring preserves the darker and lighter areas, which mutually correspond in overlapping image parts. Our image montaging process also contains the fusion of the aligned images, which is often not mentioned in research papers. Here, we show that image quality metrics can be efficiently used for fusion, and the final montage images do not suffer from discontinuities between overlapping image parts. Visual assessment of our final montage images confirms high image quality, given by image contrast with clearly visible and detectable photoreceptors, smooth coupling of the vascular system between particular images, and almost unrecognizable borders between single images. Such a montage image is thus appropriate for further photoreceptor detection analysis.

MATADOR focuses on photoreceptor density measurements based on the full montage image. For the cone detection (CD) part, an optimization and comparison with manual photoreceptor grading was carried out first. Here, the manual detection of the photoreceptors by three evaluators showed a relatively high disagreement between the labeled photoreceptor centers (see Fig. 5). This partially influences the optimization of the CD method, as there is no standard approach for utilizing such multiple-graded labeling to create a gold-standard dataset. However, the cropped part presented in Fig. 5 is of high image quality, characterized by low blur and relatively high photoreceptor contrast. The degree of disagreement between the evaluators was even higher in blurred image regions, as shown in Fig. 13. The blue rectangle depicts the cropped part shown in Fig. 5(B), where the image quality is relatively high, while the red rectangle depicts a blurred image area. The detection success of both the manual and the automatic approach decreases with such low image quality. This is demonstrated in Fig. 13(B), where the large disagreement between the evaluators is obvious. Thus, the final image analysis is strongly dependent on the image acquisition quality, which can be affected by the operator (i.e., Z-depth selection, correct instruction of the subject) and also by the subject (cooperation, ability to fixate, major eye pathology).


Fig. 13. A) The image with highlighted areas. The blue rectangle denotes the analyzed part of high image quality depicted in Fig. 5(B). The red rectangle denotes the blurred area and corresponds to B), comparing detection among the three evaluators, the CD algorithm, and AO detect.


Our automatic photoreceptor detection method is based on a previously presented approach [11] with several extensions, as described in Section 2.3. The final CD method relies on several parameters, which can be conveniently and quickly adjusted by efficient Bayesian optimization utilizing manually detected cones. Therefore, we optimized and evaluated our CD method on the dataset labeled by each expert separately and then on the entire dataset combining the individually labeled datasets of all experts. This makes our method adapted to both high and low quality of the image data. However, it must be emphasized that the setting of these parameters is highly dependent on the annotated dataset, as we showed and documented in Fig. 4. Thus, using a labeled dataset from only one expert could lead to biased results, even if the evaluation metric for that approach provides satisfactory numbers. This is also true for deep learning based techniques (e.g., [32]), where the final model can only be as good as the quality of the labeled data. Additionally, thanks to the parameter optimization, the CD algorithm has the potential to be successfully applied also to AO-SLO data after parameter adaptation. This was preliminarily tested on the dataset published by Wang et al. [52] with very promising results. However, given the complexity of this problem, a systematic application and detailed evaluation are left for future work.

To evaluate one of the main diagnostic parameters, i.e., the photoreceptor density, we used Bland-Altman plots. Figure 14 shows these plots for the different combinations of evaluators (first row), AO detect with each evaluator (second row), and our approach (CD) with each evaluator (third row). It can be seen that our approach achieves higher agreement than AO detect, i.e., the mean differences and the limits of agreement are lower for our approach. The cone densities estimated by our approach also match each evaluator better than those of AO detect.


Fig. 14. Bland-Altman plots representing the differences in photoreceptor densities between each pair of evaluators E1, E2, and E3 (first row), between AO detect (AO) and each evaluator (second row), and between our approach (CD) and each evaluator (third row). The dashed horizontal lines represent the limits of agreement and the solid lines represent the mean differences.


Comparison of our cone density estimates with other published papers shows good agreement and provides interesting results (see Fig. 15), which are presented in the next paragraphs. We carried out a statistical comparison with the cone counting results from histology (Curcio et al. [42]) and from AO-FIO (Legras et al. [40]; Feng et al. [28]). The data most commonly used for comparison in terms of histology are the results published by Curcio et al. [42]. We evaluated our cone counting results at the eccentricities presented in Curcio et al. [42] by independent-sample t-test analysis using sample size, mean, standard deviation, and Bonferroni correction (alpha level of p=0.0125, because the comparisons were carried out for 4 eccentricities per retinal direction). Our results were not statistically significantly different from the presented histology data for all but one compared eccentricity (see Table 4 in the Appendix).


Fig. 15. The graphs represent the photoreceptor density for the right eye in comparison with the literature. The left column (A,C) displays the dependency of density on foveal distance in degrees and the right column (B,D) displays the dependency in millimeters; this ensures a correct comparison with the available results (some of which are publicly available in degrees and the others in millimeters). The upper row (A,B) corresponds to the horizontal meridian and the second row (C,D) to the vertical meridian. The blue curve represents our data (rtx1, 10 retinas, age range 26-37 years), the orange curve with circles represents Curcio's data (histology, 8 retinas, age range 27-44 years) [42], the green curve with circles represents Legras's data (rtx1, 109 retinas, age range 20-59 years) [40], the red curve with circles represents Feng's data (rtx1, 33 retinas, age range 14-69 years) [28], and the purple curve with circles represents Park's data (AO-SLO, 192 retinas, age range 10-50 and more years) [43].


However, discrepancies between histological and image processing-based results in various studies using AO-FIO and AO-SLO have been observed and were recently summarized by Legras et al. [40]. The difference between the presented AO-FIO cone densities and the histological data in Fig. 15 could be due to the different approaches of in-vivo and ex-vivo analysis: AO-FIO imaging is performed on the intact, curved, living eye of a subject, while histological analysis uses isolated, fixed, and flattened retinal tissue prepared for cell counting. This might cause differences in the metric calculation of tissue area in mm². Further differences may be caused by converting the metric dimension in the histological preparation into a visual angle (see the formula in Curcio et al. [42]). The AO-FIO images were obtained under correction for refractive error, and the metric measurements are adjusted according to the axial length of the individual eye. Furthermore, differences in axial length impact the position and size of the cone density estimate [40,53] when expressed in units of retinal area but not in terms of the visual angle in degrees. The retina is stretched with eyeball elongation, causing differences in cone density. Additional differences may be caused by refractive error potentially present in the data by Curcio et al. [42].

For a comprehensive comparison, we present our density data alongside other recent studies using AO-based imaging techniques, including a statistical evaluation (see Table 5 in the Appendix). The recent study by Legras et al. is one of the largest studies using the rtx1 camera, covering 109 healthy retinas [40]. We observe agreement between our results and the results by Legras et al. in Fig. 15. Statistical comparisons (independent-sample t-test analysis) with their data show no statistically significant differences for almost all eccentricities and retinal directions, except for $4^{\circ}$ temporally. While the results by Legras and co-workers match ours, Feng and colleagues present lower cone density curves; see also Fig. 15. Feng et al. [28], who also employ the rtx1 camera, analyzed the influence of the sampling interval on the cone density estimation. In Fig. 15, we compare their results obtained inside a 'fixed interval', sampled for 33 healthy retinas. As presented in Table 5 in the Appendix, our cone counting results did not agree with the data by Feng et al. [28]. The same was true for a statistical comparison between Feng et al. and Legras et al., where the cone counts were statistically significantly different for all examined locations [28,40]. A comparison with AO-SLO-based data was also carried out in Fig. 15. We show the available data from the largest study using AO-SLO, by Park et al. [43], covering 192 retinas. The differences in cone density between our data and the results by Park and co-workers are smallest for distances close to the fovea. Possible sources of the differences include some of the reasons discussed below.

The aforementioned inconsistencies can be attributed to various factors. Many studies confirmed a large inter-individual variability of cone densities [40–42,54–57]. This seems to play a role in the observed differences between studies, as the coefficient of density variation was formerly reported between 10% and 20% [53]. Furthermore, recent AO-FIO based studies showed lower [13,28] or higher [40] cone densities compared to histological data. Other cone density data, measured by AO-SLO, were also larger or smaller compared to Curcio et al. [42] [27,43,53]. Such variations might be caused by differences in measurement and technical equipment or in the cone detection strategy. Further, rod presence increases with retinal eccentricity, and although rods are small, they could be imaged by the rtx1 AO-FIO at larger distances from the fovea. Nevertheless, only a further detailed analysis focused on a rigorous one-to-one comparison between AO-SLO and AO-FIO on the same eye can clarify the degree of rod brightness contribution in AO-FIO images as a function of foveal distance. The sampling window strategy can also have a substantial impact on the cone counting result: one region of interest can produce a range of cone densities depending on the size and placement of the sampling window [28].

The current study presents photoreceptor density data across the central 5 mm of the retina (2.5 mm temporal; 2 mm nasal, superior, and inferior); this corresponds to a measurement area of 8 degrees temporally and 6 degrees nasally, superiorly, and inferiorly. Our dataset of 20 measured eyes is sufficient for method development and for testing its complexity; extensive comparison of the cone counting results themselves should ideally be carried out on larger cohorts. It should be noted that our approach has been tested and evaluated on images taken at eccentricities below approximately 7$^{\circ }$, apart from the temporal location, where we measured up to 8$^{\circ }$. We did not investigate the bimodal and multimodal intensity profiles of the cones at higher eccentricities, which can affect the estimated cone density map. These multimodal cone profiles can be observed mainly for eccentricities above 7$^{\circ }$, as reported by Liu et al. [58], and can thus influence the cone counting due to the similar intensity profiles of double or multiple intensity lobes. Therefore, the cone density analysis by our algorithm at higher eccentricities needs to be investigated in the future on an appropriate dataset. Nonetheless, we offer some comparison with other data as part of the current study. Furthermore, the results of the current study are reported on younger healthy eyes and can thus serve as a reference for future work. In addition, for the MATADOR pipeline, we also provide all images and manual labeling data to enable future work using this dataset. Research building on the current data will pave the way for normative AO-FIO data in the near future.

Prior to the current study, retinal analysis was only possible by scanning the retinal area of interest part by part in multiple sequences using automated software or, at worst, manually. Previously, a compact overview across the retina on the basis of multiple images was only partly possible using manual registration via commercially available graphics packages combined with photoreceptor detection available only in a cropped part of a single image. Our proposed method and software enable us to generate AO-based photoreceptor density maps for all measured locations fully automatically. Such maps can easily be created for every measured subject and, finally, averaged across study subjects. This connection of processing steps will enable comprehensive retinal evaluation for future medical applications in both the detection of retinal abnormalities and long-term subject monitoring.

5. Conclusion

Our study addresses the need for comprehensive processing and analysis of AO-FIO images. We created a novel processing pipeline that consists of several image processing steps. Our registration results show high alignment accuracy in comparison with manual alignment, as confirmed by the NMI metric evaluation. Photoreceptor detection was compared between manually labeled data and automated software and showed a high detection accuracy, as indicated by the Dice coefficient. The detected photoreceptor positions were utilized for the estimation of the photoreceptor density across the measured retina. The pipeline was tested on twenty retinas and provides valuable results for evaluation.

For the first time, fully automatically measured averaged photoreceptor density maps are presented. The density measurement along the meridians shows a steep decrease with distance away from the fovea, as presented in other research. The maps demonstrate the averaged photoreceptor density across the measured retina and are thus a valuable source of new information for further analysis. Our approach and implementation provide a new valuable tool for retinal assessment as well as for longitudinal evaluation; therefore, it will find further utilization as a research tool or in medical practice.

Appendix

A.1. Processing time requirements

The acquisition of a single AO-FIO image (a composite of 40 raw images) takes 2 seconds; the acquisition time for multiple images, as in the current work, strongly depends on the cooperation between the operator and the subject, but the common time requirement is on the order of minutes per eye. The proposed methodology was implemented in the Matlab environment (version 9.11), currently without speed optimization. Nevertheless, the average computation times for one eye of a subject (10 single images) on a PC with an AMD Ryzen 5 processor (3.2 GHz) and 32 GB RAM are:

  • Data loading (10 images): 1 sec
  • Image registration (10 images): 35 sec
  • Retinal image montaging (10 images): 33 sec
  • Cone detection (1 large-FOV image): 8.6 sec

The total computation time for one subject is typically below 80 seconds. It must be noted that mainly the registration part can be sped up by taking advantage of parallel processing. Furthermore, implementation in lower-level programming languages (e.g., C, C++) could speed up the whole process by up to a few hundred times [59].
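As an indication only, the pairwise transform estimations could be distributed with the Parallel Computing Toolbox, as sketched below; registerPair is a hypothetical wrapper around the PC-SIFT steps of Section 2.2, and the sequential placement order of Section 2.2 would still have to be respected when composing the final montage.

  tforms = cell(1, numel(movingImages));
  parfor k = 1:numel(movingImages)          % independent pairwise registrations
      tforms{k} = registerPair(refImage, movingImages{k});  % hypothetical helper
  end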

A.2. Supplementary implementation details

Image montaging is implemented in Matlab as a sequential registration of pairs of particular images. We utilize the built-in Fourier transform function (fft) to correct for the large shift in the first step by phase correlation. Then we utilize the SIFT approach to fine-tune the registration with the help of the open-source VLfeat library [60]. For the geometric transformations and other basic image processing, we utilize functions from the Image Processing Toolbox (IPT). The fovea detection utilizes fundamental image processing functions from the IPT. The CD algorithm is based on brightness maximum detection (imregionalmax), and the detected maxima are filtered according to the parameters $H$, $S$, and $\sigma$, which are optimized by Bayesian optimization (bayesopt).

A.3. Tables with measured values


Table 1. Ocular biometry data. Ocular biometry was measured for 10 subjects with the Lenstar 900 (Haag Streit, Switzerland).


Table 2. Mean cone density (n=10) for horizontal meridian measured at specific distances from fovea for left and right eye.

Table 3. Mean cone density (n=10) for the vertical meridian, measured at specific distances from the fovea for the left and right eye.

A.4. Relationship between Dice coefficient and image quality

Fig. 16. The relationship between the Dice coefficient and image quality. Each group of three dots corresponds to one manually labeled patch of a specific image quality, compared with the CD algorithm via the Dice coefficient; the three dots per patch correspond to the three evaluators who labeled it. The yellow line represents the average with standard deviation (yellow region), and the black curve represents the fitted quadratic function.
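
The quadratic trend drawn in Fig. 16 corresponds to an ordinary least-squares polynomial fit, which could be reproduced along the following lines (variable names are illustrative):

```matlab
% Quadratic fit of Dice coefficient versus image quality (illustrative).
coeffs = polyfit(imageQuality, diceValues, 2);          % 2nd-order fit
qGrid  = linspace(min(imageQuality), max(imageQuality), 200);
plot(qGrid, polyval(coeffs, qGrid), 'k-');              % fitted curve
```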

A.5. Statistical comparison

Table 4. P-values comparing the cone density measurements of the current study with the histological results of Curcio et al. [42]. Statistical differences were evaluated by independent-sample t-test analysis at the eccentricities presented in Curcio et al., using sample size, mean, standard deviation and a Bonferroni correction (alpha level of p=0.0125, because the comparisons were carried out for 4 eccentricities per retinal direction). The color of each cell represents the result of the comparison for a specific position. Green - the means of cone densities between studies are equal; red - the means of cone densities between studies are not equal; white - data not available.

Table 5. Summary of the comparison of the cone density measurements of the current study with published results. Statistical differences were evaluated by comparing cone density at the presented eccentricities between our results and the respective data from the literature by independent-sample t-test analysis, using summary data from the studies depicted (taking into account sample size, mean and standard deviation for each eccentricity and retinal direction). The dots in each cell represent the results of the comparison for a specific position between: the current study and Legras et al. (first dot) [40], the current study and Feng et al. (second dot) [28], and Feng et al. and Legras et al. (third dot) [28,40]. Each comparison used a Bonferroni-corrected alpha level of p=0.0125, as comparisons were carried out for 4 eccentricities per retinal direction. Green dot - the means of cone densities between studies are equal; red dot - the means of cone densities between studies are not equal; white dot - data not available.
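
Because the comparisons in Tables 4 and 5 use only summary statistics, they can be reproduced from sample size, mean and standard deviation alone. The sketch below assumes the Welch (unequal-variance) form of the independent-sample t-test; whether the pooled-variance form was used instead is not restated here, so this is an assumption.

```matlab
% Two-sample t-test from summary statistics (Welch form assumed).
function p = ttestFromSummary(n1, m1, s1, n2, m2, s2)
    se2 = s1^2/n1 + s2^2/n2;                 % squared standard error
    t   = (m1 - m2) / sqrt(se2);             % test statistic
    % Welch-Satterthwaite approximation of the degrees of freedom
    df  = se2^2 / ((s1^2/n1)^2/(n1 - 1) + (s2^2/n2)^2/(n2 - 1));
    p   = 2 * tcdf(-abs(t), df);             % two-sided p-value
end
```

A difference is then declared significant only if p < 0.0125, the Bonferroni-corrected alpha level used in both tables.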

Funding

Grantová Agentura České Republiky (21-18578S).

Acknowledgement

The authors would like to thank Imagine Eyes, Orsay, France, especially Dr. Nicolas Chateau, Dr. Laurent Vabre, and Loic Le Kernevez, for scientific and technical discussions on adaptive optics flood illumination retinal imaging. Sincere thanks are given to Dian Li, Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA, and Anja Schirmer, Optometry and Ophthalmic Optics, Beuth University of Applied Sciences, Berlin, Germany, for carrying out the manual segmentations alongside Dr. Jan Darius Unterlauft for this paper. This work has been supported by the Foundation "Nadani Josefa, Marie a Zdenky Hlavkovych", Czech Republic, and by the Czech Science Foundation, project no. 21-18578S. Furthermore, we thank all subjects for taking the time to participate in this study.

Disclosures

The authors declare no conflicts of interest.

Data availability

The entire dataset of 200 images of 10 normal healthy subjects, as well as the 40 foveal and 40 peripheral patches with manually labeled data, is publicly available in Dataset [61].

References

1. J. S. Gill, M. Moosajee, and A. M. Dubis, “Cellular imaging of inherited retinal diseases using adaptive optics,” Eye 33(11), 1683–1698 (2019). [CrossRef]  

2. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef]  

3. E. Akyol, A. M. Hagag, S. Sivaprasad, and A. J. Lotery, “Adaptive optics: principles and applications in ophthalmology,” Eye 35(1), 244–264 (2021). [CrossRef]  

4. S. A. Burns, A. E. Elsner, K. A. Sapoznik, R. L. Warner, and T. J. Gast, “Adaptive optics imaging of the human retina,” Prog. Retinal Eye Res. 68, 1–30 (2019). [CrossRef]  

5. R. F. Cooper, Y. N. Sulai, A. M. Dubis, T. Y. Chui, R. B. Rosen, M. Michaelides, A. Dubra, and J. Carroll, “Effects of intraframe distortion on measures of cone mosaic geometry from adaptive optics scanning light ophthalmoscopy,” Trans. Vis. Sci. Tech. 5(1), 10 (2016). [CrossRef]  

6. J.-A. Sahel, K. Grieve, C. Pagot, C. Authié, S. Mohand-Said, M. Paques, I. Audo, K. Becker, A.-E. Chaumet-Riffaud, L. Azoulay, E. Gutman, T. Léveillard, C. Zeitz, S. Picaud, D. Dalkara, and K. Marazova, “Assessing photoreceptor status in retinal dystrophies: From high-resolution imaging to functional vision,” Am. J. Ophthalmol. 230, 12–47 (2021). [CrossRef]  

7. E. Meixner and G. Michelson, “Measurement of retinal wall-to-lumen ratio by adaptive optics retinal camera: a clinical research,” Graefe’s Arch. Clin. Exp. Ophthalmol. 253(11), 1985–1995 (2015). [CrossRef]  

8. M. Paques, S. Meimon, F. Rossant, D. Rosenbaum, S. Mrejen, F. Sennlaub, and K. Grieve, “Adaptive optics ophthalmoscopy: Application to age-related macular degeneration and vascular diseases,” Prog. Retinal Eye Res. 66, 1–16 (2018). [CrossRef]  

9. C. Viard, K. Nakashima, B. Lamory, M. Pâques, X. Levecq, and N. Château, “Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera,” in Ophthalmic Technologies XXI, vol. 7885 (SPIE, 2011), pp. 66–75.

10. E. A. Rossi, N. Norberg, C. Eandi, C. Chaumette, S. Kapoor, L. Le, V. C. Snyder, J. N. Martel, J. Gautier, K. Gocho, K. K. Dansingani, J. Chhablani, A. Arleo, S. Mrejen, J.-A. Sahel, K. Grieve, and M. Paques, “A new method for visualizing drusen and their progression in flood-illumination adaptive optics ophthalmoscopy,” Trans. Vis. Sci. Tech. 10(14), 19 (2021). [CrossRef]  

11. K. Y. Li and A. Roorda, “Automated identification of cone photoreceptors in adaptive optics retinal images,” J. Opt. Soc. Am. A 24(5), 1358–1363 (2007). [CrossRef]  

12. M. Lombardo, S. Serrao, P. Ducoli, and G. Lombardo, “Eccentricity dependent changes of density, spacing and packing arrangement of parafoveal cones,” Ophthalmic Physiol. Opt. 33(4), 516–526 (2013). [CrossRef]  

13. M. N. Muthiah, C. Gias, F. K. Chen, J. Zhong, Z. McClelland, F. B. Sallo, T. Peto, P. J. Coffey, and L. da Cruz, “Cone photoreceptor definition on adaptive optics retinal imaging,” Br. J. Ophthalmol. 98(8), 1073–1079 (2014). [CrossRef]  

14. J. I. Wolfing, M. Chung, J. Carroll, A. Roorda, and D. R. Williams, “High-resolution retinal imaging of cone–rod dystrophy,” Ophthalmology 113(6), 1014–1019.e1 (2006). [CrossRef]  

15. S. S. Choi, N. Doble, J. L. Hardy, S. M. Jones, J. L. Keltner, S. S. Olivier, and J. S. Werner, “In vivo imaging of the photoreceptor mosaic in retinal dystrophies and correlations with visual function,” Invest. Ophthalmol. Vis. Sci. 47(5), 2080–2092 (2006). [CrossRef]  

16. G. Querques, C. Kamami-Levy, A. Georges, A. Pedinielli, V. Capuano, R. Blanco-Garavito, F. Poulon, and E. H. Souied, “Adaptive optics imaging of foveal sparing in geographic atrophy secondary to age-related macular degeneration,” Retina 36(2), 247–254 (2016). [CrossRef]  

17. M. Lombardo, M. Parravano, S. Serrao, L. Ziccardi, D. Giannini, and G. Lombardo, “Investigation of adaptive optics imaging biomarkers for detecting pathological changes of the cone mosaic in patients with type 1 diabetes mellitus,” PLoS One 11(3), e0151380 (2016). [CrossRef]  

18. A. Zaleska-Żmijewska, P. Piątkiewicz, B. Śmigielska, A. Sokołowska-Oracz, Z. M. Wawrzyniak, D. Romaniuk, J. Szaflik, and J. P. Szaflik, “Retinal photoreceptors and microvascular changes in prediabetes measured with adaptive optics (rtx1™): a case-control study,” J. Diabetes Res. 2017, 1–9 (2017). [CrossRef]  

19. D. M. Bukowska, A. L. Chew, E. Huynh, I. Kashani, S. L. Wan, P. M. Wan, and F. K. Chen, “Semi-automated identification of cones in the human retina using circle hough transform,” Biomed. Opt. Express 6(12), 4676–4693 (2015). [CrossRef]  

20. G. Ramaswamy and N. Devaney, “Pre-processing, registration and selection of adaptive optics corrected retinal images,” Ophthalmic Physiol. Opt. 33(4), 527–539 (2013). [CrossRef]  

21. L. Blanco, L. M. Mugnier, A. M. Bonnefois, and M. Paques, “Registration and restoration of adaptive-optics corrected retinal images,” in 2014 International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM), (2014), pp. 1–5.

22. H. Li, J. Lu, G. Shi, and Y. Zhang, “Automatic montage of retinal images in adaptive optics confocal scanning laser ophthalmoscope,” Opt. Eng. 51(5), 1–6 (2012). [CrossRef]  

23. M. Chen, R. F. Cooper, G. K. Han, J. Gee, D. H. Brainard, and J. I. W. Morgan, “Multi-modal automatic montaging of adaptive optics retinal images,” Biomed. Opt. Express 7(12), 4899–4918 (2016). [CrossRef]  

24. H. Li, H. Yang, G. Shi, and Y. Zhang, “Adaptive optics retinal image registration from scale-invariant feature transform,” Optik 122(9), 839–841 (2011). [CrossRef]  

25. M. Chen, R. F. Cooper, J. C. Gee, D. H. Brainard, and J. I. W. Morgan, “Automatic longitudinal montaging of adaptive optics retinal images using constellation matching,” Biomed. Opt. Express 10(12), 6476–6496 (2019). [CrossRef]  

26. B. Xue, S. S. Choi, N. Doble, and J. S. Werner, “Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera,” J. Opt. Soc. Am. A 24(5), 1364–1372 (2007). [CrossRef]  

27. T. Y. P. Chui, H. Song, and S. A. Burns, “Adaptive-optics imaging of human cone photoreceptor distribution,” J. Opt. Soc. Am. A 25(12), 3021 (2008). [CrossRef]  

28. S. Feng, M. J. Gale, J. D. Fay, A. Faridi, H. E. Titus, A. K. Garg, K. V. Michaels, L. R. Erker, D. Peters, T. B. Smith, and M. E. Pennesi, “Assessment of different sampling methods for measuring and representing macular cone density using flood-illuminated adaptive optics,” Invest. Ophthalmol. Vis. Sci. 56(10), 5751–5763 (2015). [CrossRef]  

29. R. Garrioch, C. Langlo, A. M. Dubis, R. F. Cooper, A. Dubra, and J. Carroll, “The repeatability of in vivo parafoveal cone density and spacing measurements,” Optom. Vis. Sci. 89(5), 632–643 (2012). [CrossRef]  

30. A. Turpin, P. Morrow, B. Scotney, R. Anderson, and C. Wolsley, “Automated identification of photoreceptor cones using multi-scale modelling and normalized cross-correlation,” in Image Analysis and Processing – ICIAP 2011, (Springer Berlin Heidelberg, Berlin, Heidelberg, 2011), pp. 494–503.

31. A. L. Chew, D. M. Sampson, I. Kashani, and F. K. Chen, “Agreement in cone density derived from gaze-directed single images versus wide-field montage using adaptive optics flood illumination ophthalmoscopy,” Trans. Vis. Sci. Tech. 6(6), 9–13 (2017). [CrossRef]  

32. D. Cunefare, L. Fang, R. F. Cooper, A. Dubra, J. Carroll, and S. Farsiu, “Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks,” Sci. Rep. 7(1), 6620 (2017). [CrossRef]  

33. A. Lazareva, P. Liatsis, and F. G. Rauscher, “Hessian-log filtering for enhancement and detection of photoreceptor cells in adaptive optics retinal images,” J. Opt. Soc. Am. A 33(1), 84–94 (2016). [CrossRef]  

34. S. J. Chiu, Y. Lokhnygina, A. M. Dubis, A. Dubra, J. Carroll, J. A. Izatt, and S. Farsiu, “Automatic cone photoreceptor segmentation using graph theory and dynamic programming,” Biomed. Opt. Express 4(6), 924–937 (2013). [CrossRef]  

35. L. Mariotti, N. Devaney, G. Lombardo, and M. Lombardo, “Understanding the changes of cone reflectance in adaptive optics flood illumination retinal images over three years,” Biomed. Opt. Express 7(7), 2807–2822 (2016). [CrossRef]  

36. R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013). [CrossRef]  

37. R. F. Cooper, G. K. Aguirre, and J. I. Morgan, “Fully automated estimation of spacing and density for retinal mosaics,” Trans. Vis. Sci. Tech. 8(5), 26 (2019). [CrossRef]  

38. J. Carroll, M. Neitz, H. Hofer, J. Neitz, and D. R. Williams, “Functional photoreceptor loss revealed with adaptive optics,” Proc. Natl. Acad. Sci. 101(22), 8461–8466 (2004). [CrossRef]  

39. A. M. Dubis, R. F. Cooper, J. Aboshiha, C. S. Langlo, V. Sundaram, B. Liu, F. Collison, G. A. Fishman, A. T. Moore, A. R. Webster, A. Dubra, J. Carroll, and M. Michaelides, “Genotype-dependent variability in residual cone structure in achromatopsia,” Invest. Ophthalmol. Vis. Sci. 55(11), 7303–7311 (2014). [CrossRef]  

40. R. Legras, A. Gaudric, and K. Woog, “Distribution of cone density, spacing and arrangement in adult healthy retinas with adaptive optics flood illumination,” PLoS One 13(1), e0191141 (2018). [CrossRef]  

41. K. Woog and R. Legras, “Distribution of mid-peripheral cones in emmetropic and myopic subjects using adaptive optics flood illumination camera,” Ophthalmic Physiol. Opt. 39(2), 94–103 (2019). [CrossRef]  

42. C. A. Curcio, K. R. Sloan, R. E. Kalina, and A. E. Hendrickson, “Human photoreceptor topography,” J. Comp. Neurol. 292(4), 497–523 (1990). [CrossRef]  

43. S. P. Park, J. K. Chung, V. Greenstein, S. H. Tsang, and S. Chang, “A study of factors affecting the human cone photoreceptor density measured by adaptive optics scanning laser ophthalmoscope,” Exp. Eye Res. 108, 1–9 (2013). [CrossRef]  

44. E. Valterova, F. G. Rauscher, and R. Kolar, “Robust automatic montaging of adaptive optics flood illumination retinal images,” in Annual Conference on Medical Image Understanding and Analysis, (Springer, 2021), pp. 503–513.

45. W. Gao, B. Cense, Y. Zhang, R. S. Jonnal, and D. T. Miller, “Measuring retinal contributions to the optical stiles-crawford effect with optical coherence tomography,” Opt. Express 16(9), 6486–6501 (2008). [CrossRef]  

46. R. S. Jonnal, J. R. Besecker, J. C. Derby, O. P. Kocaoglu, B. Cense, W. Gao, Q. Wang, and D. T. Miller, “Imaging outer segment renewal in living human cone photoreceptors,” Opt. Express 18(5), 5257–5270 (2010). [CrossRef]  

47. K. M. Litts, R. F. Cooper, J. L. Duncan, and J. Carroll, “Photoreceptor-based biomarkers in aoslo retinal imaging,” Invest. Ophthalmol. Vis. Sci. 58(6), BIO255 (2017). [CrossRef]  

48. E. Valterova, “Automatic adaptive optics retinal images montaging,” in Proceedings of the 27th Conference STUDENT EEICT 2021 (Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, Brno, 2021).

49. E. Valterova, “Automated fovea center estimation in adaptive optics images,” in Proceedings of the 26th Conference STUDENT EEICT 2020 (Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, Brno, 2020), pp. 442–446.

50. D. Roshandel, D. M. Sampson, D. A. Mackey, and F. K. Chen, “Impact of reference center choice on adaptive optics imaging cone mosaic analysis,” Invest. Ophthalmol. Vis. Sci. 63(4), 12 (2022). [CrossRef]  

51. A. G. Bennett, A. R. Rudnicka, and D. F. Edgar, “Improvements on littmann’s method of determining the size of retinal features by fundus photography,” Graefe’s Arch. Clin. Exp. Ophthalmol. 232(6), 361–367 (1994). [CrossRef]  

52. Y. Wang, N. Bensaid, P. Tiruveedhula, J. Ma, S. Ravikumar, and A. Roorda, “Human foveal cone photoreceptor topography and its dependence on eye length,” eLife 8, 1 (2019). [CrossRef]  

53. H. Song, T. Y. P. Chui, Z. Zhong, A. E. Elsner, and S. A. Burns, “Variation of Cone Photoreceptor Packing Density with Retinal Eccentricity and Age,” Invest. Ophthalmol. Vis. Sci. 52(10), 7376–7384 (2011). [CrossRef]  

54. J. B. Jonas, U. Schneider, and G. O. H. Naumann, “Count and density of human retinal photoreceptors,” Graefe’s Arch. Clin. Exp. Ophthalmol. 230(6), 505–510 (1992). [CrossRef]  

55. J. Jacob, M. Paques, V. Krivosic, B. Dupas, A. Erginay, R. Tadayoni, and A. Gaudric, “Comparing parafoveal cone photoreceptor mosaic metrics in younger and older age groups using an adaptive optics retinal camera,” Ophthalmic Surgery, Lasers Imaging Retin. 48(1), 45–50 (2017). [CrossRef]  

56. T. Y. P. Chui, H. Song, and S. A. Burns, “Individual variations in human cone photoreceptor packing density: variations with refractive error,” Invest. Ophthalmol. Vis. Sci. 49(10), 4679–4687 (2008). [CrossRef]  

57. A. E. Elsner, B. R. Walker, R. N. Gilbert, V. Parimi, J. A. Papay, T. J. Gast, and S. A. Burns, “Cone photoreceptors in diabetic patients,” Front. Med. 9, 1 (2022). [CrossRef]  

58. Z. Liu, O. P. Kocaoglu, T. L. Turner, and D. T. Miller, “Modal content of living human cone photoreceptors,” Biomed. Opt. Express 6(9), 3378–3404 (2015). [CrossRef]  

59. T. Andrews, “Computation time comparison between Matlab and C++ using launch windows,” Tech. Rep., California Polytechnic State University, San Luis Obispo (2012).

60. A. Vedaldi and B. Fulkerson, “VLFeat: An open and portable library of computer vision algorithms,” VLFeat, 2008, http://www.vlfeat.org/.

61. E. Valterova, J. D. Unterlauft, M. Francke, T. Kirsten, R. Kolar, and F. G. Rauscher, “Comprehensive automatic processing and analysis of adaptive optics flood illumination retinal images,” Zenodo, 2022, https://doi.org/10.5281/zenodo.6635685.

