
Automatic cone photoreceptor segmentation using graph theory and dynamic programming

Open Access

Abstract

Geometrical analysis of the photoreceptor mosaic can reveal subclinical ocular pathologies. In this paper, we describe a fully automatic algorithm to identify and segment photoreceptors in adaptive optics ophthalmoscope images of the photoreceptor mosaic. This method is an extension of our previously described closed contour segmentation framework based on graph theory and dynamic programming (GTDP). We validated the performance of the proposed algorithm by comparing it to the state-of-the-art technique on a large data set consisting of over 200,000 cones and posted the results online. We found that the GTDP method achieved a higher detection rate, decreasing the cone miss rate by over a factor of five.

©2013 Optical Society of America

1. Introduction

Diagnosis, prognosis, and treatment of many ocular and neurodegenerative diseases require visualization of microscopic structures in the eye. Integration of adaptive optics (AO) into ocular imaging systems has made the visualization of living human photoreceptors possible [1–14]. More specifically, the AO scanning light ophthalmoscope (AOSLO) [2] has been a key instrument for analyzing the photoreceptor mosaic and revealing subclinical ocular pathologies missed by other modern ophthalmic imaging modalities [15]. Studies have been conducted on the photoreceptor mosaic to gather normative data on photoreceptor distribution [16,17], density [18–20], spacing [8,21,22], directionality [23], and temporal changes [24,25]. Characterization of irregular mosaics in the presence of various retinal diseases such as cone-rod dystrophy has also been achieved [22,26–38].

To generate quantitative metrics of the photoreceptor mosaic, identification of individual photoreceptors is often a required step. Since manual identification is extremely time-consuming, many groups have utilized some form of automation when studying the photoreceptor mosaic [9,12,14,17,18,27]. Cone identification algorithms have also been developed and validated for accuracy [3943]; the Garrioch et al. 2012 algorithm [44], for example, is a modified version of the Li & Roorda 2007 algorithm [39] and was thoroughly validated for repeatability on a large cone mosaic data set. Even so, manual correction was still necessary to identify missed photoreceptors [20,34].

In this work, we propose the use of graph theory and dynamic programming (GTDP), a framework we previously developed to segment layered [4547] and closed contour structures [48], to both identify and segment cone photoreceptors in AO ophthalmoscopy images (Section 2.3). We then validate our algorithm’s performance for cone identification (Section 3.2) and evaluate its reproducibility in cone density and spacing estimation (Section 3.3). Finally, the proposed algorithm is extended to segment an image containing both rod and cone photoreceptors (Section 3.4).

2. Methods

The methods for image acquisition, photoreceptor segmentation, and result validation are discussed in the following sections. Section 2.1 explains the image capture and pre-processing steps, while Section 2.2 describes the gold standard (target) for cone identification. Section 2.3 describes our method for cone segmentation, and Section 2.4 outlines the method for validation. Lastly, Section 2.5 introduces the preliminary rod-cone segmentation algorithm.

2.1 Image data set

We validated our algorithm on 840 images (150 × 150 pixels) from the Garrioch et al. study [44], where the methods for image acquisition and pre-processing are described in detail. To summarize, the right eye of 21 subjects (25.9 ± 6.5 years in age, 1 subject with deuteranopia) was imaged using a previously described AOSLO system [13,17] with a 775 nm super luminescent diode and a 0.96 × 0.96° field of view. Four locations 0.65° from the center of fixation (bottom left, bottom right, top left, and top right) were imaged, capturing 150 frames at each site. This process was repeated 10 times for each subject. Axial length measurements were also acquired with an IOL Master (Carl Zeiss Meditec, Dublin, CA) to determine the lateral resolution of the captured images.

Following image acquisition, pre-processing steps were taken in the Garrioch et al. study to generate a single registered image from each 150-frame sequence. First, sinusoidal distortions introduced by the resonant scanner were removed from the individual frames. The frames from each sequence were then registered to a reference frame [49], and the top 40 frames with the highest normalized cross correlation to the reference were averaged together. This procedure was performed for all 21 subjects at each of the 4 locations and repeated 10 times over, resulting in a total of 840 images in the image data set. Finally, to ensure that each set of 10 repeated images captured the same patch of retina, the images were aligned using strip registration.
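As a rough illustration of the frame selection and averaging step (not the actual Garrioch et al. code), the following MATLAB sketch ranks registered frames by their normalized cross correlation to a reference and averages the best 40; the H × W × 150 array frames and the choice of the first frame as reference are assumptions.

ref = frames(:,:,1);                            % registered reference frame (assumed)
ncc = zeros(size(frames, 3), 1);
for k = 1:size(frames, 3)
    ncc(k) = corr2(ref, frames(:,:,k));         % 2-D correlation coefficient to reference
end
[~, order] = sort(ncc, 'descend');              % rank frames by similarity
registeredImage = mean(frames(:,:,order(1:40)), 3);  % average the top 40 frames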

Since the image data set was used strictly for algorithm validation, we obtained a separate set of images to tune the algorithm. These training images were captured using the same imaging protocol, and the subjects in the training and validation data sets did not overlap.

2.2 Gold standard for cone identification

We defined the gold standard as the semi-automatically identified cone locations reported in the Garrioch et al. study, since the cone locations on all 840 images had been carefully reviewed and corrected by an expert grader. As described in the study, the initial cone coordinates were first automatically generated using the Garrioch et al. 2012 algorithm, a modified version of the Li & Roorda 2007 cone identification algorithm [39]. Any missed cones were then added manually. Automatically segmented cones were not removed or adjusted, as the Garrioch et al. 2012 algorithm exhibited a tendency towards false negatives rather than false positives.

2.3 GTDP cone segmentation algorithm

We developed a customized implementation of our generalized GTDP framework for closed contour structures [48] to segment cone photoreceptors in AOSLO images. In brief, we used maxima operators to obtain pilot estimates of prominent cones. We then used the quasi-polar transform [48] to map the closed contour cone estimates from the Cartesian domain into layers in the quasi-polar domain. The layered structures were then segmented using our classic GTDP method [45], and the resulting segmentation lines were mapped back into the Cartesian domain by the inverse quasi-polar transform. Finally, we performed additional iterations to find any missed cones. These steps are described in detail below.

We first brightened dim photoreceptors by applying Eq. (1) to the 150 × 150 pixel image $I_c^{orig}$ (subscript $c$ denotes the Cartesian domain), where $\mathrm{normalize}(X, y, z)$ indicates a linear normalization of the elements in matrix $X$ to range from $y$ to $z$:

$$I_c^{all} = \mathrm{normalize}\left(\log\left(\mathrm{normalize}\left(I_c^{orig}, 0.1, 0.9\right)\right), 0, 1\right). \tag{1}$$

The range 0.1 to 0.9 was chosen to increase the contrast between the dimmest and brightest pixels, as well as to avoid the $\log(0)$ and $\log(1)$ computations. The superscript $all$ indicates that all pixels were present in the image.
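As a concrete illustration, Eq. (1) takes only a few lines of MATLAB; normalizeRange below is a hypothetical helper standing in for the paper's normalize(X, y, z) operator.

% Hypothetical helper implementing normalize(X, y, z) from Eq. (1).
normalizeRange = @(X, y, z) (X - min(X(:))) / (max(X(:)) - min(X(:))) * (z - y) + y;

% Brighten dim photoreceptors by compressing the dynamic range with a log.
Ic_all = normalizeRange(log(normalizeRange(Ic_orig, 0.1, 0.9)), 0, 1);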

We then determined pilot estimates of the cones by finding local maxima using the imregionalmax($I_c^{all}$, 4) function in MATLAB (The MathWorks, Natick, MA). This resulted in the binary image $B_c^{all}$, where values of 1 corresponded to pilot estimates of cones. Individual cones were then analyzed in order of decreasing intensity, where $I_c^{all}$ and $B_c^{all}$ were cropped about the centroid of the cone’s pilot estimate to generate the 21 × 21 pixel images $I_c$ and $B_c$; cropping the images enabled a faster computation time, and the ten pixel buffer on all sides of the centroid ensured that the target cone was not cropped out of $I_c$. Pilot estimates for other cones contained within $B_c$ were removed, and the remaining cone estimate in $B_c$ was refined using thresholding. The new pilot estimate consisted of connected pixels in $I_c$ ranging from $0.95T_{max}$ to $T_{max}$ in intensity, where $T_{max}$ was the maximum intensity in $I_c$ coinciding with $B_c = 1$, and the factor 0.95 was determined empirically to avoid thresholding adjacent cones.
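A minimal MATLAB sketch of this initialization, assuming Ic_all from the Eq. (1) sketch; the crop bookkeeping is simplified (boundary clamping and the connectedness check on the refined estimate are omitted for brevity).

Bc_all = imregionalmax(Ic_all, 4);                  % pilot estimates: 4-connected local maxima
stats = regionprops(Bc_all, Ic_all, 'Centroid', 'MaxIntensity');
[~, order] = sort([stats.MaxIntensity], 'descend'); % analyze cones by decreasing intensity
c = round(stats(order(1)).Centroid);                % centroid of the brightest pilot estimate
rows = c(2)-10 : c(2)+10;                           % ten pixel buffer on all sides
cols = c(1)-10 : c(1)+10;
Ic = Ic_all(rows, cols);                            % 21 x 21 crop about the centroid
Bc = Bc_all(rows, cols);
Tmax = max(Ic(Bc));                                 % peak intensity on the pilot estimate
pilot = (Ic >= 0.95 * Tmax) & (Ic <= Tmax);         % refined estimate by thresholding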

To segment each cone, we first used our previously described quasi-polar transform [48] to transform $I_c$ to $I_q$ (subscript $q$ denotes the quasi-polar domain). To do this, we first transformed $I_c$ and $B_c$ (Figs. 1(a) and 1(b)) into the polar domain to create $I_p$ and $B_p$ (Figs. 1(c) and 1(d)). Next, we column-wise shifted $I_p$ until the pilot estimate in $B_p$ was flat, resulting in the quasi-polar images $I_q$ and $B_q$ (Figs. 1(e) and 1(f)). After obtaining $I_q$, we removed regions containing other pilot estimates and already-segmented cones from the search space, and used GTDP to find the shortest path across $I_q$ with the following weight scheme:

$$w_{ab} = \mathrm{normalize}\left(g_a^{LD} + g_b^{LD}, 1, 2\right) + \mathrm{normalize}\left(g_a^{DL} + g_b^{DL}, 0, 0.1\right) + \mathrm{normalize}\left(d_{ab}, 0, 0.05\right) + w_{min}, \tag{2}$$
where $w_{ab}$ is the edge weight connecting nodes $a$ and $b$, $g_n^{LD}$ and $g_n^{DL}$ are the vertical light-to-dark and dark-to-light gradients [45] of the image at node $n$, respectively, $d_{ab}$ is the Euclidean distance from node $a$ to node $b$, and $w_{min} = 0.00001$. The vertical light-to-dark gradient comprised the majority of the weight, since it was the primary indicator for the boundary of the central cone. A smaller weight was given to the dark-to-light gradient to segment boundaries of dimmer cones adjacent to brighter cones (Fig. 1(c), left). Finally, a vertical distance penalty was added to discourage the segmented line from including adjacent cones. Specific values for the weight ranges were determined empirically.
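As a sketch of how Eq. (2) maps onto code, the weights over all graph edges can be assembled at once. Here a and b are assumed index vectors over the edge endpoints, gLD and gDL per-node gradient vectors of $I_q$, d the per-edge distances, and normalizeRange the hypothetical helper from the Eq. (1) sketch; none of these names are from the original implementation.

wmin = 0.00001;
w = normalizeRange(gLD(a) + gLD(b), 1, 2) ...   % dominant light-to-dark term
  + normalizeRange(gDL(a) + gDL(b), 0, 0.1) ... % small dark-to-light term
  + normalizeRange(d, 0, 0.05) ...              % vertical distance penalty
  + wmin;                                       % keep all edge weights positive
G = graph(a, b, w);                             % shortest path across Iq, e.g. via
cut = shortestpath(G, startNode, endNode);      % MATLAB's built-in Dijkstra solver

The startNode and endNode endpoints (hypothetical names) would sit on the left and right borders of $I_q$, so the shortest path traverses the full width of the quasi-polar image.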


Fig. 1 Cone photoreceptor segmentation using the quasi-polar transform. (a) Cartesian image containing the cone to segment. (b) Pilot estimate of the cone in (a). (c,d) Polar transformation of (a) and (b), respectively. The black regions in (c) are invalid points that lie outside the image in the Cartesian domain. (e,f) Images (c) and (d) column-wise shifted until the pilot estimate in (d) was flat. (g) Segmentation of (e) using GTDP (magenta). (h) Transformation of the segmentation in (g) back into the Cartesian domain (magenta).


We then transformed the shortest path from the quasi-polar domain (Fig. 1(g)) back into the Cartesian domain to obtain the final segmentation of the cone (Fig. 1(h)), keeping it only if the mean radius was greater than one pixel. This entire process was then repeated for all subsequent cone estimates.

At this stage of the algorithm, the cones identified and segmented by the GTDP method (Fig. 2(b), black) may be similar to those detected by previous methods, since local maxima were used to initialize the cone locations. To identify any remaining missed cones, we obtained pilot estimates of the cones using a second method: image deblurring using maximum likelihood blind deconvolution [50–52] (deconvblind function in MATLAB) with a Gaussian point spread function of half the mean radius of the already segmented cones, followed by locating all regional maxima with a pixel connectivity of eight. Any pilot estimates lying outside already-segmented cone locations (Figs. 2(a) and 2(b), white) were segmented using the same quasi-polar GTDP technique, with the modified weighting scheme shown in Eq. (3). In this weighting scheme, the vertical dark-to-light gradient was assigned a higher weight since cones detected during this second iteration were typically dimmer and adjacent to brighter cones. The vertical distance penalty was also removed, since adjacent cones were already segmented and thus removed from the search region.
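A minimal sketch of this second initialization pass in MATLAB, assuming rMean holds the mean radius (in pixels) of the already segmented cones and segmentedMask is a hypothetical binary mask of their interiors; whether "half the mean radius" parameterizes the PSF width or its support is not specified above, so taking it as the Gaussian sigma is an assumption.

psf = fspecial('gaussian', 21, rMean / 2);   % Gaussian PSF; sigma = half the mean cone radius
Ideblur = deconvblind(Ic_all, psf);          % maximum likelihood blind deconvolution
Bnew = imregionalmax(Ideblur, 8);            % regional maxima with pixel connectivity of eight
Bnew(segmentedMask) = false;                 % keep only estimates outside segmented cones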


Fig. 2 Identification of cones missed by local maxima. (a) AOSLO image in log scale with missed cones shown inside the white boxes. (b) Cone photoreceptors segmented using local maxima initialization in black; pilot estimates of missed cones found using deconvolution and local maxima are shown as white asterisks.


$$w_{ab} = \mathrm{normalize}\left(g_a^{LD} + g_b^{LD}, 1, 2\right) + \mathrm{normalize}\left(g_a^{DL} + g_b^{DL}, 1, 1.5\right) + w_{min}. \tag{3}$$

2.4 Statistical validation

We validated our GTDP algorithm by comparing its performance to the Garrioch et al. 2012 algorithm and to the gold standard generated in the Garrioch et al. paper [44]. To replicate the Garrioch et al. study exactly, all images were cropped to a 55 µm × 55 µm region about the image center to remove any boundary effects.

To evaluate cone identification performance, we compared both fully automatic methods (GTDP and Garrioch et al. 2012) to the gold standard using two metrics: the number of true positives, cones detected by both the fully automatic and gold standard techniques, and the number of false positives, cones detected by the fully automatic method but not by the gold standard. A cone was considered a true positive if it was within a 1.75 µm Euclidean distance of a gold standard cone. This value was chosen since the mean cone spacing reported in the Garrioch et al. study was approximately 3.50 µm; half this value was therefore a reasonable estimate for the cone radius. If an automatically identified cone did not have any gold standard cones within the 1.75 µm distance, it was tagged as a false positive. Furthermore, no more than one automatically identified cone could be matched to a single gold standard cone, yielding the following relationships:

$$N_{\mathrm{automatic\;cones\;identified}} = N_{\mathrm{true\;positive}} + N_{\mathrm{false\;positive}} \quad \mathrm{and} \quad N_{\mathrm{gold\;standard\;cones\;identified}} = N_{\mathrm{true\;positive}} + N_{\mathrm{false\;negative}}, \tag{4}$$
where $N_{\mathrm{false\;negative}}$ was the number of cones detected by the gold standard but not by the fully automatic method. The proportions of true and false positives were then estimated with 95% confidence intervals (CI) across all patients and all quadrants using a generalized estimating equation (GEE) model with log link [53].
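The matching rule can be sketched as a greedy nearest-neighbor assignment; auto and gold below are assumed N × 2 and M × 2 coordinate arrays in microns (hypothetical names), and the greedy processing order is an assumption rather than a detail stated above.

matched = false(size(gold, 1), 1);           % each gold standard cone matched at most once
isTP = false(size(auto, 1), 1);
for k = 1:size(auto, 1)
    d = hypot(gold(:,1) - auto(k,1), gold(:,2) - auto(k,2));
    d(matched) = Inf;                        % exclude already-claimed gold standard cones
    [dmin, j] = min(d);
    if dmin <= 1.75                          % within one estimated cone radius (in microns)
        isTP(k) = true;
        matched(j) = true;                   % true positive
    end
end
nTP = sum(isTP);                             % Eq. (4): nFP = sum(~isTP), and
nFN = size(gold, 1) - nTP;                   % false negatives are unmatched gold cones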

The reproducibility of each method was assessed by comparing the cone density (number of cones per mm²) and cone spacing (mean distance from each cone to its nearest neighbor) measurements output by each method at each quadrant. The variability in cone density and spacing measurements (characterized by the variance $V_{total}$) stemmed from two sources: 1) variability in measurements taken on the same subject, resulting from the method used (within-subject variability; variance $V_{within}$), and 2) variability in true values between subjects, resulting from biological variation (between-subjects variability; variance $V_{between}$). Thus, $V_{total} = V_{within} + V_{between}$. The reproducibility was characterized using two components: 1) the within-subject coefficient of variation (CV), and 2) the intra-class (intra-subject) correlation coefficient (ICC). The within-subject CV was defined as the ratio of the square root of $V_{within}$ to the overall mean measurement, where a lower CV indicates a better method. The ICC was defined as the ratio of $V_{between}$ to $V_{total}$; thus, a ratio closer to 1 indicates a better method.
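As an illustration of these definitions, both components can be computed from a subjects × repeats matrix of measurements; the between-subject variance below uses a standard one-way ANOVA estimator, which may differ from the exact model fit used in the study.

% X: subjects x repeats matrix of cone density (or spacing) measurements
n = size(X, 2);                                  % repeats per subject (10 in this study)
Vwithin = mean(var(X, 0, 2));                    % within-subject variance
Vbetween = max(var(mean(X, 2)) - Vwithin/n, 0);  % between-subject variance (ANOVA estimate)
CV = sqrt(Vwithin) / mean(X(:));                 % within-subject coefficient of variation
ICC = Vbetween / (Vbetween + Vwithin);           % intra-class correlation coefficient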

2.5 Preliminary GTDP rod-cone segmentation algorithm

To illustrate the potential of this algorithm to segment images containing both rods and cones, we modified the cone segmentation algorithm described in Section 2.3 to segment a rod and cone photoreceptor image (originally 250 × 250 pixels, scaled to 578 × 578 pixels at 0.186 µm/pixel) captured using the new generation of AOSLO systems [17,54]. In this modified version of the algorithm, photoreceptors were segmented with weights determined by Eq. (5), where $i_n$ is the intensity of the image at node $n$, and $r_n$ is the distance of node $n$ from the top of the image $I_q$. These additional weights were included to target the location of minimum intensity rather than maximum gradient, and to penalize the segmentation of peripheral photoreceptors.

$$w_{ab} = \mathrm{normalize}\left(g_a^{LD} + g_b^{LD}, 1, 2\right) + \mathrm{normalize}\left(i_a + i_b, 0.1, 0.2\right) + \mathrm{normalize}\left(r_a + r_b, 0, 0.05\right) + \mathrm{normalize}\left(d_{ab}, 2, 2.1\right) + w_{min}. \tag{5}$$

Segmentations with radii less than 3.72 µm were considered to isolate rods, and the rest were re-segmented with the weighting scheme in Eq. (6) to isolate cones. The $r_n$ distance penalty was removed since cones have larger radii than rods, and the $g_n^{LD}$ weights were removed to delineate the prominent hypo-reflective region surrounding cones on AOSLO rather than the high gradient boundary.

$$w_{ab} = \mathrm{normalize}\left(i_a + i_b, 0.2, 1\right) + \mathrm{normalize}\left(d_{ab}, 0, 0.1\right) + w_{min}. \tag{6}$$
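Reusing the conventions of the Eq. (2) sketch, the two-pass rod and cone weights of Eqs. (5) and (6) might be assembled as follows, with inten and rdist standing in for the per-node intensities and distances from the top of $I_q$ (hypothetical names).

w_rod  = normalizeRange(gLD(a) + gLD(b), 1, 2) ...        % Eq. (5): gradient still dominates
       + normalizeRange(inten(a) + inten(b), 0.1, 0.2) ...% target intensity minima
       + normalizeRange(rdist(a) + rdist(b), 0, 0.05) ... % penalize peripheral paths
       + normalizeRange(d, 2, 2.1) + wmin;
w_cone = normalizeRange(inten(a) + inten(b), 0.2, 1) ...  % Eq. (6): hypo-reflective surround
       + normalizeRange(d, 0, 0.1) + wmin;                % no gradient or radial terms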

3. Results

Section 3.1 discusses the segmentation results of our method, while Sections 3.2 and 3.3 show quantitative results comparing the performance of our method against the state-of-the-art for cone identification and cone density and spacing reproducibility, respectively. Finally, Section 3.4 shows a preliminary segmentation result for an image containing both rod and cone photoreceptors.

3.1 Cone segmentation result

Figure 3(b) (top) shows a representative segmentation result generated by our GTDP algorithm for cone photoreceptors in AOSLO images, and Fig. 3(c) (top) shows the centroid of each segmented cell. While the GTDP algorithm delineated the perceived cone boundaries, we used the centroid result in Fig. 3(c) to validate our algorithm against other cone identification techniques. Figure 3 (bottom) shows the segmentation result for an image of lower quality.


Fig. 3 Qualitative GTDP segmentation result. Top row: (a) Higher quality AOSLO image of cone photoreceptors in log scale, (b) fully automatic segmentation result of (a) using GTDP for closed contour structures, and (c) centroid of each fully automatically segmented cone from (b). Bottom row: Lower quality AOSLO image (a) and its segmentation (b) and centroid (c) result.


The entire validation data set and the corresponding GTDP, Garrioch et al. 2012, and gold standard segmentation results are available at http://www.duke.edu/~sf59/Chiu_BOE_2013_dataset.htm. The fully automated algorithm was coded in MATLAB (The MathWorks, Natick, MA) and had an average computation time of 1.56 seconds per image (150 × 150 pixels, an average of 300 cones per uncropped image) using 8-thread parallel processing on a laptop computer with a 64-bit operating system, Core i7-820QM CPU at 1.73 GHz (Intel, Mountain View, CA), 7200 rpm hard drive, and 16 GB of RAM. This time included the overhead required for reading and writing operations.

3.2 Cone identification performance

The cone identification performance of each method is shown in Table 1. After taking into consideration all correlated data, our GTDP method correctly detected 99.0% of the cones, compared to the Garrioch et al. 2012 method, which detected 94.5% of the gold standard cones; this difference was significant (Z = 15.0, p < 0.0001). In addition, 1.5% of the cones found by the GTDP method were not in the gold standard. False positives could not occur for the Garrioch et al. 2012 method, since the gold standard was generated starting from the Garrioch et al. 2012 results (see Section 2.2). Lastly, the mean distance error from the true positive GTDP cones to the gold standard cones was 0.20 ± 0.26 µm.


Table 1. Cone Identification Performance of Fully Automatic Methods Compared to the Gold Standard Across All 840 Images

Figure 4 is an illustrative example of the cone identification results, where the middle row shows the mean cone identification performance for both automatic algorithms, while the top and bottom rows show performance approximately one standard deviation above and below the mean. The middle column displays the Garrioch et al. 2012 algorithm results, with true positives in yellow and false negatives in green. The right column shows the GTDP results, with true positives in magenta, false negatives in green, and false positives in blue. The performance (% true positive by GTDP; % true positive by Garrioch et al. 2012; % false positive by GTDP) for the top, middle, and bottom rows of Fig. 4 was (100; 98.4; 0), (99.1; 94.4; 2.1), and (97.5; 90.4; 3.0), respectively.


Fig. 4 Variable performance of the fully automatic cone identification algorithms. Left column: AOSLO image of the cone mosaic in log scale. Middle column: Garrioch et al. 2012 algorithm results (yellow: true positives; green: false negatives). Right column: GTDP algorithm results (magenta: true positives; green: false negatives; blue: false positives). Middle row: Typical (mean) performance by both algorithms. Top and bottom rows: Performance one standard deviation above and below the mean for both algorithms, respectively.


Finally, Fig. 5 takes a closer look at the results from Fig. 4(b) (right). The black box highlights a “false positive” cone added by the GTDP algorithm per the gold standard; however, inspection of the original image in Fig. 5(a) indicates that a cone is indeed present at that location. In contrast, the white boxes in Fig. 5 highlight “false negative” cones missed by the algorithm per the gold standard; inspection of Fig. 5(a), however, suggests that these locations do not exhibit hyper-reflectivity.


Fig. 5 A closer look at the performance of the GTDP algorithm. (a) AOSLO image corresponding to Fig. 4(b) (left), and (b) automatic GTDP segmentation result (magenta: true positives; green: false negatives; blue: false positives). White boxes: locations where the algorithm “missed” a cone, even though no cone appears to be present. Black box: location where the algorithm “erroneously added” a cone, although the original image appears to contain a cone not identified by the gold standard.


3.3 Reproducibility results

Table 2 shows the mean, ICC, and within-subject CV values for the cone density and spacing metrics as measured by the Garrioch, GTDP, and gold standard methods separated by image quadrant. The average GTDP cone density ICC of 0.989 indicates that on average, 98.9% of the total variability in the measurements was due to the variability between subjects, while only 1.1% was due to the GTDP algorithm. The average GTDP within-subject CV of 0.0146 indicates that the error in reproducing the same measurement for the same subject was within 1.46% of the mean.


Table 2. Reproducibility Comparison of Cone Density and Spacing Measurements

3.4 Preliminary rod and cone segmentation result

Figure 6(a) shows an example rod and cone photoreceptor image [17,54] accompanied by the GTDP segmentation result in Fig. 6(b) and its associated centroids in Fig. 6(c). Figure 6(d) shows a histogram of the number of photoreceptors at various sizes based on the segmentation from Fig. 6(b), and Fig. 6(e) demonstrates a simple classification of rod and cone photoreceptors using a size threshold of 27.7 µm2.
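The size-based classification in Fig. 6(e) amounts to thresholding the segmented areas; a minimal sketch, assuming L is a label image of the photoreceptors segmented in Fig. 6(b) at the stated 0.186 µm/pixel scale:

stats = regionprops(L, 'Area');      % pixel area of each segmented photoreceptor
areaUm2 = [stats.Area] * 0.186^2;    % convert pixel area to square microns
isRod = areaUm2 < 27.7;              % rods fall below the 27.7 um^2 threshold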


Fig. 6 Fully automatic identification of rods and cone photoreceptors. (a) AOSLO image of rods and cone photoreceptors in log scale (image taken from [54]). (b,c) Fully automatic segmentation (b) and identification (c) of rods and cones using GTDP for closed contour structures. (d) Histogram of the segmentations from (b). (e) Threshold of 27.7 µm2 used to classify the photoreceptors from (d) into rods (magenta) and cones (green).


4. Discussion and conclusion

We developed a fully automatic algorithm using graph theory and dynamic programming to segment cone photoreceptors in AOSLO images of the retina and validated its performance. We achieved a higher cone detection rate, more accurate cone density and spacing measurements, and comparable reproducibility relative to the Garrioch et al. 2012 algorithm. Furthermore, the segmentation-based approach enabled identification and classification of rods and cones within a single image. This is highly encouraging for large-scale ophthalmic studies requiring efficient and accurate analysis of the photoreceptor mosaic.

We obtained the data set from the Garrioch et al. study [44] to validate the performance of our algorithm on a large untrained data set. We compared the performance of our fully automatic cone segmentation algorithm to the state-of-the-art technique and found that our GTDP method decreased the Garrioch et al. 2012 cone miss rate by a factor of 5.5 (Table 1, 1.0% vs. 5.5% missed cones). In addition, 1.5% of the cones found by our technique were not identified by the gold standard. While this implies that our algorithm falsely identified these cones, Fig. 5 shows that in some cases our GTDP method identified cones genuinely missed by the gold standard; such observations, while not the norm, are likely due to the resource-intensive nature of semi-automatic cone identification.

The mean results in Table 2 indicate that the cone density and spacing metrics extracted by the GTDP method were on average more accurate than those of the Garrioch et al. 2012 algorithm, despite the bias introduced by using the Garrioch et al. results as the starting point for generating the gold standard. While an unbiased comparison could have been conducted, it would have required fully manual identification of nearly 256,000 cones. Table 2 also shows that the GTDP method generated more reproducible cone density measurements (mean 0.0146 CV) than the other automated method (mean 0.0254 CV). To be consistent with previous publications, we also compared reproducibility in cone spacing, which showed that the Garrioch et al. 2012 method produced more reproducible results (mean 0.0086 CV) than both the GTDP method (mean 0.0130 CV) and the gold standard. This is because both the GTDP method and the gold standard detected more cones; these were typically the harder-to-detect and more irregularly spaced cones, and thus resulted in more variable cone spacing. As a result, cone spacing reproducibility might not be the most reliable quantitative measure of performance. Nevertheless, all three methods had a very good within-subject CV, showing that the within-subject standard error (error due to the method) was only 0.74% to 2.73% of the mean. Furthermore, all three methods had a very good ICC, showing that 95% to 99.5% of the total variability in the measurements was due to variability between subjects, while only 0.5% to 5% was due to the method. This high ICC was a result of the pre-processing image alignment performed in the Garrioch et al. study (Section 2.1) to ensure that the same patch of retina was imaged.

A notable novelty of the GTDP algorithm compared to existing en face cone segmentation algorithms is its use of segmentation to identify cones. While the most common technique for cone identification is to locate points of maximal intensity, such a method only locates cone centers. In contrast, our technique delineates cone boundaries, providing added information about the size and shape of the segmented object. This information may be helpful for applications such as studying how the multimodal structure of larger cones changes with time or wavelength. It is important to note, however, that in the context of AO photoreceptor imaging, cone sizes may be near the resolution limit, especially towards the foveal center. Furthermore, estimation of photoreceptor size depends on the wavelength of the imaging modality (e.g., fundus camera, SLO, OCT) and even varies over time with intensity fluctuations. As a result, the extracted size and shape information about the cones, while helpful, may not be an accurate indication of their true morphologic state.

Another advantage of using segmentation is that it enables a higher cone detection rate. By keeping track of the entire area of a cone rather than only its centroid, we can look for additional cones in regions where cones have not yet been found (Fig. 2(b)). Our technique also provides an advantage for isolating rods and cones within a single image (Fig. 6(e)), as we can readily distinguish between the two types of photoreceptors based on their segmented area in normal retinae. Since accurate photoreceptor classification depends on correctly segmented photoreceptors, however, the rods improperly segmented as cones in Fig. 6(b) resulted in misclassification. A more accurate and robust rod-cone segmentation algorithm will therefore be essential to improving this preliminary classification result.

A limitation of this study is its rather optimistic validation on higher quality images of normal retina. The AO images taken from diseased retinae, however, are often low in quality and plagued with diverse pathological features. This paper is the first step in introducing a conceptually simple yet robust framework adaptable to incorporating the mathematical and algorithmic innovations necessary for segmenting the more challenging real-world, clinical AOSLO images. Future steps include validation of our rod and cone segmentation algorithm, as well as extension and application of our framework to segment more complicated images of photoreceptors in disease states.

Acknowledgments

We would like to thank Robert Garrioch, Christopher Langlo, and Robert F. Cooper for their work on the Garrioch et al. study [44], including image acquisition and pre-processing, the repeatability study design, and providing the gold standard for cone identification. We would also like to thank Kaccie Y. Li and Austin Roorda for their work on developing the cone identification algorithm [39] used in the Garrioch study. S. Farsiu is supported by the BrightFocus Foundation, NIH grant 1R01EY022691-01, and Research to Prevent Blindness (Duke’s 2011 Unrestricted Grant Award). J. Carroll is supported by the Foundation Fighting Blindness, NIH grants R01EY017607 and P30EY001931, and an unrestricted departmental grant from Research to Prevent Blindness. A. Dubra holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund and is the recipient of a Career Development Award from Research to Prevent Blindness (RPB). S. Chiu was supported by the John T. Chambers Scholarship and NIH grant 1R01EY022691-01.

References and links

1. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef]   [PubMed]  

2. A. Roorda, F. Romero-Borja, W. Donnelly III, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405–412 (2002). [CrossRef]   [PubMed]  

3. Y. Zhang, J. Rha, R. Jonnal, and D. Miller, “Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina,” Opt. Express 13(12), 4792–4811 (2005). [CrossRef]   [PubMed]  

4. R. J. Zawadzki, S. M. Jones, S. S. Olivier, M. Zhao, B. A. Bower, J. A. Izatt, S. Choi, S. Laut, and J. S. Werner, “Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging,” Opt. Express 13(21), 8532–8546 (2005). [CrossRef]   [PubMed]  

5. D. Merino, C. Dainty, A. Bradu, and A. G. Podoleanu, “Adaptive optics enhanced simultaneous en-face optical coherence tomography and scanning laser ophthalmoscopy,” Opt. Express 14(8), 3345–3353 (2006). [CrossRef]   [PubMed]  

6. Y. Zhang, B. Cense, J. Rha, R. S. Jonnal, W. Gao, R. J. Zawadzki, J. S. Werner, S. Jones, S. Olivier, and D. T. Miller, “High-speed volumetric imaging of cone photoreceptors with adaptive optics spectral-domain optical coherence tomography,” Opt. Express 14(10), 4380–4394 (2006). [CrossRef]   [PubMed]  

7. S. A. Burns, R. Tumbar, A. E. Elsner, D. Ferguson, and D. X. Hammer, “Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope,” J. Opt. Soc. Am. A 24(5), 1313–1326 (2007). [CrossRef]   [PubMed]  

8. M. Pircher, R. J. Zawadzki, J. W. Evans, J. S. Werner, and C. K. Hitzenberger, “Simultaneous imaging of human cone mosaic with adaptive optics enhanced scanning laser ophthalmoscopy and high-speed transversal scanning optical coherence tomography,” Opt. Lett. 33(1), 22–24 (2008). [CrossRef]   [PubMed]  

9. C. Torti, B. Povazay, B. Hofer, A. Unterhuber, J. Carroll, P. K. Ahnelt, and W. Drexler, “Adaptive optics optical coherence tomography at 120,000 depth scans/s for non-invasive cellular phenotyping of the living human retina,” Opt. Express 17(22), 19382–19400 (2009). [CrossRef]   [PubMed]  

10. M. Mujat, R. D. Ferguson, N. Iftimia, and D. X. Hammer, “Compact adaptive optics line scanning ophthalmoscope,” Opt. Express 17(12), 10242–10258 (2009). [CrossRef]   [PubMed]  

11. R. D. Ferguson, Z. Zhong, D. X. Hammer, M. Mujat, A. H. Patel, C. Deng, W. Zou, and S. A. Burns, “Adaptive optics scanning laser ophthalmoscope with integrated wide-field retinal imaging and tracking,” J. Opt. Soc. Am. A 27(11), A265–A277 (2010). [CrossRef]   [PubMed]  

12. M. Mujat, R. D. Ferguson, A. H. Patel, N. Iftimia, N. Lue, and D. X. Hammer, “High resolution multimodal clinical ophthalmic imaging system,” Opt. Express 18(11), 11607–11621 (2010). [CrossRef]   [PubMed]  

13. A. Dubra and Y. Sulai, “Reflective afocal broadband adaptive optics scanning ophthalmoscope,” Biomed. Opt. Express 2(6), 1757–1768 (2011). [CrossRef]   [PubMed]  

14. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics,” Biomed. Opt. Express 3(1), 104–124 (2012). [CrossRef]   [PubMed]  

15. K. E. Stepien, W. M. Martinez, A. M. Dubis, R. F. Cooper, A. Dubra, and J. Carroll, “Subclinical photoreceptor disruption in response to severe head trauma,” Arch. Ophthalmol. 130(3), 400–402 (2012). [CrossRef]   [PubMed]  

16. A. Roorda and D. R. Williams, “The arrangement of the three cone classes in the living human eye,” Nature 397(6719), 520–522 (1999). [CrossRef]   [PubMed]  

17. A. Dubra, Y. Sulai, J. L. Norris, R. F. Cooper, A. M. Dubis, D. R. Williams, and J. Carroll, “Noninvasive imaging of the human rod photoreceptor mosaic using a confocal adaptive optics scanning ophthalmoscope,” Biomed. Opt. Express 2(7), 1864–1876 (2011). [CrossRef]   [PubMed]  

18. T. Y. Chui, H. Song, and S. A. Burns, “Adaptive-optics imaging of human cone photoreceptor distribution,” J. Opt. Soc. Am. A 25(12), 3021–3029 (2008). [CrossRef]   [PubMed]  

19. T. Y. Chui, H. Song, and S. A. Burns, “Individual variations in human cone photoreceptor packing density: variations with refractive error,” Invest. Ophthalmol. Vis. Sci. 49(10), 4679–4687 (2008). [CrossRef]   [PubMed]  

20. K. Y. Li, P. Tiruveedhula, and A. Roorda, “Intersubject variability of foveal cone photoreceptor density in relation to eye length,” Invest. Ophthalmol. Vis. Sci. 51(12), 6858–6867 (2010). [CrossRef]   [PubMed]  

21. Y. Kitaguchi, K. Bessho, T. Yamaguchi, N. Nakazawa, T. Mihashi, and T. Fujikado, “In vivo measurements of cone photoreceptor spacing in myopic eyes from images obtained by an adaptive optics fundus camera,” Jpn. J. Ophthalmol. 51(6), 456–461 (2007). [CrossRef]   [PubMed]  

22. D. Merino, J. L. Duncan, P. Tiruveedhula, and A. Roorda, “Observation of cone and rod photoreceptors in normal subjects and patients using a new generation adaptive optics scanning laser ophthalmoscope,” Biomed. Opt. Express 2(8), 2189–2201 (2011). [CrossRef]   [PubMed]  

23. A. Roorda and D. R. Williams, “Optical fiber properties of individual human cones,” J. Vis. 2(5), 404–412 (2002). [CrossRef]   [PubMed]  

24. M. Pircher, J. S. Kroisamer, F. Felberer, H. Sattmann, E. Götzinger, and C. K. Hitzenberger, “Temporal changes of human cone photoreceptors observed in vivo with SLO/OCT,” Biomed. Opt. Express 2(1), 100–112 (2011). [CrossRef]   [PubMed]  

25. O. P. Kocaoglu, S. Lee, R. S. Jonnal, Q. Wang, A. E. Herde, J. C. Derby, W. Gao, and D. T. Miller, “Imaging cone photoreceptors in three dimensions and in time using ultrahigh resolution optical coherence tomography with adaptive optics,” Biomed. Opt. Express 2(4), 748–763 (2011). [CrossRef]   [PubMed]  

26. J. Carroll, M. Neitz, H. Hofer, J. Neitz, and D. R. Williams, “Functional photoreceptor loss revealed with adaptive optics: an alternate cause of color blindness,” Proc. Natl. Acad. Sci. U.S.A. 101(22), 8461–8466 (2004). [CrossRef]   [PubMed]  

27. S. S. Choi, N. Doble, J. L. Hardy, S. M. Jones, J. L. Keltner, S. S. Olivier, and J. S. Werner, “In vivo imaging of the photoreceptor mosaic in retinal dystrophies and correlations with visual function,” Invest. Ophthalmol. Vis. Sci. 47(5), 2080–2092 (2006). [CrossRef]   [PubMed]  

28. J. I. Wolfing, M. Chung, J. Carroll, A. Roorda, and D. R. Williams, “High-resolution retinal imaging of cone-rod dystrophy,” Ophthalmology 113(6), 1014–1019.e1 (2006). [CrossRef]   [PubMed]  

29. R. C. Baraas, J. Carroll, K. L. Gunther, M. Chung, D. R. Williams, D. H. Foster, and M. Neitz, “Adaptive optics retinal imaging reveals S-cone dystrophy in tritan color-vision deficiency,” J. Opt. Soc. Am. A 24(5), 1438–1447 (2007). [CrossRef]   [PubMed]  

30. J. L. Duncan, Y. Zhang, J. Gandhi, C. Nakanishi, M. Othman, K. E. Branham, A. Swaroop, and A. Roorda, “High-resolution imaging with adaptive optics in patients with inherited retinal degeneration,” Invest. Ophthalmol. Vis. Sci. 48(7), 3283–3291 (2007). [CrossRef]   [PubMed]  

31. J. Carroll, S. S. Choi, and D. R. Williams, “In vivo imaging of the photoreceptor mosaic of a rod monochromat,” Vision Res. 48(26), 2564–2568 (2008). [CrossRef]   [PubMed]  

32. S. S. Choi, R. J. Zawadzki, M. A. Greiner, J. S. Werner, and J. L. Keltner, “Fourier-domain optical coherence tomography and adaptive optics reveal nerve fiber layer loss and photoreceptor changes in a patient with optic nerve drusen,” J. Neuroophthalmol. 28(2), 120–125 (2008). [CrossRef]   [PubMed]  

33. M. K. Yoon, A. Roorda, Y. Zhang, C. Nakanishi, L. J. Wong, Q. Zhang, L. Gillum, A. Green, and J. L. Duncan, “Adaptive optics scanning laser ophthalmoscopy images in a family with the mitochondrial DNA T8993C mutation,” Invest. Ophthalmol. Vis. Sci. 50(4), 1838–1847 (2008). [CrossRef]   [PubMed]  

34. J. Carroll, R. C. Baraas, M. Wagner-Schuman, J. Rha, C. A. Siebe, C. Sloan, D. M. Tait, S. Thompson, J. I. Morgan, J. Neitz, D. R. Williams, D. H. Foster, and M. Neitz, “Cone photoreceptor mosaic disruption associated with Cys203Arg mutation in the M-cone opsin,” Proc. Natl. Acad. Sci. U.S.A. 106(49), 20948–20953 (2009). [CrossRef]   [PubMed]  

35. S. Ooto, M. Hangai, A. Sakamoto, A. Tsujikawa, K. Yamashiro, Y. Ojima, Y. Yamada, H. Mukai, S. Oshima, T. Inoue, and N. Yoshimura, “High-resolution imaging of resolved central serous chorioretinopathy using adaptive optics scanning laser ophthalmoscopy,” Ophthalmology 117(9), 1800–1809, e1–e2 (2010). [CrossRef]   [PubMed]  

36. Y. Kitaguchi, S. Kusaka, T. Yamaguchi, T. Mihashi, and T. Fujikado, “Detection of photoreceptor disruption by adaptive optics fundus imaging and Fourier-domain optical coherence tomography in eyes with occult macular dystrophy,” Clin. Ophthalmol. 5, 345–351 (2011). [CrossRef]   [PubMed]  

37. S. Ooto, M. Hangai, K. Takayama, N. Arakawa, A. Tsujikawa, H. Koizumi, S. Oshima, and N. Yoshimura, “High-resolution photoreceptor imaging in idiopathic macular telangiectasia type 2 using adaptive optics scanning laser ophthalmoscopy,” Invest. Ophthalmol. Vis. Sci. 52(8), 5541–5550 (2011). [CrossRef]   [PubMed]  

38. S. Ooto, M. Hangai, K. Takayama, A. Sakamoto, A. Tsujikawa, S. Oshima, T. Inoue, and N. Yoshimura, “High-resolution imaging of the photoreceptor layer in epiretinal membrane using adaptive optics scanning laser ophthalmoscopy,” Ophthalmology 118(5), 873–881 (2011). [CrossRef]   [PubMed]  

39. K. Y. Li and A. Roorda, “Automated identification of cone photoreceptors in adaptive optics retinal images,” J. Opt. Soc. Am. A 24(5), 1358–1363 (2007). [CrossRef]   [PubMed]  

40. B. Xue, S. S. Choi, N. Doble, and J. S. Werner, “Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera,” J. Opt. Soc. Am. A 24(5), 1364–1372 (2007). [CrossRef]   [PubMed]  

41. D. H. Wojtas, B. Wu, P. K. Ahnelt, P. J. Bones, and R. P. Millane, “Automated analysis of differential interference contrast microscopy images of the foveal cone mosaic,” J. Opt. Soc. Am. A 25(5), 1181–1189 (2008). [CrossRef]   [PubMed]  

42. K. Loquin, I. Bloch, K. Nakashima, F. Rossant, P.-Y. Boelle, and M. Paques, “Automatic photoreceptor detection in in-vivo adaptive optics retinal images: statistical validation,” in Image Analysis and Recognition, A. Campilho and M. Kamel, eds. (Springer Berlin / Heidelberg, 2012), pp. 408–415.

43. X. Liu, Y. Zhang, and D. Yun, “An automated algorithm for photoreceptors counting in adaptive optics retinal images,” Proc. SPIE 8419, 84191Z, 84191Z–5 (2012). [CrossRef]  

44. R. Garrioch, C. Langlo, A. M. Dubis, R. F. Cooper, A. Dubra, and J. Carroll, “Repeatability of in vivo parafoveal cone density and spacing measurements,” Optom. Vis. Sci. 89(5), 632–643 (2012). [CrossRef]   [PubMed]  

45. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]   [PubMed]  

46. F. LaRocca, S. J. Chiu, R. P. McNabb, A. N. Kuo, J. A. Izatt, and S. Farsiu, “Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming,” Biomed. Opt. Express 2(6), 1524–1538 (2011). [CrossRef]   [PubMed]  

47. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012). [CrossRef]   [PubMed]  

48. S. J. Chiu, C. A. Toth, C. Bowes Rickman, J. A. Izatt, and S. Farsiu, “Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming,” Biomed. Opt. Express 3(5), 1127–1140 (2012). [CrossRef]   [PubMed]  

49. A. Dubra and Z. Harvey, “Registration of 2D images from fast scanning ophthalmic instruments,” in Biomedical Image Registration, B. Fischer, B. Dawant, and C. Lorenz, eds. (Springer Berlin / Heidelberg, 2010), pp. 60–71.

50. D. S. Biggs and M. Andrews, “Acceleration of iterative image restoration algorithms,” Appl. Opt. 36(8), 1766–1775 (1997). [CrossRef]   [PubMed]  

51. R. J. Hanisch, R. L. White, and R. L. Gilliland, “Deconvolution of hubble space telescope images and spectra,” in Deconvolution of Images and Spectra, 2nd ed., P. A. Jansson, ed. (Academic Press, 1997).

52. T. J. Holmes, S. Bhattacharyya, J. A. Cooper, D. Hanzel, V. Krishna-murthi, W. C. Lin, B. Roysam, D. Szarowski, and J. Turner, “Light microscopic images reconstructed by maximum likelihood deconvolution,” in Handbook of Biological Confocal Microscopy, J. B. Pawley, ed. (Plenum Press, 1995), pp. 389–402.

53. K.-Y. Liang and S. L. Zeger, “Longitudinal data analysis using generalized linear models,” Biometrika 73(1), 13–22 (1986). [CrossRef]  

54. R. F. Cooper, A. M. Dubis, A. Pavaskar, J. Rha, A. Dubra, and J. Carroll, “Spatial and temporal variation of rod photoreceptor reflectance in the human retina,” Biomed. Opt. Express 2(9), 2577–2589 (2011). [CrossRef]   [PubMed]  
