Abstract

With the availability of different retinal imaging modalities such as fundus photography and spectral-domain optical coherence tomography (SD-OCT), a robust and accurate registration scheme that enables utilization of their complementary information is beneficial. The few existing fundus-OCT registration approaches contain a vessel segmentation step, as the retinal blood vessels are the most dominant structures that the pair of images have in common. However, errors in the vessel segmentation from either modality may cause corresponding errors in the registration. In this paper, we propose a feature-based registration method for registering fundus photographs and SD-OCT projection images that benefits from vascular structural information without requiring blood vessel segmentation. In particular, after a preprocessing step, a set of control points (CPs) is identified by looking for corners in the images. Next, each CP is represented by a feature vector that encodes the local structural information, computed as histograms of oriented gradients (HOG) from the neighborhood of each CP. The best matching CPs are identified by calculating the distance between their corresponding feature vectors. After removing the incorrect matches, the best affine transform that registers the fundus photographs to the SD-OCT projection images is computed using the random sample consensus (RANSAC) method. The proposed method was tested on 44 pairs of fundus and SD-OCT projection images of glaucoma patients, and the results showed that the proposed method successfully registered the multimodal images, producing a registration error of 25.34 ± 12.34 μm (0.84 ± 0.41 pixels).

© 2016 Optical Society of America

1. Introduction

Fundus imaging and spectral-domain optical coherence tomography (SD-OCT) are two common imaging modalities that provide different information about the human retina. Fundus imaging refers to the process of acquiring a 2D representation of the 3D retina by means of reflected light. With this definition, the broad category of fundus imaging includes modalities/techniques such as red-free fundus photography, color fundus photography, stereo fundus photography, scanning laser ophthalmoscopy (SLO), and fluorescein angiography [1]. Spectral-domain OCT, on the other hand, despite its recent appearance (the first SD-OCT device became commercially available less than 10 years ago [2]), has become the clinical standard of care for several eye diseases [1]. This is because it provides 3D information about retinal structures, such as the intraretinal layers and the optic nerve head, that is not available via fundus imaging. Both fundus and OCT imaging techniques are widely utilized in the diagnosis and management of eye diseases such as diabetic retinopathy, glaucoma, and age-related macular degeneration (AMD). From the clinical perspective, a better automated alignment of OCT-fundus images can directly provide clinicians with insight into the structures they need to follow/monitor for diagnosis or management. For example, in glaucoma it is important to monitor changes to the optic disc for both diagnosis and management of the disease [3,4]. Moreover, studies have shown that combining complementary information from both sources is beneficial for automated segmentation of retinal structures such as blood vessels [5], optic disc and cup boundaries [6,7], and intraretinal surfaces (e.g. the internal limiting membrane (ILM)) [8,9]. However, the performance of these multimodal segmentation approaches depends on the quality of the registration. For instance, in [5], a few scans were excluded from the test set due to relatively large registration errors. Therefore, combining complementary information from different imaging modalities not only could benefit physicians in diagnosing and monitoring ophthalmic diseases, but is also advantageous for the automated techniques that are utilized for processing imaging data.

As mentioned above, there are various techniques for retinal imaging, each of which produces a different type of image (i.e. with different size, resolution, and intensity profile) of the retina. A great deal of effort, through a variety of techniques, has been devoted to registering retinal images generated by different modalities [10–19]. Some of the previous works focused on stitching (mosaicing) images of the same modality with the aim of obtaining a broader field of view [12, 17, 20]. In addition, there are works that attempted to register multimodal retinal images, including fluorescein angiogram and red-free fundus pairs [14,15,21], SLO and color fundus photographs [22,23], and SD-OCT and color fundus photographs [7,22,24–26]. The focus of the current work is on multimodal registration of the fundus and SD-OCT modalities.

Generally, the pixel intensities between multimodal retinal image pairs might be different; however, compared to other types of multimodal retinal imaging, the intensity profiles of color fundus photographs and SD-OCT images are substantially different. Hence, in order to benefit from the most dominant structural information that both modalities share (i.e. the retinal blood vessels), the current color fundus and OCT registration methods [7, 24–26] include a vessel segmentation step as part of their algorithms. The retinal vasculature is the best candidate for identifying corresponding points (e.g. blood vessel bifurcations and crossing points or blood vessel ridges) between the two modalities.

However, vessel segmentation errors in either modality could introduce errors into the registration process, as the corresponding points between image pairs are identified from the blood vessel maps. For instance, the method proposed in [27] for blood vessel segmentation from the SD-OCT modality could produce false positives due to the presence of the optic nerve head region [5]. Additionally, segmenting the blood vessels from both modalities is a time-consuming task that necessitates additional considerations (e.g. parameter tuning) when the dataset contains different fundus photographs (stereo and non-stereo) with different scales and sizes, such as the dataset used in this work.

Therefore, we propose a robust feature-based registration framework that is capable of aligning the fundus (stereo and non-stereo) and SD-OCT modalities without requiring blood vessel segmentation, with the aim of speeding up the registration process. Feature-based registration methods have been used for registering a variety of images [18] and have also demonstrated successful results in registering unimodal retinal images [17,19,20]. Control point (CP) detection is a very important step in feature-based registration algorithms, as the final landmarks that are used for computing the registration transformation are selected from the CPs. In our feature-based framework, we propose to identify the CPs from the actual images (not their vessel maps) by detecting corners (i.e. points for which there are two different dominant edge directions in a local neighborhood) in the images using the features from accelerated segment test (FAST) corner detection approach [28, 29], which, as its name suggests, is very fast and computationally efficient. Additionally, in order to find the correspondence between the two sets of points, we extract the local structural information in the neighborhood of each CP by computing the histogram of oriented gradients (HOG) [30] and identify the best matching feature descriptors.

In particular, the proposed method starts with a few preprocessing steps: 1) creating a 2D projection image from the 3D SD-OCT volume, 2) enhancing the contrast of the images, and 3) rescaling the fundus photographs. Next, the control points, which are identified by FAST corner detection, are each represented by a descriptor. More specifically, as previously mentioned, in order to avoid the use of intensity information and to benefit from the structural features (i.e. retinal vasculature) without attempting to segment the blood vessels, histograms of oriented gradients are employed as the CP descriptors. The approximate nearest neighbor method [31] is utilized in a forward-backward fashion to identify the matching descriptors. Finally, after removing the incorrect matches, the registration transformation is calculated using the random sample consensus (RANSAC) method [32].

2. Methods

The overall flowchart of the proposed method is depicted in Fig. 1 and can be summarized in five major steps as follows: 1) a preprocessing step including SD-OCT projection image computation, contrast enhancement, and fundus image rescaling (Section 2.1), 2) identifying the control points (Section 2.2), 3) computing gradient-based features (Section 2.3), 4) feature matching (Section 2.4), and 5) calculating the transformation (Section 2.5).

Fig. 1 Flowchart of the proposed method.

2.1. Preprocessing

In order to register the 2D fundus photographs to the 3D SD-OCT volumes, a 2D projection image of the volume is required. The OCT projection image is obtained using the method proposed in [34], where a multi-resolution graph-theoretic approach is employed to segment the intraretinal surfaces within the 3D SD-OCT volumes [33,34]. In order to obtain the projection image, two intraretinal surfaces are segmented: the junction of the inner and outer segments of the photoreceptors (IS/OS) and the outer boundary of the retinal pigment epithelium (RPE), also called the Bruch's membrane (BM) surface. A thin-plate spline is fitted to the BM surface, with respect to which the OCT volume is flattened to obtain a consistent optic nerve head shape across patients [33]. The SD-OCT projection image is computed by averaging the voxel intensities in the z-direction between the IS/OS junction and BM surfaces (Fig. 2).
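As a minimal illustration of this step, once the two surfaces are segmented, the projection reduces to a per-A-scan average between them. The sketch below assumes the flattened volume and the two surfaces are available as NumPy arrays; the function and variable names are ours, not from the paper:

```python
import numpy as np

def oct_projection(volume, isos, bm):
    """Average voxel intensities in z between the IS/OS junction and the
    BM surface for every A-scan of a flattened SD-OCT volume.

    volume : (X, Y, Z) array of voxel intensities
    isos, bm : (X, Y) arrays of integer z-indices of the two surfaces
    """
    X, Y, _ = volume.shape
    proj = np.zeros((X, Y), dtype=np.float32)
    for x in range(X):
        for y in range(Y):
            z0, z1 = sorted((int(isos[x, y]), int(bm[x, y])))
            proj[x, y] = volume[x, y, z0:z1 + 1].mean()
    return proj
```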

Fig. 2 An example of intraretinal surface segmentation. (a) The central OCT B-scan and the segmented surfaces: blue is the IS/OS junction, yellow is the BM surface, and pink is the thin-plate spline fitted to the BM surface. (b) The 3D view of the segmented surfaces. (c) The flattened OCT B-scan. (d) The corresponding OCT projection image.

Two types of color fundus photographs exist in the dataset used in this work: 1) stereo fundus images (Fig. 3(a)), covering an almost 20-degree field of view, where the optic nerve head region of the retina is imaged from two different angles and the two views are placed side by side, and 2) ONH-centered non-stereo fundus photographs (Fig. 3(b)), which cover a broader field of the retina (35 degrees). For the stereo fundus photograph pairs, the image with the higher quality and fewer imaging artifacts is selected for the registration. In addition, extra information included on the fundus photographs (e.g. dates, text, and color bars) produces strong corners that could mislead the registration process, so this information was automatically removed from the images. Specifically, a binary mask indicating the region of interest of each image was produced by thresholding the images followed by a morphological opening operator (Fig. 3).

Fig. 3 Example preprocessing steps on two types of fundus photographs in the dataset. The interfering details included on the images are shown with green arrows. Dates are covered for privacy. (a) Stereo fundus photographs containing large imaging artifact. The left-side photo was selected for further processing in (c). (b) A low-contrast regular fundus photograph. (c) The binary masks that remove the interfering details, the selected fundus image, the green channel, and the enhanced-contrast images corresponding to the examples shown in (a) and (b).

Furthermore, the blood vessels have the highest contrast in the green channel of the fundus photographs; hence, only the green channel was used in our method. The control points are detected by looking for corners, which are sensitive to the pixel intensities; therefore, in order to increase the chance of finding the best matching points, the number of CPs needs to be maximized. Consequently, the contrast of both the fundus photographs (green channel) and the OCT projection images was enhanced and normalized using the contrast limited adaptive histogram equalization (CLAHE) method [35].
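A minimal sketch of this preprocessing step using scikit-image is given below; the clip limit is an illustrative choice, not a value reported in the paper:

```python
from skimage import exposure, img_as_float

def enhance_pair(fundus_rgb, oct_proj):
    """Take the green channel of the fundus photograph (where vessels have
    the highest contrast) and apply CLAHE to both modalities."""
    green = img_as_float(fundus_rgb[..., 1])
    fundus_eq = exposure.equalize_adapthist(green, clip_limit=0.02)
    oct_eq = exposure.equalize_adapthist(img_as_float(oct_proj), clip_limit=0.02)
    return fundus_eq, oct_eq
```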

Since the images are from different modalities, they differ in size and resolution. Moreover, the sizes and resolutions of the two types of fundus photographs in the dataset are completely different from each other. In order to bring all the images to a similar scale and resolution, the fundus photographs are scaled such that their optic disc has a size similar to that of the optic disc in the corresponding OCT projection image. Since the optic disc has a roughly circular shape, its location and size in both modalities are approximated using a circular Hough transform. First, a grayscale morphological closing operator with a ball-shaped structuring element is applied to both enhanced images in order to remove the blood vessels (i.e., attenuate the dark features in the images). Subsequently, the gradient of the closed image is computed and the circular Hough transform is applied to the gradient magnitude image. The center and radius of the most dominant circle in the fundus (cf, rf) and OCT (co, ro) images estimate the optic disc location and size in the two modalities, respectively (Fig. 4). Since the resolution of the OCT projection images is consistent across the entire dataset, both stereo and non-stereo fundus photographs are scaled (according to their corresponding OCT projection images) such that rf = ro.
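The disc localization and rescaling can be sketched as follows; the structuring-element size, edge threshold, and radius range are illustrative assumptions rather than values from the paper:

```python
import numpy as np
from skimage.filters import sobel
from skimage.morphology import closing, disk
from skimage.transform import hough_circle, hough_circle_peaks, rescale

def locate_disc(image, radii=np.arange(20, 60)):
    """Estimate the optic disc center (cx, cy) and radius r with a
    circular Hough transform on the gradient of the closed image."""
    closed = closing(image, disk(7))      # grayscale closing attenuates dark vessels
    edges = sobel(closed) > 0.02          # gradient magnitude -> edge map
    accum = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]

# Scale the fundus photograph so that r_f = r_o:
# _, _, r_f = locate_disc(fundus_eq)
# _, _, r_o = locate_disc(oct_eq)
# fundus_scaled = rescale(fundus_eq, r_o / r_f)
```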

Fig. 4 An example of optic disc localization using circular Hough transform. (a) From left to right are the enhanced OCT projection image, the blue circle representing the optic disc overlaid on top of the closed image, and the Hough map from which the dominant circle is identified, respectively. (b) The same sequence of images as in (a) showing identifying the optic disc from the fundus photograph.

2.2. Control point detection

A control point (also known as an interest point) is a pixel that has a well-defined position and can be robustly detected. Two desirable properties of interest points are high local information content and repeatability between different images. Identifying a sufficient number of CPs is key in feature-based registration methods, as a lack of CPs increases the risk of unsuccessful registration or decreases the robustness of the method. Bifurcations are reasonable candidates to be utilized as CPs because the blood vessel structure remains unchanged between modalities. However, obtaining bifurcations requires segmenting the blood vessels from both modalities, which can be challenging in poor-quality images. Hence, instead of looking for bifurcations, we propose to utilize corners in the images as the CPs. Features from accelerated segment test (FAST) [28, 29] was employed to detect corners, as this method has high accuracy and robustness and finds corners very quickly. Consequently, there is no need for vessel segmentation, and by detecting corners, most of the bifurcations are also detected, as they resemble corners in the images.

The FAST corner detection algorithm determines whether a pixel is a corner using its neighboring pixel intensities. More specifically, consider an image I and a query pixel p, which is to be identified as a corner or not, with intensity Ip, and consider a Bresenham circle of radius 3 containing the 16 pixels surrounding the pixel p [29]. The pixel p is identified as a corner if the intensities of N contiguous pixels out of the 16 are all either above (I{N} > Ip + T) or below (I{N} < Ip − T) the intensity of the query pixel, where T is a predefined threshold value, I{N} denotes the intensities of the N contiguous pixels, and N ∈ {9, 10, 11, 12}. The algorithm quickly rejects pixels that are not corners by comparing the intensities of pixels 1, 5, 9, and 13 of the circle with Ip (Fig. 5). If the intensities of at least three of these four locations are not above Ip + T or below Ip − T, then p is not a corner; otherwise, the algorithm checks all 16 points. This procedure is repeated for all pixels in the image. In order to avoid the distraction caused by the magnified background noise (produced in the contrast enhancement step) and the detection of too many corners in the background (especially in the OCT projection image), a smoothing Gaussian filter is applied to the images before corner detection (Fig. 6).
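A minimal sketch of this step, using the FAST implementation in scikit-image; the smoothing sigma and FAST threshold are illustrative choices:

```python
from scipy.ndimage import gaussian_filter
from skimage.feature import corner_fast, corner_peaks

def fast_control_points(image, n=9, threshold=0.1, sigma=1.5):
    """Detect FAST corners on a Gaussian-smoothed image; the smoothing
    suppresses the background noise amplified by CLAHE."""
    smoothed = gaussian_filter(image, sigma)
    response = corner_fast(smoothed, n=n, threshold=threshold)
    return corner_peaks(response, min_distance=3)   # (row, col) per control point
```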

Fig. 5 Illustration of Bresenham circle containing 16 pixels (the red boxes) around the query point p. An example of N contiguous pixels (for N = 9) is shown with the cyan dashed line [28].

Fig. 6 An example of control point (corner) detection from (a) OCT projection and (b) fundus images using FAST corner detection method.

2.3. Gradient-based feature computation

The method proposed in this work for extracting features is similar to SIFT descriptors and is inspired by the descriptor proposed in [30] for human detection. The basic idea behind the feature computation is to characterize the local appearance of each CP's neighborhood by the distribution of local intensity gradients or edge directions. More specifically, the neighborhood of size MN × MN around each control point is defined using a small spatial block that is divided into N × N smaller cells of size M × M. The gradient direction and magnitude of all pixels inside the block are computed, and for each cell in the block, a histogram of gradient directions (i.e. edge orientations) is computed such that the gradient directions are weighted by their corresponding gradient magnitudes. The gradient directions are limited to [0°, 180°) and binned into 8 bins at [0°, 22.5°, . . . , 157.5°] (Fig. 7). Constraining the directions to 180° instead of 360° makes the histogram less distinctive but, at the same time, more robust to intensity changes, which are quite likely between multimodal images. The histograms from all cells in the block are concatenated to form a 1-D vector of size 8 × N × N. In order to further improve the invariance to affine changes in illumination and contrast, all histograms in a block are normalized such that the concatenated feature vector has unit length. The resulting normalized concatenated vector, which includes the components of all normalized cell histograms in a block, is called the histograms of oriented gradients (HOG) descriptor and represents the local shape characteristics (e.g. gradient structure) of each CP's neighborhood.
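For M = N = 4 this yields the 128-component descriptor that appears in Eq. (1) of Sec. 2.4. A minimal per-CP sketch using the HOG implementation in scikit-image (which also uses unsigned gradients over [0°, 180°) by default); border handling is omitted, and the point is assumed to lie at least MN/2 pixels from the image boundary:

```python
from skimage.feature import hog

def hog_descriptor(image, point, M=4, N=4):
    """8 x N x N HOG descriptor of the (M*N) x (M*N) neighborhood of one
    control point: 8 unsigned orientation bins, N x N cells of M x M pixels,
    jointly normalized to unit length (one block covering the whole patch)."""
    r, c = point
    half = (M * N) // 2
    patch = image[r - half:r + half, c - half:c + half]
    return hog(patch, orientations=8, pixels_per_cell=(M, M),
               cells_per_block=(N, N), block_norm='L2')
```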

Fig. 7 An example of HOG descriptor computation from (a) OCT projection and (b) fundus images for a block size of 4 × 4 and a cell size of 4 × 4. The four strongest control points and their corresponding HOG blocks are shown on the left and for better visualization a zoomed-in illustration of one of the blocks with its corresponding CP (in blue) is shown on the right.

2.4. Feature matching

In order to find the best matching CPs between a pair of multimodal images, the method in [31] for identifying approximate nearest neighbors in high dimensions was employed. In addition to using a match threshold, the method eliminates ambiguous matches. A match is considered ambiguous when it is not markedly better than the second-best match. Assume Hf = {hf,1, hf,2, . . . , hf,N} and Ho = {ho,1, ho,2, . . . , ho,M} represent the sets of HOG feature vectors from the fundus and OCT images, respectively. The best matching feature from Ho corresponding to hf,1 is identified as follows:

  • First, the sum of squared differences between hf,1 and all vectors in Ho is computed as in Eq. (1). The feature vectors having a distance larger than a match threshold (here 0.2) are eliminated from further investigation.
    D_{FO}(1,i) = \sum_{j=1}^{128} \left[ h_{f,1}(j) - h_{o,i}(j) \right]^{2}, \qquad i = 1, 2, \ldots, M. \quad (1)
  • The ambiguous match ratio is calculated by dividing the distance of the first nearest neighbor feature vector by the distance of the second nearest neighbor feature vector.
  • If the match ratio between the two distances is greater than a predefined ratio threshold, the match is considered ambiguous and eliminated.

The method iterates over Hf until all feature vectors are examined. Even though this approximate nearest neighbor method produces more reliable matches, if the images contain repeating patterns (which is not the case for retinal images), the corresponding matches are likely to be eliminated as ambiguous. In order to be more conservative, the ratio threshold was set to 0.8 in this study.

Identifying the match pairs using the method described above has the potential to assign a feature vector from Ho to multiple feature vectors from Hf, because the iterations are performed independently of each other. In order to solve this issue and identify unique matches, a forward-backward search is performed. In the backward mode, the same procedure is applied to Ho and the best matching feature vectors from Hf are identified. The set of final matching pairs, 𝒮, includes only the unique matches that are common between the forward and backward modes (Fig. 8).
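A minimal sketch of the forward-backward matching with the distance and ratio tests (the 0.2 and 0.8 thresholds are from the text; the brute-force distance computation stands in for the approximate nearest-neighbor search of [31]):

```python
import numpy as np

def match_fb(H_f, H_o, max_dist=0.2, ratio=0.8):
    """Mutually consistent matches between two sets of HOG descriptors.

    H_f : (N, 128) fundus descriptors; H_o : (M, 128) OCT descriptors.
    Returns a sorted list of (i, j) index pairs."""
    def forward(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # Eq. (1) distances
        order = np.argsort(d, axis=1)
        first, second = order[:, 0], order[:, 1]
        rows = np.arange(len(A))
        keep = (d[rows, first] < max_dist) & \
               (d[rows, first] < ratio * d[rows, second])   # ratio test
        return {(int(i), int(first[i])) for i in rows[keep]}

    fwd = forward(H_f, H_o)
    bwd = {(i, j) for (j, i) in forward(H_o, H_f)}
    return sorted(fwd & bwd)   # keep only matches found in both directions
```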

Fig. 8 Illustration of feature vector matching using approximate nearest neighbor method in forward (blue) and backward (red) modes. The final matching feature vectors set (green) only includes the common pairs between forward and backward modes.

2.5. Transformation computation

The set of matching CP pairs detected in Sec. 2.4 is utilized to compute the transformation matrix. However, the algorithm does not guarantee 100% accuracy in matching CPs, so incorrect matches are possible. Therefore, the incorrect matching pairs are identified using the geometrical distribution of all matching pairs. Given 𝒮L = {(pf(x1, y1), po(x1, y1)), . . . , (pf(xL, yL), po(xL, yL))}, the Euclidean distances between all CP pairs in the image domain, distL, as well as the mean, distm, and standard deviation, distsd, of the distance vector are computed. Consider a pair, (pf(xi, yi), po(xi, yi)), and its corresponding distance, dist(i). If the points are too close to each other (dist(i) < distm − distsd) or too far from each other (distm + distsd < dist(i)), the pair is marked as an incorrect matching pair (Fig. 9). Here, in order to be conservative and keep the high-quality matches, the points with a distance of more than one standard deviation from the average distance, distm, were marked as outliers. Identifying the incorrect matching pairs using this procedure is achievable under the assumptions that the number of correct pairs exceeds the number of incorrect pairs, the images have similar scales, there is no reflection involved, and the rotation needed to align the multimodal retinal images is minimal.
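This rule amounts to a one-standard-deviation gate on the pair distances, e.g.:

```python
import numpy as np

def remove_incorrect_pairs(pts_f, pts_o):
    """Keep only pairs whose image-domain distance is within one standard
    deviation of the mean distance over all matched pairs.

    pts_f, pts_o : (L, 2) arrays of matched CP coordinates."""
    dist = np.linalg.norm(pts_f - pts_o, axis=1)
    keep = np.abs(dist - dist.mean()) <= dist.std()
    return pts_f[keep], pts_o[keep]
```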

Fig. 9 An example of incorrect pair removal. (a) Shows the yellow lines connecting the corresponding matching pairs between images identified using the approximate nearest neighbor described in Sec. 2.4. The incorrect pairs are eliminated in (b).

In addition to removing the incorrect matches, a refinement step is applied to the CP pairs that allows for small adjustments of the CP locations within a small neighborhood of each CP (a 5 × 5 window). The refinement step exists for two reasons: 1) to account for possible errors in corner (CP) detection due to the presence of noise, imaging artifacts, and low contrast, and 2) because the images come from two different modalities with significantly different intensity profiles, pixels in the neighborhood of a CP may actually be better matching candidates than the CP itself. Thus, the HOG feature vector is computed for all 25 pixels inside each CP's neighborhood (in both modalities), the two feature vectors with the minimum distance in the feature space are identified, and their corresponding pixels are taken as the new CPs. Note that both, one, or neither of the CPs in a pair may be updated through the refinement step.
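A brute-force sketch of this refinement, reusing the illustrative hog_descriptor helper from Sec. 2.3 (25 × 25 descriptor comparisons per pair):

```python
import numpy as np

def refine_pair(img_f, img_o, p_f, p_o, radius=2):
    """Search the 5x5 neighborhoods of a matched CP pair for the pixel pair
    whose HOG descriptors are closest in feature space."""
    offsets = [(dr, dc) for dr in range(-radius, radius + 1)
                        for dc in range(-radius, radius + 1)]
    best, best_pair = np.inf, (p_f, p_o)
    for dr_f, dc_f in offsets:
        q_f = (p_f[0] + dr_f, p_f[1] + dc_f)
        h_f = hog_descriptor(img_f, q_f)
        for dr_o, dc_o in offsets:
            q_o = (p_o[0] + dr_o, p_o[1] + dc_o)
            d = ((h_f - hog_descriptor(img_o, q_o)) ** 2).sum()
            if d < best:
                best, best_pair = d, (q_f, q_o)
    return best_pair   # both, one, or neither point may have moved
```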

In order to estimate the affine transformation, the random sample consensus (RANSAC) method is utilized [32]. Despite the removal of incorrect matches from the matching set, the chance that incorrect matching CPs remain is not zero. Therefore, the objective is to robustly calculate the transformation from 𝒮, which may still contain outliers (i.e., low-quality or incorrect matches).

The algorithm performs as follows (a minimal code sketch follows the list):

  1. Randomly select a subset of three pairs s from 𝒮 and instantiate the affine transformation from this subset. Here, the sampling is with replacement.
  2. Apply the transformation to the remaining pairs in the set and determine the set of pairs 𝒮i whose transformed fundus control point, cp^f, lies within a predefined distance threshold of its corresponding OCT control point, cpo. The set 𝒮i is the consensus set of the sample and defines the inlier pairs of 𝒮.
  3. Repeat the previous two steps a large number of times and select the largest consensus set 𝒮i. The affine transformation is re-estimated utilizing all the CP pairs in the subset 𝒮i [32].
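These steps map directly onto the RANSAC routine available in scikit-image, which also re-estimates the transform on the largest consensus set. A minimal sketch, where the residual threshold and trial count are illustrative values:

```python
from skimage.measure import ransac
from skimage.transform import AffineTransform

def robust_affine(pts_f, pts_o, threshold=2.0):
    """Estimate the fundus-to-OCT affine transform from matched CP pairs.

    pts_f, pts_o : (L, 2) arrays of (x, y) coordinates; note that (row, col)
    corner coordinates must be flipped to (x, y) first."""
    model, inliers = ransac((pts_f, pts_o), AffineTransform, min_samples=3,
                            residual_threshold=threshold, max_trials=2000)
    return model, inliers   # model(pts_f) maps fundus CPs into the OCT image
```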

3. Experimental methods

3.1. Data

The performance of the proposed method was evaluated on a multimodal dataset consisting of color fundus photographs and SD-OCT volumes of 44 open-angle glaucoma or glaucoma-suspect patients. The optic nerve head (ONH)-centered SD-OCT volumes were acquired using a Cirrus HD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA) in one eye per patient at the University of Iowa. Each scan has a size of 200×200×1024 voxels (in the x-y-z directions, respectively), corresponding to a volume of 6×6×2 mm3 in the physical domain (20°), with an 8-bit grayscale voxel depth. Additionally, the optic disc region of each patient's retina was also imaged using a fundus camera. Almost half of the patients (twenty-four) were imaged using a Nidek 3-Dx stereo retinal camera (3072×2048 pixels, corresponding to nearly 20°). The remaining twenty patients had a regular color fundus photograph acquired using a Topcon 50-DX camera (2392×2048 pixels, corresponding to 35°). The pixel depth was three 8-bit channels (red, green, and blue). Some of the image pairs were taken on the same day, while others were taken months or even more than a year apart.

3.2. Experiments

Since we are registering multimodal images with completely different intensity profiles, intensity-based metrics are avoided for the quantitative evaluation and point-based metrics are used instead. The reference standard needed for the point-based evaluation was obtained by manually identifying a set of landmark pairs in the original images. In order to ensure the collection of appropriate landmarks capable of supporting a fair evaluation, we marked five pairs of points that were not too close to each other and were distributed as evenly as possible. The manual landmarks were mostly selected from vasculature regions that create unique and recognizable points in both images, such as corners and bifurcations. The manual registration was performed by computing the affine transformation using three randomly selected pairs from the set of landmarks identified for evaluation purposes.

In order to present comparative results, in addition to the manual registration, the performance of the proposed method was also compared to our previous iterative closest point (ICP) registration approach reported in [7,36]. The ICP-based method does not use the intensity information; however, as part of the algorithm, the blood vessels need to be extracted, and the registration transformation is actually computed using the vessel maps. The registration accuracy was evaluated using the root mean square (RMS) error, which measures the misalignment between the manual landmarks of the OCT images and the corresponding transferred landmarks of the fundus photographs:

RMS = \sqrt{ \frac{1}{5} \sum_{i=1}^{5} \left\| p_{o,i} - \hat{p}_{f,i} \right\|^{2} },
where p_{o,i} and \hat{p}_{f,i} are the i-th manual point in the OCT image and its corresponding transferred manual point from the fundus photograph, respectively. The mean, standard deviation, and maximum of the RMS errors of the manual and automated approaches were compared. Furthermore, the running times and success rates of the registration methods were compared. The registration was considered successful if the RMS error was less than or equal to the maximum error obtained using the manual registration approach (and so, by definition, the manual registration approach has a 100% success rate). The running time of the manual registration includes the time required for manual landmark identification and transformation computation. All experiments were performed on a PC with Windows 7 64-bit OS, 64 GB RAM, and an Intel(R) Xeon(R) 3.70 GHz CPU.
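Given an estimated transform, the metric itself is straightforward to compute; a sketch assuming the skimage AffineTransform model from the RANSAC sketch in Sec. 2.5:

```python
import numpy as np

def rms_error(landmarks_oct, landmarks_fundus, model):
    """RMS misalignment between the five manual OCT landmarks and the
    transformed manual fundus landmarks (5 x 2 arrays of coordinates)."""
    transferred = model(landmarks_fundus)    # p_hat_{f,i} = T(p_{f,i})
    sq = ((landmarks_oct - transferred) ** 2).sum(axis=1)
    return np.sqrt(sq.mean())
```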

4. Results

Figure 10 shows the comparative results of registering two pairs of fundus (stereo and non-stereo) and OCT images using the ICP, manual, and proposed methods. The checkerboard images are provided for qualitative comparison of the registration results. Quantitatively, the mean, standard deviation, and maximum RMS errors calculated over the entire dataset are reported in Table 1. Based on the RMS values, the manual registration and the proposed method had significantly smaller errors than the ICP registration method (p < 0.05). However, the RMS errors of the manual registration were not significantly different from those of the proposed method (p > 0.05).

Fig. 10 Examples of successful registration results using ICP [7], the manual, and the proposed methods. The green frame in (A) indicates the left image was selected for the registration. The orange box in (B) indicates which part of the images are shown in the checkerboards which are provided for qualitative comparison of the registration results. The corresponding RMS errors of the methods are also shown in the green boxes.

Table 1. Quantitative evaluation of the registration using RMS error. All cases are included.

The success rates and running times of the registration methods are reported in Table 2. By the definition of a successful registration, the manual registration has a 100% success rate. The proposed method achieved a 98% success rate, failing to register one pair, while the ICP registration method failed to successfully register eight cases. The running time of the proposed method was significantly lower than those of the manual and ICP registration methods (p < 0.05). Similarly, the running time of the ICP registration method was significantly smaller than that of the manual registration (p < 0.05). Additionally, Fig. 11 shows ICP registration failures (i.e., cases where the RMS error was greater than 0.127 mm = 4.23 pixels) caused by low imaging quality and the presence of motion artifacts in the OCT projection image. The manual and proposed methods did not fail on these cases; however, they produced slightly larger registration errors.

Table 2. The success rates and running times (s) of the registration methods.

Fig. 11 Examples of failed registration (RMS error > 4.23 pixels) using the ICP method, where the manual and proposed methods did not fail. Low imaging quality in (A) and the motion artifacts (located inside the red ovals) in (B) and (C) also caused larger registration errors for the proposed method. The corresponding RMS errors of the methods are also shown in the green boxes.

5. Discussion and conclusion

In this paper, we proposed a feature-based registration method for aligning optic nerve head-centered SD-OCT volumes and fundus photographs. Since the intensity profiles of the images are substantially different, the registration needs to rely only on the structural features that the image pairs have in common. Whereas existing fundus and SD-OCT registration approaches include a vessel segmentation step as part of their algorithms, in which errors in the vessel segmentation can propagate into the registration process, in this work we employed histograms of oriented gradients features to capture the structural information in the images so as not to require segmentation of the blood vessels. Eliminating the vessel segmentation step is beneficial as it prevents possible segmentation errors (e.g. false positives in the vessel maps near the optic disc [5]) from propagating into the registration process. Additionally, removing the vessel segmentation step reduces the time required for registering the fundus and OCT image pairs.

In addition to the significant intensity differences between image pairs, which distinguish fundus/SD-OCT registration from other types of retinal image registration, several factors make the registration more challenging: very low-contrast fundus photographs, the presence of extra text information on stereo fundus photographs when the second image of the pair is not available (Fig. 10(A)), and the presence of imaging artifacts in SD-OCT projection images. Since acquiring an SD-OCT volume takes a few seconds, the OCT projection images can suffer from motion artifacts (Figs. 11(B) and 11(C)). Volume truncation is another type of SD-OCT imaging artifact, which appears as a black region in the projection image (Fig. 11(B)) and makes the registration difficult. However, since the transformation matrix is computed using the RANSAC algorithm with a sufficient number of matching CPs between the two modalities, our proposed method was able to successfully manage these imaging artifacts.

Additionally, our proposed method needs, on average, less than 3 seconds to perform the registration, which is considerably fast. The most time-consuming part of typical feature-based registration algorithms is identifying the control points, for which all pixels in both images need to be examined. However, utilizing FAST corner detection for identifying the control points in our proposed method has the advantage of quickly rejecting the pixels that are not corners using a computationally efficient test on the neighboring pixels of the query pixel.

Furthermore, the proposed method is capable of registering macula-centered OCT volumes and fundus photographs, which do not contain the optic nerve head region. Since the optic disc appears differently in OCT projection images and fundus photographs, the absence of the optic disc actually makes registering macula-centered retinal images less challenging. Moreover, the applications of the proposed method could potentially be extended to retinal mosaicing and to registering other multimodal retinal images such as fluorescein angiography, SLO, and red-free fundus photographs. For instance, we have successfully registered en-face optic-nerve-head-centered and macula-centered OCT images (where the overlapping region between the two images is only around 20%) using our proposed registration method (Fig. 12). Our proposed method could also be extended to the registration of other image pairs, such as corneal nerve images.

Fig. 12 Examples of ONH-centered and macula-centered OCT stitching with the aim of obtaining a larger field of view using the proposed feature-based registration method. Note that the common area between each pair is around 20% of the size of each image.

Even though the histograms of oriented gradients features are not rotationally invariant, they were suitable for registering the multimodal retinal images in our dataset because both modalities were acquired in the optic nerve head-centered mode and, therefore, significant rotations were not required for aligning the image pairs. However, employing the proposed method in other applications, where rotation is necessary to register two images, would simply require replacing HOG with relative HOG (RHOG) features, which are rotationally invariant because they are computed with respect to the main orientation of each control point. The main direction of each CP is obtained by computing the resultant of the gradient directions of all pixels inside the neighborhood of the CP, weighted using a 2D Gaussian kernel.
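As an illustration of that reference direction, a Gaussian-weighted resultant of the gradient vectors in a CP's neighborhood can be computed as follows; the window size and sigma are illustrative assumptions:

```python
import numpy as np

def dominant_orientation(image, point, half=8, sigma=4.0):
    """Gaussian-weighted resultant direction of the gradients around a CP,
    usable as the reference orientation for rotation-invariant (RHOG) features."""
    r, c = point
    patch = image[r - half:r + half, c - half:c + half].astype(float)
    gy, gx = np.gradient(patch)
    coords = np.arange(-half, half)
    g = np.exp(-coords ** 2 / (2 * sigma ** 2))
    w = np.outer(g, g)                      # 2D Gaussian weighting kernel
    return np.arctan2((w * gy).sum(), (w * gx).sum())
```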

In summary, our proposed feature-based registration method was capable of registering stereo and non-stereo color fundus photographs to their corresponding SD-OCT projection images. In particular, after creating the 2D projection image from the SD-OCT volume, the contrast of both modalities was enhanced, and the fundus photographs were scaled such that the sizes of the optic discs, approximated using a circular Hough transform, became similar in both images. Next, FAST corner detection was utilized to identify the control points in both images. The histograms of oriented gradients were capable of capturing the structural profile of each CP's neighborhood without segmenting the blood vessels. In order to identify the best matching CPs, an approximate nearest neighbor method was utilized in a forward-backward mode, which determines the best matching CPs by calculating the distances between descriptors in the feature space. After removing the incorrect matches and refining the CP locations, the best affine transform registering the image pairs was calculated using the RANSAC algorithm. Our feature-based registration method is very fast and outperformed our previous ICP registration method [7].

Funding

This work was supported, in part, by the Department of Veterans Affairs Rehabilitation Research and Development Division (IK2RX000728 and I01 RX001786); the National Institutes of Health (R01 EY018853 and R01 EY023279); and the Marlene S. and Leonard A. Hadley Glaucoma Research Fund.

References and links

1. M. D. Abràmoff, M. K. Garvin, and M. Sonka, “Retinal imaging and image analysis,” IEEE Rev. Biomed. Eng. 3, 169–208 (2010).

2. J. S. Schuman, “Spectral domain optical coherence tomography for glaucoma (an AOS thesis),” Trans. Am. Ophthalmol. Soc. 106, 426–458 (2008).

3. B. C. Chauhan and C. F. Burgoyne, “From clinical examination of the optic disc to clinical assessment of the optic nerve head: a paradigm change,” Am. J. Ophthalmol. 31(10), 1900–1911 (2012).

4. W. L. M. Alward, S. Q. Longmuir, M. S. Miri, M. K. Garvin, and Y. H. Kwon, “Movement of retinal vessels to optic nerve head with intraocular pressure elevation in a child,” Ophthalmol. 122(7), 1532–1534 (2015).

5. Z. Hu, M. Niemeijer, M. D. Abràmoff, M. K. Garvin, and M. Sonka, “Multimodal retinal vessel segmentation from spectral-domain optical coherence tomography and fundus photography,” IEEE Trans. Med. Imag. 156(2), 218–227 (2014).

6. M. S. Miri, K. Lee, M. Niemeijer, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images,” Proc. SPIE 8669, 86690O (2013).

7. M. S. Miri, M. D. Abràmoff, K. Lee, M. Niemeijer, J.-K. Wang, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from SD-OCT and color fundus photographs using a machine-learning graph-based approach,” IEEE Trans. Med. Imag. 34(9), 1854–1866 (2015).

8. M. S. Miri, V. A. Robles, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Multimodal graph-theoretic approach for segmentation of the internal limiting membrane at the optic nerve head,” in Proceedings of the Ophthalmic Medical Image Analysis Second International Workshop, OMIA 2015, Held in Conjunction with MICCAI 2015 (2015), pp. 57–64.

9. M. S. Miri, V. A. Robles, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Incorporation of gradient vector flow field in a multimodal graph-theoretic approach for segmenting the internal limiting membrane from glaucomatous optic nerve head-centered SD-OCT volumes,” Comp. Med. Imag. Graph., (to be published).

10. E. Peli, R. A. Augliere, and G. T. Timberlake, “Feature-based registration of retinal images,” IEEE Trans. Med. Imag. MI-6(2), 272–278 (1987).

11. H. M. Taha, N. El-Bendary, A. E. Hassanien, Y. Badr, and V. Snase, “Retinal feature-based registration schema,” in Proceedings of Informatics Engineering and Information Science (2011), pp. 26–36.

12. M. Niemeijer, M. K. Garvin, K. Lee, B. van Ginneken, M. D. Abràmoff, and M. Sonka, “Registration of 3D spectral OCT volumes using 3D SIFT feature point matching,” Proc. SPIE 7259, 72591I (2009).

13. M. Niemeijer, K. Lee, M. K. Garvin, M. D. Abràmoff, and M. Sonka, “Registration of 3-D spectral OCT volumes combining ICP with a graph-based approach,” Proc. SPIE 8314, 83141A (2012).

14. J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, and A. F. Laine, “A partial intensity invariant feature descriptor for multimodal retinal image registration,” IEEE Trans. Biomed. Eng. 57(7), 1707–1718 (2010).

15. C.-L. Tsai, C.-Y. Li, G. Yang, and K.-S. Lin, “The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence,” IEEE Trans. Med. Imag. 29(3), 636–649 (2010).

16. Y. Lin and G. Medioni, “Retinal image registration from 2D to 3D,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

17. P. C. Cattin, H. Bay, L. Van Gool, and G. Székely, “Retina mosaicing using local features,” in Proceedings of Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006 (2006), LNCS 4191, pp. 185–192.

18. G. Yang, C. V. Stewart, M. Sofka, and C.-L. Tsai, “Registration of challenging image pairs: initialization, estimation, and decision,” IEEE Trans. Patt. Anal. Mach. Intell. 29(11), 1973–1989 (2007).

19. J. Zheng, J. Tian, K. Deng, X. Dai, X. Zhang, and M. Xu, “Salient feature region: a new method for retinal image registration,” IEEE Trans. Inf. Tech. Biomed. 15(2), 221–232 (2011).

20. H. Bogunović, M. Sonka, Y. H. Kwon, P. Kemp, M. D. Abràmoff, and X. Wu, “Multi-surface and multi-field co-segmentation of 3-D retinal optical coherence tomography,” IEEE Trans. Med. Imag. 33(12), 2242–2253 (2014).

21. G. K. Matsopoulos, P. A. Asvestas, N. A. Mouravliansky, and K. K. Delibasis, “Multimodal registration of retinal images using self organizing maps,” IEEE Trans. Med. Imag. 23(12), 1557–1563 (2004).

22. R. Kolar and P. Tasevsky, “Registration of 3D retinal optical coherence tomography data and 2D fundus images,” in Proceedings of Biomedical Image Registration (2010), pp. 72–82.

23. Z. Ghassabi, J. Shanbehzadeh, A. Sedaghat, and E. Fatemizadeh, “An efficient approach for robust multimodal retinal image registration based on UR-SIFT features and PIIFD descriptors,” EURASIP J. Imag. Vid. Proc. 25, 1–16 (2013).

24. M. Golabbakhsh and H. Rabbani, “Vessel-based registration of fundus and optical coherence tomography projection images of retina using a quadratic registration model,” IET Imag. Proc. 7(8), 768–776 (2013).

25. Y. Li, G. Gregori, R. W. Knighton, B. J. Lujan, and P. J. Rosenfeld, “Registration of OCT fundus images with color fundus photographs based on blood vessel ridges,” Opt. Exp. 19(1), 7–16 (2011).

26. S. Niu, Q. Chen, H. Shen, L. de Sisternes, and D. L. Rubin, “Registration of SD-OCT en-face images with color fundus photographs based on local patch matching,” in Proceedings of the Ophthalmic Medical Image Analysis First International Workshop, OMIA 2014, Held in Conjunction with MICCAI 2014 (2014), pp. 25–32.

27. M. Niemeijer, M. K. Garvin, B. van Ginneken, M. Sonka, and M. D. Abràmoff, “Vessel segmentation in 3-D spectral OCT scans of the retina,” Proc. SPIE 6914, 69141R (2008).

28. E. Rosten and T. Drummond, “Fusing points and lines for high performance tracking,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 1508–1511.

29. E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” in Proceedings of European Conference on Computer Vision (2006), pp. 430–443.

30. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 886–893.

31. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comp. Vis. 60(2), 91–110 (2004).

32. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003), Chap. 4.

33. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imag. 28(9), 1436–1447 (2009).

34. K. Lee, M. Niemeijer, M. K. Garvin, Y. H. Kwon, M. Sonka, and M. D. Abràmoff, “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imag. 29(1), 159–168 (2010).

35. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. T. H. Romeny, and J. B. Zimmerman, “Adaptive histogram equalization and its variations,” Comp. Vis. Graph. Imag. Proc. 39(3), 355–368 (1987).

36. M. Niemeijer, M. K. Garvin, K. Lee, M. D. Abràmoff, and M. Sonka, “Registration of 3-D spectral OCT volumes combining ICP with a graph-based approach,” Proc. SPIE 8314, 83141A (2012).

References

  • View by:

  1. M. D. Abràmoff, M. K. Garvin, and M. Sonka, “Retinal imaging and image analysis,” IEEE Rev. Biomed. Eng. 3, 169–208 (2010).
    [Crossref] [PubMed]
  2. J. S. Schuman, “Spectral domain optical coherence tomography for glaucoma (an AOS thesis),” Trans. Am. Ophthalmol. Soc. 106, 426–458 (2008).
  3. B. C. Chauhan and C. F. Burgoyne, “From clinical examination of the optic disc to clinical assessment of the optic nerve head: a paradigm change,” Am. J. Ophthalmol. 31(10), 1900–1911 (2012).
  4. W. L. M. Alward, S. Q. Longmuir, M. S. Miri, M. K. Garvin, and Y. H. Kwon, “Movement of retinal vessels to optic nerve head with intraocular pressure elevation in a child,” Ophthalmol. 122(7), 1532–1534 (2015).
    [Crossref]
  5. Z. Hu, M. Niemeijer, M. D. Abràmoff, M. K. Garvin, and M. Sonka, “Multimodal retinal vessel segmentation from spectral-domain optical coherence tomography and fundus photography,” IEEE Trans. Med. Imag. 156(2), 218–227 (2014).
  6. M. S. Miri, K. Lee, M. Niemeijer, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images,” Proc. SPIE 8669, 86690O (2013).
    [Crossref]
  7. M. S. Miri, M. D. Abràmoff, K. Lee, M. Niemeijer, J.-K. Wang, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from SD-OCT and color fundus photographs using a machine-learning graph-based approach,” IEEE Trans. Med. Imag. 34(9), 1854–1866 (2015).
    [Crossref]
  8. M. S. Miri, V. A. Robles, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Multimodal graph-theoretic approach for segmentation of the internal limiting membrane at the optic nerve head,” in Proceedings of the Ophthalmic Medical Image Analysis Second International Workshop, OMIA 2015, Held in Conjunction with MICCAI 2015 (2015), pp. 57–64.
  9. M. S. Miri, V. A. Robles, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Incorporation of gradient vector flow field in a multimodal graph-theoretic approach for segmenting the internal limiting membrane from glaucomatous optic nerve head-centered SD-OCT volumes,” Comp. Med. Imag. Graph., (to be published).
  10. E. Peli, R. A. Augliere, and G. T. Timberlake, “Feature-based registration of retinal images,” IEEE Trans. Med. Imag. MI-6(2), 272–278 (1987).
    [Crossref]
  11. H. M. Taha, N. El-Bendary, A. E. Hassanien, Y. Badr, and V. Snase, “Retinal feature-based registration schema,” in Proceedings of Informatics Engineering and Information Science (2011), pp. 26–36.
    [Crossref]
  12. M. Niemeijer, M. K. Garvin, K. Lee, B. van Ginneken, M. D. Abràmoff, and M. Sonka, “Registration of 3D spectral OCT volumes using 3D SIFT feature point matching,” Proc. SPIE 7259, 72591I (2009).
    [Crossref]
  13. M. Niemeijer, K. Lee, M. K. Garvin, M. D. Abràmoff, and M. Sonka, “Registration of 3-D spectral OCT volumes combining ICP with a graph-based approach,” Proc. SPIE 8314, 83141A (2012).
    [Crossref]
  14. J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, and A. F. Laine, “A partial intensity invariant feature descriptor for multimodal retinal image registration,” IEEE Trans. Biomed. Eng. 57(7), 1707–1718 (2010).
    [Crossref] [PubMed]
  15. C.-L. Tsai, C.-Y. Li, G. Yang, and K.-S. Lin, “The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence,” IEEE Trans. Med. Imag. 29(3), 636–649 (2010).
    [Crossref]
  16. Y. Lin and G. Medioni, “Retinal image registration from 2D to 3D,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.
  17. P. C. Cattin, H. Bay, L. Van Gool, and G. Székely, “Retina mosaicing using local features,” in Proceedings of Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006 (2006), LNCS 4191, pp. 185–192.
    [Crossref]
  18. G. Yang, C. V. Stewart, M. Sofka, and C.-L. Tsai, “Registration of challenging image pairs: initialization, estimation, and decision,” IEEE Trans. Patt. Anal. Mach. Intell. 29(11), 1973–1989 (2007).
    [Crossref]
  19. J. Zheng, J. Tian, K. Deng, X. Dai, X. Zhang, and M. Xu, “Salient feature region: a new method for retinal image registration,” IEEE Trans. Inf. Tech. Biomed. 15(2), 221–232 (2011).
    [Crossref]
  20. H. Bogunović, M. Sonka, Y. H. Kwon, P. Kemp, M. D. Abràmoff, and X. Wu, “Multi-surface and multi-field co-segmentation of 3-D retinal optical coherence tomography,” IEEE Trans. Med. Imag. 33(12), 2242–2253 (2014).
    [Crossref]
  21. G. K. Matsopoulos, P. A. Asvestas, N. A. Mouravliansky, and K. K. Delibasis, “Multimodal registration of retinal images using self organizing maps,” IEEE Trans. Med. Imag. 23(12), 1557–1563 (2004).
    [Crossref]
  22. R. Kolar and P. Tasevsky, “Registration of 3D retinal optical coherence tomography data and 2D fundus images,” in Proceedings of Biomedical Image Registration (2010), pp. 72–82.
    [Crossref]
  23. Z. Ghassabi, J. Shanbehzadeh, A. Sedaghat, and E. Fatemizadeh, “An efficient approach for robust multimodal retinal image registration based on UR-SIFT features and PIIFD descriptors,” EURASIP J. Imag. Vid. Proc. 25, 1–16 (2013).
  24. M. Golabbakhsh and H. Rabbani, “Vessel-based registration of fundus and optical coherence tomography projection images of retina using a quadratic registration model,” IET Imag. Proc. 7(8), 768–776 (2013).
    [Crossref]
  25. Y. Li, G. Gregori, R. W. Knighton, B. J. Lujan, and P. J. Rosenfeld, “Registration of OCT fundus images with color fundus photographs based on blood vessel ridges,” Opt. Exp. 19(1), 7–16 (2011).
    [Crossref]
  26. S. Niu, Q. Chen, H. Shen, L. de Sisternes, and D. L. Rubin, “Registration of SD-OCT en-face images with color fundus photographs based on local patch matching,” in Proceedings of the Ophthalmic Medical Image Analysis First International Workshop, OMIA 2014, Held in Conjunction with MICCAI 2014 (2014), pp. 25–32.
  27. M. Niemeijer, M. K. Garvin, B. van Ginneken, M. Sonka, and M. D. Abràmoff, “Vessel segmentation in 3-D spectral OCT scans of the retina,” Proc. SPIE 6914, 69141R (2008).
    [Crossref]
  28. E. Rosten and T. Drummond, “Fusing points and lines for high performance tracking,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 1508–1511.
  29. E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” in Proceedings of European Conference on Computer Vision (2006), pp. 430–443.
  30. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 886–893.
  31. D. G. Lowe, “Distinctive Image features from scale-invariant keypoints,” Int. J. Comp. Vis. 60(2), 91–110 (2004).
    [Crossref]
  32. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003), Chap. 4.
  33. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imag. 28(9), 1436–1447 (2009).
    [Crossref]
  34. K. Lee, M. Niemeijer, M. K. Garvin, Y. H. Kwon, M. Sonka, and M. D. Abràmoff, “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imag. 29(1), 159–168 (2010).
    [Crossref]
  35. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. T. H. Romeny, and J. B. Zimmerman, “Adaptive histogram equalization and its variations,” Comp. Vis. Graph. Imag. Proc. 39(3), 355–368 (1987).
    [Crossref]
  36. M. Niemeijer, M. K. Garvin, K. Lee, M. D. Abràmoff, and M. Sonka, “Registration of 3-D spectral OCT volumes combining ICP with a graph-based approach,” Proc. SPIE 8314, 83141A (2012).
    [Crossref]



Figures (12)

Fig. 1 Flowchart of the proposed method.

Fig. 2 An example of intraretinal surface segmentation. (a) The central OCT B-scan and the segmented surfaces: blue is the IS/OS junction, yellow is the BM surface, and pink is the thin-plate spline fitted to the BM surface. (b) A 3D view of the segmented surfaces. (c) The flattened OCT B-scan. (d) The corresponding OCT projection image.

Fig. 3 Example preprocessing steps on the two types of fundus photographs in the dataset. The interfering details included on the images are indicated with green arrows. Dates are covered for privacy. (a) Stereo fundus photographs containing a large imaging artifact; the left-side photo was selected for further processing in (c). (b) A low-contrast regular fundus photograph. (c) The binary masks that remove the interfering details, the selected fundus image, the green channel, and the contrast-enhanced images corresponding to the examples shown in (a) and (b).

Fig. 4 An example of optic disc localization using the circular Hough transform. (a) From left to right: the enhanced OCT projection image, the blue circle representing the optic disc overlaid on the morphologically closed image, and the Hough map from which the dominant circle is identified. (b) The same sequence of images as in (a), showing identification of the optic disc in the fundus photograph.

Fig. 5 Illustration of the Bresenham circle of 16 pixels (red boxes) around the query point p. An example of N contiguous pixels (for N = 9) is shown with the cyan dashed line [28].

Fig. 6 An example of control point (corner) detection from (a) the OCT projection image and (b) the fundus image using the FAST corner detection method.

Fig. 7 An example of HOG descriptor computation from (a) the OCT projection image and (b) the fundus image for a block size of 4 × 4 and a cell size of 4 × 4. The four strongest control points and their corresponding HOG blocks are shown on the left; for better visualization, a zoomed-in illustration of one of the blocks with its corresponding CP (in blue) is shown on the right.

Fig. 8 Illustration of feature vector matching using the approximate nearest neighbor method in forward (blue) and backward (red) modes. The final set of matching feature vectors (green) includes only the pairs common to the forward and backward modes.

Fig. 9 An example of incorrect pair removal. (a) The yellow lines connect the corresponding matching pairs between the images, identified using the approximate nearest neighbor method described in Sec. 2.4. (b) The incorrect pairs have been eliminated.

Fig. 10 Examples of successful registration results using the ICP [7], manual, and proposed methods. The green frame in (A) indicates that the left image was selected for the registration. The orange box in (B) indicates which part of the images is shown in the checkerboards, which are provided for qualitative comparison of the registration results. The corresponding RMS errors of the methods are shown in the green boxes.

Fig. 11 Examples of failed registration (RMS error > 4.43 pixels) using the ICP method in cases where the manual and proposed methods did not fail. The low imaging quality in (A) and the motion artifacts (inside the red ovals) in (B) and (C) also caused larger registration errors for the proposed method. The corresponding RMS errors of the methods are shown in the green boxes.

Fig. 12 Examples of ONH-centered and macula-centered OCT stitching, with the aim of obtaining a larger field of view, using the proposed feature-based registration method. Note that the common area between each pair is around 20% of the size of each image.
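As a rough guide to how the steps illustrated in Figs. 6-9 fit together, the following Python sketch chains FAST corner detection, per-corner HOG descriptors, mutual forward/backward matching, and RANSAC affine estimation. It is a minimal illustration under assumed settings (OpenCV and scikit-image as the tooling, a 16 × 16 descriptor window, a FAST threshold of 20, and a 3-pixel RANSAC reprojection threshold), not the authors' implementation; the preprocessing and optic-disc localization steps of Figs. 3 and 4 are omitted.

    import cv2
    import numpy as np
    from skimage.feature import hog  # one possible HOG implementation (assumed here)

    def detect_and_describe(img, patch=16):
        # FAST corners as control points (Fig. 6), one 128-D HOG vector per CP (Fig. 7).
        fast = cv2.FastFeatureDetector_create(threshold=20)  # threshold is an assumed value
        r = patch // 2
        pts, descs = [], []
        for kp in fast.detect(img, None):
            x, y = int(kp.pt[0]), int(kp.pt[1])
            if r <= x < img.shape[1] - r and r <= y < img.shape[0] - r:
                win = img[y - r:y + r, x - r:x + r]
                # 4x4 cells of 4x4 pixels with 8 orientation bins -> 128 features,
                # matching the 128-element descriptor summed over in Eq. (1).
                descs.append(hog(win, orientations=8, pixels_per_cell=(4, 4),
                                 cells_per_block=(1, 1)).astype(np.float32))
                pts.append((x, y))
        return np.float32(pts), np.float32(descs)

    def register(fundus_gray, oct_proj_gray):
        p_f, d_f = detect_and_describe(fundus_gray)
        p_o, d_o = detect_and_describe(oct_proj_gray)
        # crossCheck=True keeps only mutual forward/backward matches (Fig. 8).
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d_f, d_o)
        src = np.float32([p_f[m.queryIdx] for m in matches])
        dst = np.float32([p_o[m.trainIdx] for m in matches])
        # RANSAC discards the remaining incorrect pairs (Fig. 9) and fits the affine.
        A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                          ransacReprojThreshold=3.0)
        return A  # 2x3 affine mapping fundus coordinates to OCT projection coordinates

Note that BFMatcher with crossCheck=True retains only pairs that are each other's nearest neighbors, which mirrors the intersection of the forward and backward matching sets shown in Fig. 8.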

Tables (2)

Table 1 Quantitative evaluation of the registration using the RMS error. All cases are included.

Table 2 The success rate and running time (s) of the methods.

Equations (2)


$$ D_{FO}(1,i) = \sum_{j=1}^{128} \left[\, h_{f,1}(j) - h_{o,i}(j) \,\right]^{2}, \qquad i = 1, 2, \ldots, M. $$

$$ \mathrm{RMS} = \sqrt{ \frac{1}{5} \sum_{i=1}^{5} \left\lVert \mathbf{p}_{o,i} - \hat{\mathbf{p}}_{f,i} \right\rVert^{2} }, $$
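As a concrete reading of these two expressions, the short NumPy sketch below evaluates the descriptor distance D_FO(1, i) for one fundus HOG vector against all M OCT HOG vectors, and the five-landmark RMS error; the array names are illustrative only, not taken from the paper.

    import numpy as np

    def hog_distance(h_f1, h_o):
        # Eq. (1): sum of squared differences between the 128-D HOG vector of
        # fundus CP 1 (shape (128,)) and each of the M OCT HOG vectors (shape (M, 128)).
        return np.sum((h_o - h_f1) ** 2, axis=1)  # D_FO(1, i) for i = 1..M

    def rms_error(p_o, p_f_hat):
        # Eq. (2): RMS of the Euclidean distances between the five annotated OCT
        # landmarks p_o and the registered fundus landmarks p_f_hat (both shape (5, 2)).
        return np.sqrt(np.mean(np.sum((p_o - p_f_hat) ** 2, axis=1)))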
