Abstract

A deep-learning (DL) based noise reduction algorithm, in combination with a vessel shadow compensation method and a three-dimensional (3D) segmentation technique, has been developed to achieve, to the authors' best knowledge, the first automatic segmentation of the anterior surface of the lamina cribrosa (LC) in volumetric ophthalmic optical coherence tomography (OCT) scans. The present DL-based OCT noise reduction algorithm was trained without the need for noise-free ground truth images by utilizing the latest developments in deep learning for de-noising from single noisy images, and was demonstrated to cover multiple retinal locations and different disease types with high robustness. Compared with the original single OCT images, a 6.6 dB improvement in peak signal-to-noise ratio and a 0.65 improvement in the structural similarity index were achieved. The vessel shadow compensation method analyzes the energy profile of each A-line and automatically compensates the pixel intensities at locations underneath the detected blood vessels. Combining the noise reduction algorithm with the shadow compensation and contrast enhancement technique, medical experts were able to identify the anterior surface of the LC in 98.3% of the OCT images. The 3D segmentation algorithm employs a two-round procedure based on gradient information and information from neighboring images. An accuracy of 90.6% was achieved in a validation study involving 180 individual B-scans from 36 subjects, compared to 64.4% in raw images. This imaging and analysis strategy enables, to the authors' best knowledge, the first automatic complete view of the anterior LC surface, which may have potential for the development of new LC parameters for glaucoma diagnosis and management.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Glaucoma, the second leading cause of blindness worldwide [1], is a group of optic neuropathies characterized by progressive degeneration of retinal ganglion cells and their axons, resulting in a distinct appearance of the optic nerve head (ONH) and corresponding visual field loss [2]. The lamina cribrosa (LC), a series of perforated collagenous plates in the ONH through which retinal ganglion cell axons exit the eye, has been implicated as the primary site of axonal damage in glaucoma [3–5]. Indeed, previous histologic and experimental studies have reported several morphological changes of the LC in glaucoma eyes such as thinning, thickening, and posterior deformation [5–8]. Recent advances in imaging technologies such as spectral-domain optical coherence tomography (SD-OCT), enhanced depth imaging (EDI), and swept-source OCT (SS-OCT) have enabled in vivo visualization of the LC and other deep ONH structures. Using these technologies, previous studies have shown LC changes in glaucoma eyes such as thinning [9], focal defects [10,11], and posterior displacement [12,13]. In addition, other studies have shown the significant influence of LC morphologic changes on the progression of glaucoma [14–16]. These reports have clearly shown the clinical relevance of LC imaging for glaucoma practice. However, most of these previous studies rely on time-consuming and error-prone manual identification and measurement of the LC, precluding the widespread application of LC measurements in a clinical setting [17]. The existing automated segmentation method is limited to radial scans to achieve a sufficient signal-to-noise ratio [18], and thus may not be applicable to volumetric scans.

Quantitative analysis of the LC OCT images encounters multiple technical challenges. Speckle noise is inherent in OCT imaging of scattering media such as biological tissues [19] and decreases the visibility of the LC, especially in the areas under the large blood vessels. One common practice to reduce speckle noise is to average multiple OCT scans acquired at the same location, which, while effective, is sensitive to eye movement and is practically limited to 2D scans. Post-processing techniques utilizing manually designed wavelet filters [20,21] have also been explored. In this paper, a data-driven deep-learning (DL) based noise reduction method has been developed utilizing the latest developments in machine learning [22]. Conventional DL-based noise reduction methods require noise-reduced images acting as ground truth teachers. These noise-reduced images are, by nature, difficult to acquire and subject to motion blur. In this study, a novel training method, as described by J. Lehtinen et al. [22], was used to train the DL algorithm to learn the statistical distribution of the noise without the need for noise-reduced images. This approach of deep learning from noisy single images is by nature less sensitive to the motion blur and registration errors experienced in the acquisition of ground truth teachers by OCT image averaging.

Other challenges for LC visualization and quantification are the low OCT signal intensity and shadow artifacts. The LC is located deep in the retina where the OCT signal is highly attenuated [23], and the large blood vessels merging at the ONH region lead to shadows that can be severe [24]. In the present study, we also developed a shadow-compensation technique that aims to reduce the signal attenuation caused by blood vessels. By combining the advantages of the DL-based noise reduction and shadow compensation methods, we demonstrate that the visibility of the anterior LC border underneath blood vessels can be significantly enhanced. To improve the accuracy of the automatic 3D LC segmentation, an adaptive contrast enhancement method [25,26] is applied to enhance the visualization of the anterior LC border.

To the best of our knowledge, this is the first segmentation and analysis of OCT images using a DL-based noise reduction technique. Initial clinical study results give some evidence that the accuracy of the 3D segmentation within slices is improved and that good connectivity across the volume is observed. The combination of techniques presented in this paper has led to, to the best of our knowledge, the first automatic complete view of the anterior LC surface.

In this study, three datasets are used separately for different purposes. The first dataset, which consists of 3D OCT images of 16 subjects acquired at different locations of the retina, is used for training the noise reduction algorithm. The second dataset, which consists of three pairs of single and 128x averaged scans acquired at the ONH from distinct subjects, is used for the evaluation of the noise reduction algorithm. The third dataset, which consists of 3D OCT scans of 36 subjects acquired at the ONH, is used for the evaluation of the segmentation algorithm. For manual evaluation of the segmentation, one eye per subject is randomly selected.

2. Enhanced OCT image visualization through deep learning based noise reduction

DL based algorithms have recently surpassed the performance of conventional methods in noise reduction tasks [27–34], and applications of DL based algorithms to retinal layer segmentation have been reported [35–39]. Recent studies utilizing DL models for OCT image noise reduction have shown promising results [40,41]. However, to train the deep learning models, these methods all rely on paired images in which one image is noisy and the other is noise-reduced; the DL models are then trained to reduce the noise of the noisy image so that the output best matches the noise-reduced image. Mathematically, the values of the parameters $\theta$ in the DL model are optimized by minimizing the total loss, defined as

$$\textrm{total loss} = \sum_{i} L(f_{\theta}(\hat{x}_i), y_i), \tag{1}$$
where $i$ iterates among the training images, $L$ is the loss function, $f_{\theta }$ is the DL model, $\hat {x}_i$ is the $i$-th noisy image, and $y_i$ is the corresponding $i$-th noise-reduced image.

Noise-reduced OCT images are typically generated by averaging multiple scans taken at the same location. This is not a trivial task due to the motion of the eye, and it poses a dilemma: averaging a large number of repeated scans reduces the noise level but may result in blurry images and is highly labor-intensive, whereas averaging fewer repeated scans is less subject to motion artifacts but leaves a high level of noise in the resulting image. To overcome these limitations, we take a different approach.

Recent studies show that deep-learning-based noise reduction does not necessarily need noise-reduced images for training [22,42]. Replacing Eq. (1) with the following:

$$\textrm{total loss} = \sum_{i} L(f_{\theta}(\hat{x}_i), \hat{y}_i), \tag{2}$$
where $\hat {y}_i$ is instead the $i$-th noisy target image, the only requirement becomes that $\hat {y}_i$ is acquired at the same location as $\hat {x}_i$ but contains a different noise realization drawn from the same noise distribution. In other words, $\hat {x}_i = x_i + \delta$ and $\hat {y}_i = x_i + \delta '$, where $x_i$ is the true noise-reduced image, and $\delta$ and $\delta '$ are different noise manifestations from the same underlying distribution. As demonstrated in [22], minimizing Eq. (2) will result in $f_{\theta}(\hat {x}_i) = x_i$, which means that the DL model will generate the noise-reduced version of $\hat {x}_i$ without ever seeing noise-reduced images.

The paired images $\hat {x}_i$ and $\hat {y}_i$ can be selected as two repeated scans at the same location, whose difference is caused by noise fluctuations from the same distribution. This training scheme significantly reduces the number of repeated scans needed for training. Thus, it may reduce the blurring caused by eye movement, as well as the errors of the registration algorithm used for multi-repeat averaging.
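
For illustration, the following is a minimal Python/PyTorch sketch of one optimization step of this training scheme: both the input and the target are noisy repeats of the same B-scan location, and no clean image is needed. The placeholder network, loss, optimizer settings, and tensor shapes are illustrative assumptions rather than the exact U-net configuration used in this study.

```python
# Minimal sketch of the Noise2Noise-style training step of Eq. (2).
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for the U-net denoiser; any image-to-image CNN fits here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # the loss L in Eq. (2)

def train_step(x_noisy, y_noisy):
    """One step of minimizing Eq. (2); x_noisy and y_noisy are two repeats
    of the same location containing different noise realizations."""
    optimizer.zero_grad()
    loss = loss_fn(model(x_noisy), y_noisy)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example batch of paired repeats, shape (batch, channel, depth, width).
x = torch.rand(2, 1, 992, 512)   # first repeat:  x_i + delta
y = torch.rand(2, 1, 992, 512)   # second repeat: x_i + delta'
print(train_step(x, y))
```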

In our study, a U-net [43] based DL model is trained to denoise B-scan images of $992\times 512$ pixels. 3D OCT scans (DRI-OCT Triton; Topcon, Tokyo, Japan) at both the macula and the ONH are included for training to improve the versatility of the model. In total, images from 16 subjects, resulting in 3,895 pairs of B-scans at different locations of the retina, are included for training.

The performance of the noise reduction algorithm is evaluated quantitatively using two metrics: peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Both metrics measure the similarity between a reference image $R$, which is considered noise-reduced, and a noisy image $I$. Assuming all images consist of $m \times n$ pixels, the mean squared error (MSE) between images $R$ and $I$ is defined as

$$\textit{MSE} = \frac{1}{m\times n} \sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[ R(i, j) - I(i, j)\right]^2. \tag{3}$$
And the PSNR (in dB) is defined as
$$\textit{PSNR} = 20\cdot \log_{10}{(\textit{MAX}_I)} - 10\cdot \log_{10}{(\textit{MSE})}, \tag{4}$$
where $\textit {MAX}_I$ is the maximum possible pixel intensity in the images. In our study, 8-bit images are used and thus $\textit {MAX}_I = 255$. Overall, a larger PSNR indicates a higher similarity between the two images.

In contrast to the absolute errors estimated by MSE and PSNR, SSIM is designed by modeling any image distortion as a combination of three factors, namely the loss of correlation $s(R, I)$, the luminance distortion $l(R, I)$, and the contrast distortion $c(R, I)$ [44]. The SSIM is defined as

$$\textit{SSIM}(R, I) = l(R, I) \cdot c(R, I) \cdot s(R, I), \tag{5}$$
where
$$\begin{aligned} l(R, I) & = \frac{2\mu_R \mu_I + c_1}{\mu^2_R + \mu^2_I + c_1}, \\ c(R, I) & = \frac{2\sigma_R \sigma_I + c_2}{\sigma^2_R + \sigma^2_I + c_2}, \\ s(R, I) &= \frac{\sigma_{RI} + c_3}{\sigma_R\sigma_I + c_3}, \end{aligned} \tag{6}$$
$c_1 = (0.01 \times 255)^2,~c_2 = (0.03\times 255)^2$ and $c_3 = c_2/2$, as defined in [45], are small constants that stabilize the division when the denominators are weak. $\mu$ and $\sigma$ are the mean and standard deviation of images $R$ and $I$, respectively, and $\sigma _{RI}$ is the covariance of images $R$ and $I$. By definition, SSIM is bounded above by 1 and reaches 1 only when the two images are identical.
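
As a reference for these definitions, the following is a minimal NumPy sketch that computes PSNR and a single global SSIM from whole-image statistics. Note that common library implementations (e.g., skimage.metrics.structural_similarity) average SSIM over local windows, so the values may differ slightly from this global form.

```python
# Minimal sketch of the PSNR and SSIM metrics of Eqs. (3)-(6), using
# whole-image statistics and 8-bit images (MAX_I = 255).
import numpy as np

def psnr(ref, img, max_i=255.0):
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 20.0 * np.log10(max_i) - 10.0 * np.log10(mse)

def ssim_global(ref, img, max_i=255.0):
    c1, c2 = (0.01 * max_i) ** 2, (0.03 * max_i) ** 2
    c3 = c2 / 2.0
    r, i = ref.astype(np.float64), img.astype(np.float64)
    mu_r, mu_i = r.mean(), i.mean()
    sig_r, sig_i = r.std(), i.std()
    sig_ri = np.mean((r - mu_r) * (i - mu_i))           # covariance
    l = (2 * mu_r * mu_i + c1) / (mu_r ** 2 + mu_i ** 2 + c1)
    c = (2 * sig_r * sig_i + c2) / (sig_r ** 2 + sig_i ** 2 + c2)
    s = (sig_ri + c3) / (sig_r * sig_i + c3)
    return l * c * s
```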

The reference image $R$ is generated by registering and averaging $n$ repeated scans at the same location. A larger number of repeated scans $n$ typically results in a lower noise level. Both evaluation metrics, PSNR and SSIM, are sensitive to the noise level of $R$: if $n$ is small and $R$ itself remains noisy, high PSNR and SSIM scores can be achieved even though the evaluated image is still noisy. It is a technically much more demanding task to achieve high scores in both PSNR and SSIM when $n$ is large.

In previous studies, $n$ ranges from 6 to 60 [40,41]. With current commercially available high-speed swept-source OCT devices such as the Triton OCT device, which is operated at an A-scan rate of 100 kHz, it is possible to capture and average 128 repeated scans at a single location. To compare with the cleanest image possible, our reference images $R$ are the line-scan results (with $n$ = 128 repeats) acquired with a Triton OCT device. The image definitions are illustrated in Fig. 1, where the input noisy image $I_{\textrm {raw}}$ is one single B-scan out of the said 128 repeats, and the noise-reduced image $I_{\textrm {noise-reduced}}$ is the result of applying our DL model to $I_{\textrm {raw}}$.
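
A minimal sketch of how such a reference image can be formed from the repeated B-scans is given below, assuming rigid translational registration by phase correlation before averaging; the registration actually used for the 128x line-scan averaging may be more sophisticated.

```python
# Minimal sketch: register n repeated B-scans to the first repeat and
# average them to form the reference image R.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def average_repeats(repeats):
    """repeats: array of shape (n, depth, width) holding n repeated B-scans."""
    reference = repeats[0].astype(np.float64)
    aligned = [reference]
    for frame in repeats[1:]:
        # Estimated (row, col) displacement of this repeat vs. the first one.
        displacement, _, _ = phase_cross_correlation(reference, frame)
        aligned.append(nd_shift(frame.astype(np.float64), displacement))
    return np.mean(aligned, axis=0)
```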

 

Fig. 1. Definition of different image types used in this study. Top left: B-scans repeated 128x at the same location; top right: registered and averaged B-scan of the 128x repeats; bottom left: one of the B-scan images from the 128 repeats; bottom right: DL noise-reduced image of the bottom left image.


To validate the DL noise reduction algorithm, a data acquisition protocol that requires both the 128x averaged image and the single-scan images at the same location of the eye is employed. This protocol limits the size of the validation dataset, and the subjects used for training the DL algorithm are excluded from the validation. Validation data are acquired at three different locations of the ONH.

The results of the quantitative evaluation of DL-based noise reduction are summarized in Table 1. An increase of 6.6 $\pm$ 0.3 dB in PSNR and an improvement of 0.65 $\pm$ 0.01 in SSIM have been achieved. For comparison, the results of a conventional noise-reduction method, a $5\times 5$ median filter, are shown in Table 1 as well.


Table 1. PSNR and SSIM results from 3 scans at different locations of the eye.

Qualitative evaluation examples are shown in Fig. 2, where (A), (B), (C), and (D) depict the original single B-scan, the median-filtered image with a $5\times 5$ filter, the DL noise-reduced B-scan, and the registered and averaged 128x B-scan, respectively. For further detailed inspection, (E), (F), (G), and (H) show the zoom-ins (highlighted by the green boxes) of (A), (B), (C), and (D), respectively. From Fig. 2 it can be observed that the OCT image information is well preserved in the DL noise-reduced image while the noise has been reduced to a level comparable to that of the averaged 128x scan.

 

Fig. 2. Qualitative evaluation examples. (A) single B-scan; (B) median filtered image with a $5\times 5$ sized filter; (C) DL based noise reduced image; (D) registered and averaged image of B-scan repeated 128× at the same location. For detailed inspection, (E), (F), (G), and (H) show the zoom-ins of the areas highlighted by the green boxes in (A), (B), (C), and (D) respectively.


Noise-reduced 3D OCT volumes can be generated from the individual noise-reduced B-scans to create en-face images for observation. An example is shown in Fig. 3, where a 3D raster scan of $6\times 6~mm^2$ was acquired at the macula. To better visualize the choroidal structures, the 3D volume is flattened with respect to Bruch's membrane. En-face images (B) and (C) are then extracted from the choriocapillaris layer as highlighted by the yellow line in (A). En-face image (B) is reconstructed from the original scan volume while (C) is reconstructed from the noise-reduced volume. (D) and (E) are the zoom-ins (highlighted by the green boxes) of (B) and (C), respectively. It can be seen that the details of the choriocapillaris can be identified with more confidence after noise reduction. Because the present DL-based noise reduction method does not rely on learning from averaged images, which are generally subject to motion artifacts and registration errors, it may provide practical advantages in the visualization and identification of fine structures.

 

Fig. 3. Comparison of $6\times 6~mm^2$ en-face images of choroidal structures before and after noise reduction. (A) noise-reduced B-scan after flattening; the yellow line indicates the depth of en-face images (B) and (C); (B) en-face image extracted from the original volume corresponding to the depth in (A); (C) en-face image extracted from the noise-reduced volume corresponding to the depth in (A), with the red line indicating where B-frame (A) was extracted from; (D) zoom-in version of (B) for detailed inspection; (E) zoom-in version of (C) for detailed inspection.


Figure 4 shows another example, which reveals the LC's pore structure. A 3D raster scan of $6\times 6~mm^2$ was performed at the ONH. (A) depicts one of the B-frames within the noise-reduced volume, with the yellow line indicating where the subsequent en-face images are extracted from. (B) and (C) show, respectively, the en-face images from the original and the noise-reduced volumes. (D) and (E) are the zoom-in versions of (B) and (C) extracted from the corresponding green boxes. With less noise, the LC's pores can be identified with more confidence in the noise-reduced en-face image.

 

Fig. 4. Comparison of $6\times 6~mm^2$ en-face images of LC structures before and after noise reduction. (A) noise-reduced B-frame with the yellow line indicating where the subsequent en-face images are extracted from; (B) en-face image extracted from the original volume at the depth highlighted in (A); (C) en-face image extracted from the noise-reduced volume at the depth highlighted in (A), with the red line indicating where B-frame (A) was extracted from; (D) zoom-in version of (B) for detailed inspection; (E) zoom-in version of (C) for detailed inspection.


3. 3D shadow compensation and contrast enhancement

3.1 Shadow compensation

In the present study, shadow compensation is performed to recover the lost energy in the shadowed regions. There have been previous efforts to reduce the shadow artifact by compensating the energy [25,26,46]. The method by Fabritius et al. [46] requires segmenting the retinal pigment epithelium (RPE) layer, and the shadow compensation is performed by adding an offset intensity, below the RPE, to all the A-lines affected by retinal vessels. This method keeps the original contrast of the scan but relies on a time-consuming segmentation and may be subject to the associated errors. The method by Girard et al. [25,26] does not require segmenting any retinal layers or identifying the A-lines affected by vessels; every pixel in an OCT image is compensated through the same equation. However, this compensation method may alter the contrast of the original image, as deeper pixels receive more compensation through the equation. An improvement has been made by setting a stopping point for the compensation when the cumulative intensity below a pixel is smaller than a threshold. While this prevents the over-saturation of deep pixels and improves the contrast in the main images, it may still introduce a contrast difference between neighboring frames and lead to new artifacts in the en face images.

In the present study of automated 3D segmentation, a new requirement arises that the shadow artifact shall be removed or compensated, as viewed from all directions of the volume, namely, the fast-axis B-scan, the slow-axis B-scan, and the en face image. The energy in each pixel of the A-line can be represented with its intensity level as follows [25,26].

$$E_{i,\;j}=I_{i,\;j}^n,\;(i=1,2,\ldots,N;\;j=1,2,\ldots,D) \tag{7}$$
where $i$ is the A-line number of a total of $N$ A-lines and $j$ is the depth index in each A-line with a total depth of $D$ [26,46]. For image enhancement, $n$ can be any number that is greater than 1. The total energy in each A-line can be expressed as
$$E_{i} = \sum_{j=1}^{D}E_{i,\;j}=\sum_{j=1}^{D}I^n_{i, j}. \tag{8}$$
Figure 5(A) shows an OCT image with shadows observed under the blood vessels, and Fig. 5(B) shows the energy profile across the image. Besides the high-frequency fluctuations in the energy profile, which are possibly caused by speckle and random noise during the OCT image acquisition, energy dips are also identified at locations correlated with the shadows.

 

Fig. 5. (A) OCT image with shadows observed underneath the blood vessels (highlighted with arrows); (B) energy profile across the B-scan, where the blue curve depicts the original energy profile containing high-frequency random noise and energy dips due to shadows, and the red curve shows the low-pass-filtered energy profile.


Another observation can be made from the energy profile. As retinal OCT images correlate with the retinal morphology [47], in the absence of random noise or shadow artifacts the energy profile is expected to be smooth, since the tissues are continuous and any changes would be gradual. Under these circumstances, the energy profile would ideally contain only slow variations. In contrast, both the random noise and the shadows mostly produce high-frequency fluctuations in the energy profile. As such, applying a low-pass filter to the energy profile may effectively reduce the influence of shadows in the OCT image. As an example, the red curve in Fig. 5(B) depicts the low-pass-filtered energy profile.
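
A minimal NumPy/SciPy sketch of the A-line energy profile of Eqs. (7)-(8) and of its low-pass filtering is given below; the exponent $n$ and the Gaussian filter width are illustrative choices rather than the exact values used in this study.

```python
# Minimal sketch of the per-A-line energy profile (Eqs. (7)-(8)) and its
# low-pass-filtered version (red curve in Fig. 5(B)).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def energy_profile(bscan, n=2):
    """bscan: (depth D, A-lines N) intensity image; returns E_i per A-line."""
    return np.sum(bscan.astype(np.float64) ** n, axis=0)

def smoothed_energy_profile(bscan, n=2, sigma=25):
    """Low-pass filter the energy profile across A-lines."""
    return gaussian_filter1d(energy_profile(bscan, n), sigma=sigma)
```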

The normal thickness of the human macula in OCT scans is measured to be around 200–400 $\mu m$ [48], while a typical A-line covers more than 2 $mm$ in depth. Therefore, a large part of the A-line carries mostly background noise. Since only the structure region needs to be enhanced and the noise level should be kept as close to the original as possible, segmenting the structure region is needed. A simple calculation of the noise intensity level in the image can be used as a reference to find the starting and ending points of the structure in each A-line. This process is illustrated by a sample result shown in Fig. 6. The starting and ending points of the structural region along the A-line are determined by detecting the intersections of the cutoff intensity and the moving-average energy profile. This fast segmentation allows shadow compensation within the structure region without enhancing the noise.

 

Fig. 6. Detection of the structure region of an OCT A-line. Intersections of the cutoff level (purple line) and the moving-average energy profile (red line) define the structure region (shaded area).


To compensate the image, the pixel intensities within the structure region of each A-line are linearly scaled so that the final total energy matches the filtered energy level. Figure 7 shows the comparison between the original (A) and the compensated (B) images. The segmentation lines (yellow) for the structure region are also shown in the original image.
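
A minimal sketch of this per-A-line compensation is given below. It assumes the structure region is the span where the moving-average intensity exceeds a noise cutoff (as in Fig. 6) and rescales that span so its energy, in the sense of Eq. (8), matches the low-pass-filtered target; the cutoff rule and window size are illustrative assumptions.

```python
# Minimal sketch of structure-region detection and per-A-line scaling.
import numpy as np
from scipy.ndimage import uniform_filter1d

def structure_region(aline, cutoff, window=31):
    """Return (start, end) depth indices where the moving-average intensity
    exceeds the noise cutoff, or (None, None) if no structure is found."""
    smoothed = uniform_filter1d(aline.astype(np.float64), size=window)
    above = np.flatnonzero(smoothed > cutoff)
    if above.size == 0:
        return None, None
    return above[0], above[-1] + 1

def compensate_aline(aline, target_energy, cutoff, n=2):
    """Scale the structure region so that its energy matches target_energy,
    i.e., the low-pass-filtered energy level for this A-line."""
    out = aline.astype(np.float64).copy()
    start, end = structure_region(out, cutoff)
    if start is None:
        return out
    current = np.sum(out[start:end] ** n)
    if current > 0:
        out[start:end] *= (target_energy / current) ** (1.0 / n)
    return out
```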

 

Fig. 7. Comparison between (A) the original and (B) the compensated images. Yellow lines in the original image mark the segmented starting and ending points of the compensation. Narrower vessels (red, right-pointing arrows) are compensated better than the wide vessel (green, left-pointing arrow).


It is worth noting that the compensation is more effective for the narrow vessels (red arrows in Fig. 7) than for the thick vessel (green arrow in Fig. 7), as there may be some low-frequency components left in the filtered energy profile. With 3D volume data, this limitation can be mitigated by applying the compensation again along the slow axis, which further improves the compensation provided that the neighboring frames are aligned in the volume. Since the data in this study are already aligned, the compensation is applied along both directions, enabling a 3D compensation.

Figure 8 shows the comparison between noise-reduced B-scans of the LC structure before (A) and after (B) shadow compensation. It can be observed that the shadow compensation combined with noise reduction further improves the visibility of the LC border (highlighted with red arrows). However, the deep LC structures still require further enhancement.

 

Fig. 8. (A) A sample B-scan extracted from a 3D volume; (B) the shadow-compensated results of the B-scan.


3.2 Contrast enhancement of deep LC tissues

Adaptive contrast enhancement methods have been developed for the visualization of deep tissues underneath the anterior LC [25,26]. These methods, however, may yield an artifact that appears as a bright band in the lower part of the OCT image, below the compensation depth limit [25]. In the example shown in Fig. 9(A), where the structure lies low in the image, the bright band cuts through the LC structure and obscures the anterior border (highlighted by the red box), adding difficulty for both humans and the algorithm to detect the border. In this study, we utilize the advantages of our noise reduction method and apply a linear contrast adjustment step to the noise-reduced image, minimizing the appearance of the noise floor in the deep layers before applying the adaptive contrast enhancement method. The resulting image is shown in Fig. 9(B), where the bright-band artifact has been significantly reduced while the contrast of the LC border is further improved.
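
A minimal sketch of this processing step is given below, assuming a simple linear remapping that clips the noise floor of the noise-reduced B-scan, followed by an adaptive-compensation-style enhancement in which each pixel's energy is normalized by twice the cumulative energy below it (the general form used in [25,26]); the noise-floor estimate, the exponent $n$, and the output scaling are illustrative assumptions rather than the exact processing used here.

```python
# Minimal sketch: noise-floor clipping followed by depth-wise compensation.
import numpy as np

def suppress_noise_floor(bscan, floor, ceiling=255.0):
    """Linearly remap [floor, ceiling] to [0, 255], clipping the noise floor."""
    out = (bscan.astype(np.float64) - floor) / (ceiling - floor)
    return np.clip(out, 0.0, 1.0) * 255.0

def adaptive_compensation(bscan, n=2, eps=1e-6):
    """Compensate attenuation along depth (axis 0 = depth, axis 1 = A-lines);
    returns compensated energies, which would be rescaled for display."""
    energy = bscan.astype(np.float64) ** n
    # Cumulative energy from each pixel down to the bottom of its A-line.
    below = np.flip(np.cumsum(np.flip(energy, axis=0), axis=0), axis=0)
    return energy / (2.0 * below + eps)
```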

 

Fig. 9. (A) The bright-band artifact (enclosed in the red box) is common with the adopted compensation method [25], where the bright band can cut through the LC structure and obscure the anterior border; (B) adjusting the contrast to reduce the noise level before the compensation reduces the bright-band artifact and further improves the contrast of the LC border.


4. 3D automated segmentation of anterior lamina cribrosa

The current study included subjects with open-angle glaucoma and glaucoma suspects who underwent a swept-source OCT ONH 3D scan. A Triton OCT device was used to image the ONH and the parapapillary area. We retrospectively surveyed 36 consecutive patients who underwent swept-source OCT imaging at Osaka University Hospital. Baseline clinical data such as age, intraocular pressure, type of disease, and lens status were collected from medical charts. Glaucoma was defined as having both a funduscopic glaucomatous appearance of the ONH (localized or diffuse neuroretinal rim thinning and/or retinal nerve fiber layer defect) and corresponding visual field damage. Eyes with a funduscopic glaucomatous appearance of the ONH and a retinal nerve fiber layer defect in the OCT image without evidence of corresponding visual field damage were defined as glaucoma suspects. Patients with secondary glaucoma and angle-closure glaucoma were excluded. All procedures of the study conformed to the Declaration of Helsinki and were approved by the institutional review board of Osaka University Hospital. In total, 72 eyes of 36 subjects were enrolled, including 33 eyes with normal tension glaucoma, 26 eyes with primary open angle glaucoma, 6 eyes with exfoliation glaucoma, and 5 glaucoma suspect eyes. The average $\pm$ standard deviation age of the participants was 64.3 $\pm$ 11.8 years. Twenty-two patients were female.

In this study, optic disc images from a $6\times 6~mm^2$ area are acquired. Each volume scan consists of 256 B-scans each with $992\times 512$ pixels. For each subject, images from one eye are selected for analysis.

The LC border is detected using a two-round segmentation method. In this new approach, we first define the region of interest (ROI) from the automatically detected scleral ring disc area. The choice of a small ROI reduces the amount of computation and thus enables a fast 3D segmentation of the LC border. Five neighboring slices are averaged for signal-to-noise ratio (SNR) improvement. The 3D volume within the disc area is interpolated to exhibit isotropic resolution in the horizontal directions. 2D and 3D Canny edge detectors are then applied to ensure the connectivity of edges throughout the 3D ROI volume while maintaining the edges needed to detect local changes. Different weights are applied to the horizontal and vertical gradients of the Canny edge detectors. The weights are based on prior knowledge of the LC anatomy and are chosen to detect mainly the horizontal edges corresponding to the anterior LC border.

The first round of our segmentation process detects the raw LC border and acquires the confidence level of the detection. For each pixel in a slice, the cost is calculated by combining the edge map with the weighted gradients. The accumulated cost map is then calculated by summing up the cost along the minimum path to each location. The points with the minimum accumulated cost in each A-line are treated as candidates forming the LC border. Due to the presence of blood vessels, locally weak OCT signal intensity, and edges from the internal limiting membrane (ILM) and the border tissue, some of the candidate points are removed from further calculation. A weighted second-order polynomial fitting is applied to find the tentative LC border, and the confidence of the detection accuracy is then calculated.

The second round of our segmentation process starts by detecting the LC in the slices of low confidence, utilizing the LC borders of their neighboring slices. The accumulated cost map calculation is the same as that of the first round, whereas candidate points are removed if their distance to the neighboring LC borders is large. Again, the updated LC border is calculated using a weighted polynomial fitting.

After the above two rounds of segmentation, a final step applies a 3D smoothing to improve the smoothness across the 3D volume. A flow chart illustrating the anterior LC segmentation process is depicted in Fig. 10. Details of the two-round segmentation process are given below.

 

Fig. 10. A flow chart of the proposed two-round segmentation.


4.1 Edge map generation

To detect the edges for segmenting the possible anterior LC border, both a 2D and a 3D Canny edge detector are utilized. The 2D Canny edge detector is customized from commercially available software used in retinal OCT boundary segmentation [49,50]. A threshold of 0.92 and a kernel size based on the pixel resolution are chosen to detect the edges from dark to bright.

Since large blood vessels exist in the optic disc area, OCT signals in part of the anterior LC can be severely attenuated, causing a possible loss of connectivity in the anterior LC border. A volumetric OCT scan provides 3D information that can be useful for recovering the connectivity of the anterior LC border. Using a centered difference on the Gaussian-smoothed images and a non-maximum suppression method to find the directional local maxima in the gradient array, 3D Canny edges are detected within the ROI [51]. A horizontal standard deviation $\sigma _H=5$ and a vertical standard deviation $\sigma _V=8$ are used for the Gaussian smoothing in this work, and the images are interpolated to achieve isotropic resolution in both horizontal directions.

The edges from the 2D Canny edge detector may vary between consecutive B-scans, while the 3D Canny edge detector may detect unnecessary edges from neighboring B-scans. To address these possible issues, a refinement and selection of edges are performed: connected edges are separated into multiple edges if the angle between them is larger than 30 degrees, and all vertical edges are removed based on the LC border anatomy. With this process, only the edges detected by both the 2D and the 3D Canny edge detectors are retained, as they are more likely to correspond to the LC border.
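
For illustration, a simplified Python sketch of such an edge-map construction is given below. It combines per-slice 2D Canny edges (via scikit-image) with a mask built from the anisotropically smoothed vertical gradient as a stand-in for the 3D Canny detector; the thresholds and sigmas, and the omission of the angle-based edge refinement, are simplifying assumptions.

```python
# Minimal sketch: intersect 2D per-slice Canny edges with a 3D smoothed
# vertical-gradient mask to keep edges likely belonging to the LC border.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import canny

def edge_map_3d(volume, sigma_h=5, sigma_v=8, grad_thresh=0.02):
    """volume: (slices, depth, A-lines) ROI with intensities scaled to [0, 1]."""
    # 2D Canny edges, slice by slice.
    edges_2d = np.stack([canny(sl, sigma=2.0) for sl in volume])
    # 3D anisotropic smoothing, then the depth-direction (vertical) gradient.
    smoothed = gaussian_filter(volume, sigma=(sigma_h, sigma_v, sigma_h))
    grad_v = np.gradient(smoothed, axis=1)
    edges_3d = grad_v > grad_thresh          # keep dark-to-bright transitions
    # Retain only edges found by both detectors.
    return edges_2d & edges_3d
```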

4.2 Cost map

In addition to the edges, locations with large intensity gradients may also serve to detect the LC border. By taking the horizontal derivative of the Gaussian-filtered intensity image with a large kernel size, an intensity gradient map is calculated. The large kernel smooths the local variations and fills in the lost information by utilizing the neighboring pixels. This gradient map offers the advantage of providing additional search guidance in cases where the Canny edges are not connected due to weak local signals. Here, a cost map is created by a linear combination of the edge map with the intensity gradient values, defined as follows.

$$C(i, j, k) = w_1 \cdot \textit{Edge}(i, j, k) + w_2 \cdot \textit{Gradient}_V(i, j, k) + w_3 \cdot \textit{Gradient}_H(i, j, k) \tag{9}$$
where $C(i, j, k)$ is the cost for each pixel, and $i,\;j,\;k$ are the indexes in the A-line, depth, and slice directions, respectively. $w_1, w_2$ and $w_3$ are weights, $\textit{Edge}$ represents the edge map, and $\textit {Gradient}_H(i, j, k)$ and $\textit {Gradient}_V(i, j, k)$ represent the intensity gradient values in the horizontal and vertical directions, respectively. Since $w_3$ is only a fraction of $w_2$, the vertical gradient, i.e. the horizontal edges, generally carries the larger weight. All the weights $w_1, w_2$ and $w_3$ have negative values; they are chosen to be −1, −0.1, and −1, respectively, in the present study. As such, the LC border is expected to exhibit a small cost, since the LC border generally has large gradients and detected edges in the edge map.

For each location $(i,\;j,\;k_0)$ in the slice $k_0$, the accumulated cost is the sum of the costs of the pixels along the path from the left of the ROI to $(i,\;j,\;k_0)$ [52].

$$\textit{acc}(i, j, k)|_{k=k_0} = \begin{cases} \infty, & j<1 ~\textrm{or}~ j > m,\\ C(i, j, k_0), & i = 1,\\ \min\limits_{s \in [j-u,\,j+u]} \textit{acc}(i-1, s, k_0) + C(i, j, k_0), & \textrm{otherwise}, \end{cases} \tag{10}$$
where acc$(i,\;j,\;k)$ is the minimum accumulated cost to reach $(i,\;j,\;k)$ along the path, $n$ is the number of pixels in the A-line direction (i.e., the range of $i$), and $m$ is the number of pixels in the depth direction. $u$ is the search range of neighboring pixels, which determines the connectivity of the path. A value of $u=2$ is used in this work, so that 5 pixels in the neighboring A-line are searched to find the minimum cost when propagating the path within a slice.
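
A minimal NumPy sketch of the cost map of Eq. (9) and of the accumulated-cost recursion of Eq. (10) for a single slice is given below; the weights and the search range $u$ follow the values quoted in the text, while the array layout (A-lines along rows, depth along columns) is an assumption.

```python
# Minimal sketch of Eq. (9) and Eq. (10) for one slice k0.
import numpy as np

def cost_map(edge, grad_v, grad_h, w1=-1.0, w2=-0.1, w3=-1.0):
    """Eq. (9): weighted combination of edge map and gradients for one slice;
    all inputs have shape (n A-lines, m depths). Weights are tunable."""
    return w1 * edge + w2 * grad_v + w3 * grad_h

def accumulated_cost(cost, u=2):
    """Eq. (10): minimum accumulated cost along paths running from the first
    A-line (left of the ROI) to each pixel, searching j-u..j+u in depth."""
    n, m = cost.shape
    acc = np.empty((n, m), dtype=np.float64)
    acc[0] = cost[0]                                  # base case, i = 1
    for i in range(1, n):
        for j in range(m):
            lo, hi = max(0, j - u), min(m, j + u + 1)
            acc[i, j] = acc[i - 1, lo:hi].min() + cost[i, j]
    return acc

# Candidate LC-border depths: the minimum accumulated cost in each A-line.
# candidates = accumulated_cost(cost_map(edge, grad_v, grad_h)).argmin(axis=1)
```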

4.3 Lamina cribrosa segmentation

The segmentation of the anterior LC does not directly utilize the shortest-path method as used in retinal layer segmentation [49,50], because the low SNR in a large blood vessel region may result in path errors that can even propagate to regions without blood vessels. Instead, the present LC segmentation method utilizes the minimum locations across the accumulated cost map. The minimum locations for each A-line, which correspond to the points with the minimum cost accumulated from the left of the ROI to that A-line, are possible LC border locations, and they are thus chosen as the candidates for further detection. Due to the complicated intensity changes within the disc area [53], some minimum location points are not suited for the LC detection. In the present method, LC border detection is conducted in two rounds. In the first round, points whose vertical distance to the ILM is less than 5 pixels are removed, as they may be affected by the large gradient of the ILM. The leftmost and rightmost points (within 1/4 of the ROI) are also removed if their vertical distance with respect to the mean of the remaining points is larger than 30 pixels, indicating that those points may correspond to the edges of the border tissue. For the remaining minimum location points, a weighted second-order polynomial fitting is applied.

The utilization of 3D volume data offers a practical advantage in that information from neighboring frames may help to segment a more difficult frame where the LC border is unclear. To quantify the reliability of the segmentation, we define the confidence of the detection as the reciprocal of the average vertical distance between the polynomial-fitted curve and the candidate points.

$$\textit{conf}_{k_0} = \frac{1}{\frac{1}{n} \sum\limits_p \left| j_p-j_f \right|} \tag{11}$$
where $\textit {conf}_{k_0}$ is the confidence of slice $k_0$, $p$ indexes the candidate points, $j_p$ and $j_f$ are the depths of candidate point $p$ and of the polynomial-fitted curve at the same horizontal location, and $n$ is the number of candidate points remaining after removal. A confidence lower than 0.05, namely an average distance larger than 20 pixels, is considered an unreliable detection and requires a second round of detection.
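
A minimal sketch of the weighted second-order polynomial fit and of the confidence of Eq. (11) is given below; the weighting scheme and the small epsilon guarding the division are illustrative assumptions.

```python
# Minimal sketch: fit the candidate points and compute the slice confidence.
import numpy as np

def fit_and_confidence(aline_idx, depth, weights=None, eps=1e-9):
    """aline_idx: A-line positions of candidate points; depth: their depths."""
    coeffs = np.polyfit(aline_idx, depth, deg=2, w=weights)  # weighted fit
    fitted = np.polyval(coeffs, aline_idx)
    mean_abs_dist = np.mean(np.abs(depth - fitted))
    confidence = 1.0 / (mean_abs_dist + eps)                 # Eq. (11)
    return coeffs, confidence

# A slice with confidence < 0.05 (average distance > 20 pixels) is flagged
# as unreliable and handed to the second round of detection.
```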

In the second round of detection, the unreliable slices located closest to reliable slices are the first to be re-detected and are then marked as reliable; their neighboring unreliable slices are re-detected next. This re-detection process iterates until all the unreliable slices have been detected throughout the whole volume. The workflow of the second-round detection is similar to that of the first round, including the generation of the cost map, the calculation of the accumulated cost, and the search for minimum location points serving as candidates of the LC border. The difference is that the candidate points in the second round are selected according to their distance to the neighboring LC border. Upon completion of the two-round detection, a 3D smoothing with a median filter of size $5\times 5$ (A-line $\times$ slice) is applied to the whole volume to correct discontinuities that may exist in the segmentation.
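
The final smoothing step can be sketched as follows, assuming the segmented anterior-LC depths are stored as a 2D map over the (A-line, slice) grid.

```python
# Minimal sketch of the final 5x5 median smoothing of the LC depth map.
import numpy as np
from scipy.ndimage import median_filter

def smooth_lc_surface(depth_map):
    """depth_map: (A-lines, slices) array of segmented anterior-LC depths."""
    return median_filter(depth_map.astype(np.float64), size=(5, 5))
```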

5. 3D lamina cribrosa segmentation results

OCT images with automatic LC segmentation were reviewed by glaucoma specialists. For each eye, 5 B-scans were selected for review: one at the center of the ONH, and those located 10 and 20 pixels superior and inferior to the center of the ONH. The segmentation of each scan was scored as good, bad, or uncertain. Segmentation was scored as good if more than 2/3 of the anterior LC surface was accurately detected, bad if less than 2/3 of the anterior LC surface was correctly detected, and uncertain if the anterior LC surface could not be identified even by visual inspection by the expert.

Examples of LC segmentation results are shown in Fig. 11. In the images, the green dotted lines are the manual segmentation results by medical experts and the red dash-dotted lines are the automated segmentation results from our algorithm.

 

Fig. 11. Comparison of LC segmentation results using (A) original and (B) enhanced B-scan images of the same subject. The green dotted line is the manual segmentation results by medical experts and the red dash-dotted lines are the automated segmentation results from our algorithm.


In total, 180 individual B-scans from 36 subjects are reviewed for their segmentation accuracy. Detailed results are shown in Table 2. LC detectability is defined as the percentage of B-scans in which medical experts can confidently identify the anterior LC border. An improvement of over 25% in segmentation accuracy is achieved, and 98.3% of the enhanced B-scans were found detectable.


Table 2. LC segmentation accuracy results

In the present study, the ground truth segmentation results are defined as the automatic segmentation boundaries that have been manually corrected by the medical experts. The average absolute differences between the automatic segmentation boundaries and ground truth segmentation boundaries measured after different enhancement stages are shown in Table 3.


Table 3. The average absolute differences between the automatic segmentation boundaries and ground truth segmentation boundaries measured after different enhancement stages

In total, 177 B-scans (3 excluded due to uncertainty of the anterior LC) are manually corrected. The average absolute vertical difference per pixel between the automatic segmentation results and the ground truth results is calculated after each enhancement stage. Table 3 summarizes the applied enhancement methods and the corresponding average absolute differences. The sign "✓" indicates that an enhancement method is applied and "-" indicates that the method is skipped. For example, the first row in Table 3 shows the results using the original B-scan images without any enhancement, and the final row shows the results that employed all the enhancement methods.

An example of the 3D segmentation of the anterior LC is shown in Fig. 12, where (A) and (B) depict the anterior LC surface curvature. The pseudo-color display indicates an inward bowing into the ONH. To the best knowledge of the authors, this is the first visualization of the whole anterior LC surface curvature.

 

Fig. 12. Example 3D LC anterior depth surface. (A) LC anterior surface from 3D segmentation, color coded with depth information; (B) 2D visualization of the depth map with blue indicating an inwards curvature into the ONH.


The LC tissue exhibits complex 3D structural changes during the development of glaucoma. Previous OCT studies of the LC structure were typically limited to a few B-scans and quantified the LC using a single number, which may be insufficient to describe the full complexity of the LC structural changes. A complete view of the anterior LC, as enabled by our newly developed technique, may pave the way for more comprehensive studies.

6. Discussions and conclusion

In this study, a custom DL-based noise reduction algorithm for OCT images, a custom shadow compensation algorithm, and an edge-detection based segmentation algorithm were developed for the quantitative analysis of volumetric OCT scans. We presented the automatic 3D segmentation of the anterior LC as an application example that benefits greatly from the combination of these algorithms. Though the results presented are from 3D images acquired with an SS-OCT, our method can be applied to SD-OCT as well. Automatic 3D segmentation of the anterior surface of the LC is technically challenging, as multiple factors, including the locally weak signal intensity caused by the presence of large blood vessels, the complexity of the LC shape, and LC defects, may lead to segmentation errors. In this study, a 3D Canny edge detection and a two-round segmentation method that employs neighboring information were deployed to take advantage of the morphological structure in a volumetric OCT scan. For the B-scans with a clear LC border, the segmentation can match the local changes, whereas for the B-scans with an uncertain LC border due to large blood vessels, an LC border close to the neighboring LC border location is segmented with local adjustment. In this way, the present 3D segmentation algorithm improves the smoothness across the LC and segments the fine structure of the anterior LC. The significant improvements in SNR and image contrast achieved by our DL-based noise reduction and shadow compensation methods have made the use of the spatial information of the volumetric scan more feasible and practical.

Our combined algorithm was validated using OCT images from 36 subjects, and the results show that the detectability and segmentation accuracy of the anterior LC have been improved to 98.3% and 90.6%, respectively. Previous studies have shown that several morphological characteristics such as depth [12], curvature [13,16], thickness [9,15], focal defects [10,14,54], and global shape index [17,55] are strongly correlated with the presence, severity, or progression of glaucoma. Their results clearly show the potential of LC morphological characteristics as biomarkers for diagnosing and monitoring glaucoma. However, identification of the LC in these studies depends on manual segmentation, which requires highly-trained experts and considerable time. Accurate, efficient, and reproducible detection of the LC is crucial for utilizing LC parameters in glaucoma diagnosis and management. It is our hope that our imaging/analysis approach may pave the way for the future development of new LC biomarkers.

Recent advancements in DL-based algorithms and graph cut algorithms [56,57] may further improve the feasibility and accuracy of automated 3D LC segmentation, and DL networks have been applied to segment multiple retinal layers [35–39,58,59]. The development of deep learning models generally requires a large amount of training data. On the other hand, it is challenging even for medical experts to confidently identify the anterior LC throughout a whole 3D volume in raw OCT images. The OCT noise reduction and enhancement techniques presented in this paper have been shown to be a useful tool for helping medical doctors confidently identify the LC border throughout the whole 3D volume, which enabled the development of the automated 3D segmentation of the LC. More advanced automated LC segmentation methods, such as DL-based methods, may be explored in the future.

The present pilot study exhibits one example application of our newly developed imaging/analysis approach. It may also be useful for segmentation and visualization tasks involving other 3D structures, such as the choroid, anterior structures, and OCT angiography. While the present study reported segmentation results using SS-OCT, our automatic 3D segmentation method can be applied to measurements using SD-OCT as well.

Funding

Council for Science, Technology and Innovation Cross-ministerial Strategic Innovation Promotion Program (SIP), “Innovative AI Hospital System” (Funding Agency: National Institute of Biomedical Innovation, Health and Nutrition (NIBIOHN)); Charitable trust fund for ophthalmic research in commemoration of Santen Pharmaceutical's Founder.

Disclosures

ZM: Topcon Advanced Biomedical Laboratory (E), AM: Topcon Corporation (F), SM: Topcon Advanced Biomedical Laboratory (E), YD: Topcon Advanced Biomedical Laboratory (E), RK: Topcon Corporation (F), KN: Topcon Corporation (F), KC: Topcon Advanced Biomedical Laboratory (E).

References

1. H. A. Quigley and A. T. Broman, “The number of people with glaucoma worldwide in 2010 and 2020,” Br. J. Ophthalmol. 90(3), 262–267 (2006). [CrossRef]  

2. R. N. Weinreb, T. Aung, and F. A. Medeiros, “The pathophysiology and treatment of glaucoma: a review,” JAMA 311(18), 1901–1911 (2014). [CrossRef]  

3. D. R. Anderson, “Ultrastructure of human and monkey lamina cribrosa and optic nerve head,” Arch. Ophthalmol. 82(6), 800–814 (1969). [CrossRef]  

4. H. A. Quigley, E. M. Addicks, W. R. Green, and A. Maumenee, “Optic nerve damage in human glaucoma: II. the site of injury and susceptibility to damage,” Arch. Ophthalmol. 99(4), 635–649 (1981). [CrossRef]  

5. H. A. Quigley, R. M. Hohman, E. M. Addicks, R. W. Massof, and W. R. Green, “Morphologic changes in the lamina cribrosa correlated with neural loss in open-angle glaucoma,” Am. J. Ophthalmol. 95(5), 673–691 (1983). [CrossRef]  

6. H. Yang, J. C. Downs, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “3-d histomorphometry of the normal and early glaucomatous monkey optic nerve head: lamina cribrosa and peripapillary scleral position and thickness,” Invest. Ophthalmol. Visual Sci. 48(10), 4597–4607 (2007). [CrossRef]  

7. J. C. Downs, H. Yang, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “Three-dimensional histomorphometry of the normal and early glaucomatous monkey optic nerve head: neural canal and subarachnoid space architecture,” Invest. Ophthalmol. Visual Sci. 48(7), 3195–3208 (2007). [CrossRef]  

8. M. D. Roberts, V. Grau, J. Grimm, J. Reynaud, A. J. Bellezza, C. F. Burgoyne, and J. C. Downs, “Remodeling of the connective tissue microarchitecture of the lamina cribrosa in early experimental glaucoma,” Invest. Ophthalmol. Visual Sci. 50(2), 681–690 (2009). [CrossRef]  

9. R. Inoue, M. Hangai, Y. Kotera, H. Nakanishi, S. Mori, S. Morishita, and N. Yoshimura, “Three-dimensional high-speed optical coherence tomography imaging of lamina cribrosa in glaucoma,” Ophthalmology 116(2), 214–222 (2009). [CrossRef]  

10. S. Kiumehr, S. C. Park, S. Dorairaj, C. C. Teng, C. Tello, J. M. Liebmann, and R. Ritch, “In vivo evaluation of focal lamina cribrosa defects in glaucoma,” Arch. Ophthalmol. 130(5), 552–559 (2012). [CrossRef]  

11. A. Miki, Y. Ikuno, T. Asai, S. Usui, and K. Nishida, “Defects of the lamina cribrosa in high myopia and glaucoma,” PLoS One 10(9), e0137909 (2015). [CrossRef]  

12. S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, “Lamina cribrosa depth in different stages of glaucoma,” Invest. Ophthalmol. Visual Sci. 56(3), 2059–2064 (2015). [CrossRef]  

13. S. H. Lee, T.-W. Kim, E. J. Lee, M. J. Girard, and J. M. Mari, “Diagnostic power of lamina cribrosa depth and curvature in glaucoma,” Invest. Ophthalmol. Visual Sci. 58(2), 755–762 (2017). [CrossRef]  

14. O. S. Faridi, S. C. Park, R. Kabadi, D. Su, C. G. De Moraes, J. M. Liebmann, and R. Ritch, “Effect of focal lamina cribrosa defect on glaucomatous visual field progression,” Ophthalmology 121(8), 1524–1530 (2014). [CrossRef]  

15. E. J. Lee, T.-W. Kim, M. Kim, and H. Kim, “Influence of lamina cribrosa thickness and depth on the rate of progressive retinal nerve fiber layer thinning,” Ophthalmology 122(4), 721–729 (2015). [CrossRef]  

16. A. Ha, T. J. Kim, M. J. Girard, J. M. Mari, Y. K. Kim, K. H. Park, and J. W. Jeoung, “Baseline lamina cribrosa curvature and subsequent visual field progression rate in primary open-angle glaucoma,” Ophthalmology 125(12), 1898–1906 (2018). [CrossRef]  

17. S. G. Thakku, Y.-C. Tham, M. Baskaran, J.-M. Mari, N. G. Strouthidis, T. Aung, C.-Y. Cheng, and M. J. Girard, “A global shape index to characterize anterior lamina cribrosa morphology and its determinants in healthy indian eyes,” Invest. Ophthalmol. Visual Sci. 56(6), 3604–3614 (2015). [CrossRef]  

18. A. Belghith, C. Bowd, F. A. Medeiros, R. N. Weinreb, and L. M. Zangwill, “Automated segmentation of anterior lamina cribrosa surface: How the lamina cribrosa responds to intraocular pressure change in glaucoma eyes?” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2015), pp. 222–225.

19. J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–106 (1999). [CrossRef]  

20. D. C. Adler, T. H. Ko, and J. G. Fujimoto, “Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter,” Opt. Lett. 29(24), 2878–2880 (2004). [CrossRef]  

21. P. Puvanathasan and K. Bizheva, “Speckle noise reduction algorithm for optical coherence tomography based on interval type ii fuzzy set,” Opt. Express 15(24), 15747–15758 (2007). [CrossRef]  

22. J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, “Noise2noise: Learning image restoration without clean data,” arXiv preprint arXiv:1803.04189 (2018).

23. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

24. J. Schuman, C. Puliafito, and J. Fujimoto, Optical Coherence Tomography of Ocular Diseases (SLACK Incorporated, 2004).

25. J. M. Mari, N. G. Strouthidis, S. C. Park, and M. J. Girard, “Enhancement of lamina cribrosa visibility in optical coherence tomography images using adaptive compensation,” Invest. Ophthalmol. Visual Sci. 54(3), 2238–2247 (2013). [CrossRef]  

26. M. J. Girard, N. G. Strouthidis, C. R. Ethier, and J. M. Mari, “Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head,” Invest. Ophthalmol. Visual Sci. 52(10), 7738–7748 (2011). [CrossRef]  

27. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

28. K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” IEEE Trans. on Image Process. 27(9), 4608–4622 (2018). [CrossRef]  

29. J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum, “Generative adversarial networks for noise reduction in low-dose ct,” IEEE Trans. Med. Imaging 36(12), 2536–2545 (2017). [CrossRef]  

30. W. Dong, P. Wang, W. Yin, G. Shi, F. Wu, and X. Lu, “Denoising prior driven deep neural network for image restoration,” arXiv preprint arXiv:1801.06756 (2018).

31. S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, “Toward convolutional blind denoising of real photographs,” arXiv preprint arXiv:1807.04686 (2018).

32. M. Weigert, U. Schmidt, T. Boothe, M. Andreas, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

33. S. Lefkimmiatis, “Universal denoising networks: A novel cnn architecture for image denoising,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 3204–3213.

34. X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in neural information processing systems, (2016), pp. 2802–2810.

35. M. Chen, J. Wang, I. Oguz, B. L. VanderBeek, and J. C. Gee, “Automated segmentation of the choroid in EDI-OCT images with retinal pathology using convolution neural networks,” in Fetal, Infant and Ophthalmic Medical Image Analysis, (Springer, 2017), pp. 177–184.

36. X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017). [CrossRef]  

37. F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. van Grinsven, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, “Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks,” Biomed. Opt. Express 8(7), 3292–3316 (2017). [CrossRef]  

38. A. Shah, M. D. Abramoff, and X. Wu, “Simultaneous multiple surface segmentation using deep learning,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, (Springer, 2017), pp. 3–11.

39. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International conference on medical image computing and computer-assisted intervention, (Springer, 2016), pp. 424–432.

40. Y. Ma, X. Chen, W. Zhu, X. Cheng, D. Xiang, and F. Shi, “Speckle noise reduction in optical coherence tomography images based on edge-sensitive cgan,” Biomed. Opt. Express 9(11), 5129–5146 (2018). [CrossRef]  

41. K. J. Halupka, B. J. Antony, M. H. Lee, K. A. Lucy, R. S. Rai, H. Ishikawa, G. Wollstein, J. S. Schuman, and R. Garnavi, “Retinal optical coherence tomography image enhancement via deep learning,” Biomed. Opt. Express 9(12), 6205–6221 (2018). [CrossRef]  

42. A. Krull, T.-O. Buchholz, and F. Jug, “Noise2void-learning denoising from single noisy images,” arXiv preprint arXiv:1811.10980 (2018).

43. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

44. A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Pattern recognition (ICPR), 2010 20th international conference on, (IEEE, 2010), pp. 2366–2369.

45. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

46. T. Fabritius, S. Makita, Y. Hong, R. A. Myllylä, and Y. Yasuno, “Automated retinal shadow compensation of optical coherence tomography images,” J. Biomed. Opt. 14(1), 010503 (2009). [CrossRef]  

47. C. A. Toth, D. G. Narayan, S. A. Boppart, M. R. Hee, J. G. Fujimoto, R. Birngruber, C. P. Cain, C. D. DiCarlo, and W. P. Roach, “A comparison of retinal morphology viewed by optical coherence tomography and by light microscopy,” Arch. Ophthalmol. 115(11), 1425–1428 (1997). [CrossRef]  

48. A. Chan, J. S. Duker, T. H. Ko, J. G. Fujimoto, and J. S. Schuman, “Normal macular thickness measurements in healthy eyes using stratus optical coherence tomography,” Arch. Ophthalmol. 124(2), 193–198 (2006). [CrossRef]  

49. Q. Yang, C. A. Reisman, Z. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A. S. Raza, D. C. Hood, and K. Chan, “Automated layer segmentation of macular OCT images using dual-scale gradient information,” Opt. Express 18(20), 21293–21307 (2010). [CrossRef]  

50. Q. Yang, C. A. Reisman, K. Chan, R. Ramachandran, A. Raza, and D. C. Hood, “Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa,” Biomed. Opt. Express 2(9), 2493–2503 (2011). [CrossRef]  

51. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(6), 679–698 (1986). [CrossRef]  

52. M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision (Cengage Learning, 2014).

53. I. A. Sigal, B. Wang, N. G. Strouthidis, T. Akagi, and M. J. Girard, “Recent advances in OCT imaging of the lamina cribrosa,” Br. J. Ophthalmol. 98(Suppl 2), ii34–ii39 (2014). [CrossRef]  

54. A. J. Tatham, A. Miki, R. N. Weinreb, L. M. Zangwill, and F. A. Medeiros, “Defects of the lamina cribrosa in eyes with localized retinal nerve fiber layer loss,” Ophthalmology 121(1), 110–118 (2014). [CrossRef]  

55. N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019). [CrossRef]  

56. H. Danesh, R. Kafieh, H. Rabbani, and F. Hajizadeh, “Segmentation of choroidal boundary in enhanced depth imaging OCTs using a multiresolution texture based modeling in graph cuts,” Comput. Math. Methods Med. 2014, 1–9 (2014). [CrossRef]

57. D. Kaba, Y. Wang, C. Wang, X. Liu, H. Zhu, A. Salazar-Gonzalez, and Y. Li, “Retina layer segmentation using kernel graph cuts and continuous max-flow,” Opt. Express 23(6), 7366–7384 (2015). [CrossRef]  

58. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]  

59. M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009). [CrossRef]

[Crossref]

Munkberg, J.

J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, “Noise2noise: Learning image restoration without clean data,” arXiv preprint arXiv:1803.04189 (2018).

Myllylä, R. A.

T. Fabritius, S. Makita, Y. Hong, R. A. Myllylä, and Y. Yasuno, “Automated retinal shadow compensation of optical coherence tomography images,” J. Biomed. Opt. 14(1), 010503 (2009).
[Crossref]

Nakanishi, H.

R. Inoue, M. Hangai, Y. Kotera, H. Nakanishi, S. Mori, S. Morishita, and N. Yoshimura, “Three-dimensional high-speed optical coherence tomography imaging of lamina cribrosa in glaucoma,” Ophthalmology 116(2), 214–222 (2009).
[Crossref]

Narayan, D. G.

C. A. Toth, D. G. Narayan, S. A. Boppart, M. R. Hee, J. G. Fujimoto, R. Birngruber, C. P. Cain, C. D. DiCarlo, and W. P. Roach, “A comparison of retinal morphology viewed by optical coherence tomography and by light microscopy,” Arch. Ophthalmol. 115(11), 1425–1428 (1997).
[Crossref]

Netto, C.

S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, “Lamina cribrosa depth in different stages of glaucoma,” Invest. Ophthalmol. Visual Sci. 56(3), 2059–2064 (2015).
[Crossref]

Nicholas, P.

Nishida, K.

A. Miki, Y. Ikuno, T. Asai, S. Usui, and K. Nishida, “Defects of the lamina cribrosa in high myopia and glaucoma,” PLoS One 10(9), e0137909 (2015).
[Crossref]

Oguz, I.

M. Chen, J. Wang, I. Oguz, B. L. VanderBeek, and J. C. Gee, “Automated segmentation of the choroid in EDI-OCT images with retinal pathology using convolution neural networks,” in Fetal, Infant and Ophthalmic Medical Image Analysis, (Springer, 2017), pp. 177–184.

Pan, X.

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Park, K. H.

A. Ha, T. J. Kim, M. J. Girard, J. M. Mari, Y. K. Kim, K. H. Park, and J. W. Jeoung, “Baseline lamina cribrosa curvature and subsequent visual field progression rate in primary open-angle glaucoma,” Ophthalmology 125(12), 1898–1906 (2018).
[Crossref]

Park, S. C.

S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, “Lamina cribrosa depth in different stages of glaucoma,” Invest. Ophthalmol. Visual Sci. 56(3), 2059–2064 (2015).
[Crossref]

O. S. Faridi, S. C. Park, R. Kabadi, D. Su, C. G. De Moraes, J. M. Liebmann, and R. Ritch, “Effect of focal lamina cribrosa defect on glaucomatous visual field progression,” Ophthalmology 121(8), 1524–1530 (2014).
[Crossref]

J. M. Mari, N. G. Strouthidis, S. C. Park, and M. J. Girard, “Enhancement of lamina cribrosa visibility in optical coherence tomography images using adaptive compensation,” Invest. Ophthalmol. Visual Sci. 54(3), 2238–2247 (2013).
[Crossref]

S. Kiumehr, S. C. Park, S. Dorairaj, C. C. Teng, C. Tello, J. M. Liebmann, and R. Ritch, “In vivo evaluation of focal lamina cribrosa defects in glaucoma,” Arch. Ophthalmol. 130(5), 552–559 (2012).
[Crossref]

Puliafito, C.

J. Schuman, C. Puliafito, and J. Fujimoto, Optical Coherence Tomography of Ocular Diseases (SLACK Incorporated, 2004).

Puliafito, C. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Puvanathasan, P.

Quigley, H. A.

H. A. Quigley and A. T. Broman, “The number of people with glaucoma worldwide in 2010 and 2020,” Br. J. Ophthalmol. 90(3), 262–267 (2006).
[Crossref]

H. A. Quigley, R. M. Hohman, E. M. Addicks, R. W. Massof, and W. R. Green, “Morphologic changes in the lamina cribrosa correlated with neural loss in open-angle glaucoma,” Am. J. Ophthalmol. 95(5), 673–691 (1983).
[Crossref]

H. A. Quigley, E. M. Addicks, W. R. Green, and A. Maumenee, “Optic nerve damage in human glaucoma: II. the site of injury and susceptibility to damage,” Arch. Ophthalmol. 99(4), 635–649 (1981).
[Crossref]

Rabbani, H.

H. Danesh, R. Kafieh, H. Rabbani, and F. Hajizadeh, “Segmentation of choroidal boundary in enhanced depth imaging OCTS using a multiresolution texture based modeling in graph cuts,” Comput. Math. Method M. 2014, 1–9 (2014).
[Crossref]

Rai, R. S.

Ramachandran, R.

Raza, A.

Raza, A. S.

Reisman, C. A.

Reynaud, J.

M. D. Roberts, V. Grau, J. Grimm, J. Reynaud, A. J. Bellezza, C. F. Burgoyne, and J. C. Downs, “Remodeling of the connective tissue microarchitecture of the lamina cribrosa in early experimental glaucoma,” Invest. Ophthalmol. Visual Sci. 50(2), 681–690 (2009).
[Crossref]

Ritch, R.

S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, “Lamina cribrosa depth in different stages of glaucoma,” Invest. Ophthalmol. Visual Sci. 56(3), 2059–2064 (2015).
[Crossref]

O. S. Faridi, S. C. Park, R. Kabadi, D. Su, C. G. De Moraes, J. M. Liebmann, and R. Ritch, “Effect of focal lamina cribrosa defect on glaucomatous visual field progression,” Ophthalmology 121(8), 1524–1530 (2014).
[Crossref]

S. Kiumehr, S. C. Park, S. Dorairaj, C. C. Teng, C. Tello, J. M. Liebmann, and R. Ritch, “In vivo evaluation of focal lamina cribrosa defects in glaucoma,” Arch. Ophthalmol. 130(5), 552–559 (2012).
[Crossref]

Roach, W. P.

C. A. Toth, D. G. Narayan, S. A. Boppart, M. R. Hee, J. G. Fujimoto, R. Birngruber, C. P. Cain, C. D. DiCarlo, and W. P. Roach, “A comparison of retinal morphology viewed by optical coherence tomography and by light microscopy,” Arch. Ophthalmol. 115(11), 1425–1428 (1997).
[Crossref]

Roberts, M. D.

M. D. Roberts, V. Grau, J. Grimm, J. Reynaud, A. J. Bellezza, C. F. Burgoyne, and J. C. Downs, “Remodeling of the connective tissue microarchitecture of the lamina cribrosa in early experimental glaucoma,” Invest. Ophthalmol. Visual Sci. 50(2), 681–690 (2009).
[Crossref]

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International conference on medical image computing and computer-assisted intervention, (Springer, 2016), pp. 424–432.

Russell, S. R.

M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging. 28(9), 1436–1447 (2009).
[Crossref]

Sakata, L.

J. C. Downs, H. Yang, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “Three-dimensional histomorphometry of the normal and early glaucomatous monkey optic nerve head: neural canal and subarachnoid space architecture,” Invest. Ophthalmol. Visual Sci. 48(7), 3195–3208 (2007).
[Crossref]

H. Yang, J. C. Downs, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “3-d histomorphometry of the normal and early glaucomatous monkey optic nerve head: lamina cribrosa and peripapillary scleral position and thickness,” Invest. Ophthalmol. Visual Sci. 48(10), 4597–4607 (2007).
[Crossref]

Salazar-Gonzalez, A.

Sánchez, C. I.

Schmidt, D.

M. Weigert, U. Schmidt, T. Boothe, M. Andreas, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Schmidt, U.

M. Weigert, U. Schmidt, T. Boothe, M. Andreas, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Schmitt, J. M.

J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–106 (1999).
[Crossref]

Schuman, J.

J. Schuman, C. Puliafito, and J. Fujimoto, Optical Coherence Tomography of Ocular Diseases (SLACK Incorporated, 2004).

Schuman, J. S.

K. J. Halupka, B. J. Antony, M. H. Lee, K. A. Lucy, R. S. Rai, H. Ishikawa, G. Wollstein, J. S. Schuman, and R. Garnavi, “Retinal optical coherence tomography image enhancement via deep learning,” Biomed. Opt. Express 9(12), 6205–6221 (2018).
[Crossref]

A. Chan, J. S. Duker, T. H. Ko, J. G. Fujimoto, and J. S. Schuman, “Normal macular thickness measurements in healthy eyes using stratus optical coherence tomography,” Arch. Ophthalmol. 124(2), 193–198 (2006).
[Crossref]

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Shah, A.

A. Shah, M. D. Abramoff, and X. Wu, “Simultaneous multiple surface segmentation using deep learning,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, (Springer, 2017), pp. 3–11.

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Shen, C.

X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in neural information processing systems, (2016), pp. 2802–2810.

Shi, F.

Shi, G.

W. Dong, P. Wang, W. Yin, G. Shi, F. Wu, and X. Lu, “Denoising prior driven deep neural network for image restoration,” arXiv preprint arXiv:1801.06756 (2018).

Sigal, I. A.

I. A. Sigal, B. Wang, N. G. Strouthidis, T. Akagi, and M. J. Girard, “Recent advances in OCT imaging of the lamina cribrosa,” Br. J. Ophthalmol. 98(Suppl 2), ii34–ii39 (2014).
[Crossref]

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Sonka, M.

M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging. 28(9), 1436–1447 (2009).
[Crossref]

M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision (Cengage Learning, 2014).

Stinson, W. G.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Strouthidis, N. G.

N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019).
[Crossref]

S. G. Thakku, Y.-C. Tham, M. Baskaran, J.-M. Mari, N. G. Strouthidis, T. Aung, C.-Y. Cheng, and M. J. Girard, “A global shape index to characterize anterior lamina cribrosa morphology and its determinants in healthy indian eyes,” Invest. Ophthalmol. Visual Sci. 56(6), 3604–3614 (2015).
[Crossref]

I. A. Sigal, B. Wang, N. G. Strouthidis, T. Akagi, and M. J. Girard, “Recent advances in OCT imaging of the lamina cribrosa,” Br. J. Ophthalmol. 98(Suppl 2), ii34–ii39 (2014).
[Crossref]

J. M. Mari, N. G. Strouthidis, S. C. Park, and M. J. Girard, “Enhancement of lamina cribrosa visibility in optical coherence tomography images using adaptive compensation,” Invest. Ophthalmol. Visual Sci. 54(3), 2238–2247 (2013).
[Crossref]

M. J. Girard, N. G. Strouthidis, C. R. Ethier, and J. M. Mari, “Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head,” Invest. Ophthalmol. Visual Sci. 52(10), 7738–7748 (2011).
[Crossref]

Su, D.

O. S. Faridi, S. C. Park, R. Kabadi, D. Su, C. G. De Moraes, J. M. Liebmann, and R. Ritch, “Effect of focal lamina cribrosa defect on glaucomatous visual field progression,” Ophthalmology 121(8), 1524–1530 (2014).
[Crossref]

Sui, X.

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Swanson, E. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Tan, M. C.

N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019).
[Crossref]

Tan, N. Y.

N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019).
[Crossref]

Tatham, A. J.

A. J. Tatham, A. Miki, R. N. Weinreb, L. M. Zangwill, and F. A. Medeiros, “Defects of the lamina cribrosa in eyes with localized retinal nerve fiber layer loss,” Ophthalmology 121(1), 110–118 (2014).
[Crossref]

Tello, C.

S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, “Lamina cribrosa depth in different stages of glaucoma,” Invest. Ophthalmol. Visual Sci. 56(3), 2059–2064 (2015).
[Crossref]

S. Kiumehr, S. C. Park, S. Dorairaj, C. C. Teng, C. Tello, J. M. Liebmann, and R. Ritch, “In vivo evaluation of focal lamina cribrosa defects in glaucoma,” Arch. Ophthalmol. 130(5), 552–559 (2012).
[Crossref]

Teng, C. C.

S. Kiumehr, S. C. Park, S. Dorairaj, C. C. Teng, C. Tello, J. M. Liebmann, and R. Ritch, “In vivo evaluation of focal lamina cribrosa defects in glaucoma,” Arch. Ophthalmol. 130(5), 552–559 (2012).
[Crossref]

Thakku, S. G.

N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019).
[Crossref]

S. G. Thakku, Y.-C. Tham, M. Baskaran, J.-M. Mari, N. G. Strouthidis, T. Aung, C.-Y. Cheng, and M. J. Girard, “A global shape index to characterize anterior lamina cribrosa morphology and its determinants in healthy indian eyes,” Invest. Ophthalmol. Visual Sci. 56(6), 3604–3614 (2015).
[Crossref]

Tham, Y.-C.

N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019).
[Crossref]

S. G. Thakku, Y.-C. Tham, M. Baskaran, J.-M. Mari, N. G. Strouthidis, T. Aung, C.-Y. Cheng, and M. J. Girard, “A global shape index to characterize anterior lamina cribrosa morphology and its determinants in healthy indian eyes,” Invest. Ophthalmol. Visual Sci. 56(6), 3604–3614 (2015).
[Crossref]

Theelen, T.

Thompson, H.

H. Yang, J. C. Downs, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “3-d histomorphometry of the normal and early glaucomatous monkey optic nerve head: lamina cribrosa and peripapillary scleral position and thickness,” Invest. Ophthalmol. Visual Sci. 48(10), 4597–4607 (2007).
[Crossref]

J. C. Downs, H. Yang, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “Three-dimensional histomorphometry of the normal and early glaucomatous monkey optic nerve head: neural canal and subarachnoid space architecture,” Invest. Ophthalmol. Visual Sci. 48(7), 3195–3208 (2007).
[Crossref]

Tomidokoro, A.

Toth, C. A.

S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010).
[Crossref]

C. A. Toth, D. G. Narayan, S. A. Boppart, M. R. Hee, J. G. Fujimoto, R. Birngruber, C. P. Cain, C. D. DiCarlo, and W. P. Roach, “A comparison of retinal morphology viewed by optical coherence tomography and by light microscopy,” Arch. Ophthalmol. 115(11), 1425–1428 (1997).
[Crossref]

Usui, S.

A. Miki, Y. Ikuno, T. Asai, S. Usui, and K. Nishida, “Defects of the lamina cribrosa in high myopia and glaucoma,” PLoS One 10(9), e0137909 (2015).
[Crossref]

van Ginneken, B.

van Grinsven, M. J.

VanderBeek, B. L.

M. Chen, J. Wang, I. Oguz, B. L. VanderBeek, and J. C. Gee, “Automated segmentation of the choroid in EDI-OCT images with retinal pathology using convolution neural networks,” in Fetal, Infant and Ophthalmic Medical Image Analysis, (Springer, 2017), pp. 177–184.

Venhuizen, F. G.

Viergever, M. A.

J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum, “Generative adversarial networks for noise reduction in low-dose ct,” IEEE Trans. Med. Imaging 36(12), 2536–2545 (2017).
[Crossref]

Wang, B.

I. A. Sigal, B. Wang, N. G. Strouthidis, T. Akagi, and M. J. Girard, “Recent advances in OCT imaging of the lamina cribrosa,” Br. J. Ophthalmol. 98(Suppl 2), ii34–ii39 (2014).
[Crossref]

Wang, C.

Wang, J.

M. Chen, J. Wang, I. Oguz, B. L. VanderBeek, and J. C. Gee, “Automated segmentation of the choroid in EDI-OCT images with retinal pathology using convolution neural networks,” in Fetal, Infant and Ophthalmic Medical Image Analysis, (Springer, 2017), pp. 177–184.

Wang, P.

W. Dong, P. Wang, W. Yin, G. Shi, F. Wu, and X. Lu, “Denoising prior driven deep neural network for image restoration,” arXiv preprint arXiv:1801.06756 (2018).

Wang, X.

N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019).
[Crossref]

Wang, Y.

Wang, Z.

Q. Yang, C. A. Reisman, Z. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A. S. Raza, D. C. Hood, and K. Chan, “Automated layer segmentation of macular OCT images using dual-scale gradient information,” Opt. Express 18(20), 21293–21307 (2010).
[Crossref]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Wei, B.

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Weigert, M.

M. Weigert, U. Schmidt, T. Boothe, M. Andreas, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Weinreb, R. N.

A. J. Tatham, A. Miki, R. N. Weinreb, L. M. Zangwill, and F. A. Medeiros, “Defects of the lamina cribrosa in eyes with localized retinal nerve fiber layer loss,” Ophthalmology 121(1), 110–118 (2014).
[Crossref]

R. N. Weinreb, T. Aung, and F. A. Medeiros, “The pathophysiology and treatment of glaucoma: a review,” JAMA 311(18), 1901–1911 (2014).
[Crossref]

A. Belghith, C. Bowd, F. A. Medeiros, R. N. Weinreb, and L. M. Zangwill, “Automated segmentation of anterior lamina cribrosa surface: How the lamina cribrosa responds to intraocular pressure change in glaucoma eyes?” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2015), pp. 222–225.

Wilhelm, B.

M. Weigert, U. Schmidt, T. Boothe, M. Andreas, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Wollstein, G.

Wolterink, J. M.

J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum, “Generative adversarial networks for noise reduction in low-dose ct,” IEEE Trans. Med. Imaging 36(12), 2536–2545 (2017).
[Crossref]

Wu, F.

W. Dong, P. Wang, W. Yin, G. Shi, F. Wu, and X. Lu, “Denoising prior driven deep neural network for image restoration,” arXiv preprint arXiv:1801.06756 (2018).

Wu, J.

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Wu, X.

M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging. 28(9), 1436–1447 (2009).
[Crossref]

A. Shah, M. D. Abramoff, and X. Wu, “Simultaneous multiple surface segmentation using deep learning,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, (Springer, 2017), pp. 3–11.

Xiang, D.

Xiang, S.

J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–106 (1999).
[Crossref]

Yan, Z.

S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, “Toward convolutional blind denoising of real photographs,” arXiv preprint arXiv:1807.04686 (2018).

Yang, H.

H. Yang, J. C. Downs, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “3-d histomorphometry of the normal and early glaucomatous monkey optic nerve head: lamina cribrosa and peripapillary scleral position and thickness,” Invest. Ophthalmol. Visual Sci. 48(10), 4597–4607 (2007).
[Crossref]

J. C. Downs, H. Yang, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “Three-dimensional histomorphometry of the normal and early glaucomatous monkey optic nerve head: neural canal and subarachnoid space architecture,” Invest. Ophthalmol. Visual Sci. 48(7), 3195–3208 (2007).
[Crossref]

Yang, Q.

Yang, Y.-B.

X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in neural information processing systems, (2016), pp. 2802–2810.

Yasuno, Y.

T. Fabritius, S. Makita, Y. Hong, R. A. Myllylä, and Y. Yasuno, “Automated retinal shadow compensation of optical coherence tomography images,” J. Biomed. Opt. 14(1), 010503 (2009).
[Crossref]

Yin, W.

W. Dong, P. Wang, W. Yin, G. Shi, F. Wu, and X. Lu, “Denoising prior driven deep neural network for image restoration,” arXiv preprint arXiv:1801.06756 (2018).

Yin, Y.

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Yoshimura, N.

Q. Yang, C. A. Reisman, Z. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A. S. Raza, D. C. Hood, and K. Chan, “Automated layer segmentation of macular OCT images using dual-scale gradient information,” Opt. Express 18(20), 21293–21307 (2010).
[Crossref]

R. Inoue, M. Hangai, Y. Kotera, H. Nakanishi, S. Mori, S. Morishita, and N. Yoshimura, “Three-dimensional high-speed optical coherence tomography imaging of lamina cribrosa in glaucoma,” Ophthalmology 116(2), 214–222 (2009).
[Crossref]

Yung, K. M.

J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–106 (1999).
[Crossref]

Zangwill, L. M.

A. J. Tatham, A. Miki, R. N. Weinreb, L. M. Zangwill, and F. A. Medeiros, “Defects of the lamina cribrosa in eyes with localized retinal nerve fiber layer loss,” Ophthalmology 121(1), 110–118 (2014).
[Crossref]

A. Belghith, C. Bowd, F. A. Medeiros, R. N. Weinreb, and L. M. Zangwill, “Automated segmentation of anterior lamina cribrosa surface: How the lamina cribrosa responds to intraocular pressure change in glaucoma eyes?” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2015), pp. 222–225.

Zhang, K.

K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” IEEE Trans. on Image Process. 27(9), 4608–4622 (2018).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, “Toward convolutional blind denoising of real photographs,” arXiv preprint arXiv:1807.04686 (2018).

Zhang, L.

K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” IEEE Trans. on Image Process. 27(9), 4608–4622 (2018).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, “Toward convolutional blind denoising of real photographs,” arXiv preprint arXiv:1807.04686 (2018).

Zhang, S.

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Zheng, Y.

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Zhu, H.

Zhu, W.

Ziou, D.

A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Pattern recognition (ICPR), 2010 20th international conference on, (IEEE, 2010), pp. 2366–2369.

Zuo, W.

K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” IEEE Trans. on Image Process. 27(9), 4608–4622 (2018).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, “Toward convolutional blind denoising of real photographs,” arXiv preprint arXiv:1807.04686 (2018).

Am. J. Ophthalmol. (1)

H. A. Quigley, R. M. Hohman, E. M. Addicks, R. W. Massof, and W. R. Green, “Morphologic changes in the lamina cribrosa correlated with neural loss in open-angle glaucoma,” Am. J. Ophthalmol. 95(5), 673–691 (1983).
[Crossref]

Arch. Ophthalmol. (5)

D. R. Anderson, “Ultrastructure of human and monkey lamina cribrosa and optic nerve head,” Arch. Ophthalmol. 82(6), 800–814 (1969).
[Crossref]

H. A. Quigley, E. M. Addicks, W. R. Green, and A. Maumenee, “Optic nerve damage in human glaucoma: II. the site of injury and susceptibility to damage,” Arch. Ophthalmol. 99(4), 635–649 (1981).
[Crossref]

S. Kiumehr, S. C. Park, S. Dorairaj, C. C. Teng, C. Tello, J. M. Liebmann, and R. Ritch, “In vivo evaluation of focal lamina cribrosa defects in glaucoma,” Arch. Ophthalmol. 130(5), 552–559 (2012).
[Crossref]

C. A. Toth, D. G. Narayan, S. A. Boppart, M. R. Hee, J. G. Fujimoto, R. Birngruber, C. P. Cain, C. D. DiCarlo, and W. P. Roach, “A comparison of retinal morphology viewed by optical coherence tomography and by light microscopy,” Arch. Ophthalmol. 115(11), 1425–1428 (1997).
[Crossref]

A. Chan, J. S. Duker, T. H. Ko, J. G. Fujimoto, and J. S. Schuman, “Normal macular thickness measurements in healthy eyes using stratus optical coherence tomography,” Arch. Ophthalmol. 124(2), 193–198 (2006).
[Crossref]

Biomed. Opt. Express (4)

Br. J. Ophthalmol. (2)

I. A. Sigal, B. Wang, N. G. Strouthidis, T. Akagi, and M. J. Girard, “Recent advances in OCT imaging of the lamina cribrosa,” Br. J. Ophthalmol. 98(Suppl 2), ii34–ii39 (2014).
[Crossref]

H. A. Quigley and A. T. Broman, “The number of people with glaucoma worldwide in 2010 and 2020,” Br. J. Ophthalmol. 90(3), 262–267 (2006).
[Crossref]

Comput. Math. Method M. (1)

H. Danesh, R. Kafieh, H. Rabbani, and F. Hajizadeh, “Segmentation of choroidal boundary in enhanced depth imaging OCTS using a multiresolution texture based modeling in graph cuts,” Comput. Math. Method M. 2014, 1–9 (2014).
[Crossref]

IEEE Trans. Med. Imaging (1)

J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum, “Generative adversarial networks for noise reduction in low-dose ct,” IEEE Trans. Med. Imaging 36(12), 2536–2545 (2017).
[Crossref]

IEEE Trans. Med. Imaging. (1)

M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging. 28(9), 1436–1447 (2009).
[Crossref]

IEEE Trans. on Image Process. (3)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” IEEE Trans. on Image Process. 27(9), 4608–4622 (2018).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(6), 679–698 (1986).
[Crossref]

Invest. Ophthalmol. Visual Sci. (8)

J. M. Mari, N. G. Strouthidis, S. C. Park, and M. J. Girard, “Enhancement of lamina cribrosa visibility in optical coherence tomography images using adaptive compensation,” Invest. Ophthalmol. Visual Sci. 54(3), 2238–2247 (2013).
[Crossref]

M. J. Girard, N. G. Strouthidis, C. R. Ethier, and J. M. Mari, “Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head,” Invest. Ophthalmol. Visual Sci. 52(10), 7738–7748 (2011).
[Crossref]

H. Yang, J. C. Downs, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “3-d histomorphometry of the normal and early glaucomatous monkey optic nerve head: lamina cribrosa and peripapillary scleral position and thickness,” Invest. Ophthalmol. Visual Sci. 48(10), 4597–4607 (2007).
[Crossref]

J. C. Downs, H. Yang, C. Girkin, L. Sakata, A. Bellezza, H. Thompson, and C. F. Burgoyne, “Three-dimensional histomorphometry of the normal and early glaucomatous monkey optic nerve head: neural canal and subarachnoid space architecture,” Invest. Ophthalmol. Visual Sci. 48(7), 3195–3208 (2007).
[Crossref]

M. D. Roberts, V. Grau, J. Grimm, J. Reynaud, A. J. Bellezza, C. F. Burgoyne, and J. C. Downs, “Remodeling of the connective tissue microarchitecture of the lamina cribrosa in early experimental glaucoma,” Invest. Ophthalmol. Visual Sci. 50(2), 681–690 (2009).
[Crossref]

S. G. Thakku, Y.-C. Tham, M. Baskaran, J.-M. Mari, N. G. Strouthidis, T. Aung, C.-Y. Cheng, and M. J. Girard, “A global shape index to characterize anterior lamina cribrosa morphology and its determinants in healthy indian eyes,” Invest. Ophthalmol. Visual Sci. 56(6), 3604–3614 (2015).
[Crossref]

S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, “Lamina cribrosa depth in different stages of glaucoma,” Invest. Ophthalmol. Visual Sci. 56(3), 2059–2064 (2015).
[Crossref]

S. H. Lee, T.-W. Kim, E. J. Lee, M. J. Girard, and J. M. Mari, “Diagnostic power of lamina cribrosa depth and curvature in glaucoma,” Invest. Ophthalmol. Visual Sci. 58(2), 755–762 (2017).
[Crossref]

J. Biomed. Opt. (2)

J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–106 (1999).
[Crossref]

T. Fabritius, S. Makita, Y. Hong, R. A. Myllylä, and Y. Yasuno, “Automated retinal shadow compensation of optical coherence tomography images,” J. Biomed. Opt. 14(1), 010503 (2009).
[Crossref]

JAMA (1)

R. N. Weinreb, T. Aung, and F. A. Medeiros, “The pathophysiology and treatment of glaucoma: a review,” JAMA 311(18), 1901–1911 (2014).
[Crossref]

Nat. Methods (1)

M. Weigert, U. Schmidt, T. Boothe, M. Andreas, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Neurocomputing (1)

X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017).
[Crossref]

Ophthalmology (5)

A. J. Tatham, A. Miki, R. N. Weinreb, L. M. Zangwill, and F. A. Medeiros, “Defects of the lamina cribrosa in eyes with localized retinal nerve fiber layer loss,” Ophthalmology 121(1), 110–118 (2014).
[Crossref]

R. Inoue, M. Hangai, Y. Kotera, H. Nakanishi, S. Mori, S. Morishita, and N. Yoshimura, “Three-dimensional high-speed optical coherence tomography imaging of lamina cribrosa in glaucoma,” Ophthalmology 116(2), 214–222 (2009).
[Crossref]

O. S. Faridi, S. C. Park, R. Kabadi, D. Su, C. G. De Moraes, J. M. Liebmann, and R. Ritch, “Effect of focal lamina cribrosa defect on glaucomatous visual field progression,” Ophthalmology 121(8), 1524–1530 (2014).
[Crossref]

E. J. Lee, T.-W. Kim, M. Kim, and H. Kim, “Influence of lamina cribrosa thickness and depth on the rate of progressive retinal nerve fiber layer thinning,” Ophthalmology 122(4), 721–729 (2015).
[Crossref]

A. Ha, T. J. Kim, M. J. Girard, J. M. Mari, Y. K. Kim, K. H. Park, and J. W. Jeoung, “Baseline lamina cribrosa curvature and subsequent visual field progression rate in primary open-angle glaucoma,” Ophthalmology 125(12), 1898–1906 (2018).
[Crossref]

Opt. Express (4)

Opt. Lett. (1)

PLoS One (1)

A. Miki, Y. Ikuno, T. Asai, S. Usui, and K. Nishida, “Defects of the lamina cribrosa in high myopia and glaucoma,” PLoS One 10(9), e0137909 (2015).
[Crossref]

Sci. Rep. (1)

N. Y. Tan, Y.-C. Tham, S. G. Thakku, X. Wang, M. Baskaran, M. C. Tan, J.-M. Mari, N. G. Strouthidis, T. Aung, and M. J. Girard, “Changes in the anterior lamina cribrosa morphology with glaucoma severity,” Sci. Rep. 9(1), 6612 (2019).
[Crossref]

Science (1)

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Other (14)

J. Schuman, C. Puliafito, and J. Fujimoto, Optical Coherence Tomography of Ocular Diseases (SLACK Incorporated, 2004).

J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, “Noise2noise: Learning image restoration without clean data,” arXiv preprint arXiv:1803.04189 (2018).

S. Lefkimmiatis, “Universal denoising networks: A novel cnn architecture for image denoising,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 3204–3213.

X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in neural information processing systems, (2016), pp. 2802–2810.

M. Chen, J. Wang, I. Oguz, B. L. VanderBeek, and J. C. Gee, “Automated segmentation of the choroid in EDI-OCT images with retinal pathology using convolution neural networks,” in Fetal, Infant and Ophthalmic Medical Image Analysis, (Springer, 2017), pp. 177–184.

W. Dong, P. Wang, W. Yin, G. Shi, F. Wu, and X. Lu, “Denoising prior driven deep neural network for image restoration,” arXiv preprint arXiv:1801.06756 (2018).

S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, “Toward convolutional blind denoising of real photographs,” arXiv preprint arXiv:1807.04686 (2018).

A. Belghith, C. Bowd, F. A. Medeiros, R. N. Weinreb, and L. M. Zangwill, “Automated segmentation of anterior lamina cribrosa surface: How the lamina cribrosa responds to intraocular pressure change in glaucoma eyes?” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2015), pp. 222–225.

M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision (Cengage Learning, 2014).

A. Shah, M. D. Abramoff, and X. Wu, “Simultaneous multiple surface segmentation using deep learning,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, (Springer, 2017), pp. 3–11.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International conference on medical image computing and computer-assisted intervention, (Springer, 2016), pp. 424–432.

A. Krull, T.-O. Buchholz, and F. Jug, “Noise2void-learning denoising from single noisy images,” arXiv preprint arXiv:1811.10980 (2018).

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Pattern recognition (ICPR), 2010 20th international conference on, (IEEE, 2010), pp. 2366–2369.

Figures (12)

Fig. 1. Definition of the different image types used in this study. Top left: B-scans repeated 128× at the same location; top right: registered and averaged B-scan of the 128 repeats; bottom left: one of the 128 repeated B-scans; bottom right: DL noise-reduced version of the bottom-left image.
Fig. 2. Qualitative evaluation examples. (A) single B-scan; (B) median-filtered image with a $5\times 5$ filter; (C) DL-based noise-reduced image; (D) registered and averaged image of B-scans repeated 128× at the same location. For detailed inspection, (E), (F), (G), and (H) show zoom-ins of the areas highlighted by the green boxes in (A), (B), (C), and (D), respectively.
Fig. 3. Comparison of $6\times 6~mm^2$ en-face images of choroidal structures before and after noise reduction. (A) noise-reduced B-scan after flattening; the yellow line indicates the depth at which en-face images (B) and (C) were extracted; (B) en-face image extracted from the original volume at the depth indicated in (A); (C) en-face image extracted from the noise-reduced volume at the depth indicated in (A), with the red line indicating where B-frame (A) was extracted; (D) zoom-in of (B) for detailed inspection; (E) zoom-in of (C) for detailed inspection.
Fig. 4. Comparison of $6\times 6~mm^2$ en-face images of LC structures before and after noise reduction. (A) noise-reduced B-frame, with the yellow line indicating the depth at which the subsequent en-face images were extracted; (B) en-face image extracted from the original volume at the depth highlighted in (A); (C) en-face image extracted from the noise-reduced volume at the depth highlighted in (A), with the red line indicating where B-frame (A) was extracted; (D) zoom-in of (B) for detailed inspection; (E) zoom-in of (C) for detailed inspection.
Fig. 5. (A) OCT image with shadows observed underneath the blood vessels (highlighted with arrows); (B) energy profile across the B-scan, where the blue curve depicts the original energy profile containing high-frequency random noise and an energy dip due to the shadow, and the red curve shows the low-pass-filtered energy profile.
Fig. 6. Detection of the structure region of an OCT A-line. Intersections of the cutoff level (purple line) and the moving-average energy profile (red line) define the structure region (shaded area).
Fig. 7. Comparison between (A) the original and (B) compensated images. Yellow lines in the original image mark the segmented starting and ending points of the compensation. Narrower vessels (red, right-pointing arrow) are compensated better than the wide vessel (green, left-pointing arrow).
Fig. 8. (A) A sample B-scan extracted from a 3D volume; (B) the shadow-compensated result of the B-scan.
Fig. 9. (A) A bright-band artifact (enclosed in the red box) is common with the adopted compensation method [25]; the bright line can cut through the LC structure and obscure the anterior border. (B) Adjusting the contrast to reduce the noise level before compensation reduces the bright-band artifact and further improves the contrast of the LC border.
Fig. 10. A flow chart of the proposed two-round segmentation.
Fig. 11. Comparison of LC segmentation results using (A) original and (B) enhanced B-scan images of the same subject. The green dotted lines are the manual segmentation results by medical experts, and the red dash-dotted lines are the automated segmentation results from our algorithm.
Fig. 12. Example 3D LC anterior depth surface. (A) LC anterior surface from 3D segmentation, color-coded with depth information; (B) 2D visualization of the depth map, with blue indicating an inward curvature into the ONH.

Tables (3)

Table 1. PSNR and SSIM results from 3 scans at different locations of the eye.

Table 2. LC segmentation accuracy results

Table 3. The average absolute differences between the automatic segmentation boundaries and ground truth segmentation boundaries measured after different enhancement stages

Equations (11)

$$\text{total loss} = \sum_i L\left(f_\theta(\hat{x}_i),\, y_i\right)$$
$$\text{total loss} = \sum_i L\left(f_\theta(\hat{x}_i),\, \hat{y}_i\right),$$
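For context, the two loss formulations above differ only in whether the training target is a clean image $y_i$ or a second noisy observation $\hat{y}_i$ of the same scene. The fragment below is a minimal, framework-level sketch of that distinction (assuming PyTorch; the toy network, tensor shapes, and loss choice are illustrative and are not the paper's actual training setup).

```python
import torch
import torch.nn as nn

# Stand-in denoiser f_theta; the paper's actual network architecture is not reproduced here.
f_theta = nn.Conv2d(1, 1, kernel_size=3, padding=1)
criterion = nn.MSELoss()

x_hat = torch.randn(8, 1, 64, 64)   # noisy input patches x_hat_i
y_hat = torch.randn(8, 1, 64, 64)   # noisy targets y_hat_i (no clean ground truth required)

# Corresponds to L(f_theta(x_hat_i), y_hat_i) averaged over the batch;
# replacing y_hat with clean images y_i gives the first formulation.
loss = criterion(f_theta(x_hat), y_hat)
loss.backward()
```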
$$\mathrm{MSE} = \frac{1}{m \times n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ R(i,j) - I(i,j) \right]^2.$$
$$\mathrm{PSNR} = 20 \log_{10}\left(\mathrm{MAX}_I\right) - 10 \log_{10}\left(\mathrm{MSE}\right),$$
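The MSE/PSNR pair above maps directly to a few lines of array code. Below is a minimal sketch, assuming 8-bit grayscale B-scans stored as NumPy arrays, with the registered-and-averaged frame as the reference $R$; the function name and the default max_val are illustrative.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR between a reference image R and a test image I, per the MSE/PSNR equations above."""
    r = reference.astype(np.float64)
    t = test.astype(np.float64)
    mse = np.mean((r - t) ** 2)                           # MSE over all m x n pixels
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)
```

scikit-image's skimage.metrics.peak_signal_noise_ratio offers an equivalent reference implementation for cross-checking the values.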
$$\mathrm{SSIM}(R, I) = l(R, I)\, c(R, I)\, s(R, I),$$
$$l(R, I) = \frac{2\mu_R \mu_I + c_1}{\mu_R^2 + \mu_I^2 + c_1}, \quad c(R, I) = \frac{2\sigma_R \sigma_I + c_2}{\sigma_R^2 + \sigma_I^2 + c_2}, \quad s(R, I) = \frac{2\sigma_{RI} + c_3}{\sigma_R \sigma_I + c_3},$$
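For the luminance, contrast, and structure terms above, a practical option is the reference SSIM implementation in scikit-image, which follows Wang et al.'s formulation. The arrays below are random stand-ins for the averaged reference frame and a noisy test frame, used only to illustrate the call.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
# Stand-in for the registered-and-averaged reference R and a noisy test image I.
reference = rng.integers(0, 256, size=(496, 500), dtype=np.uint8)
noisy = np.clip(reference + rng.normal(0, 10, reference.shape), 0, 255).astype(np.uint8)

ssim_value = structural_similarity(reference, noisy, data_range=255)
print(f"SSIM = {ssim_value:.3f}")
```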
$$E_{i,j} = I_{i,j}^{\,n}, \quad (i = 1, 2, \ldots, N;\ j = 1, 2, \ldots, D)$$
$$E_i = \sum_{j=1}^{D} E_{i,j} = \sum_{j=1}^{D} I_{i,j}^{\,n}.$$
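The per-A-line energy definition above, together with the low-pass-filtered profile of Fig. 5 and the cutoff of Fig. 6, suggests a simple way to flag vessel-shadowed A-lines. The sketch below is only an illustration under stated assumptions: the B-scan is a (depth × A-lines) array, and the exponent n, the moving-average window, and the dip threshold are assumed values, not parameters taken from the paper.

```python
import numpy as np

def aline_energy(bscan: np.ndarray, n: int = 2) -> np.ndarray:
    """E_i = sum_j I_{i,j}^n for each A-line i; bscan has shape (D, N) = (depth, A-lines)."""
    return np.sum(bscan.astype(np.float64) ** n, axis=0)

def smoothed_profile(energy: np.ndarray, window: int = 15) -> np.ndarray:
    """Moving-average (low-pass) version of the energy profile, cf. the red curve in Fig. 5(B)."""
    kernel = np.ones(window) / window
    return np.convolve(energy, kernel, mode="same")

def shadowed_alines(energy: np.ndarray, smoothed: np.ndarray, dip_ratio: float = 0.8) -> np.ndarray:
    """Flag A-lines whose raw energy dips well below the smoothed profile (candidate vessel shadows)."""
    return energy < dip_ratio * smoothed
```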
$$C(i, j, k) = w_1\,\mathrm{Edge}(i, j, k) + w_2\,\mathrm{Gradient}_V(i, j, k) + w_3\,\mathrm{Gradient}_H(i, j, k)$$
$$\mathrm{acc}(i, j, k)\big|_{k=k_0} = \begin{cases} \infty, & j < 1 \ \text{or} \ j > m. \\ C(i, j, k_0), & i = n. \\ \min\limits_{s = j-u\,:\,j+u} \mathrm{acc}(i+1, s, k_0) + C(i, j, k_0), & \text{otherwise}. \end{cases}$$
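The cost and accumulation equations above describe a shortest-path-style search through a per-B-scan cost map. The following is a minimal sketch of the accumulation step only, assuming a precomputed cost array C of shape (n A-lines, m depths); the neighborhood half-width u is an assumed value, the weights w1–w3 and the final backtracking step are omitted, and the base case follows the i = n reading of the equation above.

```python
import numpy as np

def accumulate_cost(C: np.ndarray, u: int = 2) -> np.ndarray:
    """Dynamic-programming accumulation for one B-scan k0.
    C[i, j] is the combined edge/gradient cost at A-line i and depth j;
    the boundary is then traced by backtracking the minima of acc."""
    n, m = C.shape
    acc = np.full((n, m), np.inf, dtype=np.float64)
    acc[n - 1, :] = C[n - 1, :]                      # base case: last A-line (i = n)
    for i in range(n - 2, -1, -1):                   # propagate towards the first A-line
        for j in range(m):
            lo, hi = max(0, j - u), min(m, j + u + 1)
            acc[i, j] = acc[i + 1, lo:hi].min() + C[i, j]
    return acc
```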
$$\mathrm{conf}_{k_0} = 1 - \frac{1}{n} \sum_{p} \left| j_p - j_f \right|$$
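The confidence score above compares the boundary found in B-scan $k_0$ ($j_p$) against a reference boundary ($j_f$) through their mean absolute difference. A minimal sketch follows, assuming both boundaries are arrays of per-A-line depth positions expressed in a normalized unit so the score stays near [0, 1]; that scaling convention is an assumption, not taken from the paper.

```python
import numpy as np

def boundary_confidence(j_p: np.ndarray, j_f: np.ndarray) -> float:
    """conf_{k0} = 1 - mean(|j_p - j_f|); inputs are per-A-line boundary depths
    (assumed here to be in a normalized unit, e.g. a fraction of the axial range)."""
    j_p = np.asarray(j_p, dtype=float)
    j_f = np.asarray(j_f, dtype=float)
    return float(1.0 - np.mean(np.abs(j_p - j_f)))
```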
