
Comprehensive assessment of the anterior segment in refraction corrected OCT based on multitask learning


Abstract

Anterior segment diseases are among the leading causes of irreversible blindness. However, a method capable of recognizing all important anterior segment structures for clinical diagnosis is lacking. By sharing the knowledge learned from each task, we proposed a fully automated multitask deep learning method that allows for simultaneous segmentation and quantification of all major anterior segment structures, including the iris, lens, and cornea, as well as the implantable collamer lens (ICL) and intraocular lens (IOL), together with landmark detection of the scleral spur and iris root in anterior segment OCT (AS-OCT) images. In addition, we proposed a refraction correction method to recover the true geometry of the anterior segment, which is distorted by light refraction during OCT imaging. 1251 AS-OCT images from 180 patients were collected to train and test the model. Experiments demonstrated that our proposed network was superior to state-of-the-art segmentation and landmark detection methods, and close agreement was achieved between manually and automatically computed clinical parameters associated with the anterior chamber, pupil, iris, ICL, and IOL. Finally, as an example, we demonstrated how our proposed method can be applied to facilitate the clinical evaluation of cataract surgery.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The anterior segment is the frontmost region of the eye, as shown in Fig. 1(a1), and is associated with many severe ophthalmic diseases such as cataract, glaucoma, and high myopia. Cataract is the leading cause of blindness globally [1]. Primary angle-closure glaucoma (PACG) is another major cause of blindness [2]. High myopia can induce complications such as retinal detachment and macular hole [3] and has gained significant attention in recent years due to its increasing prevalence, especially in Asia [4]. Anterior segment structures are routinely inspected in the clinic for diagnosing these diseases, as well as for relevant surgery planning and post-surgery assessment. Scleral spur (SS) and iris root (IR) are two significant landmarks of the anterior chamber. By combining their locations with the boundary information of the cornea and iris, comprehensive clinical parameters can be computed for analyzing the anterior chamber angle (ACA), which is a significant marker for glaucoma [5]. For vision-correction surgeries, an intraocular lens (IOL) is routinely implanted to replace the cloudy natural lens for treating cataracts [1]. For patients with high myopia, an implantable collamer lens (ICL) is typically inserted into the ciliary sulcus to restore refractive function [6]. In both cases, comparisons of pre- and post-surgical anterior segment structures are required to evaluate treatment outcomes [7], [8] and to predict the risk of potential complications such as abnormal IOL tilt/decentration, abnormal ICL vault, and increased intraocular pressure after ICL implantation [9]. Therefore, accurate measurement of anterior segment morphological parameters is essential for the clinical diagnosis and management of ophthalmic diseases and surgery evaluation.

Fig. 1. AS-OCT imaging process and the main structures of the anterior segment: (a1) Illustration of the anterior segment in the eye. (a2) AS-OCT radial scanning pattern in Heidelberg ANTERION OCT system. (a3) An example of a cross-sectional AS-OCT image corresponding to one scan line in (a2). (b1-d1) Main anterior segment structures and (b2-d2) their corresponding segmentation masks: iris (purple), lens (green), cornea (yellow), IOL (blue) and ICL (red). (e1-e2) ACA and the two landmarks SS (green dot) and IR (red dot).

Anterior segment optical coherence tomography (AS-OCT) is a non-contact and non-invasive three-dimensional imaging technology that allows for depth-resolved assessment of the anterior segment with micrometer-scale resolution [10]. In particular, AS-OCT based on swept source OCT (SS-OCT) technology at the wavelength of 1300 nm (e.g. Heidelberg ANTERION) can achieve a deep penetration depth [11] and high imaging speed with fewer motion artifacts. Commercial AS-OCT systems perform either raster scanning [12] or radial scanning (Fig. 1(a2)), where each scan line corresponds to a 2D cross-sectional image (Fig. 1(a3)). As shown in Fig. 1(b1), the cornea, iris, and lens are the three main natural anterior segment structures that can be clearly delineated in an AS-OCT image. Figure 1(c1) and (d1) show IOL and ICL respectively, which are inserted during surgical procedures. Moreover, AS-OCT is able to resolve the details of ACA (Fig. 1(e1)), where the two significant landmarks SS and IR (Fig. 1(e2)) can be located. However, due to the limitations of AS-OCT, SS is only detectable in about 30${\% }$ of AS-OCT images [13].

Although AS-OCT is an excellent imaging technique, quantitative analysis of AS-OCT images demands extensive experience and manual labor. With the rapid development of artificial intelligence, several methods for analyzing AS-OCT images have been proposed in recent years [14–30]. The first class of methods relies on high-level features extracted by sophisticated deep neural networks to directly make a diagnosis for certain types of diseases. For example, Hao et al. [16] combined 2D AS-OCT images and 3D reconstruction as the input to learn features for glaucoma classification. Chase et al. [17] employed VGG19 to classify dry eye disease. Kamiya et al. [18] used ResNet-18 to detect keratoconus. Although these methods are proficient at facilitating the diagnosis of a particular type of disease, they lack the key function to generate quantitative measurements of anterior segment for other clinical scenarios such as surgery planning and assessment.

Another class of methods focuses on segmenting critical structures of the anterior segment, which can be subsequently quantified to derive clinical metrics for various disease diagnoses and surgery evaluation. Williams et al. [19] used a graph cut method to segment the anterior and posterior corneal surfaces. After segmenting the cornea and iris boundaries using a thresholding method, Tian et al. [20] detected Schwalbe’s line to locate ACA and then computed three associated clinical parameters: angle opening distance (AOD), angle recess area (ARA), and trabecular-iris space area (TISA). Later, Ni et al. [21] derived two more clinical metrics of ACA, the area of ACA centered at Schwalbe’s line and the iris curvature. Fu et al. [22] adopted label transfer and Otsu’s thresholding for segmentation and computed AOD, iris curvature (I-Curve), anterior chamber depth (ACD), and other parameters for glaucoma classification. Shang et al. [23] applied a curvilinear structure enhancement method to segment the iris. Hao et al. [15] developed a U-Net based network for iris segmentation, from which a 3D iris anterior surface was reconstructed for glaucoma classification. For lens analysis, the methods in [24] and [25] first performed a coarse segmentation and then used a shape template and a curve fitting method to refine the masks. Zhang et al. [26] combined multi-scale input to segment the lens. For IOL analysis, Schwarzenbacher et al. [27] applied an encoder-decoder network with residual blocks to segment the IOL.

In addition to the segmentation of anterior segment structures, automated detection of SS and IR is also clinically significant, especially for analyzing ACA. A semi-automatic software tool requiring users to input the location of SS was designed by Console et al. [32] for calculating ACA-related parameters. Recently, fully automatic SS detection methods have been proposed. In the Angle closure Glaucoma Evaluation Challenge (AGE) [28], the most widely used approach was a coarse-to-fine strategy [29], where a first network detected the ACA region and a second network then located the SS. Multi-scale input of the ACA [30] was also applied to learn semantic information at different scales. By splitting an AS-OCT image along the central scanning axis into left and right sub-images, Yang et al. [31] detected the SS and IR and then computed clinical parameters of the ACA. In addition to directly quantifying clinical parameters, the SS was also used to locate the ACA, which was then taken as the input to a CNN for glaucoma classification [14].

Existing methods [15,20,22–27] mainly focused on one or two structures with the application limited to 1-2 specific diseases. However, since the structural information of the iris, lens, cornea, SS, IR, IOL, and ICL nearly defines the complete anterior segment, an automatic method capable of performing all these tasks simultaneously can in theory generate more accurate results by exploiting the inherent association and correlation between these anterior structures and key points. However, no such methods have been reported for AS-OCT image analysis. Several multitask methods have been proposed for other domains [33–35]. For example, in [34], Duan et al. extracted fine-to-coarse 2.5D features and incorporated shape priors into the segmentation of multiple structures and landmarks in cardiac MRI images. Tan et al. [35] applied U-Net with multiple prediction heads to segment branches and detect bifurcation landmarks of the airway and aorta in CT images. However, there is no evidence that these methods can be directly applied to AS-OCT.

Quantitative diagnosis relies on accurate measurement of anterior segment structures. However, AS-OCT suffers from refraction distortion that can change each structure’s natural shape and location [11], as shown in Fig. 2. Westphal et al. [36] designed a method based on Fermat’s principle to perform refraction correction. Tian et al. [20] used Snell’s law to correct the refraction based on the automatically segmented anterior corneal surface. However, these approaches have limitations: the method in [36] lacked full automation and required manual input to identify the cornea, while the method in [20] did not consider the refraction at the interface between the posterior corneal surface and the aqueous humor. In the present study, the AS-OCT images were acquired by a Heidelberg ANTERION system (Heidelberg Engineering Ltd, Heidelberg, Germany). Two acquisition modes of this system are shown in Fig. 2. While the Metrics mode (Fig. 2(a)) acquires 6 refraction corrected radial cross-sectional images, the 30 degree interval between two consecutive radial scans skips many details, severely limiting the system’s capability to resolve the complete 3D structures of interest, and the details of the correction method have not been disclosed. In contrast, the Imaging mode (Fig. 2(b)) can scan more densely, but the refraction is left uncorrected. It is therefore desirable to develop a new refraction correction method that can be directly applied to the densely sampled but uncorrected AS-OCT images acquired by commercial OCT systems to allow for a more accurate 3D measurement of the relevant structures.

Fig. 2. Metrics mode (refraction corrected) and Imaging mode (refraction uncorrected) of Heidelberg ANTERION OCT imaging system: (a1-a2) The scan pattern of Metrics mode and the corresponding B-scan cross-sectional image along the scan line. (b1-b2) The scan pattern of Imaging mode and the corresponding cross-sectional image along the scan line.

In this study, we aimed to accurately identify and quantify all the necessary structures of the anterior segment and IOL/ICL to enable a comprehensive analysis of the anterior segment for various clinical applications. Our contributions are summarized as follows: (1) To the best of our knowledge, this is the first report of simultaneous segmentation and landmark detection in AS-OCT images. (2) We proposed a model that can recognize all significant structures of the anterior segment for computing nearly all possible clinical parameters for different clinical scenarios. (3) We further proposed a refraction correction method that can automatically resolve the true geometry of each structure and can serve as an extension of commercial AS-OCT systems to allow for more accurate measurements.

2. Methods

2.1 Multitask network for simultaneous segmentation and landmark detection

State-of-the-art (SOTA) medical image segmentation networks, such as U-Net [37] and TransUNet [38], typically implement an encoder path to extract multilevel features and a decoder path to upsample and fuse the high resolution input features to generate the segmentation masks. Anatomically, SS is a distinguished inner extension of the sclera located on the posterior corneal surface and IR is the apex of the angle recess between the posterior corneal surface and the anterior iris surface (Fig. 1(e2)). Thus, both the segmentation (i.e. for iris, lens, etc.) and detection tasks (i.e. for SS and IR) depend on the knowledge of the overall structure of the anterior segment and it is desirable to share the same feature extractor between the two tasks. In particular, the spatial information of different structures generated by the segmentation branch can help the detection branch quickly focus on the most likely regions (e.g., ACA) containing the target landmarks, similar to the attention mechanism [39]. Meanwhile, the information learned by the detection branch may provide the segmentation branch with more refined details to improve the segmented boundaries. Therefore, we use a feature exchange method to share knowledge in the decoding process to achieve simultaneous segmentation of the main structures in the anterior segment and landmark detection of SS and IR.

Figure 3 illustrates the workflow of our proposed method. First, AS-OCT images are fed into a multitask network to generate segmentation masks and landmark locations. Second, refraction correction is performed to resolve the true geometry of the anterior segment, and 3D interpolation is used to recover the missing boundary points between the discretely acquired cross-sectional images for more accurate quantitative measurement. Finally, comprehensive clinical parameters are computed from segmentation and landmark detection results.
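To make the interpolation step concrete, a minimal Python sketch is given below. It assumes that the refraction-corrected boundary points from all radial B-scans of one eye have already been mapped into Cartesian (x, y, z) coordinates; the function name, grid size, and scan radius are illustrative assumptions, not the exact implementation used in this study.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_surface(boundary_points, grid_size=256, radius_mm=6.0):
    """Interpolate a dense surface from the sparse radial B-scans.

    boundary_points: (N, 3) array of refraction-corrected (x, y, z) points in mm,
    gathered from all radial cross-sections of one eye (illustrative input).
    Returns a grid_size x grid_size depth map sampled on a regular (x, y) grid.
    """
    xy = boundary_points[:, :2]
    z = boundary_points[:, 2]
    xs = np.linspace(-radius_mm, radius_mm, grid_size)
    ys = np.linspace(-radius_mm, radius_mm, grid_size)
    gx, gy = np.meshgrid(xs, ys)
    # Cubic interpolation fills the gaps between the discretely acquired scans;
    # points outside the convex hull of the data are returned as NaN.
    gz = griddata(xy, z, (gx, gy), method="cubic")
    return gx, gy, gz
```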

Fig. 3. Overall workflow of the proposed AS-OCT image analysis method.

As shown in Fig. 4, following the design of U-Net [37], the proposed network consists of three main components: a feature encoder, a feature decoder, and a feature exchange module. In the encoder, five levels of Conv2d Blocks are stacked to extract high-level features, where each Conv2d Block repeats the “convolution-batch normalization-ReLU” structure twice followed by max pooling. The decoder contains one branch for segmentation of anterior segment structures and another branch for landmark detection of SS and IR, respectively. At each level of the decoder, high-level features are first upsampled and then used as the attention gating signal [39] on skip connections. The attention module suppresses activations in irrelevant regions and focuses the network on more salient features. The feature exchange module repeats the “convolution-batch normalization-ReLU” module twice to exchange features between the segmentation and the detection branch, and the upsampled features are again input to the feature exchange operation. The features from the last level in the encoder are 2$\times$ upsampled and then fed into the first level of both decoder branches, where feature exchange is not applied due to the lack of task-specific learned features. The upsampled features, the features from the attention block and the exchanged features are combined using a Conv2d Block. At the final layer, a 1$\times$1 convolution is used to generate full resolution results.
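As an illustration of the dual-branch decoding with feature exchange, a minimal PyTorch sketch of one decoder level is given below. The module names, channel sizes, and the omission of the attention gating on the skip connection are simplifying assumptions for readability; this is not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Two (convolution -> batch normalization -> ReLU) layers, as used throughout the network."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class DualDecoderLevel(nn.Module):
    """One level of the two-branch decoder with cross-task feature exchange.

    Each branch upsamples its features, receives the other branch's features
    through a feature exchange ConvBlock, and fuses the upsampled, exchanged,
    and skip-connection features (attention gating omitted for brevity).
    """
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.exchange_seg = ConvBlock(in_ch, in_ch)   # consumes detection-branch features
        self.exchange_det = ConvBlock(in_ch, in_ch)   # consumes segmentation-branch features
        self.fuse_seg = ConvBlock(in_ch * 2 + skip_ch, out_ch)
        self.fuse_det = ConvBlock(in_ch * 2 + skip_ch, out_ch)

    def forward(self, seg_feat, det_feat, skip):
        seg_up = F.interpolate(seg_feat, scale_factor=2, mode="bilinear", align_corners=False)
        det_up = F.interpolate(det_feat, scale_factor=2, mode="bilinear", align_corners=False)
        # Cross-task feature exchange: each branch sees the other's upsampled features.
        seg_x = self.exchange_seg(det_up)
        det_x = self.exchange_det(seg_up)
        seg_out = self.fuse_seg(torch.cat([seg_up, seg_x, skip], dim=1))
        det_out = self.fuse_det(torch.cat([det_up, det_x, skip], dim=1))
        return seg_out, det_out
```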

Fig. 4. Proposed multitask network for simultaneous segmentation and landmark detection for AS-OCT images (BN: batch normalization; ReLU: rectified linear unit).

Formally, we denote the input training dataset as $\mathrm {D}=\left \{\left (X_i, M_i, H_i\right ), \mathrm {i}=1,2, \ldots, \mathrm {N}\right \}$, where $\mathrm {N}$ is the number of images in the training dataset, $X_i$ represents the input image, $M_i$ stands for the ground-truth segmentation mask and $H_i$ is the ground-truth heatmap for landmark detection. For segmentation, we use a combined Dice loss and multiclass cross-entropy loss:

$$L_{\text{Dice}}=1-\frac{1}{C} \sum_{j=1}^{C} \frac{2 \sum_{k=1}^{K} M_{ip}(j, k) \cdot M_i(j, k)}{\sum_{k=1}^{K} M_{ip}(j, k)+\sum_{k=1}^{K} M_i(j, k)}.$$
$$L_{CE}={-}\frac{1}{K} \sum_{j=1}^{C} \sum_{k=1}^{K} M_i(j, k) \log \left(M_{ip}(j, k)\right).$$
$$L_{\text{Seg}}=L_{\text{Dice}}+L_{CE}.$$
where $C$ is the number of classes, $K$ is the number of pixels, and $M_{ip} \in [0,1]$ and $M_i \in \{0,1\}$ are the predicted segmentation mask and ground-truth mask, respectively, for input image $X_i$. For landmark detection, inspired by [29], we use a combined heatmap registration loss [29] and MSE loss:
$$L_{HR}=1-\frac{1}{L} \sum_{j=1}^{L} \frac{2 \sum_{k=1}^{K} H_{ip}(j, k) \cdot H_i(j, k)}{\sum_{k=1}^{K} H_{ip}(j, k)+\sum_{k=1}^{K} H_i(j, k)}.$$
$$L_{MSE}=\frac{1}{K} \sum_{j=1}^{L} \sum_{k=1}^{K}\left(H_{ip}(j, k)-H_{i}(j, k)\right)^{2}.$$
$$L_{L}=L_{HR}+L_{MSE}.$$
where $L$ is the number of landmarks, $K$ is the number of pixels, and $H_{ip}\in [0,1]$ and $H_i\in \{0,1\}$ are the predicted heatmap and ground-truth heatmap, respectively, for input image $X_i$. The overall loss function of our proposed method is:
$$\mathrm{L}=L_{S e g}+L_{L}.$$
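A sketch of the combined loss in PyTorch is given below. It assumes one-hot ground-truth masks, heatmap predictions already constrained to [0, 1] (e.g. by a sigmoid), and softmax segmentation outputs; it mirrors Eqs. (1)-(7) but is not the authors' exact code.

```python
import torch

def dice_term(pred, target, eps=1e-6):
    """Mean soft-Dice term over channels. pred, target: (B, C, H, W), pred in [0, 1]."""
    inter = (pred * target).sum(dim=(2, 3))
    denom = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()

def multitask_loss(seg_logits, seg_gt, heat_pred, heat_gt):
    """L = L_Seg + L_L, with L_Seg = L_Dice + L_CE and L_L = L_HR + L_MSE (Eqs. 1-7)."""
    seg_prob = torch.softmax(seg_logits, dim=1)
    l_dice = dice_term(seg_prob, seg_gt)                                   # Eq. (1)
    l_ce = -(seg_gt * torch.log(seg_prob.clamp_min(1e-8))).sum(dim=1).mean()  # Eq. (2)
    l_hr = dice_term(heat_pred, heat_gt)                                   # Eq. (4)
    l_mse = torch.mean((heat_pred - heat_gt) ** 2)                         # Eq. (5)
    return l_dice + l_ce + l_hr + l_mse                                    # Eq. (7)
```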

2.2 Refraction correction

During AS-OCT imaging, light refraction occurs at the interface between any two optical media with different refractive indices and can cause significant image distortion. Since air and cornea have different refractive indices, incident light (indicated by the green line in Fig. 5(a)) changes its original direction to propagate within the cornea (yellow line in Fig. 5(a)). Similarly, refraction occurs again at the interface between cornea and aqueous humor (blue line in Fig. 5(a)). However, conventional OCT image processing assumes that light travels in the same direction in cornea and aqueous humor as the initial incident light, as shown in Fig. 5(b). Therefore, the shapes of anterior segment structures are distorted in the raw AS-OCT display, which could lead to significant errors in subsequent quantitative measurement.

Fig. 5. (a) Real optical path on a refraction corrected AS-OCT image. (b) The corresponding erroneous optical path on an AS-OCT image without refraction correction. (c) Inset: The real position of A and its uncorrected position $\hat {A}$.

To correct such distortions, knowledge of the anterior and posterior corneal surface boundaries is a prerequisite for computing the incidence and refraction angles at each point on the surface. The corneal surface boundary information can be obtained from the segmentation results of the proposed multitask network. Based on the anterior corneal surface, which is not distorted by refraction, we first correct the posterior corneal surface using Snell’s law (8). Specifically, for any scanning spot shown in Fig. 5(b), taking point $\hat {A}$ in Fig. 5(c) as an example, we recover the actual direction of light in the cornea (yellow dashed line in Fig. 5(c)) using Snell’s law (8) and then compute the correct position of point A using the definition of optical pathlength (9):

$$n_{1} \sin \theta_{1}=n_{2} \sin \theta_{2}.$$
$$\frac{d}{L}=\frac{1}{n}.$$
where $\theta _{1}$ and $\theta _{2}$ are the incident and refraction angle respectively, $n_{1}$ and $n_{2}$ are the refractive indices of the medium where the incident and refracted light propagates, respectively, $d$ is the physical distance, and $L$ is the corresponding optical pathlength. The refractive indices used in this study for refraction correction are 1.376 for cornea and 1.336 for aqueous humor.
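A simplified sketch of this correction for a single point is given below. It assumes unit direction vectors, a surface normal pointing toward the incident medium, and that the uncorrected depth of $\hat{A}$ below the anterior corneal surface encodes the optical pathlength in the tissue; these are assumptions of the sketch rather than details disclosed by the scanner.

```python
import numpy as np

def snell_refract(incident, normal, n1, n2):
    """Unit refracted direction from the vector form of Snell's law (Eq. 8).

    incident, normal: unit vectors; normal points against the incident ray.
    """
    cos_i = -np.dot(normal, incident)
    r = n1 / n2
    sin_t2 = r ** 2 * (1.0 - cos_i ** 2)
    cos_t = np.sqrt(1.0 - sin_t2)
    return r * incident + (r * cos_i - cos_t) * normal

def correct_point(entry_pt, incident_dir, normal, apparent_pt, n1=1.0, n2=1.376):
    """Relocate one uncorrected point (e.g. on the posterior corneal surface).

    entry_pt: where the A-scan meets the anterior corneal surface.
    apparent_pt: uncorrected position of the point along the original A-scan.
    The apparent depth below the entry point is taken as the optical
    pathlength L, and the true physical distance is d = L / n2 along the
    refracted direction (Eqs. 8-9).
    """
    L = np.linalg.norm(apparent_pt - entry_pt)
    refr_dir = snell_refract(incident_dir, normal, n1, n2)
    return entry_pt + (L / n2) * refr_dir
```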

2.3 Quantitative measurement of clinical parameters

Based on the segmentation results of our proposed network and refraction correction, a comprehensive list of clinical parameters can be automatically computed. Here, we compared our method with other methods in terms of the capability of automatically computing different metrics of anterior segment, including the metrics associated with iris, lens, cornea, ACA, anterior chamber (AC), IOL and ICL. As shown in Table 1, our method can perform more comprehensive measurements including the lens, IOL and ICL as compared with other methods. The primary clinical parameters are listed as follows:

Table 1. Comparison of quantitative evaluation between different methods.

(1) I-Curve: perpendicular distance from the line connecting the most central and most peripheral points of the iris pigment epithelium to the posterior iris surface at the point of greatest convexity (Fig. 6(a)).

Fig. 6. Illustration of quantification of the main clinical parameters derived in this study.

(2) ACD: distance along the central scanning axis between the anterior corneal surface and the anterior lens surface (Fig. 6(a)). $\mathrm {ACD}_\mathrm {C-ICL}$: distance between the anterior corneal surface and the anterior ICL surface (Fig. 6(d)). $\mathrm {Vault}_\mathrm {ICL-L}$: distance between the posterior ICL surface and the anterior lens surface (Fig. 6(d)). $\mathrm {ACD}_\mathrm {C-IOL}$: distance between the anterior corneal surface and the anterior IOL surface (Fig. 6(e)).

(3) Anterior chamber width (ACW): distance between two SS points (Fig. 6(a)).

(4) Anterior chamber volume (ACV): volume of the anterior chamber. $\mathrm {ACV}_\mathrm {C-ICL}$: volume of the region bounded by the cornea, iris and ICL. $\mathrm {ACV}_\mathrm {ICL-L}$: chamber volume between ICL and lens. $\mathrm {ACV}_\mathrm {IOL}$: volume of the region bounded by the cornea, iris and IOL.

(5) Tilt: angle between the lens axis and the scanning axis (Fig. 6(c)). $\mathrm {Tilt}_\mathrm {IOL}$: angle between IOL axis and the scanning axis (Fig. 6(e)).

(6) Decentration: distance between the intersection of the lens axis with the lens equatorial plane and the intersection of the scanning axis with the lens equatorial plane (Fig. 6(c)). $\mathrm {Decentration}_\mathrm {IOL}$: distance between the intersections of the two axes defined above with the IOL equatorial plane (Fig. 6(e)).

(7) $\mathrm {AOD}_\mathrm {L}$: perpendicular distance from the point on the posterior corneal surface located $\mathrm {L}$ $\mathrm{\mu}$m from the SS to the anterior iris surface (Fig. 6(b); a computation sketch is given after this list).

(8) $\mathrm {TISA}_\mathrm {L}$: area bounded by $\mathrm {AOD}_\mathrm {L}$, a line from SS perpendicular to the inner scleral wall to the anterior iris surface, the posterior corneal surface and the anterior iris surface (the area filled with blue lines in Fig. 6(b)).

(9) Smooth index: ratio of the length of a straight line from the most peripheral to the most central points of the anterior iris surface to the actual length of this surface [40].

(10) Iris thickness: The shortest distance between a point on the anterior iris surface and points on the posterior iris pigment epithelium surface.

(11) Pupil diameter: The diameter of the pupil.

(12) Pupil area: The area of the pupil.

(13) Anterior tangential curvature map: tangential curvature of the anterior corneal surface (Fig. 6(f)).

(14) Pachymetry: thickness of the cornea (Fig. 6(g)).
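As referenced in items (7) and (8), a minimal NumPy sketch of how $\mathrm{AOD}_\mathrm{L}$ can be derived from the detected SS and the refraction-corrected boundary points is given below; the input conventions, the nearest-point approximation of the perpendicular distance, and the shoelace helper used for area parameters such as $\mathrm{TISA}_\mathrm{L}$ are illustrative assumptions, not the exact implementation of this study.

```python
import numpy as np

def polygon_area(points):
    """Area of a closed polygon via the shoelace formula (e.g. for TISA_L)."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def aod(ss, post_cornea, ant_iris, L_um=500.0):
    """AOD_L (Fig. 6(b)): distance from the posterior corneal point located
    L um from the scleral spur to the anterior iris surface.

    ss: (2,) SS position in mm; post_cornea, ant_iris: (N, 2) boundary points
    in mm, ordered from the SS toward the periphery (taken from the
    refraction-corrected segmentation boundaries in this sketch).
    """
    L_mm = L_um / 1000.0
    # Arc length along the posterior corneal surface, starting at the SS.
    arc = np.cumsum(np.r_[0.0, np.linalg.norm(np.diff(post_cornea, axis=0), axis=1)])
    p = post_cornea[np.argmin(np.abs(arc - L_mm))]
    # Approximate the perpendicular distance by the closest anterior iris point.
    return float(np.min(np.linalg.norm(ant_iris - p, axis=1)))
```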

3. Experimental methods

3.1 Dataset and image annotation

The AS-OCT dataset used in the study was collected by a Heidelberg ANTERION system at the Department of Cataract, Shanxi Eye Hospital, Taiyuan, China, between November 2020 and February 2022. 1251 cross-sectional AS-OCT images from 180 patients with ophthalmic diseases including glaucoma, myopia, cataract, and combinations of these diseases were collected in this study, where images with severe motion artifacts had been excluded (n=2). Among these patients, only a fraction had received IOL (8${\% }$) and ICL (5${\% }$) implantation surgery, resulting in 167 images with IOL from 14 patients and 120 images with ICL from 9 patients. All images were randomly split at the patient level for training (60${\% }$, 605 refraction corrected images from Metrics Mode and 128 refraction uncorrected images from Imaging Mode), validation (20${\% }$, 198 refraction corrected images from Metrics Mode and 50 refraction uncorrected images from Imaging Mode) and testing (20${\% }$, 214 refraction corrected images from Metrics Mode and 56 refraction uncorrected images from Imaging Mode). Informed consent was obtained from patients for this study. This study was conducted in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board of Shanxi Medical University on Mar 11, 2019 under the protocol No. 2019LL130.

Our model outputs five segmentation masks including the cornea, iris, lens, ICL and IOL, and two landmark detection targets including SS and IR. For validation, the iris, lens, cornea, IOL and ICL were first manually annotated using Amira software (Thermo Fisher Scientific) by two AS-OCT image analysts with a mean experience of 1.2$\pm$0.1 years. Meanwhile, SS and IR were marked using an open source software LabelMe [41] (MIT, Cambridge, MA). The final annotation was determined based on the consensus of the two analysts, and conflicts were resolved by consulting a senior ophthalmologist (X. W.) with more than 10 years of experience. Before resolving conflicting opinions, we found the Dice coefficients [42] between the two analysts to be 0.968$\pm$0.004, 0.994$\pm$0.002, 0.988$\pm$0.005, 0.980$\pm$0.005 and 0.993$\pm$0.005 for iris, lens, cornea, ICL and IOL respectively. Meanwhile, for landmark detection, the precision, recall, F1-score and mean Euclidean distances between the two analysts were 0.9459, 0.8235, 0.8805 and 31.09$\pm$18.94 $\mathrm{\mu}$m for SS, and 0.9618, 0.9692, 0.9655 and 34.74$\pm$20.69 $\mathrm{\mu}$m for IR, respectively.

3.2 Implementation details

All deep learning models were implemented in PyTorch [43], using two GeForce RTX 3090 GPUs. Our proposed multitask network was trained using the Adam optimizer, where the weight decay was set to 0.001. The learning rate was set to 0.0001 at first and gradually reduced. The best network parameters were selected based on the loss on the validation data. The minibatch size for training was set to 8 and the size of input images was 3$\times$512$\times$512. The maximum number of epochs was set to 150 and we used the “Early Stopping” strategy [44], where the training process was stopped when the loss on the validation dataset did not improve for 30 consecutive epochs.
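A condensed sketch of this training procedure is shown below, reusing the multitask loss sketched in Section 2.1. The "gradually reduced" learning rate is approximated here with ReduceLROnPlateau, and the model is assumed to return (segmentation logits, landmark heatmaps); both are assumptions of the sketch, not confirmed implementation details.

```python
import torch

def train(model, train_loader, val_loader, loss_fn, device="cuda",
          max_epochs=150, patience=30, lr=1e-4, weight_decay=1e-3):
    """Adam training with validation-loss early stopping, as described above."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=10)
    best, wait, best_state = float("inf"), 0, None
    for epoch in range(max_epochs):
        model.train()
        for img, mask, heat in train_loader:
            img, mask, heat = img.to(device), mask.to(device), heat.to(device)
            seg, det = model(img)
            loss = loss_fn(seg, mask, det, heat)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Validation loss drives the learning-rate schedule and early stopping.
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for img, mask, heat in val_loader:
                img, mask, heat = img.to(device), mask.to(device), heat.to(device)
                seg, det = model(img)
                val_loss += loss_fn(seg, mask, det, heat).item()
        sched.step(val_loss)
        if val_loss < best:
            best, wait = val_loss, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            wait += 1
            if wait >= patience:   # stop after 30 epochs without improvement
                break
    model.load_state_dict(best_state)
    return model
```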

3.3 Performance evaluation

The performance of segmentation was evaluated using the Dice coefficient [42]. Because landmarks may not be visible in some AS-OCT images [13], we first used classification metrics, including precision, recall, and F1-score, to evaluate whether a landmark was detectable in an image:

$$\text{Precision}=\frac{TP}{TP+FP}. \qquad \quad \text{Recall}=\frac{TP}{TP+FN}.$$
$$\text{F1-score}=\frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision}+\text{Recall}}.$$
where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively. We defined a predicted landmark as a TP if it fell within 80 $\mathrm{\mu}$m of the ground truth along both the X and Y axes [45]. Then, we evaluated the position deviation using the mean Euclidean distance (MED) between the predicted TP and ground truth landmarks. Our proposed method was compared with other widely used SOTA models, including U-Net [37], TransUNet [38], AttnUNet [39] and WRB-Net [15] for segmentation, and with Hourglass [46], HigherHRNet [47], RSN [48], the winner of AGE [29] and Yang’s method [31] for landmark detection.
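A sketch of these landmark metrics is given below. How a detection outside the tolerance is counted (here as both a false positive and a missed ground truth) is an assumption of the sketch, as is the input convention of one optional landmark per image.

```python
import numpy as np

def landmark_metrics(preds, gts, tol_um=80.0):
    """Precision, recall, F1 and mean Euclidean distance (MED) for one landmark type.

    preds, gts: lists (one entry per image) that are either None (landmark
    absent / not annotated) or a (2,) position in um. A prediction is a TP
    when it lies within tol_um of the ground truth along both axes.
    """
    tp = fp = fn = 0
    dists = []
    for p, g in zip(preds, gts):
        if p is None and g is None:
            continue                       # true negative: not used by these metrics
        if p is None:
            fn += 1
        elif g is None:
            fp += 1
        elif np.all(np.abs(p - g) <= tol_um):
            tp += 1
            dists.append(np.linalg.norm(p - g))
        else:
            fp += 1                        # detected, but outside the tolerance
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    med = float(np.mean(dists)) if dists else float("nan")
    return precision, recall, f1, med
```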

To evaluate the accuracy of the quantification of clinical parameters, Bland-Altman analysis was applied to compare automatic with manual measurements.
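For completeness, a minimal sketch of the Bland-Altman quantities (bias and 95% limits of agreement) used in this comparison is shown below; the function name is illustrative.

```python
import numpy as np

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between automatic and manual values."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    diff = auto - manual
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # 1.96 SD half-width of the limits
    return bias, (bias - half_width, bias + half_width)
```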

3.4 Clinical application of the proposed method: a case study

To demonstrate the clinical value, we applied our method to analyze the anterior segment of a patient undergoing cataract surgery. The chosen patient was a 51-year-old female with a cataract in her right eye. The patient received cataract surgery, during which phacoemulsification was performed to emulsify and aspirate the cataract with an ultrasonic handpiece, and then an IOL was implanted. AS-OCT images from the Metrics and Imaging modes of the Heidelberg ANTERION were taken before surgery and one day after surgery, respectively. Images were analyzed using the method proposed in this study, and quantitative clinical parameters were compared between pre- and post-surgery.

4. Results

4.1 Anterior segment segmentation and landmark detection

Table 2 shows the comparison of our proposed method with other SOTA deep learning methods, as well as ablation studies to validate the design choices of the model. Our proposed method performed better in the segmentation of the iris, cornea, ICL and IOL and achieved the best overall mean Dice coefficient across all tasks compared with other methods. Adding the feature exchange module and the attention module, and including the two decoder branches for multitask learning, achieved higher Dice coefficients than models without such designs. Finally, our method achieved Dice coefficients of 0.9655$\pm$0.0177, 0.9915$\pm$0.0176, 0.9878$\pm$0.0071, 0.9791$\pm$0.0101 and 0.9821$\pm$0.0304 for segmentation of the iris, lens, cornea, ICL and IOL respectively, with a mean Dice coefficient of 0.9812$\pm$0.0089. In this study, the segmentation targets are relatively large with clear boundaries and high contrast compared to the background. Therefore, the improvement that multitask learning gains from the landmark detection task may appear modest relative to the large area of these structures. However, such delicate details are necessary for the segmentation of the IOL, which can be affected by the posterior lens capsule and the anterior vitreous hyaloid.

Table 2. Evaluation of automated segmentation against ground truth.

Figure 7 shows typical segmentation examples of our proposed method compared with U-Net, AttnUNet, TransUNet and WRB-Net. The first three methods missed some parts of the iris, and WRB-Net missed the upper corner of the lens. The above methods missed a small region in the middle of ICL, which was affected by the strong reflection in AS-OCT images. WRB-Net further missed a region of ICL near the iris. U-Net performed relatively poorly on IOL segmentation. For a more complex case such as shown in Fig. 7(d), the four methods performed relatively poorly on the segmentation of the posterior IOL surface. In comparison, our method can accurately capture the contextual information of different structures and is robust to artifacts and variations of image intensity.

Fig. 7. Visual comparisons of different methods for anterior segment segmentation. (a) AS-OCT image without inserted optics. (b) AS-OCT image with ICL. (c-d) AS-OCT image with IOL. (a2)-(d2) Full-scale AS-OCT images from which the cropped images (a1)-(d1) are extracted (indicated by the white rectangles). (a3)-(d3) Segmentation results of (a2)-(d2) by our proposed method. Iris: purple, lens: green, cornea: yellow, IOL: blue and ICL: red.

Table 3 and Table 4 show the comparison results for the SS and IR landmark detection tasks. Our proposed method achieved the best performance for all metrics compared with other SOTA methods for SS, with a precision, recall, F1-score, and MED of 0.9462, 0.7961, 0.8647, and 41.77$\pm$23.13 $\mathrm{\mu}$m, respectively. For IR, our proposed method achieved the best recall, F1-score, and MED of 0.9142, 0.9433, and 37.67$\pm$23.02 $\mathrm{\mu}$m, respectively. The performance of IR detection was better than that of SS, as the IR is generally easier to detect. Some visual examples of landmark detection are illustrated in Fig. 8. Our proposed method was more robust in detecting the presence of SS and performed better on complex cases such as Fig. 8(d), where the anterior chamber angle is very narrow, because the knowledge learned by the segmentation task helped the detection branch locate the SS and IR.

Fig. 8. Visual comparisons of different methods for landmark detection of SS (green dot) and IR (red dot). (a-d) Examples of four ACA regions.

Table 3. Evaluation of SS detection against ground truth.

Table 4. Evaluation of IR detection against ground truth.

Segmentation and landmark detection results of images where corneoscleral-iris angle is fully or partially blocked by the eyelid are shown in Fig. 9. As demonstrated in Fig. 9(a)-(b), even when certain portions of the cornea and iris are blocked by the eyelid, our method can clearly delineate the visible sections and is robust to such disturbance. Meanwhile, our method can still detect the iris root and scleral spur when they are close to the shadow caused by the eyelid. For the completely blocked corneoscleral-iris angle (Fig. 9(c)), our method can appropriately disregard such regions.

Fig. 9. Segmentation and landmark detection results of images with corneoscleral-iris angle (partially) blocked by the eyelid. (a)-(c) Examples of three AS-OCT cross-sectional images.

4.2 Refraction correction evaluation

Due to rapid eye movement, there can be significant changes of the anterior segment between the two acquisitions under the “Imaging” and “Metrics” modes of the Heidelberg ANTERION, making direct comparisons on clinical AS-OCT images almost impossible. Therefore, to evaluate the accuracy of refraction correction, we used the Heidelberg ANTERION to image a bi-convex lens (LB1450-A-ML, Thorlabs, New Jersey, United States) with known specifications (center thickness: 3.9 mm, diameter: 11.4 mm, refractive index at 1300 nm: 1.5037), as shown in Fig. 10(a1), and compared the system-corrected image of the lens with the image corrected by our proposed method, shown in Fig. 10(a2) and Fig. 10(a3), respectively. To evaluate the correction error, we define the axis of the lens as the direction along which the distance between the anterior and posterior surfaces is largest, and the center of the lens as the midpoint between the anterior and posterior surfaces along this axis. Next, we align the refraction corrected lens from the Heidelberg ANTERION and that from our proposed method with reference to the center of the lens. For the anterior and posterior surfaces, we calculated the absolute distance between our corrected boundary and the ground truth boundary from the Heidelberg ANTERION. The mean error of the corrected boundary is 0.00465$\pm$0.0039 mm, which is below the resolution of the imaging system. We also illustrate refraction correction on two uncorrected clinical AS-OCT images (Fig. 10(b1) and Fig. 10(c1)); the corresponding corrected images are shown in Fig. 10(b2) and Fig. 10(c2), where the real shapes of the anterior segment structures were recovered. These qualitative and quantitative evaluations verified the accuracy of our refraction correction method.
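A small sketch of the boundary error computation after center alignment is shown below; representing the distance as the nearest-point distance between the two sampled boundaries is a simplifying assumption of the sketch.

```python
import numpy as np

def boundary_error(corrected, reference):
    """Mean and SD of the absolute distance between a corrected boundary and a
    reference boundary, both already translated so the lens center is at the origin.

    corrected: (N, 2) boundary points in mm; reference: (M, 2) boundary points in mm.
    """
    # For every corrected point, take the distance to the closest reference point.
    d = np.linalg.norm(corrected[:, None, :] - reference[None, :, :], axis=2)
    err = d.min(axis=1)
    return float(err.mean()), float(err.std())
```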

Fig. 10. Validation of refraction correction. (a1) The bi-convex lens used for comparison. (a2) The refraction corrected image from Heidelberg ANTERION. (a3) The refraction corrected image using our proposed method. (b1, c1) Two uncorrected AS-OCT images. (b2, c2) Corrected AS-OCT images by our proposed method corresponding to (b1) and (c1).

Fig. 11. Bland-Altman analysis of representative clinical parameters between manual and automatic measurements: (a) $\mathrm {AOD}_\mathrm {500}$, (b) $\mathrm {TISA}_\mathrm {500}$, (c) I-Curve, (d) Pupil diameter.

4.3 Evaluation of quantitative measurements of anterior segment

Bland-Altman analysis results for the representative clinical parameters computed automatically vs. manually are shown in Fig. 11. The mean values of the difference are -0.0015 mm for $\mathrm {AOD}_\mathrm {500}$; $3\times 10^{-6} \; \mathrm {mm^2}$ for $\mathrm {TISA}_\mathrm {500}$; 0.0045 mm for I-Curve; -0.008 mm for pupil diameter. The 1.96 SD intervals are 0.14 mm, $8\times 10^{-4} \; \mathrm {mm^2}$, 0.097 mm, and 0.056 mm, respectively. For ACD, ACW, ACV, smoothness index, iris thickness, lens decentration and lens tilt, the mean values are 0.002 mm, 0.0006 mm, 0.019 $\mathrm {mm^3}$, 0.0022, 0.0013 mm, 0.004 mm, and 0.09$^\circ$, and the 1.96 SD intervals are 0.007 mm, 0.1 mm, 1.7 $\mathrm {mm^3}$, 0.02, 0.035 mm, 0.039 mm, and 0.48$^\circ$, respectively. These results indicate that there are very close agreements and minimal bias between automatically and manually computed parameters.

4.4 Clinical case study

Examples of clinical measurements are shown in this section to demonstrate their applications in different clinical scenarios including clinical diagnosis, pre-surgery planning and post-surgery evaluation.

(1) Clinical diagnosis: Keratoconus may induce irregular astigmatism, myopia, and protrusion, resulting in mild to severe impairment of vision quality. ACD and ACV are crucial for diagnosing keratoconus [49]; the post-surgery increase of these two metrics in Table 5 is mainly because the IOL is implanted further from the cornea than the natural lens, as shown in Fig. 1(b1) and (c1). AOD and TISA are sensitive in recognizing a narrow ACA for diagnosing glaucoma [50] and can facilitate the decision of whether to perform a peripheral iridotomy [51]. In Fig. 12(a) and (b), pre-surgery ACA measurements, $\mathrm {AOD}_\mathrm {500}$ and $\mathrm {TISA}_\mathrm {500}$, are illustrated as blue dots and fit by blue lines. Due to the limitations of AS-OCT, the SS at 300$^\circ$ in the pre-surgical AS-OCT images could not be detected; its fitted position is represented by the red dot. For the iris parameters, I-Curve indicates the convexity of the iris, which is associated with the presence of PACG [52]. The smooth index quantifies the smoothness of the iris surface, which could be helpful in unilateral Fuchs uveitis diagnosis [40]. By analyzing the thickness of the iris, a thick peripheral iris roll can be diagnosed and differentiated from plateau iris, which are two major mechanisms of angle closure [53]. Besides being an indicator of glaucoma [54], iris thickness is also related to anterior segment inflammation, where thickening of the iris occurs in sympathetic ophthalmia, severe granulomatous uveitis, and masquerade syndromes [55], and iris configuration such as the SS-IR distance can help diagnose pigment dispersion syndrome [56].

Fig. 12. Quantitative measurements of ACA and iris before and after cataract surgery: (a) $\mathrm {AOD}_\mathrm {500}$, (b) $\mathrm {TISA}_\mathrm {500}$, (c) I-Curve, and (d) Smooth index (Blue: pre-surgery; Orange: post-surgery). (e1-e2) Pre-surgery iris thickness heatmap and thickness values over different regions. (f1-f2) Post-surgery iris thickness heatmap and thickness values over different regions.

Table 5. Comparison of clinical parameters between pre- and post-surgery.

(2) Pre-surgery planning: For cataract surgery, accurate measurement of ACD is important for predicting the post-surgery effective lens position and the refractive outcome, which can assist in choosing the appropriate IOL [57].

(3) Post-surgery evaluation: The tilt and decentration of the IOL and lens can facilitate the assessment of their position and of the effects of the implanted IOL on visual functions such as optical aberration, visual acuity, dysphotopsia and wavefront aberrations [7,58,59]. As shown in Table 5, the IOL had a more significant tilt angle and a smaller decentration than the natural lens. AOD and TISA are also essential in post-surgery evaluation to determine the effects of the cataract surgery on the ACA. In Fig. 12(a) and (b), the larger $\mathrm {AOD}_\mathrm {500}$ and $\mathrm {TISA}_\mathrm {500}$ indicate that cataract extraction can be potentially beneficial for treating eyes with relatively high risks of angle closure [60]. Meanwhile, by evaluating iris changes, we can also evaluate whether iris-related diseases occur as a result of the surgery. The I-Curve and smooth index, shown in Fig. 12(c) and (d), indicate that the iris was less curved and smoother after surgery than before. As shown in Fig. 12(e1)-(f2), despite a similar thickness distribution, the iris became thinner after surgery, partly due to the smaller pupil radius during imaging, which was further verified by the average thickness values over different regions.

5. Discussion

In this study, we developed a multitask deep learning method to simultaneously segment the main structures and detect the key points of the anterior segment in AS-OCT images. Based on the results of our multitask network, we then performed refraction correction to recover the true geometry of the anterior segment for subsequent quantitative analysis. Experiments verified the performance of our proposed method on segmentation, landmark detection, and the accuracy of refraction correction procedure. Finally, by applying the method to analyze a clinical case of cataract surgery, we demonstrated that our proposed method can effectively facilitate the evaluation of surgical outcomes.

There are several competing methods for the measurement of anterior segment structures. One of the most commonly used methods is ultrasound biomicroscopy (UBM) which utilizes ultrasound at 50-100 MHz for imaging. However, UBM makes direct contact with the eye and demands extensive experience to acquire images. Gonioscopy is also commonly used and is currently the gold standard for screening ACA. Similar to UBM, gonioscopy requires direct contact with the eye and is manually performed, which is time-consuming and subjective. Moreover, the improper pressure exerted on the contact lens during a gonioscopy exam may distort the anterior segment, leading to an incorrect diagnosis. In addition, although Scheimpflug imaging is non-contact, it has a low axial resolution for imaging the entire anterior segment. In sharp contrast, AS-OCT provides a non-invasive and non-contact examination method with high imaging speed, which is less affected by the experience of operators and can provide high-quality cross-sectional and 3D images.

One contribution of our study is the proposal of a multitask network to explore the association between various structures and key points of the anterior segment. Compared to existing methods that treat segmentation and landmark detection as two separate tasks [14,28,29], our proposed multitask network performs the two tasks simultaneously and effectively exploits the mutual correlation between structures and key points to benefit both tasks; its architecture is also an extensible paradigm for multitask learning. Experiments demonstrated the accuracy of our proposed method and the benefit of feature sharing. Our method can also potentially be applied to other AS-OCT systems with transfer learning [61], although further studies are needed to demonstrate this application. Another important contribution of our research is the incorporation of refraction correction, which helps to recover the true geometry of anterior segment structures such as the iris. Previous methods [20,36] either required manual input or did not consider the refraction at the interface between the posterior corneal surface and the aqueous humor. Our method can be applied to existing uncorrected images acquired by commercial AS-OCT systems to automatically correct the anterior and posterior surfaces of the cornea and generate more accurate quantification results.

Automatic assessment of anterior segment clinical parameters has the potential to significantly improve the efficiency of the clinical workflow and provide standardized and objective assessment of the anterior segment in a wide range of clinical scenarios including surgery evaluation. In addition to assisting the planning of IOL implantation mentioned in the case study, clinical parameters computed from pre-surgery AS-OCT images can help choose a proper size of ICL [62], which is essential for successful treatment of high myopia. As shown in Fig. 12, rather than computing a single iris thickness number, we can generate an iris thickness heatmap of the whole iris for a comprehensive and detailed analysis of location-specific changes. Furthermore, several anterior segment metrics, including AOD, TISA, and I-Curve, can be computed and compared from AS-OCT images acquired under different illumination intensities to evaluate the optimal examination condition [63]. Therefore, the capability of our proposed method to automatically segment and detect all the essential structures of the anterior segment is highly relevant to meeting clinical demands.

There are several limitations in our study. First, we did not correct the refraction at the anterior lens surface considering that the refractive index varies significantly among individuals [64], and the Heidelberg ANTERION does not provide any reference value for the refractive index of the lens either. Additionally, personal conditions such as diabetes [65] and cataract [66] can also cause significant changes of the refractive index of the lens. It is important to note that the curvature of the imaged anterior lens surface is significantly smaller than that of the corneal surface, making the distortion less serious. Another limitation is that our method is under the supervised learning paradigm, where manual labeling of numerous images is required. Since the anterior segment is relatively regular, semi-supervised learning may be beneficial in future work.

6. Conclusion

In summary, we proposed a multitask deep learning method for simultaneous segmentation and landmark detection in AS-OCT images, and a refraction correction method which can be applied to commercial AS-OCT images to restore the true geometry of anterior segment. Experiments verified the performance of the proposed multitask network and refraction correction method. Clinical application of our approach to a patient undergoing cataract surgery demonstrated that our method was capable of assessing the key parameters to predict potential post-surgery complications.

Funding

National Natural Science Foundation of China (62075033, 62135002, 81971697); Shanxi Eye Hospital (B201804); Shanxi Scholarship Council of China (2021-174); Sichuan Province Science and Technology Support Program (2023YFS0022); Medico-Engineering Cooperation Funds from University of Electronic Science and Technology of China (ZYGX2021YGCX004); Fundamental Research Funds for the Central Universities (ZYGX2021J009).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Code for our proposed method is available at [67].

References

1. G. Tabin, M. Chen, and L. Espandar, “Cataract surgery for the developing world,” Curr. Opinion Ophthalmology 19(1), 55–59 (2008). [CrossRef]  

2. P. J. Foster and G. J. Johnson, “Glaucoma in China: how big is the problem?” Br. J. Ophthalmol. 85(11), 1277–1282 (2001). [CrossRef]  

3. Y. Ikuno, “Overview of the complications of high myopia,” Retina 37(12), 2347–2351 (2017). [CrossRef]  

4. D. Jones and D. Luensmann, “The prevalence and impact of high myopia,” Eye contact lens 38(3), 188–196 (2012). [CrossRef]  

5. N. Porporato, M. Baskaran, R. Husain, and T. Aung, “Recent advances in anterior chamber angle imaging,” Eye 34(1), 51–59 (2020). [CrossRef]  

6. D. R. Sanders and J. A. Vukich, “Comparison of implantable contact lens and laser assisted in situ keratomileusis for moderate to high myopia,” Cornea 22(4), 324–331 (2003). [CrossRef]  

7. M. Baumeister, J. Bühren, and T. Kohnen, “Tilt and decentration of spherical and aspheric intraocular lenses: effect on higher-order aberrations,” J. Cataract. & Refract. Surg. 35(6), 1006–1012 (2009). [CrossRef]  

8. F. Memarzadeh, M. Tang, Y. Li, V. Chopra, B. Francis, and D. Huang, “Anterior segment optical coherence tomography for imaging the change in anterior chamber angle morphology after cataract surgery,” Am. J. Ophthalmol. 48(5), 3855 (2007).

9. J. F. Alfonso, C. Lisa, A. Abdelhamid, P. Fernandes, J. Jorge, and R. Montés-Micó, “Three-year follow-up of subjective vault following myopic implantable collamer lens implantation,” Graefe’s Arch. Clin. Exp. Ophthalmol. 248(12), 1827–1835 (2010). [CrossRef]  

10. M. Ang, M. Baskaran, R. M. Werkmeister, J. Chua, D. Schmidl, V. A. Dos Santos, G. Garhöfer, J. S. Mehta, and L. Schmetterer, “Anterior segment optical coherence tomography,” Prog. Retinal Eye Res. 66, 132–156 (2018). [CrossRef]  

11. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications (Springer, 2015), vol. 2, chap. 10, pp. 319–356.

12. M. Gora, K. Karnowski, M. Szkulmowski, B. J. Kaluzny, R. Huber, A. Kowalczyk, and M. Wojtkowski, “Ultra high-speed swept source oct imaging of the anterior segment of human eye at 200 khz with adjustable imaging range,” Opt. Express 17(17), 14880–14894 (2009). [CrossRef]  

13. L. M. Sakata, R. Lavanya, D. S. Friedman, H. T. Aung, S. K. Seah, P. J. Foster, and T. Aung, “Assessment of the scleral spur in anterior segment optical coherence tomography images,” Arch. Ophthalmol. 126(2), 181–185 (2008). [CrossRef]  

14. H. Fu, Y. Xu, S. Lin, D. W. K. Wong, M. Baskaran, M. Mahesh, T. Aung, and J. Liu, “Angle-closure detection in anterior segment oct based on multilevel deep network,” IEEE Trans. Cybern. 50(7), 3358–3366 (2019). [CrossRef]  

15. J. Hao, H. Fu, Y. Xu, Y. Hu, F. Li, X. Zhang, J. Liu, and Y. Zhao, “Reconstruction and quantification of 3d iris surface for angle-closure glaucoma detection in anterior segment oct,” International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2020), pp. 704–714.

16. J. Hao, F. Li, H. Hao, H. Fu, Y. Xu, R. Higashita, X. Zhang, J. Liu, and Y. Zhao, “Hybrid variation-aware network for angle-closure assessment in as-oct,” IEEE Trans. Med. Imaging 41(2), 254–265 (2021). [CrossRef]  

17. C. Chase, A. Elsawy, T. Eleiwa, E. Ozcan, M. Tolba, and M. Abou Shousha, “Comparison of autonomous as-oct deep learning algorithm and clinical dry eye tests in diagnosis of dry eye disease,” Clin. Ophthalmol. 15, 4281–4289 (2021). [CrossRef]  

18. K. Kamiya, Y. Ayatsuka, Y. Kato, F. Fujimura, M. Takahashi, N. Shoji, Y. Mori, and K. Miyata, “Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: a diagnostic accuracy study,” BMJ Open 9(9), e031313 (2019). [CrossRef]  

19. D. Williams, Y. Zheng, F. Bao, and A. Elsheikh, “Fast segmentation of anterior segment optical coherence tomography images using graph cut,” Eye and Vis 2(1), 1–6 (2015). [CrossRef]  

20. J. Tian, P. Marziliano, M. Baskaran, H.-T. Wong, and T. Aung, “Automatic anterior chamber angle assessment for hd-oct images,” IEEE Trans. Biomed. Eng. 58(11), 3242–3249 (2011). [CrossRef]  

21. S. Ni Ni, J. Tian, P. Marziliano, and H.-T. Wong, “Anterior chamber angle shape analysis and classification of glaucoma in ss-oct images,” J. Ophthalmol. 2014, 1–12 (2014). [CrossRef]  

22. H. Fu, Y. Xu, S. Lin, X. Zhang, D. W. K. Wong, J. Liu, A. F. Frangi, M. Baskaran, and T. Aung, “Segmentation and quantification for angle-closure glaucoma assessment in anterior segment oct,” IEEE Trans. Med. Imaging 36(9), 1930–1938 (2017). [CrossRef]  

23. Q. Shang, Y. Zhao, Z. Chen, H. Hao, F. Li, X. Zhang, and J. Liu, “Automated iris segmentation from anterior segment oct images with occludable angles via local phase tensor,” 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), (IEEE, 2019), pp. 4745–4749.

24. P. Yin, M. Tan, H. Min, Y. Xu, G. Xu, Q. Wu, Y. Tong, H. Risa, and J. Liu, “Automatic segmentation of cortex and nucleus in anterior segment oct images,” Computational Pathology and Ophthalmic Medical Image Analysis, (Springer, 2018), pp. 269–276.

25. G. Cao, W. Zhao, R. Higashita, J. Liu, W. Chen, J. Yuan, Y. Zhang, and M. Yang, “An efficient lens structures segmentation method on as-oct images,” 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), (IEEE, 2020), pp. 1646–1649.

26. S. Zhang, Y. Yan, P. Yin, et al., “Guided m-net for high-resolution biomedical image segmentation with weak boundaries,” International Workshop on Ophthalmic Medical Image Analysis, (Springer, 2019), pp. 43–51.

27. L. Schwarzenbacher, P. Seeböck, D. Schartmüller, C. Leydolt, R. Menapace, and U. Schmidt-Erfurth, “Automatic segmentation of intraocular lens, the retrolental space and berger’s space using deep learning,” Acta Ophthalmol. 100(8), e1611–e1616 (2022). [CrossRef]  

28. H. Fu, F. Li, X. Sun, et al., “Age challenge: angle closure glaucoma evaluation in anterior segment optical coherence tomography,” Med. Image Anal. 66, 101798 (2020). [CrossRef]  

29. X. Tao, C. Yuan, C. Bian, Y. Li, K. Ma, D. Ni, and Y. Zheng, “The winner of age challenge: Going one step further from keypoint detection to scleral spur localization,” 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2021), pp. 1284–1287.

30. C. Yuan, C. Bian, H. Kang, S. Liang, K. Ma, and Y. Zheng, “Identification of primary angle-closure on as-oct images with convolutional neural networks,” arXiv, arXiv:1910.10414 (2019). [CrossRef]  

31. G. Yang, K. Li, J. Yao, S. Chang, C. He, F. Lu, X. Wang, and Z. Wang, “Automatic measurement of anterior chamber angle parameters in AS-OCT images using deep learning,” Biomed. Opt. Express 14(4), 1378–1392 (2023). [CrossRef]  

32. J. W. Console, L. M. Sakata, T. Aung, D. S. Friedman, and M. He, “Quantitative analysis of anterior segment optical coherence tomography images: the zhongshan angle assessment program,” Br. J. Ophthalmol. 92(12), 1612–1616 (2008). [CrossRef]  

33. M. Liu, J. Zhang, E. Adeli, and D. Shen, “Joint classification and regression via deep multi-task multi-channel learning for alzheimer’s disease diagnosis,” IEEE Trans. Biomed. Eng. 66(5), 1195–1206 (2018). [CrossRef]  

34. J. Duan, G. Bello, J. Schlemper, W. Bai, T. J. Dawes, C. Biffi, A. de Marvao, G. Doumoud, D. P. O’Regan, and D. Rueckert, “Automatic 3d bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach,” IEEE Trans. Med. Imaging 38(9), 2151–2164 (2019). [CrossRef]  

35. Z. Tan, J. Feng, and J. Zhou, “Multi-task learning network for landmark detection in anatomical tree structures,” 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2021), pp. 1975–1979.

36. V. Westphal, A. M. Rollins, S. Radhakrishnan, and J. A. Izatt, “Correction of geometric and refractive image distortions in optical coherence tomography applying fermat’s principle,” Opt. Express 10(9), 397–404 (2002). [CrossRef]  

37. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

38. J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, and Y. Zhou, “Transunet: Transformers make strong encoders for medical image segmentation,” arXiv, arXiv:2102.04306 (2021). [CrossRef]  

39. J. Schlemper, O. Oktay, M. Schaap, M. Heinrich, B. Kainz, B. Glocker, and D. Rueckert, “Attention gated networks: Learning to leverage salient regions in medical images,” Med. Image Anal. 53, 197–207 (2019). [CrossRef]  

40. M. Zarei, T. Mahmoudi, H. Riazi-Esfahani, B. Mousavi, N. Ebrahimiadib, M. Yaseri, E. Khalili Pour, and H. Arabalibeik, “Automated measurement of iris surface smoothness using anterior segment optical coherence tomography,” Sci. Rep. 11(1), 8505 (2021). [CrossRef]  

41. B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, “LabelMe: a database and web-based tool for image annotation,” Int. J. Comput. Vis. 77(1-3), 157–173 (2008). [CrossRef]

42. K. H. Zou, S. K. Warfield, A. Bharatha, C. M. Tempany, M. R. Kaus, S. J. Haker, W. M. Wells III, F. A. Jolesz, and R. Kikinis, “Statistical validation of image segmentation quality based on a spatial overlap index,” Acad. Radiol. 11(2), 178–189 (2004). [CrossRef]

43. A. Paszke, S. Gross, F. Massa, et al., “Pytorch: An imperative style, high-performance deep learning library,” Advances in neural information processing systems 32, 1 (2019). [CrossRef]  

44. L. Rice, E. Wong, and Z. Kolter, “Overfitting in adversarially robust deep learning,” International Conference on Machine Learning, (PMLR, 2020), pp. 8093–8104.

45. B. Y. Xu, M. Chiang, A. A. Pardeshi, S. Moghimi, and R. Varma, “Deep neural network for scleral spur detection in anterior segment OCT images: the Chinese American Eye Study,” Trans. Vis. Sci. Tech. 9(2), 18 (2020). [CrossRef]

46. J. Yang, Q. Liu, and K. Zhang, “Stacked hourglass network for robust facial landmark localisation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2017), pp. 79–87.

47. B. Cheng, B. Xiao, J. Wang, H. Shi, T. S. Huang, and L. Zhang, “HigherHRNet: Scale-aware representation learning for bottom-up human pose estimation,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2020), pp. 5386–5395.

48. Y. Cai, Z. Wang, Z. Luo, B. Yin, A. Du, H. Wang, X. Zhang, X. Zhou, E. Zhou, and J. Sun, “Learning delicate local representations for multi-person pose estimation,” European Conference on Computer Vision, (Springer, 2020), pp. 455–472.

49. S. Emre, S. Doganay, and S. Yologlu, “Evaluation of anterior segment parameters in keratoconic eyes measured with the pentacam system,” J. Cataract. & Refract. Surg. 33(10), 1708–1712 (2007). [CrossRef]  

50. S. Radhakrishnan, J. Goldsmith, D. Huang, V. Westphal, D. K. Dueker, A. M. Rollins, J. A. Izatt, and S. D. Smith, “Comparison of optical coherence tomography and ultrasound biomicroscopy for detection of narrow anterior chamber angles,” Arch. Ophthalmol. 123(8), 1053–1059 (2005). [CrossRef]  

51. P. Misra, L. Al-Aswad, S. Daly, D. Blumberg, and R. H. Silverman, “The use of anterior segment OCT AOD and TISA parameters as an objective way to evaluate the angle (pilot study),” J. Clin. Med. 60, 5567 (2019).

52. B. Wang, L. M. Sakata, D. S. Friedman, Y.-H. Chan, M. He, R. Lavanya, T.-Y. Wong, and T. Aung, “Quantitative iris parameters and association with narrow angles,” Ophthalmology 117(1), 11–17 (2010). [CrossRef]  

53. D. S. J. Ting, V. H. Foo, L. W. Y. Yang, J. T. Sia, M. Ang, H. Lin, J. Chodosh, J. S. Mehta, and D. S. W. Ting, “Artificial intelligence for anterior segment diseases: Emerging applications in ophthalmology,” Br. J. Ophthalmol. 105(2), 158–168 (2021). [CrossRef]  

54. Z. Chen, X. Liang, W. Chen, P. Wang, and J. Wang, “Decreased iris thickness on swept-source optical coherence tomography in patients with primary open-angle glaucoma,” Clin. & Exp. Ophthalmol. 49(7), 696–703 (2021). [CrossRef]  

55. A. Invernizzi, M. Cigada, L. Savoldi, S. Cavuto, L. Fontana, and L. Cimino, “In vivo analysis of the iris thickness by spectral domain optical coherence tomography,” Br. J. Ophthalmol. 98(9), 1245–1249 (2014). [CrossRef]  

56. J. Sokol, Z. Stegman, J. M. Liebmann, and R. Ritch, “Location of the iris insertion in pigment dispersion syndrome,” Ophthalmology 103(2), 289–293 (1996). [CrossRef]  

57. A.-L. Engren and A. Behndig, “Anterior chamber depth, intraocular lens position, and refractive outcomes after cataract surgery,” J. Cataract. & Refract. Surg. 39(4), 572–577 (2013). [CrossRef]  

58. F. Taketani, T. Matuura, E. Yukawa, and Y. Hara, “Influence of intraocular lens tilt and decentration on wavefront aberrations,” J. Cataract. & Refract. Surg. 30(10), 2158–2162 (2004). [CrossRef]  

59. Z. Ashena, S. Maqsood, S. N. Ahmed, and M. A. Nanavaty, “Effect of intraocular lens tilt and decentration on visual acuity, dysphotopsia and wavefront aberrations,” Vision 4(3), 41 (2020). [CrossRef]  

60. M. Kim, K. H. Park, T.-W. Kim, and D. M. Kim, “Anterior chamber configuration changes after cataract surgery in eyes with glaucoma,” Korean J. Ophthalmol. 26(2), 97–103 (2012). [CrossRef]  

61. H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers, “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” IEEE Trans. Med. Imaging 35(5), 1285–1298 (2016). [CrossRef]  

62. T. Nakamura, N. Isogai, T. Kojima, Y. Yoshida, and Y. Sugiyama, “Implantable collamer lens sizing method based on swept-source anterior segment optical coherence tomography,” Am. J. Ophthalmol. 187, 99–107 (2018). [CrossRef]  

63. H. Masoodi, E. Jafarzadehpur, A. Esmaeili, F. Abolbashari, and S. M. A. Hosseini, “Evaluation of anterior chamber angle under dark and light conditions in angle closure glaucoma: an anterior segment OCT study,” Contact Lens Anterior Eye 37(4), 300–304 (2014). [CrossRef]

64. Y.-C. Chang, G. M. Mesquita, S. Williams, G. Gregori, F. Cabot, A. Ho, M. Ruggeri, S. H. Yoo, J.-M. Parel, and F. Manns, “In vivo measurement of the human crystalline lens equivalent refractive index using extended-depth OCT,” Biomed. Opt. Express 10(2), 411–422 (2019). [CrossRef]

65. F. Okamoto, H. Sone, T. Nonoyama, and S. Hommura, “Refractive changes in diabetic patients during intensive glycaemic control,” Br. J. Ophthalmol. 84(10), 1097–1102 (2000). [CrossRef]  

66. G. Benedek, “Theory of transparency of the eye,” Appl. Opt. 10(3), 459–473 (1971). [CrossRef]  

67. K. Li, G. Yang, S. Chang, J. Yao, C. He, F. Lu, X. Wang, and Z. Wang, “Comprehensive assessment of anterior segment in refraction corrected OCT based on multitask learning: code,” GitHub, 2023, https://github.com/kaiwenli325/AS-OCT.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Code for our proposed method is available at [67].



Figures (12)

Fig. 1. AS-OCT imaging process and the main structures of the anterior segment: (a1) Illustration of the anterior segment in the eye. (a2) AS-OCT radial scanning pattern in Heidelberg ANTERION OCT system. (a3) An example of a cross-sectional AS-OCT image corresponding to one scan line in (a2). (b1-d1) Main anterior segment structures and (b2-d2) their corresponding segmentation masks: iris (purple), lens (green), cornea (yellow), IOL (blue) and ICL (red). (e1-e2) ACA and the two landmarks SS (green dot) and IR (red dot).

Fig. 2. Metrics mode (refraction corrected) and Imaging mode (refraction uncorrected) of the Heidelberg ANTERION OCT imaging system: (a1-a2) The scan pattern of Metrics mode and the corresponding B-scan cross-sectional image along the scan line. (b1-b2) The scan pattern of Imaging mode and the corresponding cross-sectional image along the scan line.

Fig. 3. Overall workflow of the proposed AS-OCT image analysis method.

Fig. 4. Proposed multitask network for simultaneous segmentation and landmark detection in AS-OCT images (BN: batch normalization; ReLU: rectified linear unit).

Fig. 5. (a) Real optical path on a refraction corrected AS-OCT image. (b) The corresponding erroneous optical path on an AS-OCT image without refraction correction. (c) Inset: The real position of A and its uncorrected position $\hat{A}$.

Fig. 6. Illustration of the quantification of the main clinical parameters derived in this study.

Fig. 7. Visual comparisons of different methods for anterior segment segmentation. (a) AS-OCT image without inserted optics. (b) AS-OCT image with ICL. (c-d) AS-OCT images with IOL. (a2)-(d2) Full-scale AS-OCT images from which the cropped images (a1)-(d1) are extracted (indicated by the white rectangles). (a3)-(d3) Segmentation results of (a2)-(d2) by our proposed method. Iris: purple, lens: green, cornea: yellow, IOL: blue and ICL: red.

Fig. 8. Visual comparisons of different methods for landmark detection of SS (green dot) and IR (red dot). (a-d) Examples of four ACA regions.

Fig. 9. Segmentation and landmark detection results for images with the corneoscleral-iris angle (partially) blocked by the eyelid. (a)-(c) Examples of three AS-OCT cross-sectional images.

Fig. 10. Validation of refraction correction. (a1) The bi-convex lens used for comparison. (a2) The refraction corrected image from Heidelberg ANTERION. (a3) The refraction corrected image using our proposed method. (b1, c1) Two uncorrected AS-OCT images. (b2, c2) Corrected AS-OCT images by our proposed method corresponding to (b1) and (c1).

Fig. 11. Bland-Altman analysis of representative clinical parameters between manual and automatic measurements: (a) $\mathrm{AOD}_\mathrm{500}$, (b) $\mathrm{TISA}_\mathrm{500}$, (c) I-Curve, (d) Pupil diameter.

Fig. 12. Quantitative measurements of ACA and iris before and after cataract surgery: (a) $\mathrm{AOD}_\mathrm{500}$, (b) $\mathrm{TISA}_\mathrm{500}$, (c) I-Curve, and (d) Smooth index (Blue: pre-surgery; Orange: post-surgery). (e1-e2) Pre-surgery iris thickness heatmap and thickness values over different regions. (f1-f2) Post-surgery iris thickness heatmap and thickness values over different regions.

Tables (5)

Table 1. Comparison of quantitative evaluation between different methods.

Table 2. Evaluation of automated segmentation against ground truth.

Table 3. Evaluation of SS detection against ground truth.

Table 4. Evaluation of IR detection against ground truth.

Table 5. Comparison of clinical parameters between pre- and post-surgery.

Equations (11)

(1) $\mathcal{L}_{\text{Dice}} = 1 - \frac{1}{C}\sum_{j}^{C}\frac{2\sum_{k}^{K} M_i^p(j,k)\,M_i(j,k)}{\sum_{k}^{K} M_i^p(j,k)+\sum_{k}^{K} M_i(j,k)}$

(2) $\mathcal{L}_{CE} = -\frac{1}{K}\sum_{j}^{C}\sum_{k}^{K} M_i(j,k)\log\left(M_i^p(j,k)\right)$

(3) $\mathcal{L}_{\text{Seg}} = \mathcal{L}_{\text{Dice}} + \mathcal{L}_{CE}$

(4) $\mathcal{L}_{HR} = 1 - \frac{1}{L}\sum_{j}^{L}\frac{2\sum_{k}^{K} H_i^p(j,k)\,H_i(j,k)}{\sum_{k}^{K} H_i^p(j,k)+\sum_{k}^{K} H_i(j,k)}$

(5) $\mathcal{L}_{MSE} = \frac{1}{K}\sum_{j}^{L}\sum_{k}^{K}\left(H_i^p(j,k)-H_i(j,k)\right)^2$

(6) $\mathcal{L}_{L} = \mathcal{L}_{HR} + \mathcal{L}_{MSE}$

(7) $\mathcal{L} = \mathcal{L}_{\text{Seg}} + \mathcal{L}_{L}$

(8) $n_1\sin\theta_1 = n_2\sin\theta_2$

(9) $\dfrac{d}{L} = \dfrac{1}{n}$

(10) $\text{Precision} = \dfrac{TP}{TP+FP}, \quad \text{Recall} = \dfrac{TP}{TP+FN}$

(11) $\text{F1-score} = \dfrac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}}$
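
To make the loss definitions above concrete, the following is a minimal PyTorch sketch of how the segmentation loss (Eqs. 1-3), the landmark heatmap loss (Eqs. 4-6), and the total multitask loss (Eq. 7) could be assembled. The tensor shapes, the one-hot mask encoding, and the smoothing constant `eps` are illustrative assumptions and are not taken from the authors' released code [67].

```python
# Minimal sketch under assumed shapes and helper names; not the authors' released code [67].
import torch
import torch.nn.functional as F


def soft_dice_loss(pred, target, eps=1e-6):
    """Channel-averaged soft Dice loss, as in Eqs. (1) and (4).

    pred, target: tensors of shape (C, K) — per-channel maps flattened over K pixels.
    """
    inter = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    return 1.0 - (2.0 * inter / (denom + eps)).mean()


def multitask_loss(seg_logits, seg_mask, hm_pred, hm_gt):
    """Total loss of Eq. (7): segmentation branch plus landmark branch.

    seg_logits : (C, H, W) raw segmentation scores for C classes
    seg_mask   : (C, H, W) one-hot ground-truth segmentation masks
    hm_pred    : (L, H, W) predicted landmark heatmaps for L landmarks
    hm_gt      : (L, H, W) Gaussian-encoded ground-truth heatmaps
    """
    C = seg_logits.shape[0]
    probs = seg_logits.softmax(dim=0).reshape(C, -1)
    mask = seg_mask.reshape(C, -1).float()

    l_dice = soft_dice_loss(probs, mask)                          # Eq. (1)
    l_ce = F.cross_entropy(seg_logits.unsqueeze(0),
                           seg_mask.argmax(dim=0).unsqueeze(0))   # Eq. (2)
    l_seg = l_dice + l_ce                                         # Eq. (3)

    hp = hm_pred.reshape(hm_pred.shape[0], -1)
    hg = hm_gt.reshape(hm_gt.shape[0], -1)
    l_hr = soft_dice_loss(hp, hg)                                 # Eq. (4)
    l_mse = F.mse_loss(hp, hg, reduction='sum') / hp.shape[1]     # Eq. (5)
    l_landmark = l_hr + l_mse                                     # Eq. (6)

    return l_seg + l_landmark                                     # Eq. (7)
```

A single backward pass on this scalar optimizes both decoder branches jointly, which is the mechanism by which the segmentation and landmark-detection tasks share encoder features in a multitask setting.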