
Automated segmentation of the ciliary muscle in OCT images using fully convolutional networks

Open Access

Abstract

Quantifying shape changes in the ciliary muscle during accommodation is essential to understanding the potential role of the ciliary muscle in presbyopia. The ciliary muscle can be imaged in vivo using OCT, but quantifying the ciliary muscle shape from these images has been challenging, both because of the low contrast of the images at the apex of the ciliary muscle and because of the tedious work of manually segmenting the ciliary muscle shape. We present an automatic segmentation tool for OCT images of the ciliary muscle based on fully convolutional networks. A study using a dataset of 1,039 images shows that the trained fully convolutional network can successfully segment ciliary muscle images and quantify ciliary muscle thickness changes during accommodation. The study also shows that EfficientNet outperforms other backbones commonly used in the literature.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The ciliary muscle (CM) can be considered the engine of accommodation due to its central role in controlling the forces applied on the lens [1]. During accommodation, ciliary muscle contraction produces an inward movement of the apex of the ciliary processes towards the lens equator. This movement releases resting zonular tension at the lens equator and allows the lens to take a more curved shape, resulting in accommodation [2,3].

The amplitude of accommodation progressively decreases with age [4], eventually leading to presbyopia, the loss of near visual function. Presbyopia is attributed primarily to an increase in lens stiffness with age [5–8], but there are also age-related changes in the ciliary muscle which may be a factor [9]. Recent studies have used optical coherence tomography (OCT) [10–12] to image the movement of the ciliary muscle dynamically and to quantify changes in ciliary muscle thickness (CMT) during accommodation [13–19]. One of the challenges in these studies is that they produce large image datasets which cannot be analyzed efficiently using manual segmentation techniques. Kao et al. [20] and Strasser et al. [21] developed semi-automatic segmentation applications, but their approaches still require significant manual input.

To address this need, we developed an automated computational tool to quantify the shape of the ciliary muscle in transscleral OCT images. The tool combines a fully convolutional network (FCN) that segments the images with post-processing steps that correct the segmented images for distortion.

FCNs have been commonly used in ophthalmology for segmentation applications, especially of the retina and cornea [22–25]. Their use has been key in dealing with noisy biomedical images, where traditional image processing tools are less effective. To date, two neural network architectures have mainly been used to segment biomedical images: FCNs and convolutional neural networks (CNNs) [26]. The difference is that FCNs employ solely locally connected layers, such as convolution, pooling and upsampling operations, whereas CNNs also add fully-connected dense layers. This feature of FCNs considerably decreases the computational cost and preserves local spatial coherence, which is why we decided to use an FCN in this study.

2. Overview of the automatic CM segmentation tool

The general structure of the proposed segmentation tool is described in Fig. 1. OCT images of the CM are pre-processed with a contrast filter and then introduced into the FCN, which performs automatic multiclass segmentation to detect the background, other ocular structures, and the CM. The boundaries of the segmentation are then overlaid onto the original image to check the automatic segmentation results. Lastly, the segmented contours of the sclera and CM are corrected for distortion due to refraction of the OCT beam through the ocular tissues, so that biomarkers such as CM thickness and scleral thickness can be accurately quantified from the real shape of the CM.


Fig. 1. Outline of the proposed tool to automatically segment the CM images. A contrast filter is first applied to the raw images. The resulting images are introduced into the FCN, which performs the multiclass segmentation (background, other ocular structures and CM). Then, the quality of the automatic segmentation is checked by an examiner. Lastly, the segmented boundaries are corrected for distortion in MATLAB to obtain the real shape of the CM.


3. Development of the fully convolutional network

3.1 Dataset

The dataset to train the FCN was obtained from a database of transscleral Spectral-Domain OCT (SD-OCT) images of the ciliary muscle acquired on human subjects using a system that was described previously [10–12,17,27]. Briefly, the OCT system (Telesto I, Thorlabs) operates at a central wavelength of 1,325 nm and a frame rate of 13 Hz, with an axial resolution of 7.5 µm over an axial range of 2.5 mm (in air). The transscleral OCT system was combined and synchronized with an extended-depth OCT system that simultaneously acquired images of the anterior segment, including the crystalline lens [12]. The anterior segment images were used to quantify changes in lens thickness (LT) during accommodation, which served as an objective measure of the accommodative response. All studies were approved by the Institutional Review Board at the University of Miami Miller School of Medicine and followed the tenets of the Declaration of Helsinki.

The database included 104 recordings acquired along a horizontal meridian on the temporal side of the left eye of 23 human subjects, ranging in age from 16 to 45 years. Each recording consisted of a sequence of 160 OCT images of the ciliary muscle [512 × 897 pixels] and the anterior segment [400 × 2048 pixels] acquired in real-time during the response to a step stimulus of accommodation. Each recording lasted 6.17 seconds and the accommodation step stimulus was triggered 1.54 s after the start of an acquisition. The amplitude of the accommodation step stimulus ranged from 0D to 6D depending on the study.

All images (104 recordings × 160 images = 16,640 images) were checked visually to select only the images where the CM boundary was visible. From the 16,640 images, 1,039 images were used to develop the FCN as follows: 716 images were used for training (69%), 164 images for validation (16%), and 159 images for testing (15%). From the 23 subjects, 12 were used for training and validation and 11 for testing. The number of images varied between subjects, with a minimum of 5 and a maximum of 51 images per recording.

3.2 Pre-processing

The original size of the OCT images of the CM is 512 × 897 pixels. Each image was cropped to produce an even number of rows and columns [512 × 896], as needed for FCN training. The contrast of each image was adjusted with the MATLAB function ‘imadjust’ (MathWorks Inc., USA), which is based on the following mapping:

$$y = {\left( {\frac{{x - a}}{{b - a}}} \right)^\gamma }$$
where y and x are the resulting and original pixel values, respectively, $\gamma $ specifies the shape of the intensity mapping curve (i.e., the brightness of the image), and a and b are the lower and upper limits used for contrast stretching of the image, respectively. Parameters a and b were automatically calculated with the ‘stretchlim’ MATLAB function, while $\gamma = 1.5$ was applied to all images. This pre-processing was applied to all annotated images and should be applied to all images submitted to the automatic segmentation tool developed in this study.
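For illustration, this pre-processing step can be sketched in Python as follows. The 1%/99% percentile limits are an assumption intended to mimic the default behavior of ‘stretchlim’, and the function name is ours:

```python
# Minimal NumPy sketch of the contrast adjustment in Eq. (1).
import numpy as np

def adjust_contrast(img, gamma=1.5, low_pct=1.0, high_pct=99.0):
    """Stretch image intensities to [0, 1] and apply a gamma curve (Eq. 1)."""
    img = img.astype(np.float64)
    a, b = np.percentile(img, [low_pct, high_pct])    # lower/upper limits a, b
    stretched = np.clip((img - a) / (b - a), 0.0, 1.0)
    return stretched ** gamma                         # y = ((x - a) / (b - a))^gamma
```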

3.3 Annotation

All 1,039 selected images were segmented manually with an in-house MATLAB code to generate the datasets needed for training and testing the FCN. Manual segmentation was performed by one examiner familiar with ocular anatomy. Each image contained three labels: the background, other ocular structures, and the ciliary muscle, making this a multiclass image segmentation problem.

3.4 Design of the fully convolutional network

This section first justifies the choice of UNet over other FCN architectures for this multiclass segmentation problem. Then, the loss function and optimizer used to train all FCNs evaluated in this study are explained. Lastly, the metrics used to evaluate the FCN performance are described.

3.4.1 Choice of architecture

We evaluated the performance of several FCNs for biomedical imaging, including UNet [28] and LinkNet [29] with different backbone structures such as ResNet34 [30], EfficientNetb2 [31], MobileNetv2 [32], Vgg19 [33] and EfficientNetb4 [31], for segmenting the OCT images of the CM. The UNet architecture (Fig. 2) was ultimately chosen for this study due to its higher performance than the others.


Fig. 2. Architecture of the FCN used in this study (UNet with EfficientNetb2 as backbone). The green arrows indicate a convolutional layer with a stride of 2 (performing the convolution and resizing the image at the same time). The cyan arrows indicate a conventional convolutional layer. The cyan arrows and dotted lines indicate that several operations (several Conv2D layers) were performed in the network but are not drawn for graphical clarity. The orange arrows indicate an upsampling operation. The code and the model summary are reported in the following repository [38].
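One plausible way to instantiate this architecture is with the segmentation_models Keras library; the exact calls below are an assumption for illustration, and the authors' actual code and model summary are available in the repository [38]:

```python
# Plausible sketch of the UNet/EfficientNetb2 model; treat the library choice
# and exact arguments as assumptions rather than the authors' implementation.
import segmentation_models as sm

sm.set_framework('tf.keras')

model = sm.Unet(
    backbone_name='efficientnetb2',   # encoder backbone
    input_shape=(512, 896, 1),        # cropped grayscale OCT frame
    classes=3,                        # background, other structures, CM
    activation='softmax',             # multiclass output
    encoder_weights=None,             # trained from scratch (no transfer learning)
)
```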


3.4.2 Choice of loss function

The following loss function, which measures how far a predicted value is from the true value, was used to train the FCN:

$$\textrm{loss function} = \textrm{Dice loss} + \textrm{Categorical Focal loss },$$
where the Dice loss and the Categorical Focal loss, two statistical measures used to gauge the similarity between the predicted and ground-truth segmentations, are:
$$Dice\; loss = 1 - \frac{{2{\cdot}TP}}{{2{\cdot}TP + FP + FN}}\; ,$$
$$Categorical\; Focal\; loss = \; - gt\cdot\; 0.25\cdot{({1 - pr} )^\alpha }\cdot\log ({pr} )\; ,$$
where TP, FP and FN denote the true positives, false positives and false negatives between the predicted and ground-truth pixels, respectively, and ‘gt’ and ‘pr’ are the ground-truth pixel value and the predicted probability, respectively. The focusing parameter, α, smoothly adjusts the rate at which easy examples are down-weighted. This parameter was set to 4 to handle highly imbalanced data (on average, the background, other ocular structures and CM accounted for approximately 22%, 68% and 10% of the pixels in each image, respectively) [34]. Moreover, to address the imbalance between easy and difficult samples (the background can be considered an easy sample whilst the CM is a difficult one), class weights of 0.5, 0.1 and 2.4 were imposed in the Dice loss for the background, other ocular structures and CM pixels, respectively. Thus, the FCN focused more on correctly segmenting the CM pixels than the other classes. The weights were chosen based on a pre-screening analysis of the FCN training.
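A minimal tf.keras sketch of this combined loss, written directly from Eqs. (2)–(4) with the class weights and focusing parameter stated above, could look as follows (the implementation in the repository [38] may differ in detail):

```python
# Sketch of Eqs. (2)-(4): weighted Dice loss plus categorical focal loss.
import tensorflow as tf

CLASS_WEIGHTS = tf.constant([0.5, 0.1, 2.4])  # background, other structures, CM

def dice_loss(y_true, y_pred, smooth=1e-6):
    """Weighted Dice loss, Eq. (3), computed per class and averaged."""
    axes = [0, 1, 2]                                    # sum over batch and pixels
    tp = tf.reduce_sum(y_true * y_pred, axis=axes)
    fp = tf.reduce_sum((1.0 - y_true) * y_pred, axis=axes)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred), axis=axes)
    dice = (2.0 * tp + smooth) / (2.0 * tp + fp + fn + smooth)
    return tf.reduce_sum(CLASS_WEIGHTS * (1.0 - dice)) / tf.reduce_sum(CLASS_WEIGHTS)

def categorical_focal_loss(y_true, y_pred, alpha=4.0, eps=1e-7):
    """Categorical focal loss, Eq. (4), with weighting factor 0.25."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    focal = -y_true * 0.25 * tf.pow(1.0 - y_pred, alpha) * tf.math.log(y_pred)
    return tf.reduce_mean(tf.reduce_sum(focal, axis=-1))

def total_loss(y_true, y_pred):
    """Eq. (2): Dice loss + categorical focal loss."""
    return dice_loss(y_true, y_pred) + categorical_focal_loss(y_true, y_pred)
```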

3.4.3 Training optimization

Network training was conducted using the Adam optimizer, as it has demonstrated good performance in training neural networks [35,36], with an initial learning rate of 0.001. We also included a callback that reduces the learning rate if the loss function has not decreased in 10 epochs. Finally, the ‘Softmax’ activation function was used for the last layer, as is common for multiclass segmentation problems, because its output can be interpreted as a probability distribution over the classes.
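In Keras terms, this configuration could be sketched as follows, reusing the model and loss from the previous sketches; the learning-rate reduction factor is an assumption, as only the initial rate and the 10-epoch patience are stated:

```python
# Plausible optimizer and learning-rate scheduling setup.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss',   # watch the validation loss
    patience=10,          # no improvement in 10 epochs triggers a reduction
    factor=0.5,           # assumed reduction factor
)

# 'model' and 'total_loss' refer to the earlier sketches; the softmax output
# layer is already part of the model definition.
model.compile(optimizer=optimizer, loss=total_loss)
```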

3.4.4 Performance assessment

To assess the FCN performance, we quantified the Intersection Over Union (IoU, also known as the Jaccard index), which is particularly suited for multiclass segmentation problems:

$$IoU({gt,pr} )= \frac{{|{gt \cap pr} |}}{{|{gt \cup pr} |}} = \frac{{\textrm{TP}}}{{TP + FP + FN}}\; .$$

Moreover, the F-score metric was also computed to compare our results with other biomedical segmentation studies:

$$F\textrm{-}score = \frac{{2{\cdot}TP}}{{2{\cdot}TP + FP + FN}}\; .$$
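For illustration, both metrics can be computed per class from two integer label maps as in the following sketch:

```python
# Sketch of Eqs. (5) and (6) evaluated for one class of a label map.
import numpy as np

def iou_and_fscore(gt, pred, label):
    """Return (IoU, F-score) for one class label (assumes the class occurs)."""
    gt_mask, pred_mask = (gt == label), (pred == label)
    tp = np.logical_and(gt_mask, pred_mask).sum()
    fp = np.logical_and(~gt_mask, pred_mask).sum()
    fn = np.logical_and(gt_mask, ~pred_mask).sum()
    iou = tp / (tp + fp + fn)                 # Eq. (5)
    fscore = 2 * tp / (2 * tp + fp + fn)      # Eq. (6)
    return iou, fscore
```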

All FCNs were trained for 150 epochs, which was sufficient for all of them to converge. We used Google Colab Pro and the TensorFlow library [37] to train the FCNs. A batch size of 4 was used due to memory limitations.
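A sketch of the corresponding training call is shown below; the array names are placeholders, and the real-time augmentation generator described in Section 3.4.5 is omitted for brevity:

```python
# Plausible training call with the reported settings (150 epochs, batch size 4).
history = model.fit(
    x_train, y_train,                  # 716 pre-processed training images/labels
    validation_data=(x_val, y_val),    # 164 validation images
    epochs=150,
    batch_size=4,
    callbacks=[reduce_lr],             # learning-rate reduction from the earlier sketch
)
```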

3.4.5 Real time data-augmentation

In order to design a robust FCN that can successfully segment the multiclass transscleral CM images and to avoid over-fitting to the training dataset, a real-time data augmentation technique was implemented. Instead of creating a larger dataset from the training dataset, which would considerably increase the training time, we implemented an in-house algorithm that randomly transforms the training dataset in each epoch.

Each image in the training dataset was randomly transformed in each epoch as follows: the image was rotated by a random angle in the range [-3°, 3°]; the image was displaced horizontally and vertically by 0–3% of its size, with nearest-neighbor interpolation used to fill in the resulting images; Gaussian or speckle noise was added; and finally, a contrast filter (Eq. 1) with $\gamma $ ranging from 0.7 to 1.2 was applied to change the brightness of the image. Fig. 3 shows an example of the same image with three different random transformations.


Fig. 3. Different transformations of the same image. A contrast filter with γ = 0.7, 1.0 and 1.2 was applied to (a), (b) and (c), respectively. Each image was rotated and displaced horizontally and vertically at random. Speckle noise was added to the images, but it is not noticeable because it is masked by the original background noise of the images. The vertical and horizontal scale bars (yellow) are 0.25 and 1 mm, respectively.
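A minimal sketch of one such random transformation is shown below. The rotation, displacement, and γ ranges follow the text, while the noise variances and the use of scipy.ndimage are assumptions; the same geometric transform must also be applied to the corresponding label map, and 'adjust_contrast' refers to the pre-processing sketch in Section 3.2:

```python
# Sketch of the per-epoch random transformation (rotation, shift, noise, contrast).
import numpy as np
from scipy import ndimage

def augment(img, rng=np.random.default_rng()):
    """Apply one random rotation/shift/noise/contrast transformation to an image."""
    h, w = img.shape
    angle = rng.uniform(-3.0, 3.0)                                  # rotation in degrees
    shift = (rng.uniform(0, 0.03) * h, rng.uniform(0, 0.03) * w)    # 0-3% displacement
    out = ndimage.rotate(img, angle, reshape=False, order=0, mode='nearest')
    out = ndimage.shift(out, shift, order=0, mode='nearest')        # nearest-neighbor fill
    if rng.random() < 0.5:
        out = out + rng.normal(0.0, 0.01, out.shape)                # Gaussian noise (assumed variance)
    else:
        out = out * (1.0 + rng.normal(0.0, 0.05, out.shape))        # speckle noise (assumed variance)
    out = np.clip(out, 0.0, 1.0)
    return adjust_contrast(out, gamma=rng.uniform(0.7, 1.2))        # contrast filter, Eq. (1)
```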


4. Distortion correction

The raw OCT images of the CM must be corrected for optical distortions, and the pixel size of the image must be normalized. The OCT images are distorted due to refraction of the probe beam at the different ocular interfaces [16], mainly at the anterior and posterior scleral boundaries [11]. The pixel size is not uniform because the sampling density differs along the width (897 pixels / 10 mm) and the height (512 pixels / 2.5 mm) of the image.

We implemented an algorithm to correct for image distortion due to refraction of the OCT beam at the air-sclera and sclera-ciliary muscle boundaries based on the vector form of Snell’s law [11]. For distortion correction, we used the group refractive indices of the sclera (1.415) and ciliary muscle (1.380) estimated at 1,325 nm [12]. The anterior conjunctiva was considered as part of the sclera since it cannot be detected in most of the OCT images. After the boundaries were corrected for distortion, they were resampled to normalize the pixel size.
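For illustration, the refraction of a single ray direction at one interface using the vector form of Snell's law can be sketched as follows; the surface normals and boundary geometry come from the segmented contours, and this is a sketch of the underlying relation rather than the authors' full correction algorithm:

```python
# Vector form of Snell's law for one ray at one interface (n1 -> n2).
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n pointing toward medium n1."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    eta = n1 / n2
    cos_i = -np.dot(n, d)                        # cosine of the angle of incidence
    sin2_t = eta**2 * (1.0 - cos_i**2)           # Snell's law for the transmitted sine squared
    cos_t = np.sqrt(1.0 - sin2_t)                # assumes no total internal reflection
    return eta * d + (eta * cos_i - cos_t) * n   # refracted unit direction

# Example: OCT beam entering the sclera (group index 1.415) from air at 10 deg incidence.
d = np.array([np.sin(np.radians(10)), np.cos(np.radians(10))])   # incoming ray direction
surface_normal = np.array([0.0, -1.0])                           # normal pointing toward air
print(refract(d, surface_normal, 1.0, 1.415))
```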

5. Results

5.1 FCN performance

Table 1 shows the performance of the different FCN architectures evaluated in this study. Overall, we found no difference in performance between UNet and LinkNet, whilst we found considerable differences between the backbones used for the FCN. The best performance was obtained with EfficientNet as the backbone, with no significant difference between EfficientNetb2 and EfficientNetb4. As a result, for CM segmentation we implemented UNet with the EfficientNetb2 backbone. This configuration has an IoU score of 94.54%, 97.37% and 90.23% for the background, other ocular structures and ciliary muscle, respectively, with a mean IoU of 94.04% and an F-score of 97.33%.


Table 1. Performance of the different FCN architectures trained for CM segmentation. All FCNs were trained for 150 epochs. Each FCN was trained three times and the run with the best performance was chosen.

We did not find a positive correlation between the training time and the number of parameters, except for the EfficientNet structure (Table 1). No difference in training time was found between the different backbone structures, as each one uses a different scaling method [31].

The training history, loss function and mean IoU score of the UNet-EfficientNetb2 are shown in Fig. 4. The small batch size (4) made it possible to train the FCN in few epochs. Our FCN architecture (UNet with EfficientNetb2 backbone) learned faster than other FCNs such as UNet/LinkNet-MobileNetv2, which needed approximately 100 epochs before their performance improved.


Fig. 4. Training history of the UNet architecture with the EfficientNetb2 as backbone. IoU mean score (a) and loss function (b) are shown.


5.2 Application of the FCN for biometry of the ciliary muscle

To show that the trained FCN can quantify ciliary muscle movements during accommodation, we selected the data from 5 recordings (160 images each) of 5 subjects included in the testing dataset, ranging in age from 20 to 26 years and responding to a 4D accommodation step stimulus. The proposed tool (Fig. 1) was used to obtain the CM biometry. For subject #1, no FCN segmentations were discarded by the examiner, as the CM boundary was clearly visible. Less than 10% of the images were discarded for subjects #2 and #3, and approximately 20% were discarded for subjects #4 and #5; the discard rate was proportional to the OCT image quality. Videos showing the automatic segmentation of each subject's recording are provided, see Visualization 1, Visualization 2, Visualization 3, Visualization 4, and Visualization 5.

Fig. 5 shows the accommodated and unaccommodated CM geometry and the CMT profile for the five subjects. The CMT profile was obtained following the methodology proposed by Strasser et al. [21], which measures the distance between the scleral-muscle border and the muscle-pigmented epithelium border along the direction perpendicular to the scleroconjunctival-air interface. Fig. 5 shows that, using the FCN, we can detect differences in CM shape between the unaccommodated and accommodated states across all subjects, with a consistent increase of the CMT in the accommodated state.


Fig. 5. Unaccommodated and accommodated geometry of the ciliary muscle (left) and the CMT profile (right) for the five testing subjects. The ages of subjects #1 to #5 were 26, 23, 25, 26 and 20 years, respectively.


The accuracy of the FCN was evaluated by comparing the differences in maximum CMT measured by the proposed FCN and by manual segmentation. A Bland–Altman analysis was performed on the testing dataset to show the agreement between the manual and FCN segmentations (Fig. 6). A mean difference of 1.2 µm and a 95% confidence interval (CI) of [-45, 48] µm were obtained.


Fig. 6. Bland-Altman analysis comparing the CMTmax value provided by the FCN and the manual segmentation for the testing dataset (159 images). The root mean square error (RMSE) between the FCN and manual segmentation was 21.9 µm and the percentage limit of agreement (%LOA) was 8.1%.
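For reference, the Bland–Altman quantities can be computed as in the following sketch, assuming the reported 95% interval corresponds to the limits of agreement (mean difference ± 1.96 SD); the variable names are illustrative:

```python
# Sketch of the Bland-Altman mean difference and 95% limits of agreement.
import numpy as np

def bland_altman(manual, automatic):
    """Return the mean difference and the 95% limits of agreement."""
    diff = np.asarray(automatic, float) - np.asarray(manual, float)
    mean_diff = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - loa, mean_diff + loa)
```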


Fig. 7 shows the dynamics of the maximum CMT and the lens thickness (LT) during accommodation (6.17 s), and the changes in LT as a function of the changes in maximum CMT. As expected, the lens increases in thickness as the ciliary muscle contracts (i.e., the maximum CMT increases) during the accommodative response.


Fig. 7. Dynamics of the maximum CMT (left) and LT (center) over the whole recording for 5 subjects. Lens thickness as a function of maximum CMT (right). The navy dots are the experimental values. As expected, the lens increases in thickness and the ciliary muscle contracts (i.e. increased maximum CMT) during accommodation.


6. Discussion

The main goal of this study was to develop and evaluate the performance of a tool for automatic segmentation of the ciliary muscle from transscleral OCT images based on fully convolutional networks (FCNs). Specifically, a UNet with EfficientNetb2 as backbone was trained to segment the ciliary muscle from the OCT images. The FCN successfully segments the CM and scleral boundaries, with a mean IoU of 94.03% and an F-score of 97.33% with respect to manual segmentation.

Our analysis showed that the performance of UNet [28] and LinkNet [29] in segmenting the scleral and CM boundaries was similar; the two FCNs can potentially be interchanged for our multiclass segmentation problem. Nevertheless, we found that the EfficientNetb2 [31] backbone outperformed the other backbones tested in this study. Another key to the robustness of the FCN was that the 716 training images were randomly transformed in each epoch, effectively exposing the network to 107,400 (716 × 150) different CM images during training. Using pre-trained weights (transfer learning) was not considered in this study, as most of the available models for biomedical image segmentation are suited for RGB images rather than grayscale images such as OCT images, and the training time was relatively short (∼4.5 h).

The performance of our FCN is comparable to that obtained with other FCNs in similar studies: for OCT corneal layer segmentation, Santos et al. [22] achieved an IoU of 99.14%; for detecting cone photoreceptors in adaptive optics images, Cunefare et al. [26] achieved an F-score of 99.00%; for retinal OCT image segmentation, Venhuizen et al. [39] obtained an F-score of 95.14% and Roy et al. [25] achieved Dice overlap scores from 0.77 to 0.99. Nevertheless, comparing FCN performance between different studies is difficult, as each problem has its own complexity. In our application, a factor limiting the performance of the FCN is that the inner boundary of the CM is generally difficult to detect with high reliability due to signal loss in this region (see Fig. 8(a)). The low contrast of the boundary in this region increases the variability and uncertainty of both the manual segmentation and the FCN (see Fig. 6). The confidence interval obtained for the maximum CMT ([-45, 48] µm) was smaller than the manual segmentation variability previously reported by our group [17].


Fig. 8. (a) Two different manual segmentations of an accommodated state of one subject. The whole CM boundary is difficult to appreciate, and thus the segmentation variability between images of the same subject is high. (b) Two regions of interest were created for each image of the testing dataset. The apex (black circle) and the edge closest to the scleral spur (purple circle) are circled in the image. The vertical and horizontal scale bars (yellow) are 0.25 and 1 mm, respectively.


To demonstrate that the FCN can accurately segment the CM where the inner boundary is clearly distinguishable, each image of the testing dataset was split into two regions of interest (ROIs): one enclosing the CM apex and the edge closest to the scleral spur, and the other including the regions where the inner boundary of the ciliary body is clearly visible (Fig. 8(b)). The CM-IoU scores were 87.42% and 95.13% for the first and second ROI, respectively, showing that the CM-IoU score increases considerably when the contrast of the inner boundary of the CM is high (Fig. 3).

To broaden the scope of the FCN framework, we included images with different quality at the CM apex in the training set, so that the FCN can segment a larger range of CM images. The examiner's task is then to decide which automatic segmentations are valid for their study. As expected, the variability is lower in CM images where the contrast at the apex is high (subject #1 in Fig. 7 and Visualization 1), and higher in images where the contrast at the apex is low (subject #4 in Fig. 7 and Visualization 4).

The application of the FCN to the quantification of CM changes during accommodation in five young subjects shows that the FCN was able to accurately quantify changes in the CM, such as the CMT. The positive correlation between changes in maximum CMT and lens thickness shows that the CM movement produces an accommodative response in all five subjects. Visually, this correlation is more apparent in subjects with lower CMT variability, such as subjects #1 and #5 (see Fig. 7).

In general, the quality of the transscleral OCT images, especially near the CM apex, is low due to the high attenuation in the sclera and ciliary body, which reduces the signal returning from the deeper CM structures, such as the apex. As a result, a relatively small number of images is available to segment and thus to train the FCN, which is one of the limitations of this study. This image-quality problem is also common in other studies of ciliary muscle movement during accommodation [13,15,16,18,19].

The current study utilized a commercial research-grade SD-OCT system with a power limited to 3 mW, well below the exposure limit defined in the ANSI Z80.36 standard for class 2 ophthalmic instruments. We expect that delivering a higher incident power to the sclera and increasing the acquisition speed of the system will help increase the contrast of the CM boundary [27].

In summary, this study shows that automatic segmentation of the ciliary muscle can be performed with high accuracy, highlighting the power of FCNs. Moreover, with this work, we intend to provide an efficient tool that enables other researchers to perform larger studies of the ciliary muscle that elucidate its role in presbyopia, with considerable time savings.

Funding

Ministerio de Ciencia, Innovación y Universidades (PRE2018-084021); National Eye Institute (1F30-EY027162 (Yu-Cherng Chang), P30EY14801 (Center Core Grant), R01EY014225); Florida Lions Eye Bank; Beauty of Sight Foundation; Henri and Flore Lesieur Foundation; Drs. Harry W. Flynn Jr MD, Raksha Urs, PhD and Aaron Furtado; Karl R. Olsen, MD and Martha E. Hildebrandt, PhD; Research to Prevent Blindness (GR004596); and PID2020-113822RB-C12 funded by MCIN/AEI/10.13039/501100011033.

Acknowledgments

We would like to acknowledge G. Gregori for his time in reviewing this manuscript.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. All supplemental data are uploaded in the following repository [38], where readers can find the model weights of the trained FCN for transfer learning and an example of use of the trained FCN with the open-source code in Google Colab [38].

References

1. M. A. Croft, A. Glasser, and P. L. Kaufman, “Accommodation and presbyopia,” International Ophthalmology Clinics 41(2), 33–46 (2001). [CrossRef]  

2. A. Glasser and P. L. Kaufman, “The mechanism of accommodation in primates,” Ophthalmology 106(5), 863–872 (1999). [CrossRef]  

3. S. Tamm, E. Tamm, and J. W. Rohen, “Age-related changes of the human ciliary muscle: a quantitative morphometric study,” Mech. Ageing Dev. 62(2), 209–221 (1992). [CrossRef]  

4. H. A. Anderson, G. Hentz, A. Glasser, K. K. Stuebing, and R. E. Manny, “Minus-lens-stimulated accommodative amplitude decreases sigmoidally with age: a study of objectively measured accommodative amplitudes from age 3,” Invest. Ophthalmol. Visual Sci. 49(7), 2919 (2008). [CrossRef]  

5. I. Cabeza-Gil, J. Grasa, and B. Calvo, “A numerical investigation of changes in lens shape during accommodation,” Sci. Rep. 11(1), 9639 (2021). [CrossRef]  

6. I. Cabeza-Gil, J. Grasa, and B. Calvo, “A validated finite element model to reproduce Helmholtz’s theory of accommodation: a powerful tool to investigate presbyopia,” Ophthalmic Physiologic Optic 41(6), 1241–1253 (2021). [CrossRef]  

7. A. Glasser and M. C. W. Campbell, “Biometric, optical and physical changes in the isolated human crystalline lens with age in relation to presbyopia,” Vision Res. 39(11), 1991–2015 (1999). [CrossRef]  

8. M. A. Croft and P. L. Kaufman, “Accommodation and presbyopia: the ciliary neuromuscular view,” Ophthalmology Clinics of North America 19(1), 13–24 (2006). [CrossRef]  

9. M. A. Croft, J. P. Mcdonald, A. Katz, T. L. Lin, E. Lütjen-Drecoll, and P. L. Kaufman, “Extralenticular and lenticular aspects of accommodation and presbyopia in human versus monkey eyes,” Invest. Ophthalmol. Visual Sci. 54(7), 5035 (2013). [CrossRef]  

10. M. Ruggeri, S. R. Uhlhorn, C. de Freitas, A. Ho, F. Manns, and J.-M. Parel, “Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch,” Biomed. Opt. Express 3(7), 1506 (2012). [CrossRef]  

11. M. Ruggeri, V. Hernandez, C. de Freitas, F. Manns, and J.-M. Parel, “Biometry of the ciliary muscle during dynamic accommodation assessed with OCT,” in Ophthalmic Technologies XXIV 8930, 89300W (2014). [CrossRef]  

12. M. Ruggeri, C. de Freitas, S. Williams, V. M. Hernandez, F. Cabot, N. Yesilirmak, K. Alawa, Y.-C. Chang, S. H. Yoo, G. Gregori, J.-M. Parel, and F. Manns, “Quantification of the ciliary muscle and crystalline lens interaction during accommodation with synchronous OCT imaging,” Biomed. Opt. Express 7(4), 1351 (2016). [CrossRef]  

13. S. Wagner, E. Zrenner, and T. Strasser, “Ciliary muscle thickness profiles derived from optical coherence tomography images,” Biomed. Opt. Express 9(10), 5100–5114 (2018). [CrossRef]  

14. S. Wagner, E. Zrenner, and T. Strasser, “Ciliary muscle thickness profiles derived from optical coherence tomography images: erratum,” Biomed. Opt. Express 10(1), 119 (2019). [CrossRef]  

15. S. Wagner, E. Zrenner, and T. Strasser, “Emmetropes and myopes differ little in their accommodation dynamics but strongly in their ciliary muscle morphology,” Vision Res. 163, 42–51 (2019). [CrossRef]  

16. Y. Shao, A. Tao, H. Jiang, M. Shen, J. Zhong, F. Lu, and J. Wang, “Simultaneous real-time imaging of the ocular anterior segment including the ciliary muscle during accommodation,” Biomed. Opt. Express 4(3), 466 (2013). [CrossRef]  

17. Y.-C. Chang, K. Liu, F. Cabot, S. H. Yoo, M. Ruggeri, A. Ho, J.-M. Parel, and F. Manns, “Variability of manual ciliary muscle segmentation in optical coherence tomography images,” Biomed. Opt. Express 9(2), 791–800 (2018). [CrossRef]  

18. L. A. Lossing, L. T. Sinnott, C. Y. Kao, K. Richdale, and M. D. Bailey, “Measuring changes in ciliary muscle thickness with accommodation in young adults,” Optometry and Vision Science 89(5), 719–726 (2012). [CrossRef]  

19. K. Richdale, L. T. Sinnott, M. A. Bullimore, P. A. Wassenaar, P. Schmalbrock, C. Y. Kao, S. Patz, D. O. Mutti, A. Glasser, and K. Zadnik, “Quantification of age-related and per diopter accommodative changes of the lens and ciliary muscle in the emmetropic human eye,” Invest. Ophthalmol. Visual Sci. 54(2), 1095 (2013). [CrossRef]  

20. C. Y. Kao, K. Richdale, L. T. Sinnott, L. E. Grillott, and M. D. Bailey, “Semiautomatic extraction algorithm for images of the ciliary muscle,” Optometry and Vision Science 88(2), 275–289 (2011). [CrossRef]  

21. T. Straßer, S. Wagner, and E. Zrenner, “Review of the application of the open-source software CilOCT for semi-automatic segmentation and analysis of the ciliary muscle in OCT images,” PLoS One 15(6), e0234330 (2020). [CrossRef]  

22. V. A. dos Santos, L. Schmetterer, H. Stegmann, M. Pfister, A. Messner, G. Schmidinger, G. Garhofer, and R. M. Werkmeister, “CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning,” Biomed. Opt. Express 10(2), 622 (2019). [CrossRef]  

23. S. Soltanian-Zadeh, K. Kurokawa, Z. Liu, F. Zhang, O. Saeedi, D. X. Hammer, D. T. Miller, and S. Farsiu, “Weakly supervised individual ganglion cell segmentation from adaptive optics OCT images for glaucomatous damage assessment,” Optica 8(5), 642–651 (2021). [CrossRef]  

24. J. P. Vigueras-Guillén, B. Sari, S. F. Goes, H. G. Lemij, J. van Rooij, K. A. Vermeer, and L. J. van Vliet, “Fully convolutional architecture vs sliding-window CNN for corneal endothelium cell segmentation,” BMC Biomed. Eng. 1(1), 4 (2019). [CrossRef]  

25. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017). [CrossRef]  

26. D. Cunefare, L. Fang, R. F. Cooper, A. Dubra, J. Carroll, and S. Farsiu, “Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks,” Sci. Rep. 7(1), 6620 (2017). [CrossRef]  

27. G. Monterano Mesquita, D. Patel, Y.-C. Chang, F. Cabot, M. Ruggeri, S. H. Yoo, A. Ho, J.-M. A. Parel, and F. Manns, “In vivo measurement of the attenuation coefficient of the sclera and ciliary muscle,” Biomed. Opt. Express 12(8), 5089–5106 (2021). [CrossRef]  

28. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: Learning dense volumetric segmentation from sparse annotation,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9901 LNCS (2016).

29. A. Chaurasia and E. Culurciello, “LinkNet: Exploiting encoder representations for efficient semantic segmentation,” in 2017 IEEE Visual Communications and Image Processing, VCIP 2017 (2018), 2018-January.

30. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2016), 2016-December.

31. M. Tan and Q. v. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in 36th International Conference on Machine Learning, ICML 2019 (2019).

32. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: inverted residuals and linear bottlenecks,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2018).

33. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings (2015).

34. T. Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal Loss for dense object detection,” IEEE Trans. Pattern Anal. Mach. Intell. 42(2), 318–327 (2020). [CrossRef]  

35. I. Cabeza-Gil, I. Ríos-Ruiz, and B. Calvo, “Customised selection of the haptic design in c-loop intraocular lenses based on deep learning,” Ann. Biomed. Eng. 48(12), 2988–3002 (2020). [CrossRef]  

36. D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv:1412.6980 (2014).

37. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2016 (2016).

38. I. Cabeza-Gil, M. Ruggeri, C. Yu-Cherng, B. Calvo, and F. Manns, “CiliaryMuscle_FCNSegmentation_OBCproject,” Version 1.0.0, Github (2022), https://github.com/ICG14/CiliaryMuscle_FCNSegmentation_OBCproject.

39. F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. J. P. van Grinsven, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, “Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks,” Biomed. Opt. Express 8(7), 3292–3316 (2017). [CrossRef]  

Supplementary Material (5)

Visualization 1: Automatic segmentation of the transscleral images of subject #1 during the recording.
Visualization 2: Automatic segmentation of the transscleral images of subject #2 during the recording.
Visualization 3: Automatic segmentation of the transscleral images of subject #3 during the recording.
Visualization 4: Automatic segmentation of the transscleral images of subject #4 during the recording.
Visualization 5: Automatic segmentation of the transscleral images of subject #5 during the recording.
