
Segmentation of beating embryonic heart structures from 4-D OCT images using deep learning

Open Access

Abstract

Optical coherence tomography (OCT) has been used to investigate heart development because of its capability to image both structure and function of beating embryonic hearts. Cardiac structure segmentation is a prerequisite for the quantification of embryonic heart motion and function using OCT. Since manual segmentation is time-consuming and labor-intensive, an automatic method is needed to facilitate high-throughput studies. The purpose of this study is to develop an image-processing pipeline to facilitate the segmentation of beating embryonic heart structures from a 4-D OCT dataset. Sequential OCT images were obtained at multiple planes of a beating quail embryonic heart and reassembled into a 4-D dataset using image-based retrospective gating. Multiple image volumes at different time points were selected as key-volumes, and their cardiac structures, including myocardium, cardiac jelly, and lumen, were manually labeled. Registration-based data augmentation was used to synthesize additional labeled image volumes by learning transformations between key-volumes and other unlabeled volumes. The synthesized labeled images were then used to train a fully convolutional network (U-Net) for heart structure segmentation. The proposed deep learning-based pipeline achieved high segmentation accuracy with only two labeled image volumes and reduced the time cost of segmenting one 4-D OCT dataset from a week to two hours. Using this method, one could carry out cohort studies that quantify complex cardiac motion and function in developing hearts.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Congenital heart defects (CHDs) are developmental defects in the structure of the heart or great vessels of a newborn. They are the most common birth defect, affecting 1.7% (∼2.5 million) of newborns worldwide each year [1,2]. These abnormalities can be life-threatening: about one in every four babies with a CHD has a critical CHD requiring surgery or other procedures in the first year of life [3]. A better understanding of the etiology of CHDs is crucial for improving the prevention and treatment of this disease.

The heart is the first functional organ to develop during embryogenesis [4]. It begins to beat and pump blood as a linear tube. The straight heart tube undergoes looping by bending towards the right. The looped heart tube consists of an outer myocardial layer and a luminal endocardial layer separated by a cardiac jelly layer. The cardiac jelly thickens and forms endocardial cushions at the atrioventricular canal and outflow tract (OFT) [5,6]. Endocardial cushions subsequently undergo differentiation, growth, and extensive remodeling to form the valves and septa in the four-chambered heart [5,6]. This study is focused on the looping heart stage because it is a vital step toward the formation of a mature heart. A large number of developmental defects in heart structure and function can be traced back to malformations in early looping hearts [7–10].

Throughout heart development, molecular expression, structure, and function are key factors that likely influence each other [11]. For example, changes in blood flow during heart development can alter shear force on the endocardium, resulting in altered gene expression in endocardial cells [12–15]. The altered gene expression then affects the structure and function of the developing heart, leading to further alterations of the biomechanical forces [16–19]. To fully comprehend the etiology of CHDs, we need to understand the links among all three factors. Traditional imaging methods used in developmental biology, such as histology and immunofluorescence, can provide knowledge about molecular expression and structural abnormalities, but reveal very little about the functional aspects of the heart.

Optical coherence tomography (OCT) is a technique well-suited to investigating heart development because of its capability to image both the structure and function of tiny beating embryonic hearts under physiological conditions. Using structural and Doppler OCT imaging, functional parameters such as cardiac motion [20], wall strain [21–23], blood flow [24,25], and shear stress [25–28] have been analyzed in the heart tube throughout the cardiac cycle. Quantifying these functional activities requires delineating the cardiac tissue boundaries of a beating embryonic heart in OCT images. For lack of an automatic method, heart structures in OCT images were often labeled manually in previous studies [22,25,26]. Manual segmentation is time-consuming, labor-intensive, and prone to subjectivity, so an automatic method is needed to facilitate high-throughput studies. One study proposed a semi-automatic method using a 2-D deformable double-line model to extract OFT structures from OCT images [29], facilitating measurements of OFT hemodynamics over looping developmental stages [27,30–32]. However, that method can only analyze cross-sectional images (with circular-like shapes) of the heart tube, not longitudinal images (showing an elongated view of the heart), which restricts both image acquisition and analysis with the method of Ref. [29]. A method that can analyze and segment images from any view of the embryonic heart is still lacking and highly desirable.

Figure 1 shows exemplar OCT images captured at four distinct phases of a heart cycle. Figures 1(a)–1(c) are three sequential phases showing contraction at the inflow, ventricle, and OFT, respectively. Figure 1(d) is the phase when the heart lumen is most expanded. Segmentation of the embryonic heart from OCT images consists of labeling the myocardium, cardiac jelly, and heart lumen. The endocardium is extracted as part of the lumen because it is a thin monolayer of squamous cells lining the lumen and cannot always be distinguished from blood. Segmentation of embryonic heart structures from OCT images is difficult for the following reasons: (1) there are no clear edges between the myocardium and its surrounding tissues because of their similar gray values (yellow arrows in Fig. 1); (2) the gray level within the heart lumen is inhomogeneous (cyan arrows in Fig. 1); (3) cardiac shapes, in particular the irregular shape of the heart lumen, exhibit great variability throughout a heart cycle (Fig. 1); and (4) the cardiac shape varies across heart samples, especially when they are imaged at different orientations.


Fig. 1. Exemplar OCT images captured from a plane of a beating embryonic heart. (a)-(d) are four distinct phases of a cardiac cycle. vtr: ventricle; CJ: cardiac jelly; myo: myocardium.


Given all these difficulties, traditional image-driven segmentation approaches, such as thresholding, edge detection [33,34], region growing [35], clustering [36,37], and active contours [38,39], cannot correctly detect the inhomogeneous lumen or differentiate the myocardium from its adjacent tissues, because these methods rely solely on pixel gray values with no shape knowledge. Model-based approaches, such as active shape and appearance models [40,41], can segment an object with prior shape knowledge. These approaches build a statistical shape model of the object to be segmented, then iteratively deform the model to fit the object in a new image. The active models are constructed by extracting statistical information and engineered features from manually annotated training data; however, when insufficient data are available to build a sufficiently general model, this approach becomes impractical. Another type of method widely used in medical image segmentation is atlas-based segmentation [42–44], in which a labeled reference image, or atlas, is registered to a target image, and the labels of the atlas are then propagated to the target image using the resulting deformation field. In this atlas-based approach, the segmentation results rely heavily on the registration performance. A large difference between the target and the atlas often leads to inaccurate registration and segmentation. For example, if we manually label one frame of an embryonic cardiac sequence (a reference) and propagate its labels to other frames (targets) using pairwise image registration, the segmentation accuracy decreases with increasing distance between the reference and the target; only frames close to the reference obtain reliable results.

In recent years, deep learning-based segmentation methods, such as U-Net [45] and its variants [46–50], have achieved noteworthy success with remarkable improvements in both time and accuracy. However, to achieve optimal performance, these approaches require abundant pixel-wise labeled training exemplars, which are unavailable for the task in this study. Imaging hundreds of embryos is quite labor-intensive. Moreover, manually labeling training exemplars is extremely time-consuming and requires specialized expertise. For example, it takes an expert about five minutes to annotate a 2-D OCT image and thus 250 hours to annotate a complete beating heart sequence consisting of 3,000 images (20 slices and 150 time points).

To reduce the dependency on large amounts of labeled training data, many research efforts have focused on training an accurate segmentation model with a limited number of annotated exemplars. A common solution to enlarge the training dataset is hand-tuned data augmentation with intensity and spatial transformations, such as brightening and contrast enhancement [51,52], random rotations, translations, scaling, affine transformations, and elastic deformations [45,46,53,54]. These parameterized transformations are simple to implement and have been shown to reduce overfitting and improve segmentation performance in some applications [45,46,51–54]. However, the parameters of these transform functions are set by experience and may fail to capture the complex variations within the typical range exhibited by the population under study [55]. Generative adversarial networks [56] have also been used for data augmentation by synthesizing new image-label pairs from existing labeled exemplars [57–59], but good results depend on vast amounts of training data, and the output cannot be well controlled. Moreover, whether the generated synthetic images represent realistic radiological features in medical images remains a main concern [60]. None of the above data augmentation approaches leverages the knowledge in unlabeled training data, so they have a limited ability to simulate real variations of the data. Recently, several studies [61,62] introduced image registration for data augmentation. Nalepa et al. [61] aligned image pairs in the training set using diffeomorphic image registration, and the resulting transformations were used to create augmented data. This approach was demonstrated in brain tumor segmentation from magnetic resonance imaging and showed improved generalization of deep segmentation models when combined with an affine augmentation. Zhao et al. [62] proposed a data augmentation approach using deep learning-based registration [63]. Specifically, two deep learning models were trained to learn the spatial and appearance transformations between all the training data, including both the labeled and unlabeled images. The learned transformations were then applied to the labeled images to synthesize new labeled images. Deep learning-based registration [63] usually requires less than a second to align a new 3-D image pair at test time, much faster than traditional registration methods. In addition, this registration-based augmentation can simulate real variations in the dataset by taking advantage of the knowledge in the unlabeled images, and thus can synthesize diverse and realistic labeled exemplars. Inspired by these prior works, we propose to segment embryonic heart structures across a cardiac cycle from OCT images using deep learning, overcoming the problem of limited training data with registration-based augmentation.

In this study, we propose a deep learning-based pipeline to facilitate the segmentation of beating embryonic heart structures from individual 4-D OCT datasets. The pipeline consists of three steps: first, for each 4-D heart dataset, we selected multiple (no more than four) image volumes and manually annotated heart structures, including myocardium, endocardial cushions, and lumen, in these image volumes; second, VoxelMorph [63], a learning-based registration model, was used to synthesize additional labeled images by registering the manually labeled volumes to other image volumes in the cardiac cycle; third, the synthesized labeled images were used to train a U-Net [45], a semantic segmentation network, to predict the segmentation of cardiac structures in the original unlabeled OCT images. This pipeline is semi-automatic because every 4-D OCT dataset goes through the above three steps, in which at least two image volumes are manually annotated. We demonstrated this segmentation pipeline on three embryonic hearts. Our results showed that the proposed framework could achieve high segmentation accuracy with only two manually labeled image volumes, greatly reducing the time to segment a 4-D OCT dataset.

2. Materials and methods

2.1 Image acquisition

Fertilized quail eggs (Coturnix coturnix communis; Northwest Heritage Quail, Pullman, WA) were incubated in a humidified incubator (G.Q.F. Manufacturing, Savannah, GA) at 38°C. After 48 hours of development, the eggs were taken out of the incubator, and the egg contents were cracked into sterilized 35 mm Petri dishes [64]. The embryos at Hamburger–Hamilton [65] stage 14-15 were cultured in Petri dishes inside an environmental OCT imaging chamber with controlled temperature (38°C) and humidity to ensure imaging under physiological conditions [66].

The hearts were imaged using a custom-built, spectral-domain OCT system [66–68] with a light source centered at 1310 nm and a 75 nm full-width at half-maximum bandwidth. The line rate of the OCT system was 47 kHz. The axial and lateral resolutions were ∼10 µm and ∼12 µm, respectively.

For each heart, 4-D OCT data were collected by imaging over multiple heartbeats at sequential slice locations and reassembled using image-based retrospective gating [69]. 180 A-scans were acquired per frame, and the step between adjacent A-scans was 5 µm in the B-scan direction. A final reassembled 4-D dataset of an embryonic heart consisted of 150 image volumes at different time points over one cardiac cycle. In total, three datasets of 150 image volumes each, obtained from three individual embryonic hearts, were used to demonstrate the proposed method.

2.2 Image processing

The proposed strategy can be divided into three steps (Fig. 2). For every 4-D OCT dataset: (1) we selected multiple (no more than four) image volumes at different time points of a cardiac cycle as key-volumes, and manually labeled the myocardium, cardiac jelly, and lumen in the images of these key-volumes; (2) a registration model, VoxelMorph [63], was used to synthesize additional labeled image volumes by learning transformations between key-volumes and other unlabeled volumes in the sequence; (3) the synthesized labeled images, along with the manually labeled images of the key-volumes, were used to train a U-Net for heart structure segmentation. Details of each step are described in the following sections.


Fig. 2. Image processing pipeline for segmentation of beating embryonic heart structures from 4-D OCT datasets. Step 1 selects multiple volumes at different time points of a cardiac cycle as key-volumes, whose cardiac structures are manually labeled. Step 2 synthesizes additional labeled image volumes using transformations learned by the registration model VoxelMorph. Step 3 performs semantic heart structure segmentation using a U-Net trained on the synthesized labeled images from step 2 and the manually labeled images from step 1.


2.2.1 Manual selection and annotation of key-volumes

We selected multiple (no more than four) image volumes at different time points of a cardiac cycle as key-volumes, then manually labeled their myocardium, cardiac jelly, and lumen. The regions of the three heart structures were annotated with different class labels. The key-volumes were chosen among the four cardiac phases shown in Fig. 1. Selecting distinct cardiac phases as the key-volumes ensures that the synthesized image volumes cover a wide variety of cardiac shapes through the complete heart cycle. We compared the segmentation results using different numbers of key-volumes. When the number of key-volumes N was equal to 1, the volume with the most contracted lumen at the ventricle (Fig. 1(b)) was selected. When N was 2, a second volume with the most expanded lumen (Fig. 1(d)) was added. When N was 3, any three of the four phases in Fig. 1 were selected. When N was 4, all four phases in Fig. 1 were chosen.

2.2.2 Data augmentation using learning-based registration

We used a learning-based registration model, VoxelMorph [63], to synthesize new labeled image volumes. VoxelMorph is a transformation network that parameterizes the registration function. It can register a 3-D image pair in under a second, orders of magnitude faster than traditional registration techniques [63]. The network architecture is similar to U-Net, consisting of encoder and decoder sections connected with skip connections. Unlike U-Net, which outputs a segmentation mask, VoxelMorph maps an input image pair to a deformation field that aligns the images. Given a moving image volume m and a fixed image volume f, VoxelMorph outputs a registration field φ as well as a moved image volume m∘φ generated by warping m with φ (Fig. 3).
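As an illustration, a registration model of this kind can be built with the open-source voxelmorph Keras package; the sketch below follows the package's published tutorial usage, and the feature counts are assumptions for illustration rather than the exact values used in this study.

```python
# Minimal sketch: a dense VoxelMorph model with the open-source `voxelmorph`
# package (pip install voxelmorph). Feature counts are illustrative only.
import voxelmorph as vxm

inshape = (64, 64, 32)  # cropped/resized OCT volume size used in this study

# Encoder/decoder feature maps of the U-Net-like registration network.
nb_features = [[16, 32, 32, 32], [32, 32, 32, 32, 32, 16, 16]]

model = vxm.networks.VxmDense(inshape, nb_features, int_steps=0)

# Two outputs: the moved volume m∘φ (compared with f via MSE) and the
# registration field φ (regularized by an L2 penalty on its gradient, λ = 0.2).
losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
model.compile(optimizer='adam', loss=losses, loss_weights=[1.0, 0.2])
```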


Fig. 3. Generation of additional labeled image volumes using learning-based registration. (a) A VoxelMorph network was trained using random image volume pairs from a 4-D OCT dataset. In each volume pair, one volume was fixed (f) and the other was moving (m). The network was trained by optimizing a loss function (Loss) to output a registration field φ that aligned the input image pair. The loss function has a similarity term Lsim, which measures the difference between f and the moved volume m∘φ, and a smoothness term Lsmooth. (b) The trained network was used to register the key-volumes to other unlabeled volumes. The key-volumes and their segmentations were transformed using the resulting registration field to create synthesized images and segmentations.


In our experiments, images were cropped to the region of interest (ROI) to exclude background pixels. The ROI location was determined from the manual labels of the key-volumes: the labels of all key-volumes were summed and binarized to create a heart mask, and the ROI was defined as the expansion of this mask by 5 pixels. Cropped ROI volumes were then resized to 64 × 64 × 32 (∼12 µm/pixel). For each 4-D OCT dataset, VoxelMorph was trained using image volume pairs randomly selected from the image sequence. The network architecture was adopted from Balakrishnan et al. [63] and was built using Keras with a TensorFlow backend [70]. The loss function for optimizing the network parameters is defined in the following equation:

$$Loss = L_{sim}(f, m \circ \varphi) + \lambda L_{smooth}(\varphi)$$
where $L_{sim}$ is the image similarity term that measures the difference between f and m∘φ, and $L_{smooth}$ is a regularization term that smooths the deformation field, weighted by a parameter λ. Because the images in an OCT dataset have the same intensity distribution and local contrast, we used the mean squared voxel-wise difference between f and m∘φ as $L_{sim}$, defined in the following equation:
$$L_{sim}(f, m \circ \varphi) = \frac{1}{|\Omega|} \sum_{p \in \Omega} [f(p) - [m \circ \varphi](p)]^2$$
where Ω ⊂ ℝ³ is the 3-D spatial domain of the image volume and p is a voxel in this domain. The smoothness term $L_{smooth}$ is defined below:
$$L_{smooth}(\varphi) = \sum_{p \in \Omega} \|\nabla u(p)\|^2$$
where u is the displacement field between f and m and ∇u is its spatial gradient. The weighting parameter λ (defined in Eq. (1)) was empirically set to 0.2.
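For concreteness, the three loss terms above can be transcribed directly into NumPy. This is a reference implementation of Eqs. (1)–(3) only (the actual training loss is computed inside the network graph), and the displacement-field layout is an assumption.

```python
import numpy as np

def l_sim(f, moved):
    """Eq. (2): mean squared voxel-wise difference between f and m∘φ."""
    return np.mean((f - moved) ** 2)

def l_smooth(u):
    """Eq. (3): L2 penalty on the spatial gradient of the displacement u.

    u is assumed to have shape (X, Y, Z, 3), one displacement component
    per spatial axis; gradients are approximated by finite differences.
    """
    penalty = 0.0
    for axis in range(3):
        penalty += np.sum(np.diff(u, axis=axis) ** 2)
    return penalty

def total_loss(f, moved, u, lam=0.2):
    """Eq. (1): similarity term plus weighted smoothness term (λ = 0.2)."""
    return l_sim(f, moved) + lam * l_smooth(u)
```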

The trained VoxelMorph learned global network parameters that were optimal over the entire image sequence. For any given input image pair, the computed registration field was therefore only an approximation to the optimal deformation. The global model could be made optimal for a specific image pair by further optimizing the network parameters on that pair.

To synthesize additional labeled images, we set one of the key-volumes as the moving volume m and an unlabeled image volume as the fixed volume f, and fed this pair to the trained VoxelMorph (Fig. 3(b)). Because the output registration field was not optimal for this particular pair, the network parameters were fine-tuned by gradient descent for 100 iterations on the pair. The input key-volume m and its segmentation Sm were then spatially transformed using the fine-tuned registration field, yielding a synthesized image volume m∘φ and its segmentation Sm∘φ (Fig. 3(b)). In total, N × nT labeled image volumes were synthesized, where N is the number of key-volumes and nT is the total number of image volumes in the 4-D OCT dataset.
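A hedged sketch of this fine-tune-and-warp step is shown below, again assuming the voxelmorph Keras package and the model built earlier; the helper name synthesize_pair and the exact fine-tuning mechanics are illustrative assumptions, not the authors' code.

```python
# Sketch of per-pair fine-tuning and label warping (assumed voxelmorph API).
import numpy as np
import voxelmorph as vxm

def synthesize_pair(model, key_vol, key_seg, target_vol, iters=100):
    m = key_vol[np.newaxis, ..., np.newaxis]     # moving: labeled key-volume
    f = target_vol[np.newaxis, ..., np.newaxis]  # fixed: unlabeled volume
    zero_phi = np.zeros((1, *key_vol.shape, 3))  # dummy target for the field

    # Fine-tune the globally trained network on this single pair.
    model.fit([m, f], [f, zero_phi], batch_size=1, epochs=iters, verbose=0)

    moved, flow = model.predict([m, f])          # synthesized image and φ

    # Warp the segmentation with nearest-neighbor interpolation so the
    # class labels remain integral.
    warp = vxm.networks.Transform(key_vol.shape, interp_method='nearest')
    seg = key_seg[np.newaxis, ..., np.newaxis].astype('float32')
    moved_seg = warp.predict([seg, flow])
    return moved[0, ..., 0], moved_seg[0, ..., 0]
```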

2.2.3 Heart structure segmentation by U-Net

U-Net is a fully convolutional network developed for pixel-wise segmentation of biomedical images. In this study, it was used for embryonic heart structure segmentation from OCT images, and an individual U-Net was trained for each 4-D OCT dataset. In one 4-D dataset, a total of 32 (slices) × 150 (volumes) = 4,800 images, comprising the 2-D images of the synthesized volumes created in the previous step and the 2-D images of the key-volumes, were shuffled and randomly divided into training and validation sets with proportions of 0.9 and 0.1, respectively, to optimize the parameters of the U-Net. All images of the 4-D OCT dataset, except those of the key-volumes, were then used to test the final segmentation performance.
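The shuffle-and-split step is straightforward; a minimal NumPy sketch (with an arbitrary seed, chosen here only for reproducibility) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)   # seed is illustrative
n_images = 32 * 150              # 32 slices x 150 volumes = 4,800 images
idx = rng.permutation(n_images)  # shuffle image indices

n_train = int(0.9 * n_images)    # 90/10 train/validation split
train_idx, val_idx = idx[:n_train], idx[n_train:]
```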

The architecture of the U-Net was adopted from Ronneberger et al. [45]. In total, five convolution groups construct the encoder and four deconvolution groups construct the decoder. The input image size was 64 × 64. The network output was segmentation masks for three classes: myocardium, cardiac jelly, and lumen. Weighted cross-entropy [45] was used as the loss function to mitigate label imbalance. The model was trained with the Adam optimizer [71] at a learning rate of 1e-3 on an Nvidia Tesla P100 GPU (NVIDIA Corporation, Santa Clara, California, USA). The number of training epochs was selected using early stopping on the validation set.
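As a sketch of this training setup, a per-class weighted cross-entropy can be written directly in Keras. Here build_unet is a hypothetical stand-in for the architecture described above, and the class weights shown are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf

def weighted_cce(class_weights):
    """Per-class weighted categorical cross-entropy, in the spirit of [45]."""
    w = tf.constant(class_weights, dtype=tf.float32)
    def loss(y_true, y_pred):
        # y_true: one-hot masks (B, H, W, C); y_pred: softmax probabilities.
        cce = -tf.reduce_sum(y_true * tf.math.log(y_pred + 1e-7), axis=-1)
        pix_w = tf.reduce_sum(y_true * w, axis=-1)  # weight of each pixel's class
        return tf.reduce_mean(pix_w * cce)
    return loss

# `build_unet` is hypothetical; 4 classes = 3 structures + background (assumed).
model = build_unet(input_shape=(64, 64, 1), n_classes=4)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=weighted_cce([0.5, 1.0, 1.0, 1.0]))  # illustrative weights

# Early stopping on the validation split selects the number of epochs.
early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                         restore_best_weights=True)
```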

2.2.4 Post-processing

Post-processing steps were performed to reduce U-Net prediction errors. First, small holes in the segmentation masks were filled using morphological operations. Then all 2-D segmentation masks were reassembled into a 4-D mask, which was smoothed in both the spatial and temporal dimensions using an average filter (kernel size 3).
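One possible realization of these two steps with SciPy is sketched below; the per-class treatment and the 0.5 threshold after averaging are assumptions about details the text leaves open.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes, uniform_filter

def postprocess(mask_4d):
    """Fill holes per class slice-by-slice, then average-filter the 4-D mask.

    mask_4d: integer label array of shape (T, Z, Y, X); 0 = background.
    """
    out = np.zeros_like(mask_4d)
    for c in np.unique(mask_4d):
        if c == 0:
            continue
        binary = (mask_4d == c)
        # Fill small holes in each 2-D slice (the U-Net predicts 2-D masks).
        for t in range(binary.shape[0]):
            for z in range(binary.shape[1]):
                binary[t, z] = binary_fill_holes(binary[t, z])
        # Average filter (kernel size 3) over spatial and temporal axes,
        # followed by a 0.5 threshold (an assumed binarization choice).
        smoothed = uniform_filter(binary.astype(float), size=3)
        out[smoothed > 0.5] = c
    return out
```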

2.3 Segmentation evaluation

The segmentation results were compared with manual ground truth and evaluated using the Accuracy score, the Dice score, and the boundary F1 score (BFScore) defined in the following equations:

$$Accuracy = \frac{|Pred \cap GT|}{|GT|}$$
$$Dice = \frac{2|Pred \cap GT|}{|Pred| + |GT|}$$
$$BFScore = \frac{2 \times precision \times recall}{precision + recall}$$
where Pred denotes the segmentation mask predicted by the U-Net and GT is the manual ground-truth mask. The BFScore is the harmonic mean of the precision and recall between the Pred boundaries and the GT boundaries.
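Accuracy and Dice (Eqs. (4) and (5)) are simple to compute from binary masks; the sketch below handles a single class (the BFScore additionally requires extracting boundaries and matching them within a distance tolerance, which is omitted here).

```python
import numpy as np

def accuracy(pred, gt):
    """Eq. (4): fraction of ground-truth voxels recovered by the prediction."""
    return np.sum(pred & gt) / np.sum(gt)

def dice(pred, gt):
    """Eq. (5): 2|Pred ∩ GT| / (|Pred| + |GT|)."""
    return 2 * np.sum(pred & gt) / (np.sum(pred) + np.sum(gt))

# Toy example with binary masks for one class:
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)
print(accuracy(pred, gt), dice(pred, gt))  # 0.75, ~0.857
```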

3. Results

We proposed a deep learning-based pipeline to segment individual 4-D OCT datasets of beating embryonic hearts. For each 4-D OCT dataset, we first used a registration network, VoxelMorph [63], to synthesize additional labeled exemplars from multiple manually annotated key-volumes, overcoming the problem of limited training data. The synthesized exemplars were then used to train an end-to-end fully convolutional network, U-Net [45], to segment the heart structures in the unlabeled images of the dataset.

We compared the proposed strategy with two baseline methods using only VoxelMorph or only U-Net. The VoxelMorph-only baseline was a registration-based segmentation: the labels of the key-volumes were propagated to their adjacent volumes by registration. In the U-Net-only baseline, the U-Net was trained with only the labeled key-volumes. These methods were applied to three 4-D OCT datasets obtained from three beating embryonic hearts. Table 1 and Fig. 4 show the segmentation performance of the three methods for different numbers of labeled key-volumes. As expected, the performance of all three methods improved as the number of key-volumes increased. The proposed pipeline combining VoxelMorph and U-Net outperformed both baselines, and the improvements are significant: p (VoxelMorph vs. proposed, all three metrics) < 0.01 and p (U-Net vs. proposed, all three metrics) < 0.05, where p is the p-value from a paired t-test. The improvement of the proposed strategy was most pronounced when only one or two labeled key-volumes were available (Fig. 4).
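A paired t-test of this kind can be computed with SciPy; the per-sample score arrays below are hypothetical placeholders, not the values underlying the reported p-values.

```python
from scipy.stats import ttest_rel

# Hypothetical per-sample Dice scores for two methods (illustrative only).
dice_baseline = [0.85, 0.83, 0.84]
dice_proposed = [0.90, 0.89, 0.91]

t_stat, p_value = ttest_rel(dice_baseline, dice_proposed)
print(f"paired t-test: p = {p_value:.4f}")
```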


Fig. 4. Segmentation performance of three methods versus the number of available labeled key-volumes (the average of 3 heart samples). The three methods were the two baseline methods using only VoxelMorph or U-Net, and the proposed combined strategy using VoxelMorph and U-Net. Mean Accuracy and Mean Dice are the average Accuracy and Dice scores of all classes in all images of all 3 heart samples. Mean BFScore is the boundary F1 contour matching score between the predicted segmentation and the ground truth segmentation of all 3 heart samples.


Table 1. Comparison of three methods: VoxelMorph only, U-Net only, and the proposed combined strategy, respectively on 3 individual embryonic hearts. Mean Accuracy and Mean Dice are the average Accuracy and Dice scores of all classes in all images of one heart sample. Mean BFScore is the boundary F1 contour matching score between the predicted segmentation and the ground truth segmentation.a

Exemplar segmentation results for the three hearts are shown in Fig. 5. For each heart dataset, we sorted the segmentation results of individual images by their Dice scores to show the full range of result quality. Nine images ranking from the 10th (best) to the 90th (worst) percentile are shown in Fig. 5, with their Dice scores labeled at the bottom right (white text). Two key-volumes (N = 2) worked well for all three hearts (Fig. 5), and the results could be further improved by increasing N to 4 (Fig. 5). Figure 6 shows 3-D reconstructions of the segmented myocardium, cardiac jelly, and lumen of one heart at three time points. The 3-D renderings were created using Amira (Thermo Fisher Scientific, MA, USA).


Fig. 5. Segmentation results using the proposed method demonstrated on three heart samples with the Dice score ranking from top 10% to top 90% of all test images. N is the number of available key-volumes. The blue, green, and red contours are the myocardium outer boundary, myocardium inner boundary, and lumen boundary, respectively.


Fig. 6. 3-D reconstructions of extracted myocardium, cardiac jelly, and lumen from a beating heart at three time phases. The first row shows the volume rendering results. The second row shows the surface rendering of the extracted lumens overlaid on cross-sectional OCT images. See Visualization 1 (MP4, 3.22 MB) for the shape changes of segmented cardiac structures during heart beating.


4. Discussion

Segmentation of cardiac structures is an important step in quantifying cardiac motion and function in a dynamic embryonic heart using OCT. This study proposed a deep learning-based framework to facilitate embryonic heart structure segmentation from a 4-D OCT dataset. By using a registration network to synthesize abundant training exemplars, the method achieved high segmentation accuracy with limited manually labeled images (Table 1 and Fig. 4). Although more manually labeled key-volumes provided higher segmentation accuracy, good segmentation could be achieved with only two labeled key-volumes (Fig. 5).

An advantage of the proposed framework is its robustness to variation across heart samples. Unlike the method of Ref. [29], which limits segmentation to cross-sectional images of the heart tube, the method presented here can analyze images from any view of the embryonic heart (Fig. 5). Additionally, although it was demonstrated on embryonic hearts at HH stage 15, the proposed strategy can be applied to other stages of development. This procedure may also potentially be applied to the segmentation of moving organs in other animal models acquired with different imaging modalities.

With the proposed strategy, the manual labor and time cost of extracting heart structures from a 4-D OCT dataset were greatly reduced. Compared with fully manual segmentation, which usually takes more than a week for a whole 4-D OCT dataset, the proposed strategy can take as little as 2.5 hours (2 hours for manual segmentation of two key-volumes, 25 minutes for training VoxelMorph, and 5 minutes for training U-Net).

One limitation of this method is that it is not fully automatic. For every 4-D OCT image dataset, at least two image volumes have to be manually segmented. Building a fully automatic model that is general enough to cover all possible cardiac shapes over heart samples is not practical in this study due to the limited dataset. If abundant heart samples are imaged in the future, the proposed strategy in this study could be used to facilitate the segmentation of the obtained datasets, resulting in labeled exemplars consisting of various cardiac shapes across heart samples. As a result, a general segmentation network covering varied cardiac shapes over heart samples could potentially be trained to achieve a fully automatic segmentation.

5. Conclusion

In conclusion, the proposed framework provides efficient and accurate segmentation of looping embryonic heart structures from a 4-D OCT dataset with limited manually labeled images, offering significant potential to facilitate quantitative analysis of cardiac motion and function in developing hearts. For example, the segmented heart structures can be used as input to a computational fluid dynamics model for analyzing cardiac hemodynamics and shear stress. In addition, the reduced time cost enables future studies investigating the effects of teratogens, such as ethanol or nicotine, on heart development, which are impractical with manual segmentation because of the large number of samples required for comparison across multiple cohorts.

Funding

National Institutes of Health (R01EY028667, R01HL126747).

Acknowledgments

Research reported in this publication was supported by the National Institutes of Health Grants R01HL126747 and R01EY028667. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Wu, J. He, and X. Shao, “Incidence and mortality trend of congenital heart disease at the global, regional, and national level, 1990-2017,” Medicine 99, e20593 (2020). [CrossRef]  

2. GBD 2017 Congenital Heart Disease Collaborators, “Global, regional, and national burden of congenital heart disease, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017,” Lancet Child Adolesc Health 4(3), 185–200 (2020). [CrossRef]  

3. M. E. Oster, K. A. Lee, M. A. Honein, T. Riehle-Colarusso, M. Shin, and A. Correa, “Temporal trends in survival among infants with critical congenital heart defects,” Pediatrics 131(5), e1502–e1508 (2013). [CrossRef]  

4. T. Brade, L. S. Pane, A. Moretti, K. R. Chien, and K. L. Laugwitz, “Embryonic heart progenitors and cardiogenesis,” Cold Spring Harbor Perspect. Med. 3(10), a013847 (2013). [CrossRef]  

5. R. R. Markwald, T. P. Fitzharris, and F. J. Manasek, “Structural development of endocardial cushions,” Am. J. Anat. 148(1), 85–119 (1977). [CrossRef]  

6. A. D. Person, S. E. Klewer, and R. B. Runyan, Cell Biology of Cardiac Cushion Development (Academic Press Inc., 2005), Vol. 243, pp. 287–335.

7. L. E. Briggs, J. Kakarla, and A. Wessels, “The pathogenesis of atrial and atrioventricular septal defects with special emphasis on the role of the dorsal mesenchymal protrusion,” Differentiation 84(1), 117–130 (2012). [CrossRef]  

8. A. Fischer, C. Steidl, T. U. Wagner, E. Lang, P. M. Jakob, P. Friedl, K. P. Knobeloch, and M. Gessler, “Combined loss of Hey1 and HeyL causes congenital heart defects because of impaired epithelial to mesenchymal transition,” Circ. Res. 100(6), 856–863 (2007). [CrossRef]  

9. S. M. Ford, M. T. McPheeters, Y. T. Wang, P. Ma, S. Gu, J. Strainic, C. Snyder, A. M. Rollins, M. Watanabe, and M. W. Jenkins, “Increased regurgitant flow causes endocardial cushion defects in an avian embryonic model of congenital heart disease,” Congenit Heart Dis 12, 322–331 (2017). [CrossRef]  

10. G. Karunamuni, S. Gu, Y. Q. Doughman, L. M. Peterson, K. Mai, Q. McHale, M. W. Jenkins, K. K. Linask, A. M. Rollins, and M. Watanabe, “Ethanol exposure alters early cardiac function in the looping heart: a mechanism for congenital heart defects?” American Journal of Physiology-Heart and Circulatory Physiology 306(3), H414–H421 (2014). [CrossRef]

11. G. H. Karunamuni, S. Gu, M. R. Ford, L. M. Peterson, P. Ma, Y. T. Wang, A. M. Rollins, M. W. Jenkins, and M. Watanabe, “Capturing structure and function in an embryonic heart with biophotonic tools,” Front Physiol 5, 351 (2014). [CrossRef]  

12. R. E. Poelmann, A. C. Gittenberger-de Groot, and B. P. Hierck, “The development of the heart and microcirculation: role of shear stress,” Med. Biol. Eng. Comput. 46(5), 479–484 (2008). [CrossRef]  

13. N. Azuma, S. A. Duzgun, M. Ikeda, H. Kito, N. Akasaka, T. Sasajima, and B. E. Sumpio, “Endothelial cell response to different mechanical forces,” J. Vasc. Surg. 32(4), 789–794 (2000). [CrossRef]  

14. R. J. Dekker, S. Van Soest, R. D. Fontijn, S. Salamanca, P. G. De Groot, E. VanBavel, H. Pannekoek, and A. J. G. Horrevoets, “Prolonged fluid shear stress induces a distinct set of endothelial cell genes, most specifically lung Krüppel-like factor (KLF2),” Blood 100(5), 1689–1698 (2002). [CrossRef]  

15. B. C. W. Groenendijk, K. Van Der Heiden, B. P. Hierck, and R. E. Poelmann, “The role of shear stress on ET-1, KLF2, and NOS-3 expression in the developing cardiovascular system of chicken embryos in a venous ligation model,” Physiology 22(6), 380–389 (2007). [CrossRef]  

16. K. Yashiro, H. Shiratori, and H. Hamada, “Haemodynamics determined by a genetic programme govern asymmetric development of the aortic arch,” Nature 450(7167), 285–288 (2007). [CrossRef]  

17. B. Hogers, M. C. DeRuiter, A. C. Gittenberger-de Groot, and R. E. Poelmann, “Extraembryonic venous obstructions lead to cardiovascular malformations and can be embryolethal,” Cardiovasc Res 41(1), 87–99 (1999). [CrossRef]  

18. P. Basu, P. E. Morris, J. L. Haar, M. A. Wani, J. B. Lingrel, K. M. L. Gaensler, and J. A. Lloyd, “KLF2 is essential for primitive erythropoiesis and regulates the human and murine embryonic beta-like globin genes in vivo,” Blood 106(7), 2566–2571 (2005). [CrossRef]  

19. G. B. Atkins and M. K. Jain, “Role of Krüppel-like transcription factors in endothelial biology,” Circ. Res. 100(12), 1686–1695 (2007). [CrossRef]  

20. B. A. Filas, I. R. Efimov, and L. A. Taber, “Optical coherence tomography as a tool for measuring morphogenetic deformation of the looping heart,” Anat Rec 290(9), 1057–1068 (2007). [CrossRef]  

21. P. Li, X. Yin, L. Shi, A. Liu, S. Rugonyi, and R. K. Wang, “Measurement of strain and strain rate in embryonic chick heart in vivo using spectral domain optical coherence tomography,” IEEE Trans. Biomed. Eng. 58(8), 2333–2338 (2011). [CrossRef]  

22. P. Li, R. K. Wang, X. Yin, L. Shi, and S. Rugonyi, “In vivo functional imaging of blood flow and wall strain rate in outflow tract of embryonic chick heart using ultrafast spectral domain optical coherence tomography,” J. Biomed. Opt. 17(9), 0960061 (2012). [CrossRef]  

23. C. Guo, J. Liu, Q. Wang, R. K. Wang, S. Dou, T. Xu, Y. Wang, Y. Zhao, and Z. Ma, “In vivo assessment of wall strain in embryonic chick heart by spectral domain optical coherence tomography,” Appl. Opt. 54(31), 9253–9257 (2015). [CrossRef]  

24. A. Davis, J. Izatt, and F. Rothenberg, “Quantitative Measurement of Blood Flow Dynamics in Embryonic Vasculature Using Spectral Doppler Velocimetry,” Anat Rec 292(3), 311–319 (2009). [CrossRef]  

25. M. W. Jenkins, L. Peterson, S. Gu, M. Gargesha, D. L. Wilson, M. Watanabe, and A. M. Rollins, “Measuring hemodynamics in the developing heart tube with four-dimensional gated Doppler optical coherence tomography,” J. Biomed. Opt. 15(06), 066022 (2010). [CrossRef]  

26. L. M. Peterson, M. W. Jenkins, S. Gu, L. Barwick, M. Watanabe, and A. M. Rollins, “4D shear stress maps of the developing heart using Doppler optical coherence tomography,” Biomed. Opt. Express 3(11), 3022 (2012). [CrossRef]  

27. A. Liu, X. Yin, L. Shi, P. Li, K. L. Thornburg, R. Wang, and S. Rugonyi, “Biomechanics of the chick embryonic heart outflow tract at HH18 using 4D optical coherence tomography imaging and computational modeling,” PLoS One 7, 1 (2012).

28. S. Elahi, S. Gu, A. M. Rollins, and M. W. Jenkins, “Semi-automated measurement of absolute blood velocity and shear stress in developing embryonic hearts using a MHz FDML swept laser source (Conference Presentation),” in Diagnosis and Treatment of Diseases in the Breast and Reproductive System IV, M. C. Skala and P. J. Campagnola, eds. (SPIE, 2018), Vol. 10472, p. 24.

29. X. Yin, A. Liu, K. L. Thornburg, R. K. Wang, and S. Rugonyi, “Extracting cardiac shapes and motion of the chick embryo heart outflow tract from four-dimensional optical coherence tomography images,” J. Biomed. Opt. 17(9), 1 (2012). [CrossRef]  

30. S. Goenezen, V. K. Chivukula, M. Midgett, L. Phan, and S. Rugonyi, “4D Subject-Specific Inverse Modeling of the Chick Embryonic Heart Outflow Tract Hemodynamics,” Biomech Model Mechanobiol 15(3), 723–743 (2016). [CrossRef]  

31. M. Midgett, V. K. Chivukula, C. Dorn, S. Wallace, and S. Rugonyi, “Blood flow through the embryonic heart outflow tract during cardiac looping in HH13-HH18 chicken embryos,” J. R. Soc. Interface. 12(111), 20150652 (2015). [CrossRef]  

32. K. Courchaine, M. J. Gray, K. Beel, K. Thornburg, and S. Rugonyi, “4-D Computational Modeling of Cardiac Outflow Tract Hemodynamics over Looping Developmental Stages in Chicken Embryos,” J Cardiovasc Dev Dis 6(1), 11 (2019). [CrossRef]  

33. J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986). [CrossRef]  

34. J. S. Lim, Two-Dimensional Signal and Image Processing (Englewood Cliffs, 1990).

35. R. Adams and L. Bischof, “Seeded Region Growing,” IEEE Trans Pattern Anal Mach Intell 16(6), 641–647 (1994). [CrossRef]  

36. T. N. Pappas and N. S. Jayant, “Adaptive clustering algorithm for image segmentation,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing 3, 1667–1670 (1989).

37. S. Naz, H. Majeed, and H. Irshad, “Image segmentation using fuzzy clustering: A survey,” in 6th International Conference on Emerging Technologies, ICET 2010, 181–186 (2010).

38. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” Int J Comput Vis 1(4), 321–331 (1988). [CrossRef]  

39. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. on Image Process. 10(2), 266–277 (2001). [CrossRef]  

40. T. F. Cooles, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Trans. Pattern Anal. Machine Intell. 23(6), 681–685 (2001). [CrossRef]  

41. V. Tavakoli and A. A. Amini, “A survey of shaped-based registration and segmentation techniques for cardiac images,” Computer Vision and Image Understanding 117(9), 966–989 (2013). [CrossRef]  

42. J. E. Iglesias and M. R. Sabuncu, “Multi-atlas segmentation of biomedical images: A survey,” Med. Image Anal. 24(1), 205–219 (2015). [CrossRef]  

43. P. Aljabar, R. A. Heckemann, A. Hammers, J. V. Hajnal, and D. Rueckert, “Multi-atlas based segmentation of brain images: Atlas selection and its effect on accuracy,” NeuroImage 46(3), 726–738 (2009). [CrossRef]  

44. H. A. Kirişli, M. Schaap, S. Klein, S. L. Papadopoulou, M. Bonardi, C. H. Chen, A. C. Weustink, N. R. Mollet, E. J. Vonken, R. J. Van Der Geest, T. Van Walsum, and W. J. Niessen, “Evaluation of a multi-atlas based method for segmentation of cardiac CTA data: a large-scale, multicenter, and multivendor study,” Med. Phys. 37(12), 6279–6291 (2010). [CrossRef]  

45. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241.

46. F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation,” in Fourth International Conference on 3D Vision (3DV). IEEE (Institute of Electrical and Electronics Engineers Inc., 2016), pp. 565–571.

47. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 9901, 424–432 (2016).

48. O. Oktay, J. Schlemper, L. Le Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, B. Glocker, and D. Rueckert, “Attention U-Net: Learning Where to Look for the Pancreas,” arXiv:1804.03999 (2018). [CrossRef]

49. W. Chen, B. Liu, S. Peng, J. Sun, and X. Qiao, “S3D-UNET: Separable 3D U-Net for brain tumor segmentation,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11384, 358–368 (2019).

50. N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, “U-net and its variants for medical image segmentation: A review of theory and applications,” IEEE Access 9, 82031–82057 (2021). [CrossRef]  

51. J. Hong, B. Y. Park, and H. Park, “Convolutional neural network classifier for distinguishing Barrett’s esophagus and neoplasia endomicroscopy images,” in 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2892–2895 (2017).

52. F. Perez, C. Vasconcelos, S. Avila, and E. Valle, “Data Augmentation for Skin Lesion Analysis,” in OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, 303–311 (2018).

53. P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” in Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), 958–963 (2003).

54. H. R. Roth, C. T. Lee, H. C. Shin, A. Seff, L. Kim, J. Yao, L. Lu, and R. M. Summers, “Anatomy-specific classification of medical images using deep convolutional nets,” in International Symposium on Biomedical Imaging (IEEE Computer Society, 2015), Vol. 2015-July, pp. 101–104.

55. A. Dosovitskiy, P. Fischer, J. T. Springenberg, M. Riedmiller, and T. Brox, “Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(9), 1734–1747 (2014). [CrossRef]  

56. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NIPS 2014) (2014), pp. 2672–2680.

57. V. Sandfort, K. Yan, P. J. Pickhardt, and R. M. Summers, “Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks,” Sci. Rep. 9(1), 1–9 (2019). [CrossRef]  

58. M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “Synthetic data augmentation using GAN for improved liver lesion classification,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) (IEEE Computer Society, 2018), Vol. 2018-April, pp. 289–293.

59. M. Frid-Adar, I. Diamant, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification,” Neurocomputing 321, 321–331 (2018). [CrossRef]  

60. P. Chlap, H. Min, N. Vandenberg, J. Dowling, L. Holloway, and A. Haworth, “A review of medical image data augmentation techniques for deep learning applications,” J Med Imag Rad Onc 65(5), 545–563 (2021). [CrossRef]

61. J. Nalepa, M. Cwiek, W. Dudzik, M. Kawulok, G. Mrukwa, S. Piechaczek, P. R. Lorenzo, M. Marcinkiewicz, B. Bobek-Billewicz, P. Wawrzyniak, P. Ulrych, J. Szymanek, and M. P. Hayball, “Data augmentation via image Registration,” in 2019 IEEE International Conference on Image Processing (ICIP) (IEEE Computer Society, 2019), Vol. 2019-Septe, pp. 4250–4254.

62. A. Zhao, G. Balakrishnan, F. Durand, J. V Guttag, and A. V Dalca, “Data augmentation using learned transformations for one-shot medical image segmentation,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 8543–8553.

63. G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca, “VoxelMorph: a learning framework for deformable medical image registration,” IEEE Trans. Med. Imaging 38(8), 1788–1800 (2018). [CrossRef]  

64. B. E. Dunn, “Technique for Shell-Less Culture of the 72-Hour Avian Embryo,” Poult Sci 53(1), 409–412 (1974). [CrossRef]  

65. V. Hamburger and H. L. Hamilton, “A series of normal stages in the development of the chick embryo,” Dev. Dyn. 195(4), 231–272 (1992). [CrossRef]  

66. M. W. Jenkins, M. Watanabe, and A. M. Rollins, “Longitudinal imaging of heart development with optical coherence tomography,” IEEE J. Select. Topics Quantum Electron. 18(3), 1166–1175 (2012). [CrossRef]  

67. Z. Hu and A. M. Rollins, “Fourier domain optical coherence tomography with a linear-in-wavenumber spectrometer,” Opt. Lett. 32(24), 3525 (2007). [CrossRef]  

68. G. Karunamuni, S. Gu, Y. Q. Doughman, A. I. Noonan, A. M. Rollins, M. W. Jenkins, and M. Watanabe, “Using optical coherence tomography to rapidly phenotype and quantify congenital heart defects associated with prenatal alcohol exposure: OCT of Heart Defects Associated with Alcohol,” Dev. Dyn. 244(4), 607–618 (2015). [CrossRef]  

69. M. Gargesha, M. W. Jenkins, D. L. Wilson, and A. M. Rollins, “High temporal resolution OCT using image-based retrospective gating,” Opt. Express 17(13), 10786 (2009). [CrossRef]  

70. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, B. Steiner, D. G. Murray, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (2016).

71. D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015 (International Conference on Learning Representations, ICLR, 2015).

Supplementary Material (1)

Visualization 1: A video of 3-D reconstructions of extracted myocardium, cardiac jelly, and lumen from a beating embryonic heart at HH 14.
