
An open-source toolkit for the volumetric measurement of CT lung lesions

Open Access

Abstract

An open source lesion sizing toolkit has been developed with a general architecture for implementing lesion segmentation algorithms and a reference algorithm for segmenting solid and part-solid lesions from lung CT scans. The CT lung lesion segmentation algorithm detects four three-dimensional features corresponding to the lung wall, vasculature, lesion boundary edges, and low density background lung parenchyma. These features form boundaries and propagation zones that guide the evolution of a subsequent level set algorithm. User input is used to determine an initial seed point for the level set and users may also define a region of interest around the lesion. The methods are validated against 18 nodules using CT scans of an anthropomorphic thorax phantom simulating lung anatomy. The scans were acquired under differing scanner parameters to characterize algorithm behavior under varying acquisition protocols. We also validated repeatability using six clinical cases in which the patient was rescanned on the same day (zero volume change). The source code, data sets, and a running application are all provided under an unrestrictive license to encourage reproducibility and foster scientific exchange.

©2010 Optical Society of America

Data sets associated with this article are available at http://hdl.handle.net/10376/1552. Links such as View 1 that appear in figure captions and elsewhere will launch custom data views if ISP software is present.

1. Introduction

Computed Tomography (CT) is the preferred modality for the detection of lung cancer lesions. Quantifying tumor size and size change over time is necessary to evaluate the effectiveness of a drug. Current practice uses a low-dimensional surrogate for tumor volume such as the bi-dimensional WHO criteria [1] or the uni-dimensional RECIST measurement approach [2] [3], with RECIST being the currently preferred approach. These methods do not fully evaluate the volume of lesions and instead use a manually measured lesion diameter or diameters. The advent of high resolution CT lung scanning offers the opportunity to perform volumetric segmentation of lung nodules, thereby allowing for a more accurate quantification of changes in full tumor burden over time in response to a drug regimen.

In this paper we describe the software architecture available in the Lesion Sizing Toolkit [4] as well as the reference algorithm for CT lung lesion sizing provided with the toolkit. Performance of the CT lung lesion sizing algorithm was evaluated on synthetic phantom data as well as on clinical data. The paper is organized into five sections. Section 2 describes the general architecture of the toolkit. Sections 3 and 4 summarize the segmentation methods employed for CT lung lesion segmentation. Section 5 describes the validation of the methods on synthetic phantoms having known volumetric ground truth as well as results from a small set of “zero change” clinical data. Finally, Section 6 describes the implementation of the toolkit in ISP, a data visualization application developed to support interactive science publishing and distributed by the Optical Society of America. ISP is provided as a free viewer for exploring and visualizing ISP data sets, thus readers are able to apply the CT lung lesion sizing algorithm to all data sets within this ISP paper as well as all other data sets provided within ISP publications.

2. Software Architecture

The Lesion Sizing Toolkit [4] [5] provides a cross platform, C++ software architecture based on the National Library of Medicine’s Insight Segmentation and Registration Toolkit (ITK) [6] and the Visualization Toolkit (VTK) [7]. The main goal of the software architecture is to support a wide range of lesion sizing algorithms and methods with a set of modular software elements that can easily be modified or replaced. The toolkit framework consists of four software elements that plug into a main “Segmenter” class:

  • - Feature Generator: This generator produces one or more “features” to guide the segmentation and thereby prevent the segmentation from bleeding into non-lesion regions. Features can represent propagation regions, exclusion zones, or boundaries between lesion and non-lesion areas. With the exception of the threshold feature, the feature generators currently provided with the toolkit are boundary-type generators and include generators to detect the boundary between lesions and the lung wall, between lesions and vessels and airways, and basic strong image edges.
  • - Feature Aggregator: Conceptually, each feature image represents the likelihood of a particular voxel being part of the lesion from a single perspective. The feature aggregator defines how multiple features are combined to form a single aggregate feature that can be passed into the segmentation module.
  • - Segmentation Module: The segmentation module runs a segmentation algorithm on the input image under guidance from the feature image. A variety of segmentation methods are provided including level sets, geodesic active contours, and region growing.
  • - Spatial Initialization: Spatial initializations help initialize or constrain a segmentation algorithm. The toolkit supports the specification of one or more starting seed points or regions to be used to initialize the segmentation. Additionally, a bounding box or region of interest can be defined to contain the resulting segmentation.

Figure 1 illustrates the general architecture of the toolkit. A segmentation module, such as a level set algorithm, is typically initialized by a spatial seed point and guided by at least one image based feature. This modular architecture, similar to the registration framework of ITK [6], supports rapid evaluation of different algorithmic components of a segmentation algorithm.
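To make the plug-in structure concrete, the following C++ sketch re-expresses the four roles as minimal interfaces wired together by a Segmenter class. The class and method names here are illustrative stand-ins, not the toolkit's actual API; consult the Lesion Sizing Toolkit source for the real class names.

```cpp
#include <array>
#include <memory>
#include <vector>

// Placeholder image type standing in for itk::Image; illustrative only.
struct Image {};

// A feature generator turns the input CT image into one feature image
// (e.g. lung wall, vesselness, Canny edges, intensity).
struct FeatureGenerator {
  virtual ~FeatureGenerator() = default;
  virtual Image generate(const Image& ct) const = 0;
};

// A feature aggregator combines several feature images into one
// (e.g. voxel-wise minimum or weighted sum).
struct FeatureAggregator {
  virtual ~FeatureAggregator() = default;
  virtual Image aggregate(const std::vector<Image>& features) const = 0;
};

// A segmentation module evolves a segmentation from seeds under the
// guidance of the aggregate feature image (e.g. fast marching + level set).
struct SegmentationModule {
  virtual ~SegmentationModule() = default;
  virtual Image segment(const Image& aggregateFeature,
                        const std::vector<std::array<double, 3>>& seeds) const = 0;
};

// The "Segmenter" wires the pieces together, mirroring Fig. 1.
// It assumes an aggregator and a segmentation module have been set.
class Segmenter {
public:
  void addFeatureGenerator(std::unique_ptr<FeatureGenerator> g) { generators_.push_back(std::move(g)); }
  void setAggregator(std::unique_ptr<FeatureAggregator> a) { aggregator_ = std::move(a); }
  void setSegmentationModule(std::unique_ptr<SegmentationModule> m) { module_ = std::move(m); }

  Image run(const Image& ct, const std::vector<std::array<double, 3>>& seeds) const {
    std::vector<Image> features;
    for (const auto& g : generators_) features.push_back(g->generate(ct));
    return module_->segment(aggregator_->aggregate(features), seeds);
  }

private:
  std::vector<std::unique_ptr<FeatureGenerator>> generators_;
  std::unique_ptr<FeatureAggregator> aggregator_;
  std::unique_ptr<SegmentationModule> module_;
};
```

Because each role is an interface, swapping in a new feature generator or a different segmentation module does not require changes to the rest of the pipeline.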

Fig. 1 Toolkit architecture.

The following is a list of feature generators, feature aggregators, and segmentation modules currently available with the toolkit:

Feature Generators:

  • - Descoteaux Sheetness feature [8]
  • - Sato Vesselness feature [9]
  • - Frangi tubularness feature [10]
  • - Gradient Magnitude Edge feature
  • - Canny Edge feature [11]
  • - Lung wall feature
  • - Intensity feature
Feature Aggregators:
  • - Minimum feature aggregator
  • - Weighted sum feature aggregator
Segmentation Modules:

  • - Connected Threshold region growing
  • - Confidence connected region growing
  • - Isolated connected region growing
  • - Iterative connected region growing
  • - Fast Marching level set [12]
  • - Shape detection level set [13]
  • - Geodesic active contour level set [14]

One of the goals of the lesion sizing toolkit is to encourage and support the development of well-designed lesion sizing algorithms. For example, the modular architecture encourages developers to apply all feature generators to the specified region of interest and to use the feature aggregator to appropriately determine the weighting of features for a given region. This approach encourages developers to maintain consistency of applied features and avoid embedding complex situational logic into lesion segmentation algorithms.

3. CT Lung Lesion Segmentation Algorithm

Lung nodules exhibit a wide range of morphologies and shapes. They may be solid, part-solid (solid nodules surrounded by a diffuse region), or non-solid. Solid nodules have a CT density similar to that of soft tissue, while non-solid nodules have densities that vary between those of air and soft tissue. Nodules are often attached to vessels and can be attached to the lung wall or the mediastinum. The ability to define and combine an unlimited number of feature generators within the toolkit was designed to support the wide diversity of potential lesion presentations observed in the lung and other organs.

The main algorithm used for segmentation is a region evolution algorithm. In general, this type of algorithm uses a set of image locations, or seeds – either user-chosen or automatically determined – to define an initial set of locations in the lesion. From this initial small set, the candidate lesion area propagates based on characteristics of the underlying image. In the lesion sizing toolkit this takes the form of a level set algorithm. The level set approach is iterative: during an iteration, the boundary between lesion and non-lesion moves outward by an amount proportional to the distance of the current front from one of the feature boundaries. The feature boundaries are locations where the propagation distance for the next iteration goes to zero; they therefore retard the propagation and curtail lesion boundary growth. Our current implementation uses an initial user-chosen seed and four image features to define areas that region growing should avoid. These features are: (i) the lung wall, (ii) the lung vasculature, (iii) the boundary between the lesion and the background lung parenchyma, and (iv) the low CT density of air. Together, these features allow propagation of the segmentation front within the intensity thresholds specified by the intensity feature while deterring propagation into the lung wall, into vessels, or across sharp edges. With a modular architecture, the addition of new features to specifically account for more complex anatomical areas, such as the hilum, is easily supported. The CT data set is preprocessed by resampling it to an isotropic resolution matching the smallest voxel spacing.
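The sketch below shows one way to implement the isotropic-resampling preprocessing step with standard ITK filters; the choice of linear interpolation and the default pixel value are assumptions rather than documented toolkit settings.

```cpp
#include "itkImage.h"
#include "itkResampleImageFilter.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkIdentityTransform.h"
#include <algorithm>

using CTImage = itk::Image<short, 3>;

// Resample a CT volume so that all three spacings equal the smallest input spacing.
CTImage::Pointer ResampleToIsotropic(CTImage::Pointer input)
{
  const CTImage::SpacingType inSpacing = input->GetSpacing();
  const CTImage::SizeType inSize = input->GetLargestPossibleRegion().GetSize();

  // Target spacing: the smallest of the three input spacings.
  const double iso = std::min({inSpacing[0], inSpacing[1], inSpacing[2]});

  CTImage::SpacingType outSpacing;
  CTImage::SizeType outSize;
  for (unsigned int d = 0; d < 3; ++d)
  {
    outSpacing[d] = iso;
    outSize[d] = static_cast<CTImage::SizeType::SizeValueType>(
        inSize[d] * inSpacing[d] / iso + 0.5);
  }

  auto resampler = itk::ResampleImageFilter<CTImage, CTImage>::New();
  resampler->SetInput(input);
  resampler->SetTransform(itk::IdentityTransform<double, 3>::New());
  resampler->SetInterpolator(
      itk::LinearInterpolateImageFunction<CTImage, double>::New());
  resampler->SetOutputSpacing(outSpacing);
  resampler->SetSize(outSize);
  resampler->SetOutputOrigin(input->GetOrigin());
  resampler->SetOutputDirection(input->GetDirection());
  resampler->SetDefaultPixelValue(-1024); // air outside the field of view (assumed)
  resampler->Update();
  return resampler->GetOutput();
}
```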

3.1 Feature Generation

Next we describe the four features that are computed by the algorithm: lung wall, vesselness, gradient, and intensity.

3.1.1 Lung wall feature

The lung wall is characterized by a generally higher intensity than the lung parenchyma and by a smooth, locally consistent curvature. Nodules impinging on the lung wall can have intensities similar to the wall itself and can be indistinguishable from it on the basis of intensity alone; however, they do interrupt the local curvature. The lung wall feature generator extracts the lung wall surface by first localizing the wall with intensity thresholding and then enforcing the local curvature with a curvature-constrained front propagation. Specifically, the data set is binarized so as to retain voxels above a threshold. The intensity difference between parenchyma and lung wall is large, so the exact threshold used, −400 Hounsfield Units (HU), is not critical. We then apply an iterative hole filling algorithm to the resulting binary image. The hole filling operation applies a quorum (voting) rule at every voxel to decide whether that voxel will be filled. At every iteration, for voxels near the lung boundary, the decision to keep the current classification or to change it is determined by a vote of the neighboring voxels. A super-majority can be required, which is essentially equivalent to changing the maximum curvature allowed in the final binary segmentation. The quorum is computed among voxels inside a 7 × 7 × 7 neighborhood surrounding the voxel under consideration. The default majority value used to define the quorum is 1 (the lowest possible curvature).

The voting-based hole-filling algorithm shown in Algorithm 1 is run iteratively until no more voxels change from background to foreground. Its effect on the front is that cavities whose surfaces have a high Gaussian curvature are filled in until they are reduced to cavities with small Gaussian curvature. Protrusions on the surface are not affected, since the algorithm only converts background voxels into foreground voxels and never moves a foreground voxel to the background. Figure 2 illustrates the evolution of the front under the control of the hole-filling algorithm. The front moves until the Gaussian curvature of the front's surface is reduced to the point where the local count of foreground voxels falls below the majority threshold.

Fig. 2 Left: Segmentation of the lung wall by voting based front propagation on a lesion attached to the lung wall. The front starts at the red contour (obtained by thresholding the data set at −400 HU). In cyan are contours of the front at iterations 10, 20, 30, 40, 50, 60, and 80. The algorithm converges after 151 iterations at the green front. Right: The lesion segmented and volume rendered in ISP (View 1).

Algorithm 1. Voting based front propagation

  1: Let kernel radius r = 3
  2: Let BirthThreshold T = (2r+1)³ / 2. For a 7 × 7 × 7 city-block kernel, T = 171
  3: Let Ai be the indices of voxels on the front at iteration i
  4: Threshold the CT image at −400 HU; voxels above are set to 1, below to 0
  5: Iterate over the image and add all indices within r voxels of the boundary to A0
  6: repeat
  7:   Let PixelsChangingStatus = 0
  8:   for each voxel in Ai do
  9:     Check for quorum Q (Q is true if the number of ON voxels within the 7 × 7 × 7 neighborhood centered at the current voxel > T)
  10:    if Q is true then
  11:      Schedule this voxel for inclusion in the foreground
  12:      Increment PixelsChangingStatus
  13:      Add background voxels in the neighborhood to the front for the next iteration, Ai+1
  14:    else
  15:      Schedule this voxel for inclusion in the background
  16:      Add this location to the front for the next iteration, Ai+1
  17:    end if
  18:  end for
  19:  Update the status of the scheduled voxels
  20:  Increment i
  21: until PixelsChangingStatus = 0

Finally, a sigmoid filter is applied to the resulting binary lung wall segmentation. The sigmoid [6] is a non-linear mapping that converts one intensity range into another range, with a smooth and continuous transition. The resulting image is a grayscale representation of the inner lung, with a gradual roll-off towards the lung wall.
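A minimal sketch of this lung wall feature chain, built from standard ITK filters, is shown below. The threshold (−400 HU), the 7 × 7 × 7 voting neighborhood, and the majority value of 1 come from the text; the iteration cap and the sigmoid parameters are assumptions, and the toolkit's actual roll-off step may involve additional processing beyond a plain sigmoid on the binary mask.

```cpp
#include "itkImage.h"
#include "itkBinaryThresholdImageFilter.h"
#include "itkVotingBinaryIterativeHoleFillingImageFilter.h"
#include "itkSigmoidImageFilter.h"

using CTImage = itk::Image<short, 3>;
using BinaryImage = itk::Image<unsigned char, 3>;
using FeatureImage = itk::Image<float, 3>;

FeatureImage::Pointer LungWallFeature(CTImage::Pointer ct)
{
  // 1. Binarize: voxels at or above -400 HU become foreground (wall and soft tissue).
  auto threshold = itk::BinaryThresholdImageFilter<CTImage, BinaryImage>::New();
  threshold->SetInput(ct);
  threshold->SetLowerThreshold(-400);
  threshold->SetInsideValue(255);
  threshold->SetOutsideValue(0);

  // 2. Iterative voting hole filling over a 7x7x7 neighborhood (radius 3).
  auto holeFill =
      itk::VotingBinaryIterativeHoleFillingImageFilter<BinaryImage>::New();
  holeFill->SetInput(threshold->GetOutput());
  BinaryImage::SizeType radius;
  radius.Fill(3);
  holeFill->SetRadius(radius);
  holeFill->SetForegroundValue(255);
  holeFill->SetBackgroundValue(0);
  holeFill->SetMajorityThreshold(1);           // lowest allowed curvature (see text)
  holeFill->SetMaximumNumberOfIterations(200); // assumed cap; converges earlier

  // 3. Sigmoid: grayscale roll-off from parenchyma towards the lung wall.
  //    Alpha/beta values here are illustrative assumptions.
  auto sigmoid = itk::SigmoidImageFilter<BinaryImage, FeatureImage>::New();
  sigmoid->SetInput(holeFill->GetOutput());
  sigmoid->SetAlpha(-20.0); // negative alpha: high (wall) values map towards 0
  sigmoid->SetBeta(128.0);
  sigmoid->SetOutputMinimum(0.0);
  sigmoid->SetOutputMaximum(1.0);
  sigmoid->Update();
  return sigmoid->GetOutput();
}
```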

3.1.2 Vesselness feature

Nodules are generally compact, while vessels can be locally approximated by a cylinder or tube. By calculating how closely an image region resembles a tube, vessels can be discriminated from lesions, even when the lesions are attached or adjacent to vessels. The vesselness feature is computed using the Sato [9] tubularness measure. Parameters used for Sato’s line enhancement filtering were α = 0.1 and β = 2.0. The resulting vesselness image is transformed using a sigmoid filter, producing a grayscale image with a roll-off towards tubular structures.
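The sketch below shows how such a vesselness feature can be computed with ITK's implementation of Sato's measure. The Hessian scale (sigma) and the mapping of the (α, β) values above onto the filter's Alpha1/Alpha2 parameters are assumptions.

```cpp
#include "itkImage.h"
#include "itkHessianRecursiveGaussianImageFilter.h"
#include "itkHessian3DToVesselnessMeasureImageFilter.h"

using CTImage = itk::Image<short, 3>;
using FeatureImage = itk::Image<float, 3>;

FeatureImage::Pointer VesselnessFeature(CTImage::Pointer ct)
{
  // Hessian at a scale comparable to typical vessel radii (sigma in mm, assumed).
  auto hessian = itk::HessianRecursiveGaussianImageFilter<CTImage>::New();
  hessian->SetInput(ct);
  hessian->SetSigma(1.0);

  // Sato's 3D line (tube) enhancement measure computed from the Hessian.
  auto vesselness = itk::Hessian3DToVesselnessMeasureImageFilter<float>::New();
  vesselness->SetInput(hessian->GetOutput());
  vesselness->SetAlpha1(0.1); // assumed to correspond to the paper's alpha
  vesselness->SetAlpha2(2.0); // assumed to correspond to the paper's beta
  vesselness->Update();
  // The toolkit then applies a sigmoid so the feature rolls off towards tubes.
  return vesselness->GetOutput();
}
```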

3.1.3 Gradient feature

The gradient feature attempts to localize the edges of the lesion where the lesion is adjacent to a low density structure, such as the background lung parenchyma. This is modeled using a Canny edge detector [11], which achieves optimal edge localization (with voxel accuracy), and results in a voxelized edge map. The resulting image is transformed using a sigmoid filter resulting in a grayscale image with a roll-off towards the identified boundary edges.

The selection of the Canny edge detector was based on two properties of the algorithm. First, the Canny edge detector is well known in the computer vision community for its ability to detect subtle edges. Second, the algorithm has well understood parameters that allow control of the segmentation. These parameters allow the Canny detector to adapt to image characteristics such as desired edge strength, image resolution, sampling rate, and noise. The ability to perform well despite large variations in image acquisition characteristics is an algorithm property that is encouraged throughout the lesion sizing toolkit.
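A minimal sketch of the Canny edge feature using ITK's Canny filter is given below; the variance and hysteresis thresholds shown are illustrative placeholders rather than the toolkit's actual settings.

```cpp
#include "itkImage.h"
#include "itkCastImageFilter.h"
#include "itkCannyEdgeDetectionImageFilter.h"

using CTImage = itk::Image<short, 3>;
using FeatureImage = itk::Image<float, 3>;

FeatureImage::Pointer CannyEdgeFeature(CTImage::Pointer ct)
{
  // Canny requires a real-valued input image.
  auto cast = itk::CastImageFilter<CTImage, FeatureImage>::New();
  cast->SetInput(ct);

  auto canny = itk::CannyEdgeDetectionImageFilter<FeatureImage, FeatureImage>::New();
  canny->SetInput(cast->GetOutput());
  canny->SetVariance(1.0);         // Gaussian smoothing before gradient computation
  canny->SetUpperThreshold(150.0); // hysteresis thresholds on the gradient (assumed)
  canny->SetLowerThreshold(75.0);
  canny->Update();

  // Output is a voxelized edge map (1 on edges, 0 elsewhere); the toolkit then
  // applies a sigmoid so the feature rolls off towards detected edges.
  return canny->GetOutput();
}
```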

3.1.4 Intensity feature

The intensity feature is based on a simple intensity threshold applied to the data set, followed by a transformation with a sigmoid filter to generate gradual roll-offs. The user specifies the type of the lesion (solid / part-solid), which determines the threshold level: −200 HU for solid lesions and −500 HU for part-solid lesions. The threshold values were determined experimentally, over a number of lesions in the studied data.
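The sketch below illustrates one way to realize this feature with a single sigmoid acting as a soft threshold centered on the lesion-type dependent HU value; the transition width is an assumed parameter.

```cpp
#include "itkImage.h"
#include "itkSigmoidImageFilter.h"

using CTImage = itk::Image<short, 3>;
using FeatureImage = itk::Image<float, 3>;

enum class LesionType { Solid, PartSolid };

FeatureImage::Pointer IntensityFeature(CTImage::Pointer ct, LesionType type)
{
  // -200 HU for solid lesions, -500 HU for part-solid lesions (see text).
  const double thresholdHU = (type == LesionType::Solid) ? -200.0 : -500.0;

  // A sigmoid centered at the threshold acts as a soft threshold: intensities
  // well above it map towards 1, intensities well below it towards 0.
  auto sigmoid = itk::SigmoidImageFilter<CTImage, FeatureImage>::New();
  sigmoid->SetInput(ct);
  sigmoid->SetBeta(thresholdHU); // center of the transition
  sigmoid->SetAlpha(50.0);       // width of the transition in HU (assumed)
  sigmoid->SetOutputMinimum(0.0);
  sigmoid->SetOutputMaximum(1.0);
  sigmoid->Update();
  return sigmoid->GetOutput();
}
```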

3.1.5 Aggregation of features

Figures 3 and 5 show the application of the feature generators on a solid and a part-solid lesion, respectively. An aggregation of the features is obtained by normalizing each individual feature to lie within the range [0,1] and then choosing the minimum feature value at each voxel. The resulting aggregate image is then passed on to the segmentation module.
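The following sketch shows this minimum aggregation with standard ITK filters; it assumes the features have already been computed as floating-point images.

```cpp
#include "itkImage.h"
#include "itkRescaleIntensityImageFilter.h"
#include "itkMinimumImageFilter.h"
#include <vector>

using FeatureImage = itk::Image<float, 3>;

FeatureImage::Pointer AggregateFeatures(
    const std::vector<FeatureImage::Pointer> & features)
{
  FeatureImage::Pointer aggregate;
  for (const auto & feature : features)
  {
    // Normalize the feature to the range [0, 1].
    auto rescale =
        itk::RescaleIntensityImageFilter<FeatureImage, FeatureImage>::New();
    rescale->SetInput(feature);
    rescale->SetOutputMinimum(0.0f);
    rescale->SetOutputMaximum(1.0f);
    rescale->Update();
    FeatureImage::Pointer normalized = rescale->GetOutput();
    normalized->DisconnectPipeline();

    if (!aggregate)
    {
      aggregate = normalized;
      continue;
    }

    // Voxel-wise minimum of the running aggregate and the next feature.
    auto minimum =
        itk::MinimumImageFilter<FeatureImage, FeatureImage, FeatureImage>::New();
    minimum->SetInput1(aggregate);
    minimum->SetInput2(normalized);
    minimum->Update();
    aggregate = minimum->GetOutput();
    aggregate->DisconnectPipeline();
  }
  return aggregate;
}
```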

Fig. 3 Top left: Lung wall features in cyan, Canny edge features in red; a contour around the single voxel thick edges is shown. Note that the edges are one voxel thick along the Z dimension and may therefore appear several voxels thick in-plane, as shown. Top right: Intensity features for solid lesion parameters in blue, Sato vesselness features in yellow. Bottom: Lesion segmented and volume rendered using the ISP lesion sizing tool (View 2).

4. Segmentation Module

As noted previously, a number of algorithms can be used to perform the actual lesion segmentation. We use the fast marching algorithm [12] to quickly enumerate the majority of the lesion voxels and then refine the fast marching output with a shape detection level set [13] that sharpens the lesion boundaries. In our user interface, the user identifies the lesion with a single seed point. The seed is used to initialize the fast marching front, which starts at the seed and proceeds outwards, solving the eikonal equation, with its propagation guided solely by the aggregate feature image and an optional, user-defined region of interest (ROI). The eikonal equation is typically used to model wave propagation as a function of time, and the fast marching solution represents a pseudo-time solution, with the output of the fast marching algorithm represented as a time crossing map. Each voxel in the map holds the time taken for the front to reach that voxel. We use a stopping time value of 5, meaning that the front stops propagating when all voxels on the front have an arrival time greater than 5. Taking Δt = 0.125, this results in termination after 40 iterations. The fast marching front is conservative enough to ensure that the segmentation results remain within the lesion. The resulting time crossing image is renormalized to the range [−4, 4], so as to be symmetric about zero, and passed on to the level set module.
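A sketch of this fast marching stage with ITK's FastMarchingImageFilter appears below. The stopping value of 5 and the use of the aggregate feature image as the speed image follow the text; the remaining details are assumptions.

```cpp
#include "itkImage.h"
#include "itkFastMarchingImageFilter.h"

using FeatureImage = itk::Image<float, 3>;

FeatureImage::Pointer TimeCrossingMap(FeatureImage::Pointer aggregateFeature,
                                      const FeatureImage::IndexType & seedIndex)
{
  using FastMarching = itk::FastMarchingImageFilter<FeatureImage, FeatureImage>;
  auto fastMarching = FastMarching::New();

  // One trial point at the user-selected seed, with arrival time 0.
  auto seeds = FastMarching::NodeContainer::New();
  FastMarching::NodeType node;
  node.SetValue(0.0);
  node.SetIndex(seedIndex);
  seeds->Initialize();
  seeds->InsertElement(0, node);

  fastMarching->SetInput(aggregateFeature); // aggregate feature acts as the speed image
  fastMarching->SetTrialPoints(seeds);
  fastMarching->SetStoppingValue(5.0);      // front stops once arrival time exceeds 5
  fastMarching->SetOutputSize(aggregateFeature->GetBufferedRegion().GetSize());
  fastMarching->Update();

  // Each voxel of the output holds the time at which the front reached it; the
  // toolkit renormalizes this map to [-4, 4] before the level set stage.
  return fastMarching->GetOutput();
}
```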

The zero level set curves are guided by the propagation and curvature components:

\[ \frac{\partial \phi}{\partial t} + F\,(1 - wK)\,\lvert \nabla \phi \rvert = 0. \]

The propagation term, F in this case, is the aggregate feature image. The curvature term here is K (mean curvature), and w is the relative importance of curvature to propagation. We choose a very low relative curvature weight (w = 0.002), to account for the wide variety of shapes that lesions can assume. The final lesion boundary is then taken as the −0.5 isosurface of the level set. Note that this isosurface value is driven by the characteristics of our aggregate feature map. In the nominal case of the lesion having a boundary within the parenchyma, we find that the Canny edge is the dominant feature for lesion segmentation. Our Canny edge feature only localizes the true edge boundary to a one voxel precision (Fig. 3). By taking the level set at −0.5 we force the segmentation to the interior of the Canny detected boundary voxel and reduce our error bound from 1 to 0.5 voxels.
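The sketch below shows how this refinement step can be expressed with ITK's shape detection level set filter. The curvature weight of 0.002 comes from the text; the propagation scaling, iteration cap, and RMS tolerance are assumptions.

```cpp
#include "itkImage.h"
#include "itkShapeDetectionLevelSetImageFilter.h"

using FeatureImage = itk::Image<float, 3>;

FeatureImage::Pointer RefineWithLevelSet(FeatureImage::Pointer initialLevelSet,
                                         FeatureImage::Pointer aggregateFeature)
{
  auto shapeDetection =
      itk::ShapeDetectionLevelSetImageFilter<FeatureImage, FeatureImage>::New();
  shapeDetection->SetInput(initialLevelSet);         // time map renormalized to [-4, 4]
  shapeDetection->SetFeatureImage(aggregateFeature); // supplies the propagation term F
  shapeDetection->SetPropagationScaling(1.0);
  shapeDetection->SetCurvatureScaling(0.002);        // w in the equation above
  shapeDetection->SetMaximumRMSError(0.002);         // assumed convergence tolerance
  shapeDetection->SetNumberOfIterations(120);        // assumed iteration cap
  shapeDetection->Update();
  return shapeDetection->GetOutput();
}
```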

A surface of the resulting level set is generated using the Marching Cubes algorithm [15] at an isovalue of −0.5. The volume enclosed by the resulting closed triangle mesh is then computed using a discretized version of the divergence theorem [16].
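This surface and volume computation can be sketched with standard VTK classes as shown below; vtkMassProperties implements the divergence-theorem volume of a closed triangle mesh.

```cpp
#include <vtkSmartPointer.h>
#include <vtkImageData.h>
#include <vtkMarchingCubes.h>
#include <vtkTriangleFilter.h>
#include <vtkMassProperties.h>

double LesionVolume(vtkImageData* levelSetImage)
{
  // Extract the -0.5 isosurface of the final level set.
  auto marchingCubes = vtkSmartPointer<vtkMarchingCubes>::New();
  marchingCubes->SetInputData(levelSetImage);
  marchingCubes->SetValue(0, -0.5);

  // Ensure a pure triangle mesh before measuring.
  auto triangles = vtkSmartPointer<vtkTriangleFilter>::New();
  triangles->SetInputConnection(marchingCubes->GetOutputPort());

  // Volume of the closed triangle mesh via the discretized divergence theorem.
  auto massProperties = vtkSmartPointer<vtkMassProperties>::New();
  massProperties->SetInputConnection(triangles->GetOutputPort());
  massProperties->Update();
  return massProperties->GetVolume(); // in the image's world units (e.g. mm^3)
}
```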

5. Validation of the segmentation algorithm

5.1. Evaluation on an anthropomorphic thorax phantom

We evaluated the results on CT scans of an anthropomorphic thorax phantom [17] [18] (courtesy US FDA) containing nodules of various densities, sizes, and shapes at various locations in the chest. This phantom provides ground truth for simple nodules in a realistic anatomy (Fig. 4) that includes attachments to vasculature, airways, and the lung wall. Exposure varied from 20mAs to 200mAs. Ground truth volume information was available only for the spherical nodules. The database consists of 6 CT scans, each containing a 5, 8, and 10mm diameter spherical nodule with a density of 100 HU. Three of the scans have a slice thickness of 0.8mm and a z spacing of 0.4mm, while the other three have a slice thickness of 3mm and a z spacing of 1.5mm.

Fig. 4 Left: A photograph of the anthropomorphic phantom and an inlaid nodule (courtesy US FDA). Middle, Right: Lesion segmented using the ISP wizard and volume rendered in ISP (View 3).

Defining the error as,

\[ \xi = \frac{\lvert \mathrm{ComputedVolume} - \mathrm{TrueVolume} \rvert}{\mathrm{TrueVolume}}, \]
we found that for the 18 lesions in the FDA phantom, our average ξ was 35% with a standard deviation (S.D.) of 20%. One source of error is a poor estimate of the features due to the resolution across slices. This was confirmed by noting that on the 9 lesions with 0.8mm slice thickness, our average volumetric error ξ was 22% (S.D. 12%). Furthermore, the half voxel error (mentioned above) amplifies the errors for the small 5mm lesions. This is confirmed by noting that on the 6 lesions with a diameter ≥ 8mm and a slice thickness of 0.8mm, our average volumetric error ξ is 14% (S.D. 5%).

It is instructive to relate the error in the volumetric estimate to the error in the uni-dimensional RECIST measurement. A 30% volume overestimate in a lesion with a diameter of 8mm is equivalent to a change in diameter of 0.73mm. Hence a ξ of 30% for an 8mm lesion roughly translates to an error of about one voxel.
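For a spherical lesion, this conversion follows directly from the cubic relationship between volume and diameter; the worked step below (assuming a perfect sphere) reproduces the figure quoted above:

\[ V \propto d^{3} \;\Rightarrow\; d' = d\,(1+\xi)^{1/3} = 8\,\mathrm{mm} \times (1.30)^{1/3} \approx 8.73\,\mathrm{mm}, \]

i.e., a diameter change of approximately 0.73mm.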

5.2. Validation on “zero change” CT scans

Validation on clinical data was done on six zero change CT scans. These are CT scans of a patient obtained hours apart, presumably with insufficient time for a change in lesion volume. However, breathing artifacts, subtle differences in patient positioning, and image noise result in slightly different segmentations. In addition, the two timepoints often had different acquisition parameters. Table 1 provides the performance of the algorithm on all 6 zero change data sets. The average ξ for the “zero change” data was 7% (S.D. 9%).

Table 1. Zero change volumes V1, V2 for 6 cases, each acquired at two time points

6. Implementation

Fig. 5 The segmentation of a part-solid lesion (View 10).

The lesion sizing algorithm was added to ISP 2.2 [19], a 2D and 3D volume visualization application based on VolView [20]. ISP provides a guided wizard to perform one-click segmentations of solid and part-solid lesions. The user identifies the region containing the lesion with a box, places a seed within the lesion, and selects the lesion type (solid or part-solid). The application then segments the lesion, reports volumetric statistics, and displays the lesion in the context of the surrounding anatomy. A typical segmentation takes between 5 and 30 seconds, depending on the lesion size, on a 2.66 GHz Intel Core 2 Duo with 2 GB of RAM.

Figure 6 shows the lesion sizing algorithm applied to a large lung cancer lesion. The lesion was scanned with 2.5mm CT slice thickness at two time points separated by a 10 month time interval. This demonstrates the potential ability of the toolkit to quantitatively measure complex lesions over time, such as is commonly performed in a lung cancer clinical trial setting.

Fig. 6 The segmentation of a lung cancer lesion at two time points, imaged 10 months apart. Quantitative analysis indicates that the lesion has more than doubled in volume (View 11).

7. Loading the views in this paper

To load any of the views in this paper, first click on the view link in the paper and wait for the full resolution data set to be downloaded and displayed. Next, navigate to the Advanced Algorithms tab. The seed points and the bounding box have already been defined in the view, so executing the segmentation algorithm only requires that the user set the lesion type (solid or part-solid; e.g. Fig. 5) and then click through the prompts for seed points and region of interest, selecting “next” or “finished” as appropriate. For views with more than one time point, such as the zero change data sets, select the first time point’s image window (it will become highlighted in green) and walk through the wizard to perform the segmentation. After the segmentation completes, select the second time point’s image window and repeat the process.

8. Conclusions

This paper presents an extensible architecture for developing lesion sizing algorithms and demonstrates a CT lung lesion segmentation algorithm built on that architecture. Both the framework and the lesion sizing algorithm are available as open access resources. The algorithm has been evaluated on clinical data and an anthropomorphic lung phantom. An important component of this work is reproducibility through open source software and shared data. Future enhancements will incorporate improved modeling of the acquisition system and additional models of lung anatomy. With continued improvements made by the open source imaging community, we hope this toolkit will accelerate the development of algorithms capable of robustly quantifying response to therapy.

Acknowledgements

We thank the Optical Society of America and the Air Force Research Laboratory for funding this work. We also thank David Yankelevitz (Mount Sinai Medical School) and Anthony Reeves (Cornell University) for the I-ELCAP clinical and zero change cases. We also thank Nick Petrick, Lisa Kinnard, Marios Gavrielides and Kyle Myers for the anthropomorphic lung phantom data provided by the U.S. Food and Drug Administration CDRH/OSEL/DIAM. The FDA collection was partially supported with funding through the intramural programs of NCI and NIBIB.

References and links

1. R. D. Hunter, “WHO Handbook for Reporting Results of Cancer Treatment,” Int. J. Radiat. Biol. 38(4), 481 (1980).

2. P. Therasse, S. G. Arbuck, E. A. Eisenhauer, J. Wanders, R. S. Kaplan, L. Rubinstein, J. Verweij, M. Van Glabbeke, A. T. van Oosterom, M. C. Christian, and S. G. Gwyther, “New guidelines to evaluate the response to treatment in solid tumors,” J. Natl. Cancer Inst. 92(3), 205–216 (2000).

3. E. A. Eisenhauer, P. Therasse, J. Bogaerts, L. H. Schwartz, D. Sargent, R. Ford, J. Dancey, S. Arbuck, S. Gwyther, M. Mooney, L. Rubinstein, L. Shankar, L. Dodd, R. Kaplan, D. Lacombe, and J. Verweij, “New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1),” Eur. J. Cancer 45(2), 228–247 (2009).

4. Lesion Sizing Toolkit, http://public.kitware.com/LesionSizingKit

5. K. Krishnan, L. Ibanez, W. D. Turner, and R. S. Avila, “Algorithms, architecture, validation of an open source toolkit for segmenting CT lung lesions,” in Proc. MICCAI Workshop on Pulmonary Image Analysis (Sept. 2009), pp. 365–375.

6. L. Ibanez, W. Schroeder, L. Ng, and J. Cates, The ITK Software Guide, Kitware Inc.

7. W. Schroeder, K. Martin, and W. Lorensen, The Visualization Toolkit, Kitware Inc.

8. M. Descoteaux, M. Audette, K. Chinzei, and K. Siddiqi, “Bone enhancement filtering: Application to sinus bone segmentation and simulation of pituitary surgery,” in MICCAI (2005), pp. 9–16.

9. Y. Sato, S. Nakajima, N. Shiraga, H. Atsumi, S. Yoshida, T. Koller, G. Gerig, and R. Kikinis, “Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images,” Med. Image Anal. 2(2), 143–168 (1998).

10. A. F. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever, “Multiscale vessel enhancement filtering,” Lect. Notes Comput. Sci. 1496, 130–137 (1998).

11. J. F. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).

12. J. A. Sethian, Level Set Methods and Fast Marching Methods (Cambridge University Press, 1999).

13. R. Malladi, J. A. Sethian, and B. C. Vemuri, “Shape modeling with front propagation: A level set approach,” IEEE Trans. Pattern Anal. Mach. Intell. 17(2), 158–175 (1995).

14. V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” Int. J. Comput. Vis. 22(1), 61–79 (1997).

15. W. E. Lorensen and H. E. Cline, “Marching cubes: A high resolution 3D surface construction algorithm,” ACM SIGGRAPH Comput. Graph. 21(4), 163–169 (1987).

16. A. M. Alyassin, J. L. Lancaster, J. H. Downs 3rd, and P. T. Fox, “Evaluation of new algorithms for the interactive measurement of surface area and volume,” Med. Phys. 21(6), 741–752 (1994).

17. M. A. Gavrielides, R. Zeng, L. M. Kinnard, K. J. Myers, and N. A. Petrick, “A matched filter approach for the analysis of lung nodules in a volumetric CT phantom study,” in Proc. SPIE Med. Imaging (Feb. 2009).

18. M. A. Gavrielides, L. Kinnard, S. Park, I. Kyprianou, B. Gallas, A. Badano, N. Petrick, and K. J. Myers, “Quantitative in silico imaging and biomarker assessment using physical and computational phantoms: a review of new tools and methods available from the NIBIB/CDRH joint Laboratory for the Assessment of Medical Imaging Systems,” in Radiology (2008).

19. The Optical Society of America, The National Library of Medicine, and Kitware Inc., Interactive Science Publishing, http://www.opticsinfobase.org/isp.cfm

20. Kitware Inc., VolView, http://www.kitware.com/VolView


