Optica Publishing Group

Abnormal pixel detection using sum-of-projections symmetry in cone beam computed tomography

Open Access

Abstract

A novel approach for detecting abnormal pixels (APs) is proposed to remove artefacts from reconstructed images in cone beam computed tomography (CBCT). The approach is based on detecting the symmetry of the sum-of-projections (SOP). Because several factors degrade the SOP symmetry, we combine it with dyadic wavelet transform-based singularity detection to extract the APs. Next, the Laplacian solution (LS) method is employed to restore the APs in each projection image. Experimental results demonstrate the efficiency of our approach for different imaging tasks.

©2012 Optical Society of America

1. Introduction

Cone beam computed tomography (CBCT) is widely adopted in radiation therapy [1], breast imaging [2, 3], and preclinical research [4, 5]. In contrast to the detectors used in helical CT, CBCT typically employs detectors with a flat detection area, including charge-coupled device (CCD) camera-based [5], amorphous selenium (a-Se)-based [6], complementary metal oxide semiconductor (CMOS)-based [7], and amorphous silicon (a-Si)-based [1, 8] detectors. These detectors usually comprise millions of pixels, which do not all perform identically because of fabrication limitations [9]. Additionally, radiation damage, temperature drift, sensitivity drift, and changes in X-ray beam intensity or spectrum cause some pixels of the detector to under-estimate or over-estimate the X-ray attenuation during imaging [10]. When such detectors are used in digital radiography or fluoroscopy, the abnormal pixels (APs) are scattered across the image, and their influence is not obvious. However, in CBCT the APs spread across the reconstructed three-dimensional images and can produce image artefacts, e.g., ring artefacts and radiant line artefacts [11, 12]. Furthermore, these artefacts not only degrade the images but also affect subsequent processes, such as image segmentation, target identification, and quantitative analysis [9–13].

Many approaches have been proposed to reduce artefacts induced by APs. A flood-field and dark-field correction of the detector before data acquisition is necessary in any case [14]. However, this strategy is not sufficient to suppress all of the artefacts [9], and it works well only under ideal conditions that cannot be fulfilled in real imaging tasks, where the intensity and spectrum of the X-ray beam sampled by the detector change dramatically. Consequently, some detector pixels will perform abnormally even when the detector is highly stable. Some correction approaches work directly on the reconstructed images to diminish the artefacts [10, 11, 15]. These methods can reduce the ring shape in Cartesian coordinates or the line shape in polar coordinates with different filters. However, they cannot address high-intensity ring artefacts or radiant line artefacts, and some useful information is also removed. Other correction methods process the raw data before reconstruction [13, 16–21]. These approaches smooth the sinogram or the sum-of-projections (SOP) to remove the APs. However, filtering the data in the sinogram is prone to introducing new artefacts, not only in the sinogram or the SOP but also in the reconstructed images. Additionally, these methods cannot entirely remove high-intensity artefacts, and they are only valid when the response of the APs is linear and time invariant [21]. Another approach, which directly identifies and restores the APs in the sinogram or SOP using singularity detection, may be superior to the filtering methods because it is able to remove all of the artefacts without introducing new ones [13, 21]. However, this method can cause false alarms, because some details of the imaging object in the SOP are also singular. Therefore, completely differentiating the APs from the imaging object remains a challenge.

In this paper, we extract the APs in the SOP by using the symmetry property of the SOP. We demonstrate in Section 2 that the imaging object in the SOP is symmetric while the APs are not. This property is an effective measure for distinguishing the APs from the imaging object, and therefore it can be utilised to detect the APs. Because some factors, such as the polychromaticity of the X-ray beam and the movement of the object, influence the symmetry of the SOP in practice, some low-frequency components remain after the symmetry detection. Fortunately, these components are easy to remove because they differ from the real APs with respect to spatial frequency. In our approach, we combine singularity detection using a two-dimensional dyadic wavelet transform to remove these low-frequency components from the symmetry detection result. After extracting the APs by both symmetry detection and singularity detection, we adopt the Laplacian solution (LS) method to restore the APs in each projection image. Finally, numerical phantom studies are performed to investigate the symmetry of the SOP with different projection numbers, different noise levels, and APs of different intensities. The approach is also applied to our CBCT system to validate its efficiency for real imaging tasks.

2. Methods

2.1 Demonstration of the symmetry of the SOP in CBCT

The system geometry of the CBCT with a circular scanning trajectory is described in Fig. 1. We assume that the system is well calibrated and that the projection of the X-ray source is at the centre of the detector. This status can be achieved by performing a geometric calibration procedure [22].

Fig. 1 System geometry of the CBCT.

Suppose that there is an object in the imaging field of the CBCT with an X-ray attenuation coefficient that is bounded and integrable. The imaging field is determined by the image area of the detector, the source-to-detector distance (SDD), and the source-to-rotation-axis distance (SRD). As shown in Fig. 2, it is a cylinder with a height of h and a diameter of w, where h = (SRD/SDD)·H and w = (SRD/SDD)·W. The attenuation coefficient of the object can be expressed as μ(r, θ, z) in a cylindrical coordinate system, where r ∈ [0, w/2], θ ∈ [0, 2π), z ∈ [−h/2, h/2]. We define (x̃, d, z̃) as an arbitrary point in the detector plane (X̃, Z̃), where d = SDD − SRD. Then, the SOP in (X̃, Z̃) is expressed as follows:

SOP(\tilde{x}, d, \tilde{z}) = \int_0^{2\pi} R_{\theta'}[\mu(r, \theta, z)]\, d\theta'    (1)

where θ′ is the rotation angle of the object along the rotation direction, R_{θ′}[·] represents the cone beam projection [23] at rotation angle θ′, and d stands for the rotation-axis-to-detector distance. Here, we introduce (X̃, Z̃) as an imaginary detector that intersects the Z axis and is parallel to the real detector. Therefore, d equals 0 hereafter, and (x̃, d, z̃) becomes (x̃, z̃). Suppose that the cone beam projection is performed at a rotation angle of 0 and that the imaging object rotates around the rotation axis; after the coordinate transformation, we have:

Fig. 2 Illustration of the imaging field in CBCT.

SOP(\tilde{x}, \tilde{z}) = \int_0^{2\pi} R_0[\mu(r, \theta - \theta', z)]\, d\theta'    (2)

Making use of the Dirac function [24], we define μ(r, θ, z) as:

\mu(r, \theta, z) = \iiint \mu(r_0, \theta_0, z_0)\, \delta(r - r_0, \theta - \theta_0, z - z_0)\, dr_0\, d\theta_0\, dz_0    (3)

where δ(r − r₀, θ − θ₀, z − z₀) = δ(r − r₀)·δ(θ − θ₀)·δ(z − z₀).

Combining Eqs. (2) and (3), the SOP becomes:

SOP(\tilde{x}, \tilde{z}) = \int_0^{2\pi} R_0\!\left[\iiint \mu(r_0, \theta_0, z_0)\, \delta(r - r_0, \theta - \theta' - \theta_0, z - z_0)\, dr_0\, d\theta_0\, dz_0\right] d\theta'    (4)

Considering the linearity of the Radon transform [25], Eq. (4) becomes:

SOP(\tilde{x}, \tilde{z}) = R_0\!\left[\int_0^{2\pi}\!\iiint \mu(r_0, \theta_0, z_0)\, \delta(r - r_0, \theta - \theta' - \theta_0, z - z_0)\, dr_0\, d\theta_0\, dz_0\, d\theta'\right]    (5)

Changing the order of integration, Eq. (5) becomes:

SOP(\tilde{x}, \tilde{z}) = R_0\!\left[\iiint \mu(r_0, \theta_0, z_0) \int_0^{2\pi} \delta(r - r_0, \theta - \theta' - \theta_0, z - z_0)\, d\theta'\, dr_0\, d\theta_0\, dz_0\right]    (6)

Utilising the sifting property of the Dirac function, we obtain:

SOP(\tilde{x}, \tilde{z}) = R_0\!\left[\iiint \mu(r_0, \theta_0, z_0)\, \delta(r - r_0, z - z_0)\, dr_0\, d\theta_0\, dz_0\right]    (7)

If we define:

F(r, z) = \iiint \mu(r_0, \theta_0, z_0)\, \delta(r - r_0, z - z_0)\, dr_0\, d\theta_0\, dz_0    (8)

and substitute Eq. (8) into Eq. (7), we have the following equation:

SOP(\tilde{x}, \tilde{z}) = R_0[F(r, z)]    (9)

It can be seen that F(r, z) is independent of θ. Therefore, F(r, z) is circularly symmetric about the Z axis. As a result, the SOP is symmetric about the Z̃ axis, which is the projection of the Z axis onto the detector plane. Therefore, the SOP satisfies the symmetry property, which facilitates the AP detection.
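The symmetry derived above can be checked numerically. The following sketch is not the paper's cone-beam simulation: it uses a 2-D phantom and simple parallel projections (column sums) as a stand-in for the projection operator, since the argument only relies on summing projections over a full rotation; scipy is assumed to be available.

```python
import numpy as np
from scipy.ndimage import rotate

phantom = np.zeros((129, 129))
phantom[30:60, 40:70] = 1.0       # asymmetric, off-centre blocks
phantom[80:100, 90:110] = 0.5

N = 360                           # number of projection angles
sop = np.zeros(129)
for k in range(N):
    img = rotate(phantom, 360.0 * k / N, reshape=False, order=1)
    sop += img.sum(axis=0)        # parallel "projection" onto one detector row
sop *= 2 * np.pi / N              # discretised integral over the rotation

# Compare the two halves about the centre column (projection of the Z axis)
left, right = sop[:64], sop[65:][::-1]
asym = np.abs(left - right).sum() / sop.sum()
print(f"relative asymmetry: {asym:.4f}")  # small; limited by interpolation
```

Even though the phantom itself is strongly asymmetric, the summed projection is nearly symmetric about the axis column, with the residual error set by the interpolation of the rotation and the finite number of angles.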

2.2 Features of the APs in the SOP

As described above, the SOP is the sum of the projections over a full-angle scan. If we separate the response of the normal pixels Pu(θ) and the response of the APs Pn(θ) in each projection P(θ), we have:

SOP = \int_0^{2\pi} \left( P_u(\theta) + P_n(\theta) + n(\theta) \right) d\theta    (10)

where n(θ) is the noise of each projection. In practice, the projection angles are discretised, and Eq. (10) becomes:
SOP = \frac{2\pi}{N} \sum_{k=0}^{N-1} \left( P_u\!\left(\frac{2\pi k}{N}\right) + P_n\!\left(\frac{2\pi k}{N}\right) + n\!\left(\frac{2\pi k}{N}\right) \right)    (11)
where N is the number of projections.

In the SOP, the noise is effectively depressed by the summation. Considering that the imaging time of our system is only several minutes, it is reasonable to suppose that the performance of the APs will not change dramatically, e.g., that the response of an AP will not flip from over-response to under-response. Therefore, the response of the APs will be somewhat intensified, depending on the characteristics of the corresponding APs. Furthermore, few pixels will perform abnormally after a careful flood-field pre-calibration. This implies that it is almost impossible to find pixel pairs that are symmetric about the Z̃ axis while exhibiting the same singularity. Therefore, if we subtract the left part of the SOP from the right part along the Z̃ axis, Pu(θ) will be entirely depressed, according to the symmetry property of the SOP, and only Pn(θ) will remain. This is the elementary principle of our approach.
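The subtraction principle above can be illustrated with a toy example (all values hypothetical): a symmetric object profile cancels when the mirrored right half of the SOP is subtracted from the left half, while an AP's extra response survives.

```python
import numpy as np

n = 10                                             # detector columns (even)
cols = np.arange(n)
sop = np.exp(-((cols - (n - 1) / 2) ** 2) / 8.0)   # symmetric object profile
sop[2] += 0.3                                      # hypothetical over-responding AP

# 0-based version of SY(j) = SOP(j) - SOP(n - j + 1)
sy = sop[: n // 2] - sop[n - 1 - np.arange(n // 2)]
print(np.round(sy, 6))                             # nonzero only at the AP column
```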

2.3 Proposed algorithm

Utilising the symmetry detection, the APs can be entirely extracted under ideal conditions. However, the symmetry of the SOP is derived under the conditions of a monochromatic X-ray beam, an object free of movement, and an infinite number of projections, which are rarely met in CBCT. This will cause some errors in the symmetry detection. Fortunately, these errors are averaged by the summation of the projections, and they appear as asymmetric low-frequency components. Therefore, we apply singularity detection to the symmetry detection result to locate the positions of the APs. Owing to the symmetry property, most details of the imaging object are removed from the SOP, which makes the singularity detection more sensitive and reliable. Note that the singularity detection here is quite different from the singularity detection of the APs described in previous studies.

The scheme of our approach is presented in Fig. 3, and contains the following 6 steps:

Fig. 3 Flow chart of our proposed approach to detect and restore the APs.

Step 1: acquisition of the SOP

The negative logarithm is applied to the raw projection images. Then the SOP is obtained by summing the projections using Eq. (11), as shown in the image titled “SOP” in Fig. 3.
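Step 1 can be sketched as follows, assuming the raw projections are stacked in an array of shape (N, rows, cols) and already flood-field normalised; the data here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, n = 800, 4, 6                                  # toy dimensions
projections = rng.uniform(0.5, 1.0, size=(N, m, n))  # stand-in for I/I0

line_integrals = -np.log(projections)                # minus logarithm of each projection
sop = (2 * np.pi / N) * line_integrals.sum(axis=0)   # discretised sum over all angles
print(sop.shape)                                     # one value per detector pixel
```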

Step 2: symmetry detection result of the SOP

On the basis of the symmetry property that is demonstrated in Section 2.1, the symmetry detection result of the SOP, which is denoted as SY and named “Symmetry” in Fig. 3, is obtained by:

SY(i, j) = SOP(i, j) - SOP(i,\, n - j + 1)

where i ∈ [1, m], j ∈ [1, n/2], and (i, j) denotes the index of a pixel in the SOP. The variables m and n represent the number of rows and columns of the SOP, respectively. In addition, it is assumed that n is even, so the Z̃ axis is located at the position j = (n + 1)/2.
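A minimal implementation of this symmetry detection step, translated to 0-based array indexing, might look like the following (toy data, hypothetical AP):

```python
import numpy as np

def symmetry_detection(sop):
    """SY(i, j) = SOP(i, j) - SOP(i, n - j + 1), in 0-based indexing."""
    m, n = sop.shape
    assert n % 2 == 0, "n is assumed even"
    left = sop[:, : n // 2]
    right_mirrored = sop[:, n // 2 :][:, ::-1]
    return left - right_mirrored                   # shape (m, n/2)

sop = np.tile(np.array([1.0, 2.0, 3.0, 3.0, 2.0, 1.0]), (2, 1))
sop[0, 1] += 0.5                                   # hypothetical AP in the left half
sy = symmetry_detection(sop)
print(sy)                                          # nonzero only at the AP position
```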

Step 3: singularity detection of SY

SY is processed by a translation-invariant 2-D dyadic wavelet transform. The transform is obtained by applying filters along the horizontal and vertical directions of SY [26]:

S_{2^{k+1}}(SY) = S_{2^k}(SY) * [L_k, L_k]
W^1_{2^{k+1}}(SY) = S_{2^k}(SY) * [G_k, D]
W^2_{2^{k+1}}(SY) = S_{2^k}(SY) * [D, G_k]

where k is the decomposition level, S_1(SY) is the original image, and W^1(SY) and W^2(SY) denote the horizontal and vertical details of the image, respectively. A * [B, C] represents the separable convolution of the rows and columns of A with the one-dimensional filters B and C, respectively. L is a low-pass filter, G is a high-pass filter [27, 28], and D is the Dirac filter [24, 26]. L_k stands for the filter obtained by padding 2^k − 1 zeros between the coefficients of the original filter L. The modulus of the 2-D dyadic wavelet transform can be expressed as:

WT_{2^{k+1}}(SY) = \sqrt{ \left( W^1_{2^{k+1}}(SY) \right)^2 + \left( W^2_{2^{k+1}}(SY) \right)^2 }

In Fig. 3, the result of the calculated modulus is shown in the image titled “Modulus”.
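One level of such a translation-invariant (à trous) transform can be sketched as below. The filters L and G are illustrative choices only (the paper cites specific filters in [27, 28]); G is a simple difference filter with one vanishing moment, and the Dirac filter acts as the identity.

```python
import numpy as np
from scipy.ndimage import convolve1d

L = np.array([0.5, 0.5])         # illustrative low-pass filter
G = np.array([-1.0, 1.0])        # difference filter, one vanishing moment

def dyadic_level(img):
    # The Dirac filter D is the identity, so each detail image is a single
    # 1-D convolution along one axis. At level k, L and G would be upsampled
    # by inserting 2**k - 1 zeros between their coefficients.
    w1 = convolve1d(img, G, axis=1, mode="nearest")   # horizontal detail
    w2 = convolve1d(img, G, axis=0, mode="nearest")   # vertical detail
    s = convolve1d(convolve1d(img, L, axis=0, mode="nearest"),
                   L, axis=1, mode="nearest")         # smoothed approximation
    modulus = np.sqrt(w1 ** 2 + w2 ** 2)
    return s, modulus

img = np.zeros((8, 8))
img[4, 4] = 1.0                  # an isolated singularity
_, wt = dyadic_level(img)
print(wt.sum() > 0)              # modulus concentrates around the singularity
```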

Step 4: obtaining AP pairs from wavelet transform result

The size of the symmetry detection result is only half of the whole image. Therefore, the APs detected in this step may come from either the left side or the right side, and we refer to them as AP pairs, as shown in the lower right image in Fig. 3.

The AP pairs are roughly discovered by thresholding the WT. The threshold Tw1 is the maximum of the modulus at level 0 within a manually selected area. This area is usually selected in the flood-field part of the SOP to guarantee that no APs are involved. Because APs near the Z̃ axis induce more serious artefacts, the threshold is weighted according to the distance from the location of the AP pair to the Z̃ axis:

T_{w2} = \left( 1 - \frac{1}{x/20 + 4} \right) \times T_{w1}
where x is the distance between the estimated pixel and the Z̃ axis. The positions whose dyadic wavelet transform modulus is higher than Tw2 are marked with 1 in the map of the AP pairs (MA). Then, the holes in the MA are filled. Because the wavelet we used has one vanishing moment, some pixels adjacent to high-intensity AP pairs will be falsely regarded as APs. We adopt a simple and effective method to exclude these pixels from the MA. First, for each pixel in SY, we calculate the sum of its 8-connected neighbourhood using [29]:

SN(i, j) = \sum_{m, n = -1, 0, 1} SY(i + m, j + n) - \sum_{(i + m, j + n) \in APs} SY(i + m, j + n)

It should be noted that the suspicious AP pairs marked in the MA are not involved in the calculation. Second, the difference between SY and SN is computed by:

SYN(i, j) = \left| SY(i, j) - \frac{1}{8} SN(i, j) \right|

Third, a threshold is applied to SYN to remove the false AP pairs. Similar to Tw1, the threshold Ts1 is taken as the maximum of SYN in the manually selected area, and it is also weighted as:

T_{s2} = \left( 1 - \frac{1}{x/20 + 4} \right) \times T_{s1}

Using this method, the false AP pairs are removed, and the MA result is improved.
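The false-alarm removal of this step can be sketched as follows. The neighbourhood sum excludes pixels already marked as suspicious AP pairs, and the deviation from the 8-neighbour mean is compared against a distance-weighted threshold; the weighting (1 − 1/(x/20 + 4)) is our reading of the paper's garbled formula, and all values below are hypothetical.

```python
import numpy as np

def neighbour_deviation(sy, ma, i, j):
    """|SY(i, j) - SN(i, j) / 8| with suspicious AP pairs excluded from SN."""
    vals = [sy[i + di, j + dj]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0) and not ma[i + di, j + dj]]
    return abs(sy[i, j] - np.sum(vals) / 8.0)

def weighted_threshold(t1, x):
    # Distance-dependent weighting: lower thresholds near the rotation axis.
    return (1.0 - 1.0 / (x / 20.0 + 4.0)) * t1

sy = np.zeros((5, 5))
sy[2, 2] = 1.0                   # strong response of a true AP pair
sy[2, 3] = 0.05                  # weak response of an adjacent pixel
ma = np.zeros((5, 5), dtype=bool)
ma[2, 2] = ma[2, 3] = True       # both initially flagged as suspicious

ts1 = 0.1                        # hypothetical reference threshold
keep = {p for p in [(2, 2), (2, 3)]
        if neighbour_deviation(sy, ma, *p) > weighted_threshold(ts1, p[1])}
print(keep)                      # only the strong pixel remains an AP pair
```

Here the column index stands in for the distance x to the axis; in the real algorithm x is measured from the Z̃ axis.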

Step 5: the possible AP pairs are extended into the whole area of the SOP

The approach described above indicates the locations of potential AP pairs whose positions are symmetric about the Z̃ axis. We need to extend this result to the whole image field. Therefore, we restore all of the AP pairs: if they change significantly after restoration, they are determined to be real APs; otherwise, they are false APs and are removed. The extension result is illustrated in the image titled “Mask” in Fig. 3.

The SOP is processed with the LS method [30], using the extended MA as the mask:

MA(i, j) = \begin{cases} MA(i, j), & \text{if } j \le n/2 \\ MA(i, n - j + 1), & \text{if } j > n/2 \end{cases}

The LS method iteratively assigns the pixel value of each AP to the average of its 4-connected neighbourhood until convergence. Then, the pixels whose changes after restoration are less than (2/3)·Ts2 are regarded as normal pixels in the MA. In addition, the holes in the MA are filled again to include blocks of APs. This processing also excludes normal pixel pairs whose noise is opposite in phase.
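A minimal sketch of the LS restoration: masked pixels are repeatedly replaced by the average of their 4-connected neighbours until convergence, which amounts to solving Laplace's equation inside the mask with the normal pixels as boundary values.

```python
import numpy as np

def laplacian_restore(img, mask, tol=1e-8, max_iter=10000):
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # rough initial guess
    for _ in range(max_iter):
        prev = out[mask].copy()
        # np.roll wraps at the edges; fine here since the mask is interior.
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]              # 4-connected neighbourhood average
        if np.abs(out[mask] - prev).max() < tol:
            break
    return out

img = np.fromfunction(lambda i, j: i + j, (6, 6))  # smooth image, value i + j
img[2, 2] = 99.0                            # corrupted (AP) value
mask = np.zeros((6, 6), dtype=bool)
mask[2, 2] = True
restored = laplacian_restore(img, mask)
print(restored[2, 2])                       # 4.0, consistent with the gradient
```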

Step 6: restoring all the projections

Based on the obtained final map of the APs in Step 5, the LS method is applied to each projection using the map as a mask to restore the APs.

In summary, using the above 6 steps, the proposed method is able to extract and restore the APs. Based on the symmetry property of the SOP, the imaging object can be entirely removed from the symmetry detection result of the SOP. The noise is also strongly depressed by the summation, which additionally enhances the singularity of the APs in the SOP. The wavelet transform is introduced as the singularity detection method. Thresholding the wavelet modulus yields the suspicious AP pairs, and the threshold Ts2 reduces the false alarms caused by the vanishing moment of the wavelet. The AP pairs are then extended to the whole image field, and finally the LS method is applied to restore the APs in each projection.

Regarding threshold selection, we only need to select an imaging area without obvious APs in the flood-field part of the SOP; all the thresholds are then calculated automatically. The thresholds do not change when the imaging parameters remain the same. In Section 3, it can be seen that APs with the same response induce artefacts of different intensities, depending on the distance from the AP location to the Z̃ axis; the intensity is proportional to the reciprocal of this distance. Therefore, we designed a weighting factor that includes a basic part and a distance-dependent part. The distance-dependent part of the weighting factor was determined through experimental trials. It depends only on the distance between the estimated pixel and the Z̃ axis, and is independent of the other parameters of the system; therefore, it should need few adjustments for different systems. In determining the weighting factor, a trade-off between sensitivity and accuracy was made to ensure that the artefacts were totally removed while few normal pixels were regarded as APs, so as to preserve the fine structures in the reconstructed images.

3. Simulations and experiments

3.1 Validation of the symmetry of the SOP

To validate the symmetry of the SOP, we modified a numerical phantom to simulate the projections in the data acquisition of the CBCT [31]. The details of the numerical phantom are shown in Table 1. The rotation angles α, β, and γ are Euler angles with the Z-X-Z convention. This numerical phantom was built in the Cartesian coordinate system described in Fig. 1. In our simulation, the detector had a matrix of 1100 × 1100 pixels, and the pixel pitch was 0.002. A cone angle of 10 degrees was used to simulate our real system. The projection of the X-ray source was at the middle of the detector, and the X-ray source was assumed to be a point. The imaginary detector concept was used to simplify the calculation of the projections. Different numbers of projections with equal angular spacing were acquired from 0 to 360 degrees for further processing.

Table 1. Parameters of the Numerical Phantom

As described earlier, the symmetry of the SOP holds under the condition of a continuous image capture mode. However, in reality the CBCT always works in an intermittent mode, which reduces the data redundancy and consequently introduces some errors into the symmetry of the SOP. We calculated the total error TE of the symmetry for different projection numbers to investigate whether the symmetry assumption is appropriate in practice,

TE = \sum_{i=1}^{1100} \sum_{j=1}^{550} \left| SOP(i, j) - SOP(i, 1101 - j) \right|

where i and j are the row and column indices of the SOP, respectively. The SOP for each projection number was normalised to 100 to make the results comparable.
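The total-error metric can be computed as below for an SOP of arbitrary size; normalising so that the maximum equals 100 is our assumption about how the SOPs were "normalised to 100".

```python
import numpy as np

def total_error(sop):
    m, n = sop.shape
    sop = sop * (100.0 / sop.max())            # normalise the maximum to 100
    left = sop[:, : n // 2]
    right_mirrored = sop[:, n // 2 :][:, ::-1]
    return np.abs(left - right_mirrored).sum()

sym = np.tile([1.0, 2.0, 2.0, 1.0], (3, 1))    # perfectly symmetric SOP
te_before = total_error(sym)
sym[0, 0] += 0.1                               # perturb a single pixel
te_after = total_error(sym)
print(te_before, te_after > te_before)         # 0.0 True
```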

3.2 Numerical phantom studies

The symmetry of the SOP is also affected by the noise level of the projections. Therefore, we investigated the symmetry under different noise levels. The noise added to the projections was normally distributed, with a variance tested from 0.4% to 4% of the incident X-ray beam intensity in steps of 0.4%. Furthermore, dark noise with a variance of 0.2% of the incident X-ray beam intensity was also introduced into each projection image. In data processing, the SOPs with different noise levels were normalised to 100 for comparison. Furthermore, APs were introduced into the projections, with and without noise, at different positions to investigate their influence on the reconstructed images. The APs used in this simulation were set to 1% over-response. The simulation studies used a projection number of 800, which is also the number used in our real imaging. A ramp filter was adopted, and Feldkamp's algorithm was used to reconstruct the images [32].

3.3 Detection and compensation of APs in real CBCT imaging

A modified version of our previous CBCT system [33] was used to validate the proposed algorithm. An X-ray tube (Series 5000 Apogee, Oxford Instruments) equipped with a 2.5 mm Al filter was used as the source. An a-Si-based FPD (Paxscan 2520V, Varian Medical Systems) was employed as the detector. A rotation stage (ADRS-100, Aerotech), synchronised with the FPD, rotated the imaging object during image acquisition. Four different objects, a water phantom, a chicken's femur, and two mice (one alive and one euthanised), were imaged with 800 projections in each experiment. Before imaging, the FPD was carefully processed with gain and offset calibration, i.e., 100 flood images and 100 dark images were used to pre-calibrate the FPD. All image acquisitions were performed with a tube voltage of 50 kV, a tube power of 40 W, and a frame rate of 1.5 fps. The proposed algorithm was then applied to demonstrate its feasibility. Two approaches recently proposed to suppress ring artefacts were also used for comparison [18, 19]; their parameters were chosen following the recommendations of the respective papers.

4. Results

4.1 Validation of the symmetry of the SOP

The SOP of the 6400 projections of the numerical phantom is shown in Fig. 4(a). The total error of the SOP with 6400, 3200, 1600, 800, 400, 200, 100 and 50 projections is illustrated in Fig. 4(b). As expected, the total error of the SOP decreased dramatically as the projection number increased. The relationship between the total error and the number of projections NP is expressed as:

Fig. 4 Validation of the symmetry of the SOP with different projection numbers. (a) The SOP of a numerical phantom, with 6400 projections. (b) The total error as a function of the projection numbers. (c) Pointwise detail of the errors with different projection numbers; the positions are indicated in (a) by the white line.

\log_{10}(TE) = -1.459 \log_{10}(N_P) + 6.859

The coefficient of determination of the linear regression was as high as 0.9994. Figure 4(c) shows the pointwise absolute error at the position indicated by the white line in Fig. 4(a). As shown in Fig. 4(c), with 50 projections, the error was as high as 0.37, approximately 5% of the SOP intensity, while the error decreased to 0.025 and 0.0062 with projection numbers of 400 and 800, respectively, which is almost negligible.

The symmetry of the SOP with noise is shown in Fig. 5. Figure 5(a) presents the SOP with a noise variance of 4%. Because of the averaging of the projections, no obvious noise or asymmetry is observed in Fig. 5(a). In Fig. 5(b), we can observe that the TE increases notably as the noise level increases:

TE = 12960 \cdot NL + 917.3

where NL is the variance of the noise introduced into the projections. To be concise, we only compared the SOPs with noise variances of 0, 0.4%, 0.8%, 1.6%, and 3.2%. The position of the detailed errors in Fig. 5(c) is indicated by the white line in Fig. 5(a). As shown in Fig. 5(c), the TE induced by a noise variance of 0.4% is approximately equivalent to that induced by 400 projections, which can be seen in Fig. 4(c). This implies that, in most cases, the TE is mainly caused by the noise rather than by the limited number of projections.

Fig. 5 Validation of the symmetry of the SOP with different noise levels. (a) The SOP of the numerical phantom with a noise variance of 4%. (b) The total error as a function of the variance of the shot noise. (c) Pointwise details of the errors with different noise levels, whose position is indicated in (a) by the white line.

Figure 6 demonstrates the performance of the APs in the reconstructed images. Figure 6(a) is the reconstructed image of a central slice of the numerical phantom with noise-free projections and several isolated APs. The intensity profile at the position indicated by the white line in Fig. 6(a) is shown in Fig. 6(b). The intensity of the artefacts induced by APs with the same response decreases as the distance between the AP position and the Z̃ axis increases, as shown in Fig. 6(b). However, these artefacts become faint as the noise level of the projections increases. The projections used to reconstruct Fig. 6(c) have a noise variance of 2%; the artefacts are not easy to discover, even in the magnified view of Fig. 6(d). Therefore, we can draw two conclusions from Fig. 6. First, APs near the Z̃ axis are more important, because they induce more serious artefacts. Second, although noise degrades the symmetry of the SOP, it also makes the artefacts induced by the APs less obvious.

Fig. 6 Performance of the APs in the reconstructed image of a numerical phantom. (a) Reconstructed image of a central slice of the numerical phantom, free of noise with APs. (b) The intensity profile of (a), in which the position is indicated by the white line in (a). (c) Reconstructed image of the central slice of the numerical phantom, with a noise variance of 2% and APs. (d) Magnified image of (c), in which the position is indicated by the black square in (c).

In our simulation, the maximum asymmetry of the SOP with a noise variance of 2% was approximately ±3.8, while the asymmetry induced by the APs with an over-response of 1% was 8. Therefore, in the worst case, the asymmetry of the APs was as low as 8 − 3.8 = 4.2, which is still higher than the maximum asymmetry of the noise. This implies that the APs that induce noticeable artefacts can be extracted by a simple thresholding method via the symmetry examination. Furthermore, it also suggests that it is almost impossible to include any APs in the selected reference area of the flood-field part in our method, because the difference between the APs and the noise is further enlarged in the wavelet modulus image, and APs in the flood-field part are therefore easy to recognise. Although we use APs with 1% over-response as an example, our method is not restricted to a sensitivity of 1%. The sensitivity of our method actually depends on the noise level of the SOP: as the noise level decreases, the sensitivity improves correspondingly. In addition, even though some APs with relatively low intensity will be regarded as normal pixels, this is acceptable because they introduce unnoticeable artefacts that are totally covered by the noise in the reconstructed images.

To validate the thresholding strategy in our method, we analysed the detection results for APs with an over-response of 1%. We found that, with the weighted thresholds, 25 normal pixels were regarded as APs in the final result, which is only 0.0025% of the total number of pixels in the detector. This suggests that our method introduces few false alarms. The false alarms arise mainly because the threshold near the Z̃ axis is much lower than that at the margins. Without the threshold Ts2, 63 normal pixels were falsely regarded as APs, about 2.5 times the number obtained using Ts2. Therefore, as expected, Ts2 can effectively decrease the false alarm rate.

4.2 Detection of APs in real CBCT imaging

Figure 7 shows the results of processing the water phantom with the three different approaches. Figure 7(a) is the SOP of the water phantom with 800 projection images. Figure 7(b), whose position is indicated by the white square in Fig. 7(a), is the magnified symmetry detection result of the SOP. Figure 7(c) is the final map of the positions of the APs in the white square of Fig. 7(a); the white points denote APs, and the black points represent normal pixels. We can see that both the isolated APs and the blocks of APs were extracted correctly. Figure 7(d) presents the intensity profile of the SOP at the position indicated by the black line in Fig. 7(a). The red curves are the SOP before correction, and the blue curves are the SOP after correction. From these curves and their magnified partial curves, we find that the singularities caused by the APs have been successfully removed. Figures 7(e), 7(f), 7(g) and 7(h) are the reconstructed images of the water phantom at the position indicated by the black line, before correction and after correction using our method, the averaging filter in the SOP, and the wavelet–Fourier filter in the sinogram, respectively. Arrow 1 points to the ring artefacts with low intensity; arrow 2 indicates the artefacts with high intensity, which introduce ring and noticeable radiant line artefacts. From the details of Fig. 7(f), we can clearly see that the artefacts with both high and low intensity were successfully removed by our proposed method. Using the averaging filter, it can be seen from Fig. 7(g) that the artefacts with low intensity are removed well, while the performance on the artefacts with high intensity is poor. In Fig. 7(h), artefacts with both high and low intensity are suppressed at the same time; however, some obvious new artefacts are introduced by this method, as indicated by the white arrow in Fig. 7(h). In this experiment, Tw2 equals 0.0571, Ts2 equals 0.018, and 3424 pixels were regarded as APs.

Fig. 7 Study of the water phantom. (a) The SOP of the water phantom. (b) Symmetry detection result of the water phantom, the position of which is indicated by the white square in (a). (c) Map of the detected APs, the position of which is indicated by the white square in (a). (d) The intensity profile of the SOP, indicated in (a) by the black line. (e), (f), (g) and (h) are the reconstructed images of the raw data, processed using our method, the averaging filter [18], and the wavelet–Fourier filter [19], respectively, at the position indicated by the white line in (a). The arrows present the location of the artefacts.

Figure 8 shows the reconstructed images of the euthanised mouse: Figs. 8(a) and 8(d) are the images of the thorax of the mouse before and after correction. Figures 8(b) and 8(c) are the magnified images of 8(a) and 8(d), which clearly show that the artefacts are removed and no useful information is lost. Figures 8(e) and 8(h) present the images of the abdomen of the mouse before and after correction, and Figs. 8(f) and 8(g) are the magnified images of 8(e) and 8(h). Although the artefacts in Fig. 8 are inconspicuous, our approach is still able to extract the APs and exclude them. Furthermore, this study demonstrates the efficiency of our approach in the imaging of complex objects.

Fig. 8 Study of the euthanised mouse. (a) and (d) The reconstructed image of the thorax before and after correction, respectively. (b) and (c) The magnified image of (a) and (d), respectively. (e) and (h) The reconstructed image of the abdomen before and after correction, respectively. (f) and (g) The magnified image of (e) and (h), respectively.

The aforementioned imaging objects are water and a dead mouse, which do not introduce serious object movement or X-ray polychromaticity effects into the projection images. To validate the efficiency of our approach for different biomedical imaging tasks, its application to the imaging of a free-breathing mouse and a chicken's femur is demonstrated in Figs. 9 and 10, respectively. Figure 9(a) is the symmetry detection result of the SOP of the free-breathing mouse. The breathing of the mouse influences the symmetry at the position of the abdomen and thorax, especially at the position of the ribs. However, this influence can almost be removed by applying the 2-D dyadic wavelet transform to the symmetry detection result of the SOP, as shown in Fig. 9(b). The reason can be observed in Figs. 9(c) and 9(d), whose positions are indicated in Fig. 9(a) by the black and white lines, respectively. The red curves represent the right side of the SOP, and the blue curves denote the left side of the SOP; the detailed difference between them can be found in the magnified partial curves. As shown in Fig. 9(c), the asymmetry induced by the movement of the imaging object consists of low-frequency components in the symmetry detection result, which can easily be distinguished from the singularity induced by the APs. Figures 9(e) and 9(h), whose position is indicated by the white line in Fig. 9(b), are the reconstructed images of the throat of the living mouse before and after correction. Figures 9(i) and 9(l), whose position is indicated by the black line in Fig. 9(b), are the reconstructed images of the thorax of the living mouse before and after correction. In Fig. 9(i), although the breathing is quite obvious, which can be deduced from the blur of the bone and lung image, the APs, which induce artefacts with low intensity, can still be extracted using our approach. The performance is similar for the imaging of the chicken's femur.
Figure 10(a) is the symmetry detection result of the SOP of the chicken’s femur. Obvious asymmetry resulting from the polychromaticity of the X-ray can be observed. Figure 10(b) shows the details of this asymmetry. The asymmetry induced by the polychromaticity of the X-ray is also a low-frequency component in the symmetry detection result. Therefore, our approach can remove both the artefacts of high intensity and the artefacts of low intensity, as demonstrated in Figs. 10(d) and 10(h), respectively. The results after applying the averaging filter are shown in Figs. 10(e) and 10(i). It can be seen that this filter cannot deal with the artefacts of high intensity, and some new artefacts are introduced in Fig. 10(i), as indicated by the white arrows. The wavelet–Fourier filter performs better for the artefacts of high intensity, as shown in Figs. 10(f) and 10(j). However, it introduces more noticeable new artefacts, as indicated by the white arrows in Figs. 10(f) and 10(j).
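The separation described above, between low-frequency asymmetry (object motion, beam polychromaticity) and sharp AP-induced singularities, can be sketched numerically. This is a minimal illustration, not the authors' implementation: a Gaussian background subtraction stands in for the 2-D dyadic wavelet transform modulus, and the helper name `ap_candidates` and the parameter values (`sigma`, `k`) are hypothetical.

```python
import numpy as np
from scipy import ndimage

def ap_candidates(sy, sigma=2.0, k=5.0):
    """Flag AP-like singularities in a symmetry-detection map `sy`.

    The smooth, low-frequency asymmetry is estimated by Gaussian
    filtering and subtracted; the sharp residual is then thresholded.
    (Hypothetical sketch; the paper uses a dyadic wavelet modulus.)"""
    background = ndimage.gaussian_filter(sy, sigma)  # low-frequency part
    residual = np.abs(sy - background)               # singular part
    thresh = residual.mean() + k * residual.std()
    return residual > thresh

# Toy demonstration: smooth motion-like drift plus two AP spikes.
y, x = np.mgrid[0:64, 0:64]
sy = 0.05 * np.sin(2.0 * np.pi * x / 64)  # low-frequency asymmetry
sy[20, 30] += 1.0                         # abnormal-pixel spike
sy[45, 10] -= 1.0                         # abnormal-pixel spike
mask = ap_candidates(sy)
print(mask.sum())  # only the spike positions should be flagged
```

The drift survives the Gaussian filter almost unchanged and cancels in the residual, while the isolated spikes do not; this is the same reasoning the paper applies with the wavelet modulus.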


Fig. 9 Study of the living mouse. (a) Symmetry detection result of the living mouse. (b) The dyadic wavelet transform modulus of the symmetry detection result. (c) and (d) The intensity profiles of the SOP, whose positions are indicated by the black and white lines in (a), respectively. (e) and (h) The reconstructed images of the throat before and after correction, whose position is indicated by the white line in (b). (f) and (g) The magnified images of (e) and (h), respectively. (i) and (l) The reconstructed images of the thorax before and after correction, whose position is indicated by the black line in (b). (j) and (k) The magnified images of (i) and (l), respectively. The arrows indicate the location of the artefacts.



Fig. 10 Study of the chicken’s femur. (a) The symmetry detection result of the chicken’s femur. (b) The intensity profile of the SOP, whose position is indicated by the black line in (a). (c) and (g) The reconstructed images of the femur before correction. (d) and (h), (e) and (i), and (f) and (j) are the reconstructed images corrected by our method, the averaging filter [18], and the wavelet–Fourier filter [19], respectively. The arrows indicate the position of the artefacts.


5. Discussion and conclusions

In this paper, we propose a simple and effective approach, based on the symmetry property of the SOP, to suppress the artefacts that are induced by APs. Both numerical phantom studies and extensive biomedical studies demonstrate the efficiency of this approach.

As deduced in Section 2, the symmetry of the SOP holds under ideal conditions. However, many factors degrade the symmetry, and they can be classified into three categories: factors from the source, such as the non-uniformity of the X-ray flux and of the focal spot shape of the X-ray tube; factors from the detector, such as the non-uniformity of the detector, the pixel pitch of the detector, the discretisation and limitation of the projection angle, and the imperfection of the system geometry; and factors from the X-ray attenuation, such as the polychromaticity of the X-ray, which breaks Beer’s law, and the movement of the imaging object. As described earlier, the non-uniformity of both the X-ray source and the detector can be partially compensated by calibration before imaging. The geometry of the system can be calibrated by many approaches. Additionally, a projection number of 400 is sufficient to guarantee the symmetry of the SOP. A larger pixel pitch of the detector also deteriorates the symmetry: we found that the error increases linearly with the pixel pitch, with a coefficient of about 2. However, this error is quite low. For example, even with a pixel pitch of 400 μm, which is about 5% of the length of the numerical phantom, the introduced error is approximately equal to that introduced by noise with a variance of 0.8%. The influence of the movement of the imaging object and of the polychromaticity of the X-ray is difficult to diminish. However, as we have demonstrated, our approach still works well in different biomedical imaging tasks.
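The dependence of the SOP symmetry on the projection number can be reproduced with a toy model. The sketch below uses a parallel-beam projection of a single off-centre disc (a deliberate simplification of the cone-beam geometry); the helper names `disc_projection` and `sop_total_error` and all phantom values are hypothetical, and the error metric mimics the paper's TE (sum of absolute left-right differences of the SOP).

```python
import numpy as np

def disc_projection(t, theta, cx, cy, radius):
    """Parallel-beam projection (chord length) of a uniform disc centred at (cx, cy)."""
    s = t - (cx * np.cos(theta) + cy * np.sin(theta))  # detector offset of disc centre
    return 2.0 * np.sqrt(np.clip(radius * radius - s * s, 0.0, None))

def sop_total_error(n_proj, n_det=512):
    """Left-right asymmetry of the SOP, analogous to the paper's TE metric."""
    t = np.linspace(-1.0, 1.0, n_det)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_proj, endpoint=False)
    sop = (2.0 * np.pi / n_proj) * sum(
        disc_projection(t, th, 0.3, 0.1, 0.2) for th in thetas)
    return float(np.abs(sop - sop[::-1]).sum())

# The SOP of an off-centre object becomes symmetric only as the angular
# sampling gets dense; the residual asymmetry shrinks with n_proj.
errors = {n: sop_total_error(n) for n in (51, 401, 3201)}
print(errors)
```

Odd projection counts are used on purpose: in this toy model an even, uniform angular sampling pairs every view with its opposing view and the SOP is symmetric by construction, which would hide the discretisation error under study.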

Although the FPD was carefully calibrated before imaging, APs still existed in all of the imaging tasks. As can be observed in Fig. 7, our method can address both the APs with high intensity and the APs with low intensity, which induce almost unnoticeable artefacts relative to the noise in the reconstructed images. As demonstrated in Fig. 7 and Fig. 10, the filtering methods can also remove the ring artefacts with low intensity perfectly. However, they do not perform well in suppressing the artefacts with high intensity, because new artefacts are introduced. Thus, our approach is better suited to dealing with APs of arbitrary intensity than the filtering methods applied to reconstructed images, sinograms, and SOPs. Compared with the other methods that also attempt to directly locate the APs in the SOP, our method, in fact, extracts the positions of the APs in the symmetry detection result. Taking advantage of the symmetry property of the SOP, the localisation of the APs in the symmetry detection result avoids the disturbance from the fine structures of the imaging object in the SOP, which can be regarded as background. In other words, the symmetry detection result removes almost all of the information of the imaging object in the SOP. Therefore, our method can detect the APs with high sensitivity and thus introduces few false alarms. However, our method is based on the symmetry of the SOP of CBCT with a circular scanning trajectory; if the imaging system does not work in this configuration, our method may be limited.
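The cancellation of object structure in the symmetry detection result can be illustrated in a few lines of NumPy. This is a toy sketch with arbitrary values, not the authors' code: a left-right symmetric "object" term cancels exactly in SY(i,j) = SOP(i,j) − SOP(i, n−j+1), so only the abnormal detector column survives.

```python
import numpy as np

n_rows, n_cols = 64, 64
x = np.linspace(-1.0, 1.0, n_cols)

# Symmetric object background in the SOP (a half-circle profile per row).
sop = np.tile(np.sqrt(np.clip(1.0 - x * x, 0.0, None)), (n_rows, 1))
sop[:, 17] += 0.5  # hypothetical abnormal detector column

# Symmetry detection: SY(i, j) = SOP(i, j) - SOP(i, n - j + 1).
sy = sop - sop[:, ::-1]

# The object background cancels; only the AP column and its mirror remain.
ap_cols = np.where(np.abs(sy).max(axis=0) > 0.1)[0]
print(ap_cols)  # → [17 46]
```

Note that the AP column shows up twice, at its true position and at the mirrored index; in the paper this duplication is resolved by the folding step on the map MA, which keeps only the half with j ≤ n/2.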

Although we introduced this approach only for CBCT, it can be extended to other imaging modalities with circular scanning trajectories, such as optical projection tomography [34] and synchrotron-based X-ray tomographic microscopy [18]. In addition, the symmetry property of the SOP can potentially play an important role in CBCT: factors that degrade the symmetry, such as geometric mis-calibration, metal artefacts, and the heel effect, may be detected and compensated using this property. The effect of the SOP in CBCT is still under investigation.

Acknowledgments

This work was supported by the National Key Technology R&D Program (Grant No. 2012BAI23B02), the Specific International Scientific Cooperation (Grant No. 2010DFR30820), and the Science Fund for Creative Research Group (Grant No. 61121004).

References and links

1. D. A. Jaffray, J. H. Siewerdsen, J. W. Wong, and A. A. Martinez, “Flat-panel cone-beam computed tomography for image-guided radiation therapy,” Int. J. Radiat. Oncol. Biol. Phys. 53(5), 1337–1349 (2002). [CrossRef]   [PubMed]  

2. J. M. Boone, T. R. Nelson, K. K. Lindfors, and J. A. Seibert, “Dedicated breast CT: radiation dose and image quality evaluation,” Radiology 221(3), 657–667 (2001). [CrossRef]   [PubMed]  

3. B. Chen and R. Ning, “Cone-beam volume CT breast imaging: feasibility study,” Med. Phys. 29(5), 755–770 (2002). [CrossRef]   [PubMed]  

4. F. Kiessling, S. Greschus, M. P. Lichy, M. Bock, C. Fink, S. Vosseler, J. Moll, M. M. Mueller, N. E. Fusenig, H. Traupe, and W. Semmler, “Volumetric computed tomography (VCT): a new technology for noninvasive, high-resolution monitoring of tumor angiogenesis,” Nat. Med. 10(10), 1133–1138 (2004). [CrossRef]   [PubMed]  

5. C. T. Badea, M. Drangova, D. W. Holdsworth, and G. A. Johnson, “In vivo small-animal imaging using micro-CT and digital subtraction angiography,” Phys. Med. Biol. 53(19), 319–350 (2008). [CrossRef]   [PubMed]  

6. L. Y. Chen, Y. T. Shen, C. J. Lai, T. Han, Y. C. Zhong, S. A. P. Ge, X. M. Liu, T. P. Wang, W. T. Yang, G. J. Whitman, and C. C. Shaw, “Dual resolution cone beam breast CT: A feasibility study,” Med. Phys. 36(9), 4007–4014 (2009). [CrossRef]   [PubMed]  

7. S. C. Lee, H. K. Kim, I. K. Chun, M. H. Cho, S. Y. Lee, and M. H. Cho, “A flat-panel detector based micro-CT system: performance evaluation for small-animal imaging,” Phys. Med. Biol. 48(24), 4173–4185 (2003). [CrossRef]   [PubMed]  

8. R. Ning, B. Chen, R. F. Yu, D. Conover, X. Y. Tang, and Y. Ning, “Flat panel detector-based cone-beam volume CT angiography imaging: system evaluation,” IEEE Trans. Med. Imaging 19(9), 949–963 (2000). [CrossRef]   [PubMed]  

9. D. Prell, Y. Kyriakou, and W. A. Kalender, “Comparison of ring artifact correction methods for flat-detector CT,” Phys. Med. Biol. 54(12), 3881–3895 (2009). [CrossRef]   [PubMed]  

10. J. Sijbers and A. Postnov, “Reduction of ring artefacts in high resolution micro-CT reconstructions,” Phys. Med. Biol. 49(14), N247–N253 (2004). [CrossRef]   [PubMed]  

11. Y. Kyriakou, D. Prell, and W. A. Kalender, “Ring artifact correction for high-resolution micro CT,” Phys. Med. Biol. 54(17), 385–391 (2009). [CrossRef]   [PubMed]  

12. X. Tang, R. Ning, R. Yu, and D. Conover, “Cone beam volume CT image artifacts caused by defective cells in x-ray flat panel imagers and the artifact removal using a wavelet-analysis-based algorithm,” Med. Phys. 28(5), 812–825 (2001). [CrossRef]   [PubMed]  

13. R. A. Ketcham, “New algorithms for ring artifact removal,” Proc. SPIE 6318, 63180O (2006). [CrossRef]  

14. J. A. Seibert, J. M. Boone, and K. K. Lindfors, “Flat-field correction technique for digital detectors,” Proc. SPIE 3336, 348–354 (1998). [CrossRef]  

15. S. Titarenko, V. Titarenko, A. Kyrieleis, and P. J. Withers, “A ring artifact suppression algorithm based on a priori information,” Appl. Phys. Lett. 95(7), 071113 (2009). [CrossRef]  

16. F. Sadi, S. Y. Lee, and M. K. Hasan, “Removal of ring artifacts in computed tomographic imaging using iterative center weighted median filter,” Comput. Biol. Med. 40(1), 109–118 (2010). [CrossRef]   [PubMed]  

17. C. Raven, “Numerical removal of ring artifacts in microtomography,” Rev. Sci. Instrum. 69(8), 2978–2980 (1998). [CrossRef]  

18. M. Boin and A. Haibel, “Compensation of ring artefacts in synchrotron tomographic images,” Opt. Express 14(25), 12071–12075 (2006). [CrossRef]   [PubMed]  

19. B. Münch, P. Trtik, F. Marone, and M. Stampanoni, “Stripe and ring artifact removal with combined wavelet–Fourier filtering,” Opt. Express 17(10), 8567–8591 (2009). [CrossRef]   [PubMed]  

20. A. N. M. Ashrafuzzaman, S. Y. Lee, and M. K. Hasan, “A self-adaptive approach for the detection and correction of stripes in the sinogram: suppression of ring artifacts in CT imaging,” EURASIP J. Adv. Signal Process. 2011(1), 183547 (2011). [CrossRef]  

21. E. M. A. Anas, S. Y. Lee, and M. K. Hasan, “Removal of ring artifacts in CT imaging through detection and correction of stripes in the sinogram,” Phys. Med. Biol. 55(22), 6911–6930 (2010). [CrossRef]   [PubMed]  

22. K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, “A geometric calibration method for cone beam CT systems,” Med. Phys. 33(6), 1695–1706 (2006). [CrossRef]   [PubMed]  

23. M. Defrise and R. Clack, “A Cone-Beam Reconstruction Algorithm Using Shift-Variant Filtering and Cone-Beam Backprojection,” IEEE Trans. Med. Imaging 13(1), 186–195 (1994). [CrossRef]   [PubMed]  

24. A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, Signals and Systems (Prentice-Hall, 1997), Chap. 2.

25. S. R. Deans, The Radon Transform and Some of Its Applications (Dover Publications, Inc., 2007), Chap. 3.

26. S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Trans. Pattern Anal. Mach. Intell. 14(7), 710–732 (1992). [CrossRef]  

27. S. Mallat and W. L. Hwang, “Singularity detection and processing with wavelets,” IEEE Trans. Inf. Theory 38(2), 617–643 (1992). [CrossRef]  

28. N. F. Law and W. C. Siu, “An efficient computational scheme for the two-dimensional overcomplete wavelet transform,” IEEE Trans. Signal Process. 50(11), 2806–2819 (2002). [CrossRef]  

29. S. L. Barna, M. W. Tate, S. M. Gruner, and E. F. Eikenberry, “Calibration procedures for charge-coupled device x-ray detectors,” Rev. Sci. Instrum. 70(7), 2927–2934 (1999). [CrossRef]  

30. D. W. Nelms, H. I. Shukla, E. Nixon, J. E. Bayouth, and R. T. Flynn, “Assessment of three dead detector correction methods for cone-beam computed tomography,” Med. Phys. 36(10), 4569–4576 (2009). [CrossRef]   [PubMed]  

31. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Press, 1988), Chap. 3.

32. L. A. Feldkamp, L. C. Davis, and J. W. Kress, “Practical cone-beam algorithm,” J. Opt. Soc. Am. A 1(6), 612–619 (1984). [CrossRef]  

33. X. Q. Yang, Y. Z. Meng, Q. M. Luo, and H. Gong, “High resolution in vivo micro-CT with flat panel detector based on amorphous silicon,” J. XRay Sci. Technol. 18(4), 381–392 (2010). [PubMed]  

34. J. R. Walls, J. G. Sled, J. Sharpe, and R. M. Henkelman, “Correction of artefacts in optical projection tomography,” Phys. Med. Biol. 50(19), 4645–4665 (2005). [CrossRef]   [PubMed]  



Figures

Fig. 1 System geometry of the CBCT.

Fig. 2 Illustration of the imaging field in CBCT.

Fig. 3 Flow chart of our proposed approach to detect and restore the APs.

Fig. 4 Validation of the symmetry of the SOP with different projection numbers. (a) The SOP of a numerical phantom with 6400 projections. (b) The total error as a function of the projection number. (c) Pointwise detail of the errors with different projection numbers; the positions are indicated in (a) by the white line.

Fig. 5 Validation of the symmetry of the SOP with different noise levels. (a) The SOP of the numerical phantom with a noise variance of 4%. (b) The total error as a function of the variance of the shot noise. (c) Pointwise details of the errors with different noise levels, whose position is indicated in (a) by the white line.

Fig. 6 Performance of the APs in the reconstructed image of a numerical phantom. (a) Reconstructed image of a central slice of the numerical phantom, free of noise, with APs. (b) The intensity profile of (a), whose position is indicated by the white line in (a). (c) Reconstructed image of the central slice of the numerical phantom, with a noise variance of 2% and APs. (d) Magnified image of (c), whose position is indicated by the black square in (c).

Fig. 7 Study of the water phantom. (a) The SOP of the water phantom. (b) Symmetry detection result of the water phantom, whose position is indicated by the white square in (a). (c) Map of the detected APs, whose position is indicated by the white square in (a). (d) The intensity profile of the SOP, indicated in (a) by the black line. (e), (f), (g), and (h) are the reconstructed images of the raw data, processed using our method, the averaging filter [18], and the wavelet–Fourier filter [19], respectively, at the position indicated by the white line in (a). The arrows indicate the location of the artefacts.

Fig. 8 Study of the euthanised mouse. (a) and (d) The reconstructed images of the thorax before and after correction, respectively. (b) and (c) The magnified images of (a) and (d), respectively. (e) and (h) The reconstructed images of the abdomen before and after correction, respectively. (f) and (g) The magnified images of (e) and (h), respectively.

Tables (1)


Table 1 Parameters of the Numerical Phantom

Equations (22)

Equations on this page are rendered with MathJax.

$$\mathrm{SOP}(\tilde{x},d,\tilde{z}) = \int R_{\theta}\left[\mu(r,\theta,z)\right]d\theta$$
$$\mathrm{SOP}(\tilde{x},\tilde{z}) = \int R_{0}\left[\mu(r,\theta-\theta',z)\right]d\theta'$$
$$\mu(r,\theta,z) = \iiint \mu(r_0,\theta_0,z_0)\,\delta(r-r_0,\theta-\theta_0,z-z_0)\,dr_0\,d\theta_0\,dz_0$$
$$\mathrm{SOP}(\tilde{x},\tilde{z}) = \int R_{0}\left[\iiint \mu(r_0,\theta_0,z_0)\,\delta(r-r_0,\theta-\theta'-\theta_0,z-z_0)\,dr_0\,d\theta_0\,dz_0\right]d\theta'$$
$$\mathrm{SOP}(\tilde{x},\tilde{z}) = R_{0}\left[\iiint\!\int \mu(r_0,\theta_0,z_0)\,\delta(r-r_0,\theta-\theta'-\theta_0,z-z_0)\,dr_0\,d\theta_0\,dz_0\,d\theta'\right]$$
$$\mathrm{SOP}(\tilde{x},\tilde{z}) = R_{0}\left[\iiint \mu(r_0,\theta_0,z_0)\int \delta(r-r_0,\theta-\theta'-\theta_0,z-z_0)\,d\theta'\,dr_0\,d\theta_0\,dz_0\right]$$
$$\mathrm{SOP}(\tilde{x},\tilde{z}) = R_{0}\left[\iiint \mu(r_0,\theta_0,z_0)\,\delta(r-r_0,z-z_0)\,dr_0\,d\theta_0\,dz_0\right]$$
$$F(r,z) = \iiint \mu(r_0,\theta_0,z_0)\,\delta(r-r_0,z-z_0)\,dr_0\,d\theta_0\,dz_0$$
$$\mathrm{SOP}(\tilde{x},\tilde{z}) = R_{0}\left[F(r,z)\right]$$
$$\mathrm{SOP} = \int \left(P_u(\theta)+P_n(\theta)+n(\theta)\right)d\theta$$
$$\mathrm{SOP} = \frac{2\pi}{N}\sum_{n=0}^{N-1}\left(P_u\!\left(\frac{2\pi}{N}n\right)+P_n\!\left(\frac{2\pi}{N}n\right)+n\!\left(\frac{2\pi}{N}n\right)\right)$$
$$SY(i,j) = \mathrm{SOP}(i,j)-\mathrm{SOP}(i,n-j+1)$$
$$S_{2^{k+1}}(SY) = S_{2^k}(SY)*[L_k,L_k],\qquad W^1_{2^{k+1}}(SY) = S_{2^k}(SY)*[G_k,D],\qquad W^2_{2^{k+1}}(SY) = S_{2^k}(SY)*[D,G_k]$$
$$WT_{2^{k+1}}(SY) = \sqrt{\left(W^1_{2^{k+1}}(SY)\right)^2+\left(W^2_{2^{k+1}}(SY)\right)^2}$$
$$T_{w2} = \left(1-\frac{1}{x/20+4}\right)\times T_{w1}$$
$$SN(i,j) = \sum_{\substack{m,n=-1,0,1\\ SY(i+m,j+n)\notin \mathrm{APs}}} SY(i+m,j+n)$$
$$SYN(i,j) = \left|SY(i,j)-\frac{1}{8}SN(i,j)\right|$$
$$T_{s2} = \left(1-\frac{1}{x/20+4}\right)\times T_{s1}$$
$$MA(i,j) = \begin{cases} MA(i,j), & \text{if } j\le \dfrac{n}{2}\\[4pt] MA(i,n-j+1), & \text{if } j>\dfrac{n}{2}\end{cases}$$
$$TE = \sum_{i=1}^{1100}\sum_{j=1}^{550}\left|\mathrm{SOP}(i,j)-\mathrm{SOP}(i,1101-j)\right|$$
$$\log_{10}(TE) = -1.459\,\log_{10}(NP)+6.859$$
$$TE = 12960\,NL+917.3$$