Abstract

We describe how to transfer the characteristic functional of an object model through a noisy, discrete imaging system to arrive at the characteristic function of the images. Our method can also incorporate linear post-processing of the images.

1 Introduction

In order to properly evaluate the performance of a digital imaging system on tasks such as detection or estimation, it is desirable to have the probability density on the images or, at least, the probability density on a set of filter outputs derived from the images. However, this is usually difficult because the imaging system maps a continuous function to a noisy finite-dimensional image. Both the randomness in the objects being imaged and the noise in the imaging system contribute to the stochastic nature of the images. Even in simulations where we have analytic expressions for the full probability densities of the objects and noise, it is exceedingly difficult to derive a corresponding analytic expression for the probability density on the images.

In terms of characteristic functions, this problem is more tractable. For images and other finite-dimensional random vectors, the characteristic function (CF) is the Fourier transform of the probability density function (PDF). The CF can also be viewed as the expectation of certain complex exponential functions of the data. For random functions, such as objects, the probability density is a map from a Hilbert space to the real numbers. This kind of map is usually called a functional. Just as a PDF has a corresponding CF, this probability density functional has a corresponding characteristic functional (CFl). We will show that if we have an analytic expression for the CFl of the objects, we can often derive an analytic expression for the characteristic function of the images or of any set of linear filter outputs.

2 Method

A linear, digital imaging system can be mathematically represented by

$g=Hf+n,$

where f is a sample object from the ensemble of functions that are being imaged, H is the system operator which maps a continuous function to a discrete image, n is the noise in the imaging system, the statistics of which may depend on f, and g is the image. We assume that the mean image for a fixed object f is given by

$g¯=Hf.$

For this reason, we call ḡ the noiseless image.

2.1 Noiseless Imaging Systems

Let us assume that we know the CFl of the object distribution

$Ψf(ξ)=〈exp(−2πiξ†f)〉f$

where the function ξ represents the Fourier conjugate of the function f. For now, let us envision a noiseless imaging system. The characteristic function of ḡ is

$Ψg¯(ρ)=〈exp(−2πiρ†g¯)〉g¯$

where 〈·〉 represents the expectation with respect to the PDF of ḡ, and ρ is the Fourier conjugate of ḡ. By using the definition of the noiseless image ḡ = Hf, the properties of the adjoint, and standard rules for transforming expectations, we can rewrite this expression as

$Ψg¯(ρ)=〈exp(−2πiρ†Hf)〉f$
$=〈exp(−2πi(H†ρ)†f)〉f.$

By definition of the CFl of the objects, this equation is equivalent to

$Ψg¯(ρ)=Ψf(H†ρ).$

Thus, if the CFl of f is known, then we also know the CF of any linear mapping of f by simply using the adjoint of the linear operator.
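As a concrete illustration, the adjoint rule can be checked numerically on a discretized toy model. Everything in the sketch below is an assumption for illustration: the grid sizes, the random matrix standing in for H, and a zero-mean Gaussian object model, whose CFl is Ψf(ξ) = exp(−2π²ξ†K_f ξ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and a random matrix standing in for the system operator H
# (all of this is illustrative, not a real imaging model).
N, M = 64, 8
H = rng.normal(size=(M, N)) / N

# Zero-mean Gaussian object model with covariance K_f; its CFl is
# Psi_f(xi) = exp(-2 pi^2 xi^T K_f xi).
x = np.arange(N)
K_f = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)

def cfl_f(xi):
    return np.exp(-2.0 * np.pi**2 * xi @ K_f @ xi)

# Adjoint rule: the CF of the noiseless image at rho is the object CFl at H^T rho.
rho = rng.normal(scale=0.2, size=M)
cf_adjoint = cfl_f(H.T @ rho)

# Monte Carlo check: sample objects, image them, average exp(-2 pi i rho^T gbar).
L = np.linalg.cholesky(K_f + 1e-10 * np.eye(N))
f_samples = (L @ rng.normal(size=(N, 200_000))).T
gbar = f_samples @ H.T
cf_mc = np.mean(np.exp(-2j * np.pi * gbar @ rho))
print(cf_adjoint, cf_mc)
```

The prediction from the adjoint rule and the Monte Carlo average should agree to within sampling error.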

2.2 Noisy Imaging Systems

Thus far, we have dealt with noise-free imaging systems, which are unrealistic. For the methods developed in this paper to be of practical interest, we must be able to compute the characteristic function of the image data g = ḡ + n, accounting for both object variability and noise. Two common noise models that researchers employ are Gaussian noise and quantum, or Poisson, noise.

The characteristic function for Gaussian noise with zero mean and covariance matrix K is known to be Gaussian-shaped as well, with the form

$\Psi_n(\rho) = \exp(-2\pi^2 \rho^\dagger K \rho).$

Because the noise is independent of the object being imaged, the PDF of g is the convolution of the PDF of ḡ with the PDF of n, which, by the Fourier convolution theorem, yields

$Ψg(ρ)=Ψf(H†ρ)Ψn(ρ).$
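For example, if the object model is itself zero-mean Gaussian with covariance operator K_f (an illustrative special case), then Ψf(ξ) = exp(−2π²ξ†K_f ξ) and the product rule above gives

```latex
\Psi_g(\rho) = \exp\!\big(-2\pi^2 \rho^\dagger H K_f H^\dagger \rho\big)\,
               \exp\!\big(-2\pi^2 \rho^\dagger K \rho\big)
             = \exp\!\big(-2\pi^2 \rho^\dagger (H K_f H^\dagger + K)\,\rho\big),
```

recovering the familiar fact that g is then Gaussian with covariance H K_f H† + K.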

Poisson detector noise can also be incorporated into this framework. The Poisson distribution is conditioned on the mean image, i.e.,

$\mathrm{pr}(g \mid \bar{g}) = \prod_{m=1}^{M} \frac{\exp(-\bar{g}_m)\,\bar{g}_m^{g_m}}{g_m!},$

where the subscript m denotes the mth component of the M-vector ḡ. The probability of g can then be obtained by marginalizing over the mean image ḡ,

$\mathrm{pr}(g) = \int \mathrm{pr}(g \mid \bar{g})\,\mathrm{pr}(\bar{g})\,d\bar{g}$
$= \int \mathrm{pr}(\bar{g}) \prod_{m=1}^{M} \frac{\exp(-\bar{g}_m)\,\bar{g}_m^{g_m}}{g_m!}\,d\bar{g}.$

By definition, the CF for g is,

$\Psi_g(\rho) = \langle \exp(-2\pi i \rho^\dagger g) \rangle_g$
$= \sum_{g} \exp\Big(-2\pi i \sum_{m=1}^{M} \rho_m g_m\Big)\,\mathrm{pr}(g)$
$= \sum_{g} \exp\Big(-2\pi i \sum_{m=1}^{M} \rho_m g_m\Big) \int d\bar{g}\,\mathrm{pr}(\bar{g}) \prod_{m=1}^{M} \frac{\exp(-\bar{g}_m)\,\bar{g}_m^{g_m}}{g_m!},$

where the sum over the vector g indicates a sum over all values of each component of g from 0 to ∞,

$\sum_{g} = \sum_{g_1=0}^{\infty} \sum_{g_2=0}^{\infty} \cdots \sum_{g_M=0}^{\infty}.$

By rearranging terms in the above equation we arrive at,

$\Psi_g(\rho) = \int d\bar{g}\,\mathrm{pr}(\bar{g}) \sum_{g} \prod_{m=1}^{M} \frac{\exp(-\bar{g}_m - 2\pi i \rho_m g_m)\,\bar{g}_m^{g_m}}{g_m!}$
$= \int d\bar{g}\,\mathrm{pr}(\bar{g}) \prod_{m=1}^{M} \exp(-\bar{g}_m) \sum_{g_m=0}^{\infty} \frac{\big(\exp(\ln \bar{g}_m - 2\pi i \rho_m)\big)^{g_m}}{g_m!}.$

The inner sum in the above equation is the series expansion of an exponential function, which gives

$\Psi_g(\rho) = \int d\bar{g}\,\mathrm{pr}(\bar{g}) \prod_{m=1}^{M} \exp\big(-\bar{g}_m + e^{\ln \bar{g}_m - 2\pi i \rho_m}\big)$
$= \int d\bar{g}\,\mathrm{pr}(\bar{g}) \prod_{m=1}^{M} \exp\big(-\bar{g}_m + \bar{g}_m e^{-2\pi i \rho_m}\big),$

which is close to the CF of ḡ except that the term in the exponent is not the same. We can relate the above expression to the CF of ḡ by defining a nonlinear operator Γ(·) which maps an M-vector to another M-vector, component by component, via

$[\Gamma(\rho)]_m = \frac{-1 + \exp(-2\pi i \rho_m)}{-2\pi i}.$

Thus we can relate the CF of g to the CF of the noiseless image ḡ, which we previously related to the CFl of f, i.e.,

$Ψg(ρ)=Ψg¯(Γ(ρ))=Ψf(H†Γ(ρ)).$

In other words, because we know the CFl for our object models, we are able to use H and a known nonlinear operator to determine the CF for our noisy image data.
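The operator Γ(·) is straightforward to implement and to test numerically. In the sketch below, the object model is a toy assumption chosen only because it makes the CF of ḡ available in closed form: the components of ḡ are taken to be i.i.d. Gamma variates. The Γ operator itself follows the definition above.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_op(rho):
    # Nonlinear operator for Poisson noise:
    # [Gamma(rho)]_m = (-1 + exp(-2 pi i rho_m)) / (-2 pi i)
    return (-1.0 + np.exp(-2j * np.pi * rho)) / (-2j * np.pi)

# Toy object model (illustrative assumption): the components of the mean
# image gbar are i.i.d. Gamma(k, theta), so the CF of gbar is known:
# Psi_gbar(z) = prod_m (1 + 2 pi i theta z_m)^(-k).
k, theta = 3.0, 1.0
def cf_gbar(z):
    return np.prod((1.0 + 2j * np.pi * theta * z) ** (-k))

M = 4
rho = rng.uniform(-0.05, 0.05, size=M)
cf_pred = cf_gbar(gamma_op(rho))   # Psi_g(rho) = Psi_gbar(Gamma(rho))

# Monte Carlo check: gbar ~ Gamma, then g | gbar ~ Poisson(gbar).
n = 200_000
gbar = rng.gamma(k, theta, size=(n, M))
g = rng.poisson(gbar)
cf_mc = np.mean(np.exp(-2j * np.pi * g @ rho))
print(cf_pred, cf_mc)
```

The closed-form prediction Ψḡ(Γ(ρ)) and the empirical CF of the Poisson data should agree to within sampling error.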

2.3 Filter Outputs

A linear filter bank, like the imaging system itself, can be represented as a linear operator. Thus, if one wants to know the CF for the filter outputs ν = Tg = T(Hf + n), one needs only the adjoint T†. That is,

$\Psi_\nu(\omega) = \Psi_f\big(H^\dagger \Gamma(T^\dagger \omega)\big),$

where ω is the Fourier conjugate of the filter outputs ν.

Typical choices for T could be Laguerre-Gauss channels for signal-detection tasks, or wavelet filters for edge detection. We can also view g as the sinogram data from a tomographic imaging system, and T as a linear reconstruction operator. With this latter viewpoint, we arrive at the CF for the reconstructed images.
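For the Gaussian-noise case, the whole chain object → image → channels can be sketched end to end. All dimensions, the random matrices standing in for H and T, and the Gaussian object model below are illustrative assumptions; the point is only that the CF of the channel outputs is obtained by pulling ω back through the adjoints.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes: N-point objects, M-pixel images, C filter channels.
N, M, C = 64, 16, 3
H = rng.normal(size=(M, N)) / N      # stand-in for the system operator
T = rng.normal(size=(C, M)) / M      # stand-in for a channel/filter matrix

# Zero-mean Gaussian object model and i.i.d. Gaussian detector noise.
x = np.arange(N)
K_f = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)
sigma_n = 0.1                        # noise std, so K_n = sigma_n^2 I

def cf_nu(omega):
    # nu = T(Hf + n): pull omega back through the adjoints T^T and H^T,
    # then multiply the object and noise factors (Gaussian-noise case).
    z = T.T @ omega                  # conjugate variable at the image stage
    xi = H.T @ z                     # conjugate variable at the object stage
    return np.exp(-2.0 * np.pi**2 * (xi @ K_f @ xi + sigma_n**2 * z @ z))

omega = rng.normal(scale=0.5, size=C)

# Monte Carlo check over both object variability and detector noise.
n = 100_000
L = np.linalg.cholesky(K_f + 1e-10 * np.eye(N))
f = (L @ rng.normal(size=(N, n))).T
g = f @ H.T + sigma_n * rng.normal(size=(n, M))
nu = g @ T.T
cf_mc = np.mean(np.exp(-2j * np.pi * nu @ omega))
print(cf_nu(omega), cf_mc)
```

The analytic CF of the channel outputs and the Monte Carlo average should match to within sampling error.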

3 Conclusions

We have shown that one can transfer the characteristic functional for the object ensemble through the imaging chain of a noisy, linear, continuous-to-discrete imaging system. The end result is the characteristic function of the image or any linear post-processed image. Since the CF of the image contains all of the statistical information that the PDF contains, it can, in principle, be used for statistical inference, parameter estimation, signal detection, and other relevant tasks. In a future publication, we will employ the techniques described here to determine the parameters which characterize the randomness in the objects being imaged.

Acknowledgments

This work was supported by NSF grant 9977116 and NIH grants P41 RR14304, KO1 CA87017, and RO1 CA52643.

References

1. H. H. Barrett, "Objective assessment of image quality: Effects of quantum noise and object variability," J. Opt. Soc. Am. A 7, 1266–1278 (1990).

2. A. Papoulis, Probability, Random Variables, and Stochastic Processes (McGraw-Hill, New York, 1991).

3. H. H. Barrett, C. Abbey, B. Gallas, and M. Eckstein, "Stabilized estimates of Hotelling-observer detection performance in patient-structured noise," in SPIE Medical Imaging: Image Perception, ed. H. L. Kundel, Proc. SPIE 3340, 27–43 (1998).
