Optica Publishing Group

Efficient and accurate conversion-gain estimation of a photon-counting image sensor based on the maximum likelihood estimation

Open Access

Abstract

We establish a method for estimating the conversion gains of image sensors on the basis of maximum likelihood estimation, one of the most common and well-established statistical approaches. A numerical simulation indicates that the proposed method can evaluate the conversion gain more accurately with less data accumulation than known approaches. We also applied this method to experimental images accumulated under a photon-counting–regime illumination condition by a CMOS image sensor that can distinguish how many photoelectrons are generated in each pixel. As a result, the conversion gains were determined with an accuracy of three digits from 1000 observed images, which is at most 10 times fewer than required to achieve a similar accuracy by known gain-estimation methods.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Recent developments in imaging devices provide powerful tools for studies in biology, material science, and other fields [1–3], and such devices are now indispensable for cultivating scientific fields. Above all, complementary metal-oxide semiconductor (CMOS) image sensors are becoming a gold standard owing to their versatility, ease of use, wide aperture, low noise, high sensitivity, and so on [4,5]. However, to obtain correct information from such devices, one must understand how the imaging devices work and what they actually output.

Generally, CMOS image sensors consist of a pixelated active area, amplifier circuits, and signal processing circuits [6]. First, incident photons are converted into photoelectrons in the active area. Then, the electrons are amplified by the amplifier circuits to make the signals detectable, and finally, the signals are digitized at the signal processing stage to facilitate their transfer and handling. Among the characteristics of these stages, the conversion gain, defined as the ratio of the output signal level to the number of initial photoelectrons generated at each pixel, is one of the most fundamental properties of an image sensor for guaranteeing quantitative correspondence between input and output images.

Moreover, the conversion gain differs from pixel to pixel in practical devices, meaning that an observed image does not give quantitative information about a subject unless the output signal levels have been calibrated on the basis of precise estimates of the conversion gain. So far, two methods have been developed to estimate conversion gains: the photon transfer curve (PTC) method [7–13] and the photon counting histogram (PCH) method [14]. However, these methods require numerous repetitions of image acquisition, typically at least several tens of thousands of frames, to determine the conversion gains with a precision sufficient for practical use. Moreover, the PCH method is applicable only to photodetectors capable of resolving the number of photo(electro)ns in their output signals. These limitations make precise calibration of the conversion gains difficult despite its significance.

To achieve an efficient and accurate method for estimating the conversion gain, we establish another approach by introducing statistical models and the method of maximum likelihood [15], an approach essentially different from the PTC and PCH methods. By taking account of the statistical distribution of generated photoelectron numbers, conversion gains are estimated accurately while reducing the number of image acquisitions to one tenth of that required by known methods. We also applied this method to images accumulated under illumination conditions of dozens of incident photons by a CMOS image sensor that can distinguish how many photoelectrons are generated in each pixel. As a result, the conversion gains were determined with an accuracy of three digits from $1000$ observed images, which is at most $10$ times fewer than required to achieve a similar accuracy by currently known gain-estimation methods. This accuracy assures an ability to distinguish single-photoelectron differences in output signals that correspond to $\sim 100$ photoelectrons. The proposed method also has the merit of versatility, because it does not require precise control of illumination intensity and is also effective for imaging devices that cannot distinguish the number of photoelectrons.

2. Statistical model for signal output from CMOS image sensor

In the following, we consider output signals from a CMOS image sensor illuminated with photon-counting–regime light. According to the standard picture of photo-electron conversion, an electron is generated probabilistically with probability $\eta$ for each incident photon in each pixel of the CMOS image sensor. Here, $\eta$ is often referred to as the conversion efficiency or quantum efficiency and generally takes a non-negative value less than unity, and electrons generated via the photo-electron conversion process are called photoelectrons. The number of photoelectrons behaves as a random variable obeying a Poisson distribution in many cases:

$$p_\text{L}(n | \zeta) = \frac{e^{-\zeta}\zeta^n}{n!},$$
where $n$ and $\zeta$ denote the number of the generated photoelectrons and its statistical average, respectively. This gives a correct estimation by applying $\zeta = \eta \bar {N}$, when the number of the incident photons is a Poissonian random variable with $\bar {N}$ as a mean photon number.
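As a quick sanity check, Eq. (1) can be evaluated directly. The sketch below (plain Python; the helper name `p_L` is ours, chosen to mirror the notation) confirms that the probabilities sum to unity over a truncated range of $n$.

```python
import math

def p_L(n, zeta):
    """Poisson probability of generating n photoelectrons with mean zeta [Eq. (1)]."""
    return math.exp(-zeta) * zeta**n / math.factorial(n)

# The probabilities over n = 0, 1, 2, ... sum to unity (truncated at n = 100).
total = sum(p_L(n, 5.0) for n in range(100))
```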

It is stressed that we can never deterministically identify how many photons arrived from an output of the image sensor unless $\eta = 1$, because the direct origins of the sensor output are not the incident photons but the photoelectrons, which were generated probabilistically with probability $\eta$. Since no existing detector or image sensor escapes this common and inevitable issue, we restrict our scope to how to precisely determine the number of photoelectrons from the sensor output.

2.1 Basic model

Even when we know the number of the generated photoelectrons $n$, the final output signal becomes a random variable that fluctuates around the mean value $\alpha n$ due to readout noise of the device. Here, $\alpha$ is a multiplication factor of amplifier and signal processing circuits and is often referred to as a conversion gain. Realization $x$ of the output signal has the following probability density in most cases [16]:

$$p_\text{N}(x | n, \alpha, \sigma) = \frac{1}{\sqrt{2 \pi \alpha^2 \sigma^2}} \exp \left[-\frac{(x - \alpha n)^2}{2 \alpha^2 \sigma^2} \right],$$
where $\sigma$ is a standard deviation of an effective readout noise expressed in a unit of the number of photoelectrons. In other words, $X$ is assumed to obey a normal distribution with mean $\alpha n$ and variance $\alpha ^2 \sigma ^2$, a fact which is also expressed as $X \sim N(\alpha n, \alpha ^2 \sigma ^2)$.

Here we note that the term “readout noise” indicates the whole contribution of noise appearing in the output signal without identifying its origins; hence it is meaningless to decompose the readout noise into a product of $\alpha$ and $\sigma$. Nevertheless, we introduce $\sigma$ as a relative benchmark for the noise of an imaging device to keep the quantitative discussion on noise concise.

A total statistical model for a CMOS image sensor is described as a convolution of the probability distributions of the number of generated photoelectrons and of the readout noise [Eqs. (1) and (2)]:

$$f(x | \zeta, \alpha, \sigma) = \sum_{n=0}^{\infty} {p_\text{N}(x | n, \alpha, \sigma) p_\text{L}(n | \zeta)}.$$
We show in Fig. 1 two examples of the probability density [Eq. (3)] for $\sigma = 0.30$ and $0.45$ $\text {e}^{-}$ with $\alpha = 1.0$ and $\zeta = 10$ fixed, to visualize the influence of the effective readout noise on the behavior of our statistical model. Clear peaks and valleys in $f(x | 10, 1.0, 0.30)$ suggest that we can distinguish how many photoelectrons were generated initially by choosing discrimination ranges as intervals between adjacent valleys, while the peaks and valleys become ambiguous as $\sigma$ increases from 0.30 to 0.45 $\text {e}^{-}$. In this study, we define an upper threshold of the readout noise, $\sigma _\text {th} := 0.30$ $\text {e}^{-}$, as a criterion for the possibility of resolving the number of photoelectrons. This threshold is commonly seen in similar studies [17–19].
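The density of Eq. (3) can be sketched directly by truncating the sum over $n$; the function name `f_total` and the truncation depth below are illustrative choices, and the parameters mirror the $\sigma = 0.30$ $\text{e}^-$ case of Fig. 1.

```python
import math

def f_total(x, zeta, alpha, sigma, n_max=60):
    """Output-signal density of Eq. (3): Poisson-weighted sum of Gaussians
    centered at alpha*n with standard deviation alpha*sigma."""
    var = (alpha * sigma)**2
    norm = 1.0 / math.sqrt(2.0 * math.pi * var)
    p_n = math.exp(-zeta)              # Poisson weight for n = 0
    total = 0.0
    for n in range(n_max):
        total += p_n * math.exp(-(x - alpha * n)**2 / (2.0 * var))
        p_n *= zeta / (n + 1)          # recurrence: p(n+1) = p(n) * zeta / (n+1)
    return norm * total

# For sigma = 0.30 e-, alpha = 1, zeta = 10, peaks sit near integer x and
# valleys near half-integers, as in Fig. 1.
peak = f_total(10.0, 10.0, 1.0, 0.30)
valley = f_total(10.5, 10.0, 1.0, 0.30)
```

A Riemann sum over a wide range of $x$ recovers unit total probability, confirming the truncation is adequate for $\zeta = 10$.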


Fig. 1. Probability densities of signal value $x$ [Eq. (3)] for $\sigma =0.30$ and $0.45$ $\text {e}^-$. Conversion gain and averaged photoelectron number are fixed as $\alpha = 1$ and $\zeta = 10$, respectively. We can observe a clear separation of individual peaks corresponding to the number of photoelectrons in the case of $\sigma = 0.30$ $\text {e}^-$.


2.2 Modified model for practical measurement

Generally, a statistical model should be constructed from strict descriptions of statistical processes contained in the total system under consideration. However, one often encounters a situation under which not all parameters in the strict statistical processes are experimentally available. In such cases, a modified model needs to be established that contains only experimentally available parameters and $\alpha$ as a subject to be estimated.

First, note that we can access statistical properties of a CMOS image sensor only through Eq. (3). Calculating mean value $\mu$ and variance $v$ of $X$ for a probability distribution of Eq. (3), we obtain [11]

$$\left. \begin{aligned} \mu & = \alpha\zeta \\ v & = \alpha^2\sigma^2 + \alpha^2\zeta = \sigma_s^2 + \alpha^2\zeta \end{aligned} \right\},$$
where $\sigma _s := \alpha \sigma$ is the standard deviation of the readout noise observed at the final output stage. In the above equations, both $\mu$ and $v$ correspond to quantities evaluated from observed values of $X$. However, $\alpha$, $\zeta$, and $\sigma$ are not available separately: e.g., we can determine only the product $\mu = \alpha \zeta$, while $\alpha$ and $\zeta$ individually remain unknown.

By replacing parameters in Eqs. (1), (2), and (3) with the help of Eq. (4), we obtain a total statistical model expressed in terms of experimentally accessible parameters:

$$f(x | \mu, v, \alpha) = \frac{1}{\sqrt{2\pi(v-\alpha\mu)}} \sum_{n=0}^\infty \frac{e^{-\mu/\alpha}\left(\mu/\alpha\right)^n}{n!} \exp\left[-\frac{(x-\alpha n)^2}{2(v-\alpha\mu)}\right] .$$
In the following section, we establish a method to determine the most likely $\alpha$ at each pixel on the basis of statistical models established in this section.

3. Estimation of conversion gains

We briefly summarize the existing methods for estimating the conversion gain to highlight the difference of our method.

3.1 Existing method I: PTC method

According to Eq. (4), the variance and mean of $X$ satisfy the following relation:

$$v_\zeta = \sigma_s^2 + \alpha \mu_\zeta,$$
where subscript $\zeta$ attached to $v$ and $\mu$ indicates their dependency on the average number of incident photoelectrons. Practically, $v_\zeta$ and $\mu _\zeta$ are obtained as an unbiased variance and a sample mean, respectively, from a series of realizations of $X$ [denoted as $\{x_m\}_{m = 1, 2, \ldots, M}$], which are observed under a common condition of $\zeta$ [7,8].

The PTC method estimates $\alpha$ as the linear coefficient of a scatter plot consisting of $\{ (\mu _\zeta, v_\zeta ) \}$ for various $\zeta$. When the scatter plot contains $L$ pairs of $(\mu _\zeta, v_\zeta )$, we replace the subscript by an integer suffix $l (= 1, 2, \ldots, L)$ to distinguish the different conditions of $\zeta$. Strictly speaking, the average number of incident photoelectrons, $\zeta$, is almost impossible to access directly. Nevertheless, as long as a Poissonian light source is applied, we can control $\zeta$ through the mean incident photon number via $\zeta = \eta \bar {N}$, as described at the beginning of Sec. 2.

The accuracy of $\alpha$ estimated by the PTC method is given by the square root of the inverse of the Fisher information for a statistical model of $v$. The final expression for the accuracy of $\alpha$ is approximately $\alpha /\sqrt {L (M - 1)/2}$, suggesting that PTC estimation requires at least $L M \gtrsim 300000$ observations in total to achieve a relative error of $2.6 \times 10^{-3}$.
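A minimal sketch of the PTC fit is given below, assuming a simulated sensor with $\alpha = 7.0$, $\sigma_s = 0.27\alpha$, and five Poissonian illumination levels; the helper name `ptc_estimate` and all parameter values are ours, not the paper's.

```python
import numpy as np

def ptc_estimate(frames_by_level):
    """PTC gain estimate: slope of the unbiased variance against the sample
    mean over several illumination levels [Eq. (6)]."""
    mus = np.array([f.mean() for f in frames_by_level])
    vs = np.array([f.var(ddof=1) for f in frames_by_level])
    slope, _intercept = np.polyfit(mus, vs, 1)   # intercept estimates sigma_s^2
    return float(slope)

# Simulated sensor: alpha = 7.0, sigma_s = 0.27*alpha, M = 20000 frames for
# each of L = 5 illumination levels (zeta = 10 ... 50).
rng = np.random.default_rng(0)
alpha, sigma_s = 7.0, 0.27 * 7.0
frames = [alpha * rng.poisson(z, 20000) + rng.normal(0.0, sigma_s, 20000)
          for z in (10, 20, 30, 40, 50)]
alpha_hat = ptc_estimate(frames)
```

With $LM = 10^5$ samples, the expression above predicts an absolute error of roughly $0.03$ on $\alpha = 7$, consistent with what this simulation yields.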

3.2 Existing method II: PCH method

Consider observing $X$ repeatedly and generating a histogram from a series of the observed values. Then, peaks (maxima) of occurrence should appear at bins covering $\alpha n$ with $n$ as an arbitrary natural number. If readout noise is suppressed adequately and each bin width is sufficiently small for resolving each peak, $\alpha$ can be estimated from intervals between adjacent peaks in the histogram. This approach is referred to as the PCH method [14].

Since the PCH method relies on a sensor’s ability to resolve the number of generated photoelectrons, the criterion for readout noise discussed in Sec. 2 must be satisfied: $\sigma < \sigma _\textrm {th}$. The PCH method works correctly for incident light of an arbitrary probabilistic nature. Moreover, the estimation of $\alpha$ does not require observations with varying physical conditions of illumination.

The accuracy of $\alpha$ estimated by the PCH method depends on detailed settings of the image sensor and the observation conditions (e.g., averaged light intensity, bin settings, peak detection algorithm, etc.), hence a simple and general expression for the accuracy is impossible to establish. Typically, the PCH method requires observations of at least several tens of thousands of times to achieve a relative error of the order of $10^{-3}$.
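One possible realization of the PCH idea is sketched below: locate prominent local maxima of the output histogram and average their spacings. The bin width, the prominence threshold `min_frac`, and the simulated low-noise sensor ($\sigma = 0.15$ $\text{e}^-$, well below $\sigma_\text{th}$) are our illustrative assumptions; real implementations use more elaborate peak detection.

```python
import numpy as np

def pch_estimate(signals, bin_width=1.0, min_frac=0.02):
    """PCH gain estimate: mean spacing of prominent local maxima in the
    output-signal histogram. Requires resolved photoelectron peaks."""
    bins = np.arange(signals.min(), signals.max() + bin_width, bin_width)
    counts, edges = np.histogram(signals, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    inner = counts[1:-1]
    # Local maxima above a prominence floor (drops noise bumps in the valleys).
    is_peak = (inner > counts[:-2]) & (inner >= counts[2:]) \
              & (inner > min_frac * counts.max())
    peaks = centers[1:-1][is_peak]
    return float(np.diff(peaks).mean())

rng = np.random.default_rng(1)
alpha, sigma = 7.0, 0.15          # sigma well below the 0.30 e- criterion
x = alpha * (rng.poisson(10.0, 200000) + rng.normal(0.0, sigma, 200000))
alpha_pch = pch_estimate(x)
```

The recovered spacing is limited by the bin quantization, which illustrates why the accuracy of the PCH method depends so strongly on bin settings and the peak-detection algorithm.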

3.3 Proposed method

In this subsection, we describe a novel method for estimating $\alpha$ that has merits of efficiency, accuracy, and wider applicability. Once a sequence of observed values of $X$ is given as $\{x_m\}_{m = 1, 2, \ldots, M}$, a likelihood function for conversion gain $\alpha$ is determined by exchanging roles of variables and parameters in a conditional probability distribution.

From Eq. (3), a basic likelihood function of $\alpha$ is given as follows:

$$l_0(\alpha) = \frac{1}{(2 \pi \sigma_s^2)^{M/2}} \prod_{m = 1}^{M} \sum_{n = 0}^\infty \frac{e^{-\zeta}\zeta^n}{n!} \exp\left[-\frac{(x_m - \alpha n)^2}{2 \sigma_s^2} \right].$$
For given $\zeta$ and $\sigma _s$, the most likely estimation for $\alpha$, denoted as $\alpha ^\ast$ in the following, is given by
$$\alpha^* = \arg\max_{\alpha} \left[\log l_0(\alpha) \right].$$
Solutions of Eq. (8) are obtained by applying any well-known numerical optimization method. Generally, a CMOS image sensor includes pixels where the readout noise is exceptionally large, so that histograms generated from observed signal values do not present clear peaks. Hence, the PCH method is inapplicable to such cases, while the proposed algorithm works in principle regardless of the readout noise, at the cost of requiring the value of $\zeta$.
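Equations (7) and (8) can be sketched on simulated data as follows. The mixture is evaluated in log space (a log-sum-exp over $n$) for numerical stability, and the maximum is located on a uniform grid; the names `log_l0`, the truncation `n_max`, and the grid bounds are our illustrative choices.

```python
import numpy as np

def log_l0(alpha, x, zeta, sigma_s, n_max=100):
    """Log-likelihood of Eq. (7): a Poisson mixture of Gaussians, summed over
    the photoelectron number n with a log-sum-exp for stability."""
    n = np.arange(n_max)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1.0, n_max)))))
    log_pois = -zeta + n * np.log(zeta) - log_fact
    t = log_pois[None, :] - (x[:, None] - alpha * n[None, :])**2 / (2.0 * sigma_s**2)
    m = t.max(axis=1)
    log_mix = m + np.log(np.exp(t - m[:, None]).sum(axis=1))
    return float(log_mix.sum() - 0.5 * len(x) * np.log(2.0 * np.pi * sigma_s**2))

# Simulated data (alpha = 7.0, zeta = 30, sigma_s = 0.27*alpha) and a uniform
# search over 401 points in [6.8, 7.2].
rng = np.random.default_rng(2)
alpha_true, zeta, sigma_s = 7.0, 30.0, 0.27 * 7.0
x = alpha_true * rng.poisson(zeta, 1000) + rng.normal(0.0, sigma_s, 1000)
grid = np.linspace(6.8, 7.2, 401)
alpha_star = float(grid[np.argmax([log_l0(a, x, zeta, sigma_s) for a in grid])])
```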

If we consider applying Eq. (7) to a situation of unknown $\zeta$, the statistical model in Sec. 2.2 should be effective. In this case, the basic likelihood function is determined from Eq. (5) as follows:

$$l_1(\alpha) = \frac{1}{(2 \pi \sigma_s^2)^{M/2}} \prod_{m=1}^M \sum_{n=0}^\infty \frac{e^{-\bar\mu/\alpha}\left(\bar\mu/\alpha\right)^n}{n!} \exp\left[-\frac{(x_m-\alpha n)^2}{2\sigma_s^2}\right] ,$$
where $\bar \mu$ is the mean value of $\{x_m\}_{m = 1, 2, \ldots, M}$. The readout noise is expressed as $\sigma _s^2 = v - \alpha \mu$ in the original model of Eq. (5), but this expression sometimes becomes numerically unstable in practice owing to the enhancement of relative error caused by subtraction between two large, nearly equal values. In such cases, stable calculation is achieved by using the value of $\sigma _s$ directly, which can be evaluated in a separate experiment; for example, $\sigma _s$ can be derived as the standard deviation of $\{x_m\}_{m = 1, 2, \ldots, M}$ observed without any illumination [20].

The estimated conversion gain $\alpha ^*$ is again expressed as $\alpha ^* = \arg \max _{\alpha } \left [\log l_1(\alpha ) \right ]$.

3.4 Numerical evaluation of the proposed method

We here validate the $\alpha$-estimation method [Eq. (9)] by applying it to series of artificial data generated on a computer to simulate output signals of a CMOS image sensor.

The simulation data are prepared by repeating the following procedure. First, a sequence of pseudo-random numbers is generated that corresponds to a distribution of initial photoelectron numbers. Here, the numbers obey a Poisson distribution of mean $\zeta$ [i.e., Eq. (1)], and $\zeta$ is typically chosen as $30$ for evaluating the proposed methods. Then, we obtain a series of simulated outputs from an image sensor by adding readout noise to each photoelectron number and multiplying by a given conversion gain, such that the readout noise observed at the output stage is a zero-mean Gaussian random variable with standard deviation $\sigma _s$. In the following, we choose $\alpha = 7.0$, which is a typical value of the conversion gain in practical devices.
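The generation procedure above can be sketched as follows; we add the noise in units of photoelectrons (std $\sigma = \sigma_s/\alpha$) before applying the gain, which is equivalent to output-stage noise of std $\sigma_s$. The function name and sample sizes are illustrative.

```python
import numpy as np

def simulate_outputs(M, zeta, alpha, sigma, rng):
    """Simulated sensor outputs: Poisson photoelectron numbers [Eq. (1)] plus
    zero-mean Gaussian readout noise of std sigma (in e-), scaled by alpha."""
    n = rng.poisson(zeta, M)
    return alpha * (n + rng.normal(0.0, sigma, M))

# With zeta = 30 and alpha = 7.0, the output mean and variance should approach
# alpha*zeta and alpha^2*(zeta + sigma^2), respectively [cf. Eq. (4)].
rng = np.random.default_rng(6)
x = simulate_outputs(100000, 30.0, 7.0, 0.27, rng)
```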

We evaluate the accuracy of three estimation methods by comparing the errors in the estimated conversion gains. The estimation error is defined as the standard deviation of 1024 values of $\alpha ^\ast$, each of which is calculated by applying an identical estimation method to an independent simulation-data sequence. In the case of the proposed methods, a uniform-search algorithm was used to determine the maximum of the log-likelihood function via direct comparison of the function values at 10000 uniformly spaced points in a domain bounded by upper and lower search limits. Note that the accuracy of $\alpha ^\ast$ is also limited by the interval between the representative points, hence one must choose a sufficiently narrow search domain so that the width of the interval is negligible compared with the estimation error.

Figures 2(a) and (b) show the behaviors of the relative errors of $\alpha ^\ast$ with respect to the length of each simulation-data sequence for two typical values of $\sigma _s$, where three estimation methods are compared: the PTC method, the proposed method based on the ideal model [$l_0(\alpha )$], and the proposed method based on the practical model [$l_1(\alpha )$]. We prepared five variations of simulation-data sequences corresponding to different initial photoelectron numbers, $\zeta = 10, 20, 30, 40$, and $50$, to make the PTC method applicable. In contrast, the proposed methods can estimate conversion gains without varying the initial photoelectron number: estimations are performed only for $\zeta = 30$, as mentioned earlier. Here, one should recall that the PCH method requires the positions of independent peaks in histograms of output signal levels, as shown in Fig. 1. However, independent signal peaks are not available at all for the simulation-data lengths adopted in Figs. 2(a) and (b). The PCH method has the potential to estimate conversion gains more precisely than the PTC method at the cost of heavy demands on operating conditions.


Fig. 2. Behaviors of relative errors in conversion gains ($\alpha$) estimated by applying three estimation methods to simulation data sequences for different conditions of readout noise: (a) $\sigma _s = 0.27 \alpha$ and (b) $\sigma _s = 1.0 \alpha$. Open blue squares, open red circles, and green triangles show the relative errors of the PTC method, the proposed method based on the ideal model [likelihood function $l_0(\alpha )$], and that based on the experimental model [$l_1(\alpha )$], respectively. Search domains for $\alpha$ by the uniform-search algorithm were chosen as (a) $\alpha \in [6.8, 7.2]$ and (b) $\alpha \in [6.0, 8.0]$, both of which were discretized into 10000 representative points. Note that the PCH method does not work under these conditions.


Under a condition satisfying the criterion of photoelectron-number discrimination ($\sigma _s = 0.27 \alpha$), Fig. 2(a) shows that the $l_1(\alpha )$-based method achieves an accuracy similar to the ideal $l_0(\alpha )$-based one, and that both proposed methods give more accurate estimations than the PTC method even for shorter data lengths. We also show in Fig. 2(b) the behaviors of the relative errors for $\sigma _s = 1.0 \alpha$, where the readout noise is too large to distinguish how many photoelectrons were generated at the initial stage of light detection, so the PCH method utterly fails. It is also noted that stable determinations of $\alpha ^\ast$ become impossible for simulation-data sequences shorter than $1000$. In Fig. 2(b), the accuracy of $\alpha ^\ast$ by the $l_1(\alpha )$-based method deteriorates from that of the ideal $l_0(\alpha )$-based one, while it maintains an accuracy similar to the PTC method. These facts indicate that the proposed methods are widely applicable under various observation conditions.

4. Results and discussion

4.1 Experiment

In the present study, measurements were performed by recording and analyzing output signals of an image sensor under various precisely controlled illumination conditions. Figure 3 exhibits a schematic picture of our experimental setup.


Fig. 3. Schematic picture of the experimental setup. The qCMOS camera [Hamamatsu: C15550-20UP (Prototype)] is illuminated with light (wavelength: 420 nm) from a pulsed laser source through an optical fiber and a phase diffuser plate. The output end of the optical fiber, the phase diffuser plate, and the qCMOS camera are combined rigidly by a lens tube (Thorlabs: SM1L40) to suppress the influence of mechanical vibrations as far as possible. The active area of the qCMOS camera consists of $4096 \times 2304$ pixel sensors, each of which has dimensions of $4.6 \times 4.6~{\mu \textrm {m}}^2$.


We recall here the scope of this study, i.e., precise assignment of the number of photoelectrons (see the beginning of Sec. 2); hence the primary role of a light source is to create photoelectrons in the active area of an image sensor and to control their number at least statistically. As long as the basic physical description of the photo-electron conversion process in photodetectors holds, the mean number of photoelectrons is given as $\eta \times (\textrm {number of photons})$. From this viewpoint, wave properties of the incident light, e.g., wavefront geometry and coherence, do not produce intrinsic issues, and the light source can be chosen with controllability and usability in photon-counting regimes as the priorities. An illumination system was designed to enable easy and quantitative control of the total photon flux. A pulse-driven laser source [PicoQuant: LDH-D-C-420 (wavelength $\sim$420 nm)] emitted coherent light pulses, each with a temporal width of $\sim$60 ps. The light pulses were collimated by a plano-convex lens and introduced into the end of a single-mode optical fiber via an objective. Here, the output photon flux was controlled mainly by varying the interval between the light pulses, i.e., by applying a method often referred to as pulse-density modulation in signal processing. A variable neutral density (ND) filter was also placed between the lens and the objective for fine tuning of the photon flux.

Output light pulses from the other end of the optical fiber were projected onto the active area of a CMOS image sensor through a phase diffuser plate (Thorlabs: DG10-600-MD) to enable evaluation of pattern-independent properties. Since the conversion gain is defined as a proportionality coefficient between the number of photoelectrons and the sensor output, its estimation is basically independent of the illumination intensity as long as extremely low or high level regimes are avoided. The end of the optical fiber, the phase diffuser plate, and the CMOS image sensor are combined rigidly to suppress the effect of mechanical vibration as well as to remove the influence of ambient and stray light. Here, the CMOS image sensor [Hamamatsu: C15550-20UP (Prototype)] is equipped with pixel sensors and subsequent electronic circuits specially designed to suppress noise as much as possible for quantitative observation of images over wide regimes of illumination conditions. The typical quantum efficiency is $\sim$85% at a wavelength of 420 nm. Typical values of the dark current and readout noise are 0.016 $\textrm {e}^{-}/\textrm {s}$ and 0.27 $\textrm {e}^{-}$, respectively, which enable discrimination of the number of photoelectrons initially created in each pixel sensor. Owing to these properties, the image sensor device is referred to as a qCMOS (quantitative CMOS) camera.

As a typical measurement condition for conversion-gain estimation, image acquisition was repeated 1000 times with the exposure time of the qCMOS camera set to 100 ms, which is much longer than the pulse interval of the laser source (on the order of microseconds), to justify the present scheme for controlling the photon flux. Bare outputs of the qCMOS camera include not only readout noise but also offsets, both of which originate from electronic circuits and are independent of physical phenomena in the pixel sensors. Such circuit-originated properties can be separated from the others by monitoring outputs without any illumination under the shortest exposure time of the device, $176~\mu \textrm {s}$, during which the dark signal is at most $2.8 \times 10^{-6}~\textrm {e}^{-}$, i.e., almost negligible. The offset and the readout noise are determined experimentally from the bare output values obtained by repeated image acquisition (1000 times, here) without any illumination: the mean and the standard deviation of the bare output signal values correspond to the offset value and the readout noise, respectively. In the following, “observed data” indicate the results after subtraction of the corresponding offset values from the bare outputs. As described in Sec. 3.4, the PCH method is inapplicable for data sequences shorter than 2000. Hence, experimental comparisons are made between the proposed and PTC methods in the following.
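The offset and readout-noise determination from dark acquisitions can be sketched as below. The numbers (an offset of 200 ADU and $\sigma_s = 1.9$ ADU, roughly $0.27$ $\text{e}^-$ at $\alpha = 7$) are our illustrative assumptions, not the device's specifications.

```python
import numpy as np

rng = np.random.default_rng(4)
# 1000 bare dark outputs of a single pixel: hypothetical offset of 200 ADU and
# readout noise sigma_s = 1.9 ADU.
dark = rng.normal(200.0, 1.9, 1000)
offset = float(dark.mean())          # mean of dark outputs -> offset estimate
sigma_s = float(dark.std(ddof=1))    # standard deviation -> readout noise

# "Observed data" are the illuminated bare outputs minus the offset.
bare = 7.0 * rng.poisson(30.0, 1000) + rng.normal(200.0, 1.9, 1000)
observed = bare - offset
```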

Conversion gains were estimated from the observed data on a workstation, where we introduced general-purpose computing on graphics processing units (GPGPU; NVIDIA: Tesla V100S) for efficiently processing large amounts of data. Figure 4 shows a schematic picture of the conversion-gain estimation at the $i$th pixel. We note that conversion gains at other pixels can be estimated independently and that the parallel-computing ability of GPGPUs is suitable for performing such independent operations simultaneously.


Fig. 4. Schematic picture of operations and data flow for estimating conversion gain at the $i$th pixel by the proposed method. The estimations at different pixels can be performed independently. Typical values of the condition parameters are $M = 1000$, $D = 10000$, and $[\alpha _\textrm {min}, \alpha _\textrm {max}] = [6.8, 7.3]$.


4.2 Effects of pixel-to-pixel distribution of conversion gain

Although statistical behaviors have been modeled for an individual pixel of a qCMOS camera in the previous sections, we should consider another issue in the practical usage of imaging devices: they are usually used to observe incident light patterns distributed over multiple pixels. For example, histogram analyses are often performed for output signals in a specific user-defined area of the observed pattern. However, when observing images under low-level illumination, photoelectron-number peaks in such histograms cannot be discriminated unless the conversion gains are precisely estimated.

Figures 5(a) and (b) show the distribution of conversion gains estimated from experimental data acquired over a $128\times 128$-pixel area chosen arbitrarily on the active area of a qCMOS camera. The histogram [Fig. 5(b)] shows a profile similar to a normal distribution whose mean and standard deviation are $7.06$ and $6.0 \times 10^{-2}$, respectively. The spread of the conversion gains appears small; nevertheless, it can cause significant deviations in output signal levels. For example, under illumination producing more than several hundred photoelectrons in a pixel, a deviation of $1.0 \times 10^{-2}$ in the conversion gain brings about a change in the output level corresponding to more than a few photoelectrons, which makes analysis of the photoelectron number meaningless. Hence, conversion gains need to be determined precisely when applying qCMOS cameras to photon-counting imaging.


Fig. 5. (a) Distribution image and (b) corresponding histogram of conversion gains estimated by the proposed method over a $128 \times 128$-pixel area on a qCMOS camera. Each bin width is uniformly chosen as $2.0 \times 10^{-3}$ in (b). (c) and (d): plots for relative errors of conversion gain processed correspondingly to (a) and (b), respectively. Each bin width is uniformly chosen as $1.0 \times 10^{-5}$ in (d). Estimations were performed under an illumination condition that corresponds to the mean number of initial photoelectrons of $\sim 84$.


The precision of the conversion-gain estimations can be evaluated from the estimation errors. The variance of the estimated conversion gain is derived from the inverse of the observed Fisher information [21] as $\left [ \left. -(\partial ^2/\partial \alpha ^2)\ln l_1(\alpha )\right |_{\alpha =\alpha ^*} \right ]^{-1}$, and we adopt the square root of this quantity as the estimation error of the conversion gain. Generally, the estimation errors tend to decrease as the number of initial photoelectrons increases. We note that the estimations of the conversion gains and their errors in Fig. 5 were performed under an illumination condition corresponding to a mean number of initial photoelectrons of $\sim 84$. Figures 5(c) and (d) show distributions of the relative estimation errors derived in correspondence with Figs. 5(a) and (b), respectively. Bright spots observed in Fig. 5(c) correspond to pixels where $\sigma _s$ is large, meaning that the readout noise can prevent precise gain estimation. The mode of the histogram in Fig. 5(d) is $8.5\times 10^{-5}$; here, we emphasize that the PTC method would require more than $10^8$ frames to achieve a similar estimation error.
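The observed-Fisher-information error bar can be illustrated on a model where the answer is known analytically: for a Gaussian mean with known variance, the error is $\sigma/\sqrt{M}$, which a numerical second derivative of the log-likelihood reproduces. The helper name and step size below are ours; for the actual estimator one would differentiate $\ln l_1(\alpha)$ at $\alpha^*$ in the same way.

```python
import numpy as np

def observed_info_error(loglik, theta_star, h=1e-4):
    """Estimation error from the observed Fisher information: square root of
    the inverse of -(d^2/dtheta^2) log-likelihood at the maximum, with the
    second derivative approximated by a central difference."""
    d2 = (loglik(theta_star + h) - 2.0 * loglik(theta_star)
          + loglik(theta_star - h)) / h**2
    return float(1.0 / np.sqrt(-d2))

# Gaussian mean with known variance: the analytic error is sigma / sqrt(M).
rng = np.random.default_rng(5)
sigma, M = 2.0, 400
x = rng.normal(1.0, sigma, M)
loglik = lambda mu: -0.5 * float(np.sum((x - mu)**2)) / sigma**2
err = observed_info_error(loglik, x.mean())
```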

We also show similar results obtained by the PTC method in Fig. 6 for comparison with those in Fig. 5. The results were obtained from two series of images (each containing 1000 images) observed under different illumination conditions corresponding to $\sim 39$ and 84 mean initial photoelectrons. The histogram of the estimated conversion gains in Fig. 6(b) shows a Gaussian-like distribution, but with a shifted peak position and an increased width compared with Fig. 5(b), both of which imply larger uncertainty in the conversion gains estimated by the PTC method. In fact, the estimation errors in Figs. 6(c) and (d) are larger by at least a factor of a hundred than those in Figs. 5(c) and (d). The mode of the histogram in Fig. 6(d) is $3.1645 \times 10^{-2}$: this value almost equals the value given by the approximate expression for the relative error (Sec. 3.1), suggesting an intrinsic limit of the PTC method in terms of the precision of conversion-gain estimation.


Fig. 6. (a) Distribution image and (b) corresponding histogram of conversion gains estimated by the PTC method. (c) and (d): plots for relative errors of conversion gain processed correspondingly to (a) and (b), respectively. We note that the PTC method requires target images acquired under at least two different illumination conditions, where the conditions were chosen as $\sim 39$ and 84 for the mean numbers of initial photoelectrons. The bin widths of (b) and (d) are chosen identically to Figs. 5(b) and (d), respectively, while the horizontal plot ranges differ.


We attempt to visualize the effect of the precise conversion-gain calibration by the proposed method. Figure 7 (Upper) shows histograms of bare output signals for low and high illumination conditions, which were controlled by the ND filter and the duty ratio of the laser pulses in Fig. 3. Here, each histogram is constructed by unifying outputs from the $128 \times 128$-pixel area on the camera, as done for the histograms in Fig. 5. Figure 7 (Lower) shows histograms of the estimated numbers of photoelectrons, derived by dividing each output signal value by the estimated conversion gain in Fig. 5(a) at the corresponding pixel. The mean photoelectron numbers $\bar {\zeta }$ for the low and high illumination conditions are determined quantitatively as $\bar {\zeta } = 39$ and 84, respectively, from the histograms in Fig. 7 (Lower). We can observe in Fig. 7 (Lower) a prominent improvement in the discrimination of photoelectron-number peaks, especially under the high illumination condition.


Fig. 7. Histograms generated from (Upper) bare output signals and (Lower) signal values reduced to the number of photoelectrons by using conversion gains estimated via the $l_1(\alpha )$-based proposed method. Illumination is almost uniform and time independent.


To evaluate the result in Fig. 7 from a different point of view, we study statistical properties of the output signals separately from the effects of fluctuations in the number of photoelectrons: i.e., we consider for the moment a case in which the number of photoelectrons takes a value $n$ common to all pixels. Here, each pixel is regarded as statistically independent.

We denote the signal value at the $i$th pixel as $x^{(i)}$ $(i = 1, 2, \ldots, N)$, with $N$ the number of pixels. The observations consist of $M$ image acquisitions, where each acquisition is labeled by the subscript $m$ as $\{x_m^{(i)}\}_{m = 1, 2, \ldots, M}$. Under these conditions, $x_m^{(i)}$ is expressed as

$$ x_m^{(i)} = \alpha^{(i)} n + r_m^{(i)}, $$
where $\alpha ^{(i)}$ and $r_m^{(i)}$ are the conversion gain and the $m$th realization of a random variable for the readout noise at the $i$th pixel, respectively.

Here we also assume that the readout noise obeys a normal distribution $N(0,\sigma _s^{(i)})$, with $\sigma _s^{(i)}$ the standard deviation of the readout noise at the $i$th pixel. Thus the expectation value and variance of $\{x^{(i)}_{m}\}$ averaged over the $N$ pixels are given as

$$\begin{aligned} \text{E}\left[ \{ x_m^{(i)} \} \right] = \frac{1}{N}\sum_{i = 1}^N \alpha^{(i)} n \end{aligned}$$
$$\begin{aligned} \text{V}\left[ \{ x_m^{(i)} \} \right] = \frac{1}{N}\sum_{i = 1}^N \sigma_s^{(i)2} + n^2 \text{V} \left[ \{ \alpha^{(i)} \} \right], \end{aligned}$$
respectively. Equation (11) shows that the apparent readout noise depends on the variance of the conversion gains multiplied by the square of the number of photoelectrons, meaning that the discrimination of photoelectron peaks becomes ambiguous for larger $n$. In other words, the number of photoelectron peaks that can be discriminated is also limited by the pixel-to-pixel distribution of $\alpha ^{(i)}$. In fact, Fig. 7 (Upper) shows evidence of this effect.
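Equation (11) can be verified by a short simulation: draw pixel-to-pixel gains from a narrow distribution, generate one frame at a common photoelectron number $n$, and compare the pixel-ensemble variance with the prediction. The gain spread and noise level below are hypothetical values of our own choosing, roughly matching the experimental regime.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 100_000, 100                      # pixels; common photoelectron number
alpha = rng.normal(7.0, 7.0e-3, N)       # hypothetical pixel-to-pixel gains
sigma_s = np.full(N, 0.27 * 7.0)         # per-pixel readout noise (signal units)

# One frame under the model of Eq. (9): x_i = alpha_i * n + r_i
x = alpha * n + rng.normal(0.0, sigma_s)

# Eq. (11): V[{x}] = <sigma_s^2> + n^2 * V[{alpha}]
predicted = np.mean(sigma_s**2) + n**2 * np.var(alpha)
```

The sample variance of `x` agrees with `predicted` to within sampling fluctuations, and the $n^2$ term shows how the gain spread inflates the apparent noise at large $n$.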

We can evaluate a limitation on the distribution of $\alpha ^{(i)}$ for resolving photoelectron peaks of larger $n$ from Eq. (11). The first step is dividing both sides of Eq. (11) by ${\bar {\alpha }}^2$, where $\bar {\alpha }$ is the mean of $\{\alpha ^{(i)}\}$, to handle the output signals and readout noise in units of the photoelectron number. Then, the first term on the right-hand side of Eq. (11) is replaced with the square of 0.27 (in units of the photoelectron number), which is a typical value of readout noise in practical devices. By applying a typical threshold for discriminating photoelectron peaks, $\sigma _\textrm {th} = 0.30~\textrm {e}^{-}$ as introduced in Sec. 2.1, to the standard deviation of the output signal, we can estimate an upper limit on the relative variance of $\{\alpha ^{(i)}\}$. Finally, we conclude that the standard deviation of $\{\alpha ^{(i)}/\bar {\alpha }\}$ must be less than $1.31 \times 10^{-3}$ to resolve photoelectron peaks up to $n = 100$.
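The limit quoted above follows from rearranging Eq. (11) in photoelectron units, $\sigma_\textrm{th}^2 \geq \sigma_\textrm{read}^2 + n^2\,\mathrm{V}[\{\alpha^{(i)}/\bar\alpha\}]$, and solving for the gain spread. The arithmetic can be reproduced directly:

```python
import numpy as np

sigma_read = 0.27   # typical readout noise (e-), from the text
sigma_th = 0.30     # peak-discrimination threshold (e-), Sec. 2.1
n = 100             # largest photoelectron peak to resolve

# Upper bound on the standard deviation of the relative gains alpha/alpha_bar
limit = np.sqrt(sigma_th**2 - sigma_read**2) / n
# limit ≈ 1.31e-3, matching the value quoted in the text
```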

The above discussion suggests another approach to evaluating the precision of the $\alpha ^{(i)}$ estimation. We can observe in Fig. 7 (Lower) independent peaks in the regime around $\sim 100$ photoelectrons, indicating that the output signal values are converted into units of photoelectron number with a precision of $1.31 \times 10^{-3}$ at worst. This result is consistent with the relative errors shown in Figs. 5(c) and (d). In other words, Fig. 7 (Lower) can be regarded as evidence of a successful conversion from the camera’s output signals to numbers of photoelectrons, i.e., an “absolute” unit for light detection in the photon-counting regime. Thus we conclude that the conversion gains estimated by the proposed method are correct.

Before closing the discussion, we mention a possible effect that may be introduced by instability of the laser apparatus and/or by mechanical vibration of the measurement setup. If we assume that the illumination emits photons obeying Poisson statistics, such effects should appear as temporal changes in the mean photon number of the corresponding Poisson distribution. According to the discussion at the beginning of Sec. 2, the number of photoelectrons also suffers from these changes in the mean photon number and includes an extra fluctuation in addition to that originating from the Poisson statistics. This suggests the possibility of estimating the illumination stability from statistical properties of the number of photoelectrons. For example, the variance-to-mean ratio (VMR) [22] is known as a simple benchmark for evaluating deviation from the Poisson distribution. To calculate the VMR, it is necessary to express the observed signal values in units of photoelectron number with the correct conversion gains: i.e., the sequence of observed photoelectron numbers at the $i$th pixel is given as $\{x_m^{(i)} / \alpha ^{(i)}\}_{m = 1, 2, \ldots, M}$. Noting the necessity of subtracting the readout noise to derive the variance of the photoelectron number [see Eq. (4)], we obtain the VMR at each pixel from the variance and mean of the sequence. For the observed data in Fig. 7, the pixel-averaged mean and standard deviation of the VMR are $1.04 \pm 0.05$, which is statistically indistinguishable from unity. Thus the obtained VMR does not differ from that of the Poisson distribution, and we conclude that temporal changes of the illumination have no visible effect on the present results.
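The per-pixel VMR calculation described above can be sketched as follows; the function name and test parameters are our own, and the readout-noise subtraction follows Eq. (4).

```python
import numpy as np

def vmr(x, alpha, sigma_s):
    """Variance-to-mean ratio of the photoelectron-number sequence at one
    pixel. x: M observed signal values; alpha: estimated conversion gain;
    sigma_s: readout-noise standard deviation in signal units. The
    readout-noise contribution is subtracted from the variance [Eq. (4)]
    before taking the ratio."""
    pe = np.asarray(x) / alpha                    # signals in photoelectron units
    var_pe = pe.var() - (sigma_s / alpha) ** 2    # remove readout-noise variance
    return var_pe / pe.mean()

# Hypothetical check: for stable Poisson illumination the VMR is close to 1
rng = np.random.default_rng(2)
x = 7.0 * rng.poisson(84, 20_000) + rng.normal(0.0, 1.89, 20_000)
ratio = vmr(x, 7.0, 1.89)
```

A VMR consistently above unity would instead signal the extra, illumination-instability fluctuation discussed in the text.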

5. Conclusion

In this paper, we reported a novel method for estimating conversion gains on the basis of maximum likelihood estimation. The proposed method needs hundreds of images acquired under time-independent illumination, as well as the standard deviation of the readout noise. However, one version of the proposed method does not need the average number of incident photoelectrons, and thus it is suitable for practical experiments.
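To summarize the estimator concretely, the likelihood $l_1(\alpha)$ weights each candidate photoelectron number $n$ by a Poisson distribution whose mean is fixed by the sample mean as $\bar{\mu}/\alpha$, and the gain is found by a grid search. The sketch below is a minimal illustration under our own choices of grid, cutoff $n_{\max}$, and parameter values; the authors' implementation (e.g. the uniformed-search settings $D$ and $[\alpha_\textrm{min}, \alpha_\textrm{max}]$ in Fig. 4) may differ in detail.

```python
import numpy as np
from scipy.stats import poisson

def estimate_gain(x, sigma_s, grid, n_max=200):
    """Grid-search maximum-likelihood estimate of the conversion gain,
    sketching the l1(alpha)-based method: for each candidate alpha, the
    Poisson mean of the photoelectron number is fixed as mu_bar / alpha."""
    x = np.asarray(x)
    mu_bar = x.mean()
    n = np.arange(n_max)
    best_alpha, best_ll = None, -np.inf
    for a in grid:
        w = poisson.pmf(n, mu_bar / a)                       # P(n | mu_bar/alpha)
        g = np.exp(-(x[:, None] - a * n) ** 2 / (2 * sigma_s**2))
        ll = np.sum(np.log(g @ w + 1e-300))                  # log l1 up to a constant
        if ll > best_ll:
            best_alpha, best_ll = a, ll
    return best_alpha

# Hypothetical usage on simulated frames (alpha = 7, zeta = 84, M = 1000)
rng = np.random.default_rng(3)
x = 7.0 * rng.poisson(84, 1000) + rng.normal(0.0, 1.89, 1000)
a_hat = estimate_gain(x, 1.89, np.linspace(6.9, 7.1, 101))
```

The estimate `a_hat` recovers the simulated gain to well within the grid resolution, mirroring the sub-percent accuracy reported for $M = 1000$ frames.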

The proposed method was tested both numerically and experimentally. The relative errors of the estimated conversion gains were evaluated from the inverse of the observed Fisher information. As a result, the conversion gains were estimated with an accuracy of three digits from $1000$ observed images, which is more than $10$ times fewer images than known estimation methods require. This accuracy is sufficient for resolving $100$ photoelectrons. The accuracy of the proposed method depends on the average number of photoelectrons, and we continue efforts to find the optimal average number of photoelectrons for estimating the conversion gain with high accuracy.

The proposed method is effective in principle for any imaging device. However, it should be stressed that validation of its effectiveness was achieved in this study owing to the photoelectron-number discrimination ability specific to the qCMOS camera. The proposed method provides a precise and efficient means of conversion-gain calibration for various photon-counting image sensors. Moreover, precise conversion-gain calibration is essential for bringing out the true potential of the qCMOS camera as a detector for obtaining quantitative information in the photon-counting regime.

Acknowledgments

The authors are grateful to T. Maruno, H. Toyoda, and E. Toda of Hamamatsu Photonics K.K. for their encouragement throughout this work. We also thank T. Ando for helpful discussions and the members of the imaging-device development and product teams at the System Division of Hamamatsu Photonics K.K. for their kind technical support.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Liu, M. J. Mlodzianoski, Z. Hu, Y. Ren, K. McElmurry, D. M. Suter, and F. Huang, “sCMOS noise-correction algorithm for microscopy images,” Nat. Methods 14(8), 760–761 (2017). [CrossRef]  

2. F. Huang, T. M. P. Hartwich, F. E. Rivera-Molina, Y. Lin, W. C. Duim, J. J. Long, P. D. Uchil, J. R. Myers, M. A. Baird, W. Mothes, M. W. Davidson, D. Toomre, and J. Bewersdorf, “Video-rate nanoscopy using sCMOS camera–specific single-molecule localization algorithms,” Nat. Methods 10(7), 653–658 (2013). [CrossRef]  

3. L. von Diezmann, Y. Shechtman, and W. Moerner, “Three-dimensional localization of single molecules for super-resolution imaging and single-particle tracking,” Chem. Rev. 117(11), 7244–7275 (2017). [CrossRef]  

4. B. Fowler, C. Liu, S. Mims, J. Balicki, W. Li, H. Do, J. Appelbaum, and P. Vu, “A 5.5Mpixel 100 frames/sec wide dynamic range low noise CMOS image sensor for scientific applications,” Proc. SPIE 7536, 753607 (2010). [CrossRef]  

5. Z.-L. Huang, H. Zhu, F. Long, H. Ma, L. Qin, Y. Liu, J. Ding, Z. Zhang, Q. Luo, and S. Zeng, “Localization-based super-resolution microscopy with an sCMOS camera,” Opt. Express 19(20), 19156–19168 (2011). [CrossRef]  

6. M. Bigas, E. Cabruja, J. Forest, and J. Salvi, “Review of CMOS image sensors,” Microelectron. J. 37(5), 433–451 (2006). [CrossRef]  

7. S. E. Bohndiek, A. Blue, A. T. Clark, M. L. Prydderch, R. Turchetta, G. J. Royle, and R. D. Speller, “Comparison of methods for estimating the conversion gain of CMOS active pixel sensors,” IEEE Sens. J. 8(10), 1734–1744 (2008). [CrossRef]  

8. B. P. Beecken and E. R. Fossum, “Determination of the conversion gain and the accuracy of its measurement for detector elements and arrays,” Appl. Opt. 35(19), 3471–3477 (1996). [CrossRef]  

9. J. Liu, T. Neubert, D. Froehlich, P. Knieling, H. Rongen, F. Olschewski, O. Wroblowski, Q. Chen, R. Koppmann, M. Riese, and M. Kaufmann, “Investigation on a SmallSat CMOS image sensor for atmospheric temperature measurement,” Proc. SPIE 11180, 2384–2393 (2019). [CrossRef]  

10. B. L. Preece and D. P. Haefner, “3D noise photon transfer curve,” Appl. Opt. 61(21), 6202–6212 (2022). [CrossRef]  

11. B. Pain and B. R. Hancock, “Accurate estimation of conversion gain and quantum efficiency in CMOS imagers,” Proc. SPIE 5017, 94–103 (2003). [CrossRef]  

12. J. R. Janesick, Scientific Charge Coupled Devices (SPIE, 2001).

13. J. R. Janesick, Photon transfer DN →λ (SPIE, 2007).

14. D. A. Starkey and E. R. Fossum, “Determining Conversion Gain and Read Noise Using a Photon-Counting Histogram Method for Deep Sub-Electron Read Noise Image Sensors,” IEEE J. Electron Devices Soc. 4(3), 129–135 (2016). [CrossRef]  

15. C. R. Rao, Linear Statistical Inference and its Applications (Wiley, 1973).

16. D. L. Snyder, C. W. Helstrom, A. D. Lanterman, M. Faisal, and R. L. White, “Compensation for readout noise in ccd images,” J. Opt. Soc. Am. A 12(2), 272–283 (1995). [CrossRef]  

17. E. R. Fossum, J. Ma, S. Masoodian, L. Anzagira, and R. Zizza, “The Quanta Image Sensor: Every Photon Counts,” Sensors 16(8), 1260 (2016). [CrossRef]  

18. N. Teranishi, “Required Conditions for Photon-Counting Image Sensors,” IEEE Trans. Electron Devices 59(8), 2199–2205 (2012). [CrossRef]  

19. A. Boukhayma, A. Peizerat, and C. Enz, “Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors,” Sensors 16(4), 514 (2016). [CrossRef]  

20. L. Li, J. Kim, G. Cui, Z. Xu, H. Feng, Q. Li, and Y. Chen, “Research on dark noise features of CMOS image sensor,” in Proceedings of the 2015 International Conference on Intelligent Systems Research and Mechatronics Engineering, (Atlantis, 2015), pp. 2144–2147.

21. B. Efron and D. V. Hinkley, “Assessing the Accuracy of the Maximum Likelihood Estimator: Observed Versus Expected Fisher Information,” Biometrika 65(3), 457–483 (1978). [CrossRef]  

22. R. C. David, The Statistical Analysis of Series of Events (Springer, 1966).

