## Abstract

Laser-absorption tomography experiments infer the concentration distribution of a gas species from the attenuation of lasers transecting the flow field. Although reconstruction accuracy strongly depends on the layout of optical components, to date experimentalists have had no way to predict the performance of a given beam arrangement. This paper shows how the mathematical properties of the coefficient matrix are related to the information content of the attenuation data, which, in turn, forms a basis for a beam-arrangement design algorithm that minimizes the reliance on additional assumed information about the concentration distribution. When applied to a simulated laser-absorption tomography experiment, optimized beam arrangements are shown to produce more accurate reconstructions compared to other beam arrangements presented in the literature.

© 2012 Optical Society of America

## 1. Introduction

In many applications involving gas-phase chemistry, it is essential to obtain a controlled (often homogeneous) species concentration distribution within the flow field. For example, the performance of next-generation internal combustion engines depends strongly on obtaining an engineered fuel–air dispersion within the combustion chamber to minimize emissions and maximize fuel efficiency [1]. Likewise, highly controlled gas concentrations are essential to avoid the problem of “ammonia slip” from selective catalytic and noncatalytic reduction systems in power plants [2]. Applications like these create a pressing need for a diagnostic that can provide real-time concentration distributions of gas species in the flow field in order to pinpoint potential problems and improve the performance of these devices.

Measuring concentration through physical probing has a number of drawbacks: physical probes have limited temporal and spatial resolution, especially for turbulent flows; the sampling probe often perturbs the measurement; and in many applications it is difficult to physically access the flow field. One technology that overcomes the drawbacks of physical measurements is planar laser-induced fluorescence (PLIF). PLIF relies on the fluorescent properties of materials to provide real-time flow visualization and species distribution analysis of the flow field. In this technique, the target species is excited by a laser sheet shone through the flow field. A camera then records the fluorescence intensity through a second optical plane; this intensity is, in turn, proportional to the concentration of the target species [3].

Though PLIF has been successfully implemented in industry for process monitoring and diagnostics, the method has several limitations. First, the flow field must contain a chemical species with resonant wavelengths accessible to the laser; otherwise a fluorescent tracer must be added to dope the target species. This, in turn, can lead to measurement error as complete mixing between the target species and tracer may not always occur. Second, obtaining a second optical access plane is difficult or infeasible for many problem geometries.

An emerging alternative to PLIF is laser absorption tomography, which reconstructs the flow field distribution using laser line-of-sight attenuation (LOSA) measurements. In this method, the flow field is discretized into an $n$ pixel grid, and the target species concentration in each pixel is modeled as uniform. The flow field is then transected with $m$ coplanar lasers having wavelengths corresponding to an absorption line of the target species. The concentration distribution $\mathbf{x}$ is related to the LOSA measurements in $\mathbf{b}$ by an $m\times n$ matrix equation, $\mathbf{A}\mathbf{x}=\mathbf{b}$. Because the number of pixels far exceeds the number of beams, however, $\mathbf{A}$ is rank-deficient, and consequently the laser attenuation data $\mathbf{b}$ must be supplemented with additional assumed information about the concentration distribution attributes, such as smoothness and nonnegativity [4].

Experimental [5–7] and numerical [4,8,9] laser-absorption tomography experiments have shown that a good beam arrangement is essential for obtaining accurate reconstruction results. Moreover, the extreme cost and complexity of the laser and detection optics demands that these components be used to their maximum potential. Nevertheless, to date only Terzija *et al.* [6] have proposed a heuristic strategy for optimizing the beam arrangement.

This paper presents a more rigorously defined algorithm for optimizing the beam arrangement using the properties of the resolution matrix, $\mathbf{R}$, which is often used to assess how regularization limits reconstruction accuracy in discrete geophysical tomography applications [10]. We assess the merit of the proposed fitness function by investigating how closely it correlates with reconstruction error of simulated limited-data absorption tomography experiments under both noise-free and noisy data conditions. The objective function is then minimized for a given number of sensors and constraints using a genetic algorithm, and the optimized beam array is shown to outperform other arrangements reported in the literature in terms of reconstruction accuracy. To quantify the effects of beam misalignment in the experimental setup, a study was conducted in which the positions of the beams were altered within expected ranges associated with optics installation. It was shown that the system was robust to misalignment effects. Finally, a plot of reconstruction accuracy obtained from optimized beam arrangements versus the number of beams in the array demonstrates, for the first time, the trade-off between reconstruction accuracy and experimental cost and complexity. This result provides a framework for establishing the number of beams needed to achieve a given level of reconstruction accuracy and thus reduce associated experimental costs. While this research is motivated by near-IR laser absorption tomography, the algorithm presented in this paper could easily be extended to other tomographic experiments, such as x-ray and gamma-ray tomography, with similarly limited sensor geometries.

## 2. Laser Absorption Tomography

The attenuation of the $i$th laser beam transecting a participating medium is governed by the Beer–Lambert law,

$$ {b}_{i}=-\mathrm{ln}\left(\frac{{I}_{\eta ,i}}{{I}_{\eta 0,i}}\right)={\int}_{0}^{{L}_{i}}{\kappa}_{\eta}[{\mathbf{r}}_{s,\theta}(u)]\,\mathrm{d}u, \qquad (1)$$

where ${I}_{\eta 0,i}$ and ${I}_{\eta ,i}$ are the incident and transmitted spectral intensities of the $i$th beam, ${L}_{i}$ is its path length through the domain, ${\mathbf{r}}_{s,\theta}(u)$ parameterizes the beam path in terms of its offset $s$ and angle $\theta $, and ${\kappa}_{\eta}$ is the spectral absorption coefficient of the target species.

Equation (1) is a Fredholm integral equation of the first kind. Deconvolving this equation for ${\kappa}_{\eta}[{\mathbf{r}}_{s,\theta}(u)]$ is mathematically ill-posed because an infinite set of candidate distributions could be substituted into the integral to produce the observed attenuation data [15]. Simultaneously solving these equations for multiple beams reduces, but does not eliminate, this ambiguity.

By discretizing the flow field into a set of $n$ pixels, each of which is assumed to contain a uniform concentration of the target species, Eq. (1) can be approximated as a “ray-sum,”

$$ {b}_{i}\approx \sum _{j=1}^{n}{A}_{ij}{x}_{j}, \qquad (3)$$

where ${A}_{ij}$ is the chord length of the $i$th beam subtended by the $j$th pixel and ${x}_{j}$ is the mole-fraction-normalized absorption coefficient for the $j$th pixel. Writing Eq. (3) for all $m$ beams results in an $m\times n$ matrix equation, $\mathbf{A}\mathbf{x}=\mathbf{b}$, where $\mathbf{A}\in {\mathbb{R}}^{m\times n}$. In principle, given enough projections, $\text{Rank}[\mathbf{A}]=n$ and the concentration profile, $\mathbf{x}$, could be uniquely recovered via least-squares minimization of $\Vert \mathbf{b}-\mathbf{A}\mathbf{x}\Vert $. Unfortunately, due to the cost of detection equipment and the spatial constraints imposed by most practical geometries, the number of projections is typically far smaller than the number of pixels needed to achieve a reasonable level of spatial resolution ($m<n$). Thus, laser absorption tomography is usually a limited-data tomography problem, and $\mathbf{A}$ has a nontrivial null space. The true concentration profile is therefore the sum of two components, $\mathbf{x}={\mathbf{x}}^{\mathrm{LS}}+{\mathbf{x}}^{n}$, where ${\mathbf{x}}^{\mathrm{LS}}=\mathrm{arg\,min}\Vert \mathbf{b}-\mathbf{A}\mathbf{x}\Vert $ is the minimum-norm least-squares solution, which belongs to the row space of $\mathbf{A}$, while ${\mathbf{x}}^{n}$ is a nontrivial vector belonging to the null space of the problem, $\{{\mathbf{x}}^{n}\in N[\mathbf{A}]:\mathbf{A}{\mathbf{x}}^{n}=\mathbf{0}\}$ [15].
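
To make the ray-sum concrete, the fragment below sketches how one row of $\mathbf{A}$ might be assembled by intersecting a beam with the pixel grid lines and accumulating chord lengths per pixel. This is a minimal Python illustration, not the authors' code: the $(s,\theta)$ beam parameterization over a square $[-R,R]^2$ domain and the function name are our assumptions.

```python
import numpy as np

def beam_row(s, theta, n_side, R=0.5):
    """Chord lengths of one beam (signed offset s, normal angle theta)
    through an n_side x n_side pixel grid covering [-R, R]^2.
    Returns one length-n_side**2 row of the ray-sum matrix A."""
    p0 = s * np.array([np.cos(theta), np.sin(theta)])   # point on the beam
    d = np.array([-np.sin(theta), np.cos(theta)])       # beam direction
    edges = np.linspace(-R, R, n_side + 1)
    u = []                                              # path parameters of grid-line crossings
    for k in range(2):
        if abs(d[k]) > 1e-12:
            u.extend((edges - p0[k]) / d[k])
    u = np.sort(np.asarray(u))
    row = np.zeros(n_side * n_side)
    for ua, ub in zip(u[:-1], u[1:]):
        mid = p0 + 0.5 * (ua + ub) * d                  # segment midpoint -> pixel index
        if -R < mid[0] < R and -R < mid[1] < R:
            i = min(int((mid[0] + R) / (2 * R) * n_side), n_side - 1)
            j = min(int((mid[1] + R) / (2 * R) * n_side), n_side - 1)
            row[j * n_side + i] += ub - ua              # chord length A_ij
    return row
```

Stacking one such row per beam gives $\mathbf{A}\in {\mathbb{R}}^{m\times n}$; with $m<n$ the resulting system is rank deficient, as discussed above.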

To span this null space, it is necessary to supplement the sensor data with additional assumed information, such as smoothness and nonnegativity. In a Bayesian context, these assumed attributes are called “priors” and correspond to the prior probabilities used to condition an indefinite likelihood function in Bayes’ equation.

McCann’s group at Manchester University [1,5–7,16,17] uses a modified Landweber iterative regularization scheme to carry out the reconstruction. Landweber regularization is designed to exploit the semiconvergence property of full-rank but ill-conditioned linear problems [15]. In this application, the nontrivial null space is “filled” by passing a median filter [1,5–7,16] or wavelet filter [17] through the solution between Landweber passes, followed by a nonnegativity filter. The iteration stops once a prescribed minimum level of change between successive solutions is reached. Salem *et al.* [18] used the iterative algebraic reconstruction technique, with the discrepancy between the theoretical data (from the candidate reconstruction) and the measurement data serving as the stopping criterion.

Daun [4] used Tikhonov regularization to span the null space of $\mathbf{A}$; in this technique, an additional set of $n$ equations, $\lambda \mathbf{L}\mathbf{x}=\mathbf{0}$, is introduced into the analysis, where $\mathbf{L}$ is a discrete approximation of the $\nabla $ operator,

$$ {L}_{ij}=\left\{\begin{array}{ll}1, & i=j\\ -1/{n}_{i}, & j\ \text{adjacent to}\ i\\ 0, & \text{otherwise,}\end{array}\right. \qquad (4)$$

where ${n}_{i}$ is the number of pixels adjacent to the $i$th pixel. The regularized solution is then recovered from the augmented least-squares problem

$$ {\mathbf{x}}_{\lambda}=\underset{\mathbf{x}}{\mathrm{arg\,min}}{\left\Vert \left[\begin{array}{c}\mathbf{A}\\ \lambda \mathbf{L}\end{array}\right]\mathbf{x}-\left[\begin{array}{c}\mathbf{b}\\ \mathbf{0}\end{array}\right]\right\Vert}^{2}. \qquad (5)$$

The choice of $\lambda $ has a large impact on the recovered concentration distribution. Traditional parameter-selection techniques such as the discrepancy principle, $L$-curve curvature, and generalized cross-validation [19] cannot be used because they are based on the premise that the correct solution corresponds to the smallest solution norm $\Vert \mathbf{x}\Vert $, which is not true here due to the nontrivial null-space component, ${\mathbf{x}}^{n}$. Daun *et al.* [20] showed that the singular values of the augmented matrix in Eq. (5) provide some guidance for choosing $\lambda $: the singular values introduced by $\lambda \mathbf{L}$ should not overwhelm the $m$ nontrivial singular values of $\mathbf{A}$.
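
The Tikhonov step can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation: the neighbor-averaging form of $\mathbf{L}$ is one common choice of discrete smoothness operator, and all function names are our assumptions.

```python
import numpy as np

def smoothness_matrix(n_side):
    """A common discrete smoothness prior (an assumed form of L): each row
    demands that a pixel equal the average of its grid neighbors."""
    n = n_side * n_side
    L = np.zeros((n, n))
    for j in range(n_side):
        for i in range(n_side):
            k = j * n_side + i
            nbrs = [(i + di, j + dj)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < n_side and 0 <= j + dj < n_side]
            L[k, k] = 1.0
            for ii, jj in nbrs:
                L[k, jj * n_side + ii] = -1.0 / len(nbrs)
    return L

def tikhonov_solve(A, b, L, lam):
    """Solve min_x || [A; lam*L] x - [b; 0] || by ordinary least squares."""
    M = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return x
```

Because each row of this $\mathbf{L}$ annihilates a uniform field, the prior fills the null space of $\mathbf{A}$ without biasing a homogeneous concentration distribution.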

## 3. Predicting Reconstruction Accuracy

One of the most important factors pertaining to reconstruction accuracy is the arrangement of the laser/detector pairs. The strong influence of beam arrangement on reconstruction accuracy has been demonstrated by Wright *et al.* [1], who showed that rotating a Gaussian phantom by 45° results in vastly different reconstruction simulation results. Daun *et al.* [20] also showed that the reconstruction accuracy of randomly located Gaussian phantoms using several beam arrangements varies considerably. Figure 2 shows a selection of beam configurations from the literature: Figs. 2(a) [1] and 2(b) [16] show four-projection orthogonal/rectilinear and fan-beam arrangements, respectively, while Figs. 2(c) and 2(d) [6] are examples of irregular beam arrangements.

Despite the critical importance of beam arrangement to the performance of this experiment, to the best of our knowledge the only attempt at developing a design tool is that of Terzija *et al.* [6]. Their heuristic technique was based on the observation that inverting Eq. (1) by Radon transformation involves integrating the attenuation data over $-R\le s\le R$ and $0\le \theta \le \pi $, where $R$ is the half-width of the tomography domain [21]. Following this logic, they plotted the coordinates of each beam over “Radon space,” ${\mathcal{R}}^{(s,\theta )}$, as shown in Fig. 3 for the beam arrangement in Fig. 2(c). They hypothesized that spreading these points as uniformly as possible would improve reconstruction accuracy, in the same way that the accuracy of numerical quadrature improves when the integration points have low discrepancy.

In this paper, we present a more systematic approach for predicting the performance of a beam configuration based on the mathematical properties of the resolution matrix. Resolution matrices are commonly used in discrete geophysical tomography to determine whether model parameters can be independently predicted or resolved and how regularization limits reconstruction accuracy [10,22–24].

The resolution matrix is derived from the regularized inverse, ${\mathbf{A}}^{\#}$, which for Tikhonov regularization can be found by [22–26]

$$ {\mathbf{A}}^{\#}={({\mathbf{A}}^{T}\mathbf{A}+{\lambda}^{2}{\mathbf{L}}^{T}\mathbf{L})}^{-1}{\mathbf{A}}^{T}, \qquad (6)$$

so that the resolution matrix is

$$ \mathbf{R}={\mathbf{A}}^{\#}\mathbf{A}. \qquad (7)$$

Substituting $\mathbf{b}=\mathbf{A}{\mathbf{x}}^{\text{exact}}+\mathit{\delta}\mathbf{b}$ into ${\mathbf{x}}_{\lambda}={\mathbf{A}}^{\#}\mathbf{b}$ shows that the reconstruction error has two components,

$$ \mathit{\delta}\mathbf{x}={\mathbf{x}}_{\lambda}-{\mathbf{x}}^{\text{exact}}=(\mathbf{R}-\mathbf{I}){\mathbf{x}}^{\text{exact}}+{\mathbf{A}}^{\#}\mathit{\delta}\mathbf{b}. \qquad (8)$$

The first term is the *regularization error*, caused by the inconsistency between the measurement data equations, $\mathbf{A}\mathbf{x}=\mathbf{b}$, and the assumed prior information, $\mathbf{L}\mathbf{x}=0$, while ${\mathbf{A}}^{\#}\mathit{\delta}\mathbf{b}$ is the *perturbation error*, due to the amplification of measurement noise.

When regularizing full-rank, discrete, ill-posed problems, $\lambda $ is chosen to minimize the sum of these two error components. In the context of selecting $\lambda $, it is important to note a fundamental difference between standard and limited-data tomography problems. In standard tomography problems (for example, most geophysical tomography problems), $\mathbf{A}$ is full rank but ill conditioned, having $n$ nontrivial singular values that range over several orders of magnitude. In this sense, $\mathbf{A}\mathbf{x}=\mathbf{b}$ completely defines the problem physics and $\mathbf{L}\mathbf{x}=0$ is added to suppress perturbation error, ${\mathbf{A}}^{\#}\mathit{\delta}\mathbf{b}$, caused by the small singular values of $\mathbf{A}$.

In limited-data tomography problems, such as the present case, the priors are an integral part of the problem physics because $\mathbf{A}\mathbf{x}=\mathbf{b}$ is indeterminate by itself. Specifically, $\mathbf{L}\mathbf{x}=0$ relates the concentration of off-beam pixels to those sampled by the beams. Another difference is that the nontrivial singular values of $\mathbf{A}$ are large enough that measurement noise is not strongly amplified, and Daun [4] showed that ${\mathbf{x}}_{\lambda}$ is robust to perturbation error over a wide range of $\lambda $. Consequently, perturbation error, ${\mathbf{A}}^{\#}\mathit{\delta}\mathbf{b}$, is negligible, and $\mathit{\delta}\mathbf{x}$ is instead dominated by regularization error, $(\mathbf{R}-\mathbf{I}){\mathbf{x}}^{\text{exact}}$.

In an ideal experiment, $\mathbf{R}=\mathbf{I}$, which implies that each unknown pixel value in $\mathbf{x}$ can be independently resolved from the measurement data, $\mathbf{b}$ [25,26]. Unfortunately, because the problem is rank deficient ($m<n$, so by the rank–nullity theorem $\mathbf{A}$ has a nontrivial null space), it is not possible to arrange the beams so that $\mathbf{R}=\mathbf{I}$. Instead, the addition of $\mathbf{L}\mathbf{x}=0$ forces the off-diagonal terms in $\mathbf{R}$ to be nonzero, making the estimated concentration of each pixel a weighted average of the concentrations of the surrounding pixels. Because $\mathbf{L}$ is defined through Eq. (4), the resolution matrix $\mathbf{R}(\mathbf{\Phi})={\mathbf{A}}^{\#}(\mathbf{\Phi})\mathbf{A}(\mathbf{\Phi})$ is a function only of the beam arrangement, specified by $\mathbf{\Phi}={\{{\mathrm{\Phi}}_{1},{\mathrm{\Phi}}_{2},\dots ,{\mathrm{\Phi}}_{m}\}}^{T}$ with ${\mathrm{\Phi}}_{i}={({s}_{i},{\theta}_{i})}^{T}$, and thus is independent of the actual concentration profile.

Given that regularization error dominates the reconstruction error in laser absorption tomography problems, it follows that the beams should be arranged to minimize the Frobenius distance between $\mathbf{R}(\mathbf{\Phi})$ and $\mathbf{I}$,

$$ F(\mathbf{\Phi})={\Vert \mathbf{R}(\mathbf{\Phi})-\mathbf{I}\Vert}_{F}^{2}. \qquad (9)$$

We can visualize this fitness function using contour plots of the resolution matrices corresponding to the beam arrangements in Figs. 2(a) and 2(d) (specified by ${\mathbf{\Phi}}^{a}$ and ${\mathbf{\Phi}}^{d}$, respectively). (A $40\times 40$ element tomography domain is used throughout this paper.) Figure 4(a) shows that $\mathbf{R}({\mathbf{\Phi}}^{a})$ has fewer nonzero off-diagonal elements than $\mathbf{R}({\mathbf{\Phi}}^{d})$. In turn, the associated fitness values for these arrays are $F({\mathbf{\Phi}}^{a})=1825$ and $F({\mathbf{\Phi}}^{d})=2153$.
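
Evaluating the resolution matrix and fitness reduces to a few lines of linear algebra. In the sketch below, the small random stand-in for $\mathbf{A}$ and the 1D first-difference stand-in for $\mathbf{L}$ are our assumptions, chosen only so the block runs standalone; whether the Frobenius norm is squared in the fitness is likewise our reading of the magnitudes quoted above.

```python
import numpy as np

def resolution_matrix(A, L, lam):
    """R = A# A, with the regularized inverse A# = (A^T A + lam^2 L^T L)^(-1) A^T."""
    M = A.T @ A + lam**2 * (L.T @ L)
    return np.linalg.solve(M, A.T @ A)

def fitness(A, L, lam):
    """Fitness of a beam arrangement: Frobenius distance between R and I."""
    R = resolution_matrix(A, L, lam)
    return np.linalg.norm(R - np.eye(R.shape[0]), 'fro')**2

# stand-ins so the sketch runs: 5 "beams", 20 pixels on a line,
# first-difference smoothness prior (annihilates a uniform field)
n = 20
L1 = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
rng = np.random.default_rng(1)
A = rng.random((5, n))
F = fitness(A, L1, 0.05)
```

A useful sanity check: because $\mathbf{L}\mathbf{1}=\mathbf{0}$, the rows of $\mathbf{R}$ sum to one, so each reconstructed pixel is a weighted average of true pixel values, as described above.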

## 4. Results and Discussion

The correlation between reconstruction error and the diagonality of the resolution matrix is investigated using a simulated laser absorption tomography experiment with randomly located and sized Gaussian concentration profiles within the tomography plane. Simulated beam-attenuation data was generated using a high-order numerical integration scheme and then contaminated with normally distributed error corresponding to a signal-to-noise ratio of 28 dB, which was reported by Garcia-Stewart *et al.* [16] in their laser absorption tomography experiments. An example Gaussian phantom and its reconstruction are shown in Fig. 5. The reconstruction error is then quantified by

$$ \epsilon (\mathbf{\Phi})=\frac{\Vert {\mathbf{x}}_{\lambda}-{\mathbf{x}}^{\text{exact}}\Vert}{\Vert {\mathbf{x}}^{\text{exact}}\Vert}. \qquad (10)$$
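
The noise model can be sketched as follows. The rms-based definition of the signal-to-noise ratio and all names are our assumptions, since the exact noise convention is not spelled out above.

```python
import numpy as np

def add_noise(b, snr_db, rng):
    """Contaminate attenuation data b with zero-mean, normally distributed
    noise at a prescribed signal-to-noise ratio (rms definition assumed)."""
    rms_signal = np.sqrt(np.mean(b**2))
    sigma = rms_signal / 10.0**(snr_db / 20.0)
    return b + rng.normal(0.0, sigma, size=b.shape)

rng = np.random.default_rng(42)
b_clean = rng.random(500) + 0.5            # stand-in attenuation data
b_noisy = add_noise(b_clean, 28.0, rng)    # 28 dB, as in Garcia-Stewart et al. [16]
```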

#### A. Parameter Selection

As noted above, the first task in carrying out the reconstruction is to choose a value for $\lambda $. Daun *et al.* [20] showed that $\lambda $ should be chosen so that the singular values of the augmented matrix in Eq. (5) do not overwhelm the $m$ nontrivial singular values of $\mathbf{A}$. Figure 6 shows the singular values obtained from the beam arrangement shown in Fig. 2(a) and a 1600 pixel domain for various values of $\lambda $, which indicates that the optimal value of $\lambda $ lies between 0.01 and 0.1.

Figure 7 shows reconstruction errors of the simulation for various values of $\lambda $ for both perturbed and unperturbed data cases using the beam arrangement in Fig. 2(a). The reconstruction error drops with increasing $\lambda $ up to $\lambda =0.01$ as $\lambda \mathbf{L}$ fills the null space of $\mathbf{A}$, but error begins to increase again for $\lambda $ greater than 0.1 due to oversmoothing. Reconstruction error is approximately constant over an order of magnitude of $\lambda $, in contrast with full-rank, discrete, ill-posed problems, which usually have a distinct optimum value of $\lambda $. As noted above, this is because the assumed prior provides essential information missing from the measurement equations, while in full-rank, discrete, ill-posed problems its role is to suppress amplification of measurement noise.

The similarity in reconstruction accuracy obtained using noisy and noise-free data confirms that perturbation error is indeed smaller than regularization error. Similar studies were carried out on the other beam arrangements in Fig. 2; in each case the optimal value of $\lambda $ was near 0.05. Based on this result, we use $\lambda =0.05$ for the remainder of this paper.

#### B. Validation of the Fitness Function

In order to test our hypothesis that the candidate fitness function, $F(\mathbf{\Phi})$, is a good predictor of the reconstruction error, we conducted two univariate parametric studies on structured laser arrays. First we varied the adjacent beam spacing, $\omega $, of the four-projection, 32-beam array illustrated in Fig. 2(a). Figure 8 shows a plot of $F(\mathbf{\Phi})$ and the average reconstruction error of 500 randomly located Gaussian phantoms on a 1600 pixel domain for various beam spacings. These results show that $F(\mathbf{\Phi})$ tracks $\epsilon (\mathbf{\Phi})$ over the range of $\omega $.

Figure 9 shows the results of a second parametric study in which the spread angle of the fan arrangement shown in Fig. 2(b) is varied, which again shows that $F(\mathbf{\Phi})$ is a good predictor of the reconstruction accuracy.

#### C. Optimal Design of Beam Arrangements

Because $F(\mathbf{\Phi})$ has been demonstrated to be a good predictor of the reconstruction error of a particular beam arrangement, it is now possible to optimize the beam layout by solving the following multivariate minimization problem for a given number of beams:

$$ \underset{\mathbf{\Phi}}{\mathrm{min}}\ F(\mathbf{\Phi})\quad \text{subject to}\quad {s}_{i}\in [-R,R],\ {\theta}_{i}\in [0,\pi ],\ i=1,\dots ,m. \qquad (11)$$

Initially, we attempted to optimize a 32-beam arrangement using a gradient-based quasi-Newton method. Due to the highly multimodal nature of the objective function, however, this method was unable to solve Eq. (11). To overcome this difficulty, we instead adopt a genetic algorithm to minimize $F(\mathbf{\Phi})$.

Genetic algorithms are well established as a multivariable optimization technique that mimics natural selection in biological systems [27]. Initially, a population of $N$ candidate solutions is created using uniformly distributed random numbers within the applicable range of each variable, $s\in [-R,R]$ and $\theta \in [0,\pi ]$. Next, the corresponding objective function, Eq. (9), is calculated for each member of the population. A selection process then chooses the individuals corresponding to the smallest $F(\mathbf{\Phi})$ and subsequently “mates” these candidates, producing a new generation of $N$ individuals. For our implementation, we use MATLAB’s genetic algorithm solver, provided in the Global Optimization Toolbox, with the scattered crossover method for mating individuals. For each mating pair, a random binary vector containing $m$ bits is created, and a new “child” $\mathbf{\Phi}$ vector is produced by taking the ${\mathrm{\Phi}}_{i}={({s}_{i},{\theta}_{i})}^{T}$ pair from either parent A or parent B, depending on the $i$th bit of the random binary vector. In some cases, the offspring is mutated by randomly perturbing the elements of $\mathbf{\Phi}$. Mutation occurs in a relatively small number of individuals, and its likelihood is governed by a probability distribution [27]. This procedure continues until successive generations yield no improvement in $F(\mathbf{\Phi})$.
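
A minimal, self-contained sketch of this selection/crossover/mutation loop is given below. It is a toy illustration rather than the MATLAB implementation: the objective is a cheap multimodal stand-in for $F(\mathbf{\Phi})$ (evaluating the true fitness requires the ray-sum matrix), and the population size, mutation rate, and all names are our choices.

```python
import numpy as np

rng = np.random.default_rng(7)
m = 8                                   # beams; genes are [s1, th1, ..., sm, thm]
R, N_POP, N_GEN, P_MUT = 0.5, 60, 25, 0.05
lo = np.tile([-R, 0.0], m)
hi = np.tile([R, np.pi], m)

def objective(phi):
    # cheap multimodal stand-in for the true fitness F(Phi)
    return np.sum(np.sin(5.0 * phi)**2) + 0.1 * np.sum(phi**2)

def scattered_crossover(pa, pb, rng):
    # one random bit per beam selects that beam's (s_i, theta_i) pair
    bits = rng.integers(0, 2, size=pa.size // 2).astype(bool)
    mask = np.repeat(bits, 2)
    return np.where(mask, pa, pb)

pop = rng.uniform(lo, hi, size=(N_POP, 2 * m))
best0 = min(objective(p) for p in pop)
for _ in range(N_GEN):
    order = np.argsort([objective(p) for p in pop])
    elite = pop[order[:6]]              # six fittest designs become parents
    children = list(elite)              # elitism: carry parents forward
    while len(children) < N_POP:
        pa, pb = elite[rng.integers(0, len(elite), size=2)]
        child = scattered_crossover(pa, pb, rng)
        mutate = rng.random(child.size) < P_MUT
        child[mutate] = rng.uniform(lo[mutate], hi[mutate])  # random mutation in bounds
        children.append(child)
    pop = np.array(children)
best = min(objective(p) for p in pop)
```

With elitism, the best fitness is nonincreasing from one generation to the next, mirroring the monotone decrease seen in Fig. 10.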

We demonstrate this methodology by optimizing an $m=32$ detection array, a problem with 64 design parameters. Through trial and error, an initial population size of 500 was found to be sufficient to adequately sample ${\mathcal{R}}^{(s,\theta )}$, and the six top-performing designs were mated to form the 500 new designs of the subsequent generation. The algorithm’s progress is measured by plotting the smallest and average $F(\mathbf{\Phi})$ obtained in each generation; Fig. 10 shows that the first two generations decrease $F(\mathbf{\Phi})$ dramatically and that the algorithm stalls after four generations. The computational cost of the 32-beam design optimization was approximately 7 h using a single core of a 2 GHz Intel i7-2630QM processor with 8 GB of RAM.

The quality of the optimized beam array is first assessed by comparing its reconstruction error with those of the beam arrangements shown in Fig. 2. Although Figs. 8 and 9 show that $F(\mathbf{\Phi})$ is an excellent predictor of $\epsilon (\mathbf{\Phi})$ when the number of beams remains constant, a comparison of arrays containing different numbers of beams showed no correlation between the fitness function and reconstruction error. Once $F(\mathbf{\Phi})$ is scaled by the number of beams in the array, $F(\mathbf{\Phi})/m$, however, the modified objective function correlates well with the reconstruction error, as shown in Fig. 11. Figure 11 also shows that the optimized array provides superior reconstruction accuracy compared to the arrays shown in Fig. 2.

The use of Gaussian phantoms in these simulations meant that the *a priori* assumption of a smooth concentration profile was exactly satisfied. To validate our findings for concentration profiles representative of actual experiments, a simulated laser absorption tomography experiment was carried out on a phantom generated using a large eddy simulation (LES) of a buoyant turbulent methane plume.

The computational domain is a rectangular $1.5\text{\hspace{0.17em}}\mathrm{m}\times 1.5\text{\hspace{0.17em}}\mathrm{m}\times 6\text{\hspace{0.17em}}\mathrm{m}$ enclosure, shown in Fig. 12, filled with air at 298 K and atmospheric pressure. A square $0.2\text{\hspace{0.17em}}\mathrm{m}\times 0.2\text{\hspace{0.17em}}\mathrm{m}$ duct centered at the bottom of the enclosure serves as the methane injection site. The methane plume enters the enclosure at $2\text{\hspace{0.17em}}\mathrm{m}/\mathrm{s}$, corresponding to a jet Reynolds number of 3000 based on the hydraulic diameter of the duct. The resulting plume has a densimetric Froude number of 2.2, indicating that the buoyancy and momentum forces within the plume are comparable in magnitude. Further details of the simulation are provided in [20]. The laser attenuation data was collected at a height of 3 m above the base of the enclosure, and the tomography domain was set as a $1\text{\hspace{0.17em}}\mathrm{m}\times 1\text{\hspace{0.17em}}\mathrm{m}$ square, centered in the cross section. Beam attenuation data was sampled at a rate of 100 Hz (0.01 s intervals) for 20 s.

The reconstruction accuracies of the optimized arrangement and of those shown in Fig. 2 are plotted for 500 time steps of the LES phantom data, along with time-averaged values. Figure 13 shows that, as with the Gaussian phantom results, the arrangement produced by the genetic algorithm (Fig. 10) had the best reconstruction accuracy of those tested, on both a time-averaged and a single-time-step basis. The error values obtained using the LES data are larger than those for the Gaussian phantoms because turbulent flow structures violate the assumed smoothness prior. A sample LES phantom and the reconstructed concentration profiles for each beam arrangement tested are shown in Fig. 14.

#### D. Effect of Beam Misalignment on Reconstruction Accuracy

In an actual implementation, the location and alignment of the laser/detector pairs will be subject to some uncertainty. In terms of data analysis, misalignment results in an inconsistency between the actual beam arrangement (and consequently the LOSA data in $\mathbf{b}$) and the chord lengths of the beams transecting each pixel, which are the elements of $\mathbf{A}$. To quantify this effect, we performed simulations in which $\mathbf{A}$ remains constant but the $(s,\theta )$ coordinates of the beams are perturbed to represent this error. The modified beam arrangement is then used to obtain the laser measurement data, $\mathbf{b}$, using high-order numerical integration.

Perturbations to $s$ and $\theta $ are sampled from unbiased normal distributions with ${\sigma}_{s}=0.025$ ($1/40$th the width of the square tomography domain) and ${\sigma}_{\theta}=0.25^{\circ}$ (0.004363 rad), which represent a worst-case scenario for experimental precision. Each of the beam arrangements in Fig. 2, along with the optimized array in Fig. 10, was tested using 50 modified-arrangement trials, each in turn evaluated with 500 random Gaussian phantoms. The results of these simulations are summarized in Table 1. We found that the error due to beam misalignment was negligible compared to the relative accuracy of the different beam arrays, and moreover the beam arrangements were equally sensitive to this source of error. This result is expected because perturbing the components of $\mathbf{\Phi}$ is equivalent to perturbing the LOSA data ($\mathit{\delta}\mathbf{b}$), and, as discussed above, the reconstructions are resilient to perturbation error over a wide range of $\lambda $ because the nontrivial singular values of $\mathbf{A}$ are not small.
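
One trial of the misalignment perturbation can be sketched as follows; the two-column $(s,\theta)$ array layout and all names are our assumptions.

```python
import numpy as np

SIGMA_S = 0.025                    # 1/40th of the tomography domain width
SIGMA_THETA = np.deg2rad(0.25)     # 0.25 deg = 0.004363 rad

def perturb_arrangement(phi, rng):
    """Apply unbiased normal installation errors to the (s, theta)
    coordinates of each beam; phi has shape (m, 2)."""
    m = phi.shape[0]
    noise = np.column_stack([rng.normal(0.0, SIGMA_S, m),
                             rng.normal(0.0, SIGMA_THETA, m)])
    return phi + noise

# one modified-arrangement trial for a 32-beam array
rng = np.random.default_rng(3)
phi = np.column_stack([rng.uniform(-0.5, 0.5, 32),
                       rng.uniform(0.0, np.pi, 32)])
phi_mis = perturb_arrangement(phi, rng)
```

In the study above, $\mathbf{b}$ would be regenerated from the perturbed coordinates while $\mathbf{A}$ is still built from the nominal ones, so the mismatch appears as an effective $\mathit{\delta}\mathbf{b}$.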

#### E. Effect of Number of Beams on Reconstruction Accuracy

Along with the arrangement of the laser/detector pairs, the number of beams in the arrangement has a pronounced effect on the accuracy of absorption tomography experiments. Until now, there has been no way to decouple the effects of the arrangement and the number of beams, $m$, in a detection array, but the arrangement optimization method presented above makes it possible to quantify the effect of the latter. Given the cost and complexity of the lasers, collection optics, and data-acquisition systems, it is essential to assess the number of beams needed to reconstruct the species concentration to a desired level of accuracy, as additional beams lead to unnecessary additional cost.

Beam arrangements were created using the genetic algorithm approach outlined above for $m=8$ to 60 beams on a 1600 pixel tomography domain. In each case, the genetic algorithm was terminated after four generations, since Fig. 10 shows that the algorithm stalls past this point. Figure 15 shows $F(\mathbf{\Phi})/m$ and the associated reconstruction error for various numbers of beams. This plot clearly reveals the trade-off between the number of beams, $m$, and reconstruction error, $\epsilon $: the error drops steeply from $m=8$ to 20, followed by a less significant improvement as $m$ increases past this region. Moreover, $F(\mathbf{\Phi})/m$ tracks the error associated with each number of beams very well, further validating our fitness-function hypothesis. Constructing this plot for a given tomography domain (i.e., number of pixels, $n$) allows one to determine the approximate number of beams needed for the reconstruction and thereby minimize experimental costs.

## 5. Conclusions and Future Work

In limited-data absorption spectroscopy experiments, the arrangement of the laser/detector pairs has a strong effect on reconstruction accuracy. This paper demonstrates how the properties of the augmented kernel matrix, constructed from the chord lengths subtended by the beams in each pixel and a smoothness prior, can be used to predict reconstruction accuracy independently of the actual gas concentration distribution. Using this relation, beam arrangements can be optimized with respect to a fitness function using a genetic algorithm. This design methodology was demonstrated by optimizing an array of 32 beams; the reconstruction accuracy obtained using the optimized configuration is superior to that of other arrangements presented in the literature. The effects of beam misalignment were investigated and found to be negligible: misalignment effectively adds measurement noise to the system, and the problem was shown to be robust to noise owing to the use of Tikhonov regularization. Finally, the trade-off between the number of beams in the arrangement, $m$, and reconstruction error, $\epsilon $, was visualized by comparing the accuracy of optimized beam arrays containing various numbers of beams.

While the optimization was done without any constraints on the location of the lasers and detectors, in practice the arrangement of these components is restricted by problem geometry. In the near future we will demonstrate how this aspect of the beam-design problem can be accommodated by imposing constraints on the multivariate minimization problem.

This research was funded by Natural Resources Canada’s Program for Energy Research and Development (NRCan-PERD). The authors thank Prof. Matthew Johnson from Carleton University for his advice and encouragement, and Mr. Georges Guerette for his assistance in implementing the ray-tracing code.

## References

**1. **P. Wright, C. A. Garcia-Stewart, J. S. Carey, F. P. Hindle, S. H. Pegrum, S. M. Colbourne, P. J. Turner, W. J. Hurr, T. J. Litt, S. C. Muray, S. D. Crossley, K. B. Ozanyan, and H. McCann, “Towards in-cylinder absorption tomography in a production engine,” Appl. Opt. **44**, 6578–6592 (2005). [CrossRef]

**2. **M. T. Javed, N. Irfan, and B. M. Gibbs, “Control of combustion-generated nitrogen oxides by selective non-catalytic reduction,” J. Environ. Manag. **83**, 251–289 (2007). [CrossRef]

**3. **B. J. Kirby and R. K. Hanson, “Planar laser-induced fluorescence imaging of carbon monoxide using vibrational (infrared) transitions,” Appl. Phys. B **69**, 505–507 (1999). [CrossRef]

**4. **K. J. Daun, “Infrared species limited data tomography through Tikhonov regularization,” J. Quant. Spectrosc. Radiat. Transfer **111**, 105–115 (2010). [CrossRef]

**5. **P. Wright, N. Terzija, J. L. Davidson, S. Garcia-Castillo, C. A. Garcia-Stewart, S. H. Pegrum, S. M. Colbourne, P. J. Turner, S. D. Crossley, T. J. Litt, S. C. Muray, K. B. Ozanyan, and H. McCann, “High-speed chemical species tomography in a multi-cylinder automotive engine,” Chem. Eng. J. **158**, 2–10 (2010). [CrossRef]

**6. **N. Terzija, J. L. Davidson, C. A. Garcia-Stewart, P. Wright, K. B. Ozanyan, S. Pegrum, T. J. Litt, and H. McCann, “Image optimization for chemical species tomography with an irregular and sparse beam array,” Meas. Sci. Technol. **19**, 094007 (2008). [CrossRef]

**7. **S. Pal, K. B. Ozanyan, and H. McCann, “A computational study of tomographic measurement of carbon monoxide at minor concentrations,” Meas. Sci. Technol. **19**, 094018 (2008). [CrossRef]

**8. **D. Verhoeven, “Limited-data computed tomography algorithms for the physical sciences,” Appl. Opt. **32**, 3736–3754 (1993). [CrossRef]

**9. **S. Ibrahim, R. G. Green, K. Dutton, K. Evans, R. Abdul Rahim, and A. Goude, “Optical sensor configurations for process tomography,” Meas. Sci. Technol. **10**, 1079–1086 (1999). [CrossRef]

**10. **W. Menke, *Geophysical Data Analysis: Discrete Inverse Theory* (Academic, 1989), pp. 51–65.

**11. **M. F. Modest, *Radiative Heat Transfer* (Academic, 2003), pp. 289–291.

**12. **L. S. Rothman, I. E. Gordon, A. Barbe, D. C. Benner, P. F. Bernath, M. Birk, V. Boudon, L. R. Brown, A. Campargue, J.-P. Champion, K. Chance, L. H. Coudert, V. Dana, V. M. Devi, S. Fally, J.-M. Flaud, R. R. Gamache, A. Goldman, D. Jacquemart, I. Kleiner, N. Lacome, W. J. Lafferty, J.-Y. Mandin, S. T. Massie, S. N. Mikhailenko, C. E. Miller, N. Moazzen-Ahmadi, O. V. Naumenko, A. V. Nitikin, J. Orphal, V. I. Perevalov, A. Perrin, A. Predoi-Cross, C. P. Rinsland, M. Rotger, M. Šimečková, M. A. H. Smith, K. Sung, S. A. Tashkun, J. Tennyson, R. A. Toth, A. C. Vandaele, and J. Vander Auwera, “The HITRAN 2008 Molecular Spectroscopic Database,” J. Quant. Spectrosc. Radiat. Transfer **110**, 533–572 (2009). [CrossRef]

**13. **A. E. Klingbeil, J. B. Jeffries, and R. K. Hanson, “Temperature- and pressure-dependent absorption cross sections of gaseous hydrocarbons at 3.39 μm,” Meas. Sci. Technol. **17**, 1950–1957 (2006). [CrossRef]

**14. **L. Ma, W. Cai, A. W. Caswell, T. Kraetschmer, S. T. Sanders, S. Roy, and J. R. Gord, “Tomographic imaging of temperature and chemical species based on hyperspectral absorption spectroscopy,” Opt. Express **17**, 8602–8613 (2009). [CrossRef]

**15. **P. C. Hansen, *Rank-Deficient and Discrete Ill-Posed Problems* (SIAM, 1998), pp. 99–105.

**16. **C. A. Garcia-Stewart, N. Polydorides, K. B. Ozanyan, and H. McCann, “Image reconstruction algorithms for high-speed chemical species tomography,” in *Proceedings of 3rd World Conference on Industrial Process Tomography* (Virtual Centre for Industrial and Process Tomography, 2003), pp. 80–85.

**17. **N. Terzija and H. McCann, “Wavelet-based image reconstruction for hard-field tomography with severely limited data,” IEEE Sens. **11**, 1885–1893 (2011). [CrossRef]

**18. **K. Salem, E. Tsotsas, and D. Mewes, “Tomographic measurement of breakthrough in a packed bed absorber,” Chem. Eng. Sci. **60**, 517–522 (2005). [CrossRef]

**19. **R. Zdunek, “Multigrid regularized image reconstruction for limited-data tomography,” Comp. Meth. Sci. Tech. **13**, 67–77 (2007).

**20. **K. J. Daun, S. L. Waslander, and B. B. Tulloch, “Infrared species tomography of a transient flow field using Kalman filtering,” Appl. Opt. **50**, 891–900 (2011). [CrossRef]

**21. **M. Bertero and P. Boccacci, *Inverse Problems in Imaging* (Institute of Physics Publishing, 1998), p. 200.

**22. **J. M. Lees and R. S. Crosson, “Bayesian ART versus conjugate gradient methods in tomographic seismic imaging: an application at Mount St. Helens, Washington,” in *Spatial Statistics and Imaging*, A. Possolo, ed. (Institute of Mathematical Statistics, 1991), pp. 186–208.

**23. **R. S. Crosson, “Crustal structure modeling of earthquake data 1. Simultaneous least squares estimation of hypocenter and velocity measurements,” J. Geophys. Res. **81**, 3036–3046 (1976). [CrossRef]

**24. **R. A. Wiggins, “The general linear inverse problem: implication of surface waves and free oscillations for earth structure,” Rev. Geophys. Space Phys. **10**, 251–285 (1972). [CrossRef]

**25. **J. K. MacCarthy, B. Borchers, and R. C. Aster, “Efficient stochastic estimation of the model resolution matrix diagonal and generalized cross-validation for large geophysical inverse problems,” J. Geophys. Res. **116**, B10304 (2011). [CrossRef]

**26. **J. G. Berryman, “Tomographic resolution without singular value decomposition,” Proc. SPIE **2301**, 1–13 (1994). [CrossRef]

**27. **W. Annicchiarico, J. Périaux, M. Cerrolaza, and G. Winter, *Evolutionary Algorithms and Intelligent Tools for Engineering Optimization* (WIT, 2005).