
Direct background-oriented schlieren tomography using radial basis functions

Open Access

Abstract

Background-oriented schlieren tomography (BOST) is effective for flow field measurement; however, unlike general computed tomography (CT), BOST relies on the deflection of rays passing through an inhomogeneous field and is therefore sensitive to the refractive index gradient rather than to the refractive index itself. Consequently, an additional integration step is typically required to obtain the refractive index. In this article, a method of calculating the projection matrix is proposed based on the radial basis function (RBF), so that the 3D distribution of the refractive index can be reconstructed directly. The method was first verified by numerical simulation. Then, the 3D instantaneous refractive index field above a candle flame was measured, and the reprojection error was calculated by ray tracing. The results illustrate the accuracy and stability of the proposed method. This research provides a new and complete solution for 3D instantaneous flow field (refractive index, density, or temperature) measurement.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the development of aerospace and industrial design, measurement of the flow field has become increasingly important, and various measurement methods have been developed in recent decades. One of these is optical measurement, which can capture the instantaneous properties of the entire flow field and has the advantage of being a noncontact and nonintrusive technique. Therefore, it has been widely used for the measurement of flow fields. Interferometry [1–3], shadowgraph [4,5], and schlieren [6–8] methods are among the most common approaches. Interferometry can provide quantitative results; however, it has an extremely complex setup and places strict requirements on the surrounding environment. Although the shadowgraph method has the simplest experimental setup, it is only suitable for flow fields with large gradient variations and only provides qualitative analyses [9].

The traditional schlieren setup generally consists of a light source, lens, camera, and sharp knife edge. Since its development around the 17th century [10], the schlieren method has been adopted for the quantitative measurement of the refractive index, density, and temperature of the flow field [7,11,12]. Furthermore, schlieren tomography [13–15], in which schlieren images are obtained from multiple directions, has been developed. In 2000, the background-oriented schlieren (BOS) technique was proposed by Dalziel et al. [16] and Raffel et al. [17] at almost the same time. Compared with the traditional schlieren method, BOS has modest technical requirements, consisting only of a light source, a background, and a camera. The BOS technique offers a very large field of view without large, expensive, precision optics [18]. Therefore, although BOS has some disadvantages, such as vibration sensitivity and limited real-time visualization [18], it still provides an excellent strategy for full-field measurements and offers unique solutions to certain problems [18].

The BOST method utilizes the deflection of rays passing through an inhomogeneous field for measurement. Therefore, BOST differs from general CT reconstruction: its direct reconstruction result is the distribution of the refractive index gradient. An additional step is required to obtain the distribution of the refractive index itself, for example, direct line integration or solving the Poisson equation.

Since the BOS method was proposed, many 2D experiments have demonstrated its applicability to the measurement of flow fields. Raffel et al. [17] first used the BOS method to measure the vortices of helicopter blades. In 2001, Richard and Raffel [19] measured a compressible vortex in the wake of a cylinder; they compared the results of the BOS technique and particle image velocimetry and obtained similar results. The separation of the boundary layer and shear layer, as well as vortex shedding, was clearly identified in both results. Direct line integration was used to calculate the density distribution in both studies. Richard et al. [20] applied the BOS method to a supersonic jet. These experiments prove that the BOS method can be used for flow field measurements. However, they are 2D BOS measurements and cannot capture the 3D structure of a complex flow field.

If the flow is axisymmetric, the 3D flow can be reconstructed by the inverse Abel transformation from a single viewing direction perpendicular to the axis of symmetry of the flow. For example, Sourgen et al. [21] compared numerical simulations and BOS results using the inverse Abel transformation. In 2014, van Hinsberg and Rösgen [9] measured an under-expanded supersonic free jet by using the inverse Abel transformation. However, the Abel transformation requires the assumption of axisymmetric flow and is susceptible to noise. Therefore, the filtered backprojection (FBP) algorithm was employed for BOS reconstruction. In 2004, Venkatakrishnan and Meier [22] measured the density field of an axisymmetric supersonic flow over a cone-cylinder model. In 2005, Venkatakrishnan [23] measured an axisymmetric under-expanded jet. In both studies, the density field was obtained by solving the Poisson equation and using the FBP algorithm [22,23]. In addition, the FBP algorithm can be used more generally for nonsymmetric flows.

For a non-axisymmetric flow, BOS projections must be obtained from multiple directions for BOST reconstruction. When a stationary flow is considered, the BOS projections can be obtained by rotating the position of the camera and background relative to the flow [24]; in that study, multidirectional projections were obtained from 19 projection angles from 0° to 90° at 5° intervals. When the projection angle is limited or there are few projection data, the FBP algorithm is prone to artifacts [25]. Experiments performed by Ota et al. [26–28] used the same rotation strategy as the previously mentioned study [24], while the algebraic reconstruction technique (ART) was employed for the reconstruction. ART is more practical than the FBP algorithm for reconstruction with limited data. The density field was reconstructed from 30 projections by Ota et al. [29]. However, for the measurement of complex flow fields, this rotation strategy is only an expedient.

When a complex flow is considered, a synchronous multicamera setup is employed to record the BOS projections from different directions. The first multicamera BOS measurement of a nonstationary flow was demonstrated by Atcheson et al. [30] in 2008: 16 cameras surrounded the hot air flow above a gas burner in a 180° arc, and the refractive index field was calculated by Poisson integration. Zeb et al. [31] employed three cameras to measure a convectional flow field; however, the reconstruction results had strong artifacts because of the small number of cameras. In 2015, Nicolas et al. [32] designed a 3D BOS experimental facility with 12 noncoplanar cameras and measured three different hot air flows. A multicamera BOS approach was also used in the measurement of compressible flow [33] and a supersonic wind tunnel [34]. Since 2018, 3D BOS has also been used in the measurement of flame fields [35–38].

The deviation angle in BOS is proportional to the integral of the refractive index gradient; thus, the direct reconstruction result is the distribution of the refractive index gradient, and an integration step must then be performed to calculate the refractive index field. The first method is direct line integration [17,19,25], which is simple but produces line noise [19] and easily accumulates noise during integration. The second, extensively used method is to solve the Poisson equation [17,20,22,23,27,30,39]. However, this method is complicated and has the disadvantage of strongly smoothing the results [17]. The key to obtaining the refractive index distribution directly is to handle the differentiation within the discretization. In 2015, Nicolas et al. [32] proposed a direct reconstruction method: to eliminate the integration step, they incorporated a finite-difference matrix (FDM) in the discretization process. Recently, this method has been adopted in many studies [35,37,38,40]. It is an excellent method; however, there are many kinds of difference schemes, such as forward, backward, central, and Richardson differences, and the order of the scheme also influences the accuracy.

In this article, a method of calculating the projection matrix based on the RBF is proposed for BOST. Unlike the “mechanistic” approach of Nicolas et al. [32], the projection matrix is calculated using the differentiability of the RBF; thus, the weights in the projection matrix are integrals of the partial derivatives of the basis functions. The integration step is eliminated, and no additional finite-difference matrix is used. By using the 3D RBF, the differentiation can be combined with the calculation of the projection matrix. In Section 2, the BOS theory is described. In Section 3, the BOS projection model based on the 3D RBF is presented. Section 4 presents the results of the numerical simulation and the experimental reconstruction. Finally, conclusions are given in Section 5.

2. BOS theory

The BOS theory can be found in many articles, but for the convenience of readers, it is briefly introduced here. The BOS method reconstructs the flow field from the deviation of rays propagating through an inhomogeneous field. A schematic diagram of the BOS theory is illustrated in Fig. 1. The background pattern is distorted when there is an inhomogeneous field between the background and the camera. The yellow and green lines are the ray paths with and without flow, respectively. In addition, $\Delta u$ and $\Delta v$ are the displacements of the ray in the imaging plane when there is a flow; ${l_A}$, ${l_B}$, and ${l_C}$ are the distance from the background to the probe volume, the distance from the probe volume to the lens, and the distance from the lens to the imaging plane (i.e., the focal length $f$), respectively. The deviation angle $\varepsilon$ is the integral of the refractive index gradient along the ray path:

$${\varepsilon ^{(\alpha )}} = \frac{1}{{{n_0}}}\int\limits_{s \in ray} {\frac{{\partial n}}{{\partial \alpha }}ds} ,\alpha \in \{ x,y,z\} .$$

Here, ${\varepsilon ^{(\alpha )}}$ is the deviation angle in the $\alpha$ direction. ${n_0}$ is the environmental refractive index and $ray$ is the ray path. Under the assumption of paraxial recording and a small deviation angle, the relationship between the deviation angle and displacement in the imaging plane is given by [41]

$$\Delta \alpha = \frac{{{l_A}{l_C}}}{{{l_A} + {l_B} - {l_C}}}{\varepsilon ^{(\alpha )}}.$$

Fig. 1. Schematic diagram of the BOS theory: the yellow and green lines are the ray path with and without the field, and the ray path indicated by the green dashed line is actually nonexistent.

Because the experimental setup is relatively simple, the BOS setup can be conveniently arranged as a multidirectional measurement device. As shown in Fig. 2(a), multiple synchronized cameras view the background through the probe area. The displacements can be extracted by comparing the distorted and undistorted images of the background, and the refractive index can then be reconstructed from these displacements. In some cases, it may be necessary to obtain other parameters of the flow field, such as the density and temperature. The refractive index is related to the density through the Gladstone–Dale equation [20]:

$$\rho = \frac{{(n - 1)}}{{{K_{GD}}}}.$$

Here, n is the refractive index, and $\rho$ is the density. Furthermore, ${K_{GD}}$ is the Gladstone–Dale constant, which is a function of the light wavelength. Combined with the ideal gas law, the temperature T can be obtained from the density, as follows:

$$T = \frac{{PM}}{{R\rho }}.$$

Fig. 2. (a). Schematic diagram of multidirectional BOST. (b). Probe area discretization and ray path: the orange grid box is an RBF area in the green grid with a coverage range of $3d$, the blue line is the ray that can be used in the reconstruction, and the red line is the ray that cannot be used in the reconstruction.

Here, P represents the atmospheric pressure, M represents the molar mass of the gas, and R represents the universal gas constant. Therefore, the temperature can be calculated using the refractive index.
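As a concrete illustration of Eqs. (3) and (4), the following minimal C++ sketch converts a reconstructed refractive index to density and temperature. The numerical constants (the Gladstone–Dale constant of air at visible wavelengths, atmospheric pressure, and the molar mass of air) are illustrative assumptions, not values taken from this article.

```cpp
// Sketch of Eqs. (3)-(4): refractive index -> density -> temperature.
#include <cstdio>

int main() {
    const double K_GD = 2.26e-4;   // Gladstone-Dale constant for air [m^3/kg] (assumed)
    const double P    = 101325.0;  // atmospheric pressure [Pa] (assumed)
    const double M    = 0.028964;  // molar mass of air [kg/mol] (assumed)
    const double R    = 8.314;     // universal gas constant [J/(mol*K)]

    double n   = 1.000186;         // example reconstructed refractive index
    double rho = (n - 1.0) / K_GD; // Eq. (3): Gladstone-Dale relation
    double T   = P * M / (R * rho);// Eq. (4): ideal gas law
    std::printf("rho = %.4f kg/m^3, T = %.1f K\n", rho, T);
    return 0;
}
```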

3. BOST

3.1 BOS projection model based on RBF

To obtain the distribution of the refractive index directly, a BOS projection model based on the 3D RBF is proposed. The first basis-function-based CT was presented by Hanson and Wecksung [42]. This method has mainly been used in 2D image reconstruction [42–46], but its application in 3D field reconstruction has rarely been reported. By combining the basis function with the BOS projection model, the distribution of the refractive index can be obtained directly instead of the distribution of the refractive index gradient.

As shown in Fig. 2(b), the probe area is discretized into an $L \times W \times H$ grid, where each cell is a cube with side length d. For each grid cell, there is a 3D RBF with a certain coverage range, and the centroid of the RBF is at the center of the cell. An RBF area with a coverage range of $3d$ is shown in Fig. 2(b). In this approach, a function is represented as a linear expansion of the basis functions. It is assumed that the refractive index distribution in the 3D space is $n({\textbf x})$ and the basis function is $g({\textbf x})$. Therefore, $n({\textbf x})$ can be approximated with a set of N basis functions,

$$n({\textbf x}) \approx \sum\limits_{i = 1}^N {{c_i}{g_i}({\textbf x})} .$$

Here, $N = L \times W \times H$ is the number of basis functions, and ${\textbf x} = {[x,y,z]^\textrm{T}}$ is a point in the probe area. ${g_i}$ is the ith basis function, and ${c_i}$ is the coefficient of ${g_i}$. Note that the coverage of a basis function is typically larger than one grid cell; therefore, neighboring basis functions overlap. This work employs Gaussian RBFs, which are defined as follows:

$${g_i}({\textbf x}) = \left\{ {\begin{array}{cc} {\textrm{exp} \left( { - \frac{{||{{\textbf x} - {{\textbf x}_i}} ||_2^2}}{{2{\sigma^2}}}} \right),}&{{{||{{\textbf x} - {{\textbf x}_i}} ||}_2} \le {d_{th}}}\\ {0,}&{otherwise} \end{array}} \right..$$

Here, ${{\textbf x}_i} = {[{x_i},{y_i},{z_i}]^\textrm{T}}$ is the centroid of ${g_i}$, $\sigma$ is the standard deviation of ${g_i}$, ${d_{th}}$ is the threshold distance (e.g., ${d_{th}} = 3\sigma$), and ${||\cdot ||_2}$ is the Euclidean norm. The α-direction gradient of $n({\textbf x})$ for $\alpha \in \{ x,y,z\}$, is

$${\nabla _\alpha }n({\textbf x}) \approx \sum\limits_{i = 1}^N {{c_i}\frac{{\partial {g_i}({\textbf x})}}{{\partial \alpha }}} ,$$
where the α-direction derivative of ${g_i}$ is
$$\frac{{\partial {g_i}({\textbf x})}}{{\partial \alpha }} = \left\{ {\begin{array}{cc} { - \frac{{\alpha - {\alpha_i}}}{{{\sigma^2}}}\textrm{exp} \left( { - \frac{{||{{\textbf x} - {{\textbf x}_i}} ||_2^2}}{{2{\sigma^2}}}} \right),}&{{{||{{\textbf x} - {{\textbf x}_i}} ||}_2} \le {d_{th}}}\\ {0,}&{otherwise} \end{array}} \right..$$
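The following self-contained C++ sketch implements the truncated Gaussian RBF of Eq. (6) and its partial derivative of Eq. (8). The truncation ${d_{th}} = 3\sigma$ follows the example in the text; everything else is an illustrative assumption.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Truncated Gaussian RBF of Eq. (6), with d_th = 3*sigma as in the text.
double gaussianRBF(const Vec3& x, const Vec3& xi, double sigma) {
    double r2 = 0.0;
    for (int k = 0; k < 3; ++k) r2 += (x[k] - xi[k]) * (x[k] - xi[k]);
    double dth = 3.0 * sigma;
    if (r2 > dth * dth) return 0.0;               // outside the coverage range
    return std::exp(-r2 / (2.0 * sigma * sigma));
}

// Partial derivative of Eq. (8); alpha = 0, 1, 2 selects x, y, or z.
double gaussianRBFDeriv(const Vec3& x, const Vec3& xi, double sigma, int alpha) {
    return -(x[alpha] - xi[alpha]) / (sigma * sigma) * gaussianRBF(x, xi, sigma);
}
```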

One can express the standard BOST measurement model using this basis. To begin, the α-direction deflection of the jth ray is given by a line-of-sight (LoS) integral of ${\nabla _\alpha }n({\textbf x})$ along that ray,

$$\varepsilon _j^{(\alpha )} = \frac{1}{{{n_0}}}\int\limits_{j\textrm{th }ray} {\frac{{\partial n[{r _j}(s)]}}{{\partial \alpha }}ds,}$$
where ${n_0}$ is the reference refractive index and ${r _j}:{\mathrm{\mathbb{R}}^1} \to {\mathrm{\mathbb{R}}^3}$ maps a distance along the jth LoS to the corresponding 3D location. The model is approximated by substituting Eq. (7) into Eq. (9),
$$\varepsilon _j^{(\alpha )} = \frac{1}{{{n_0}}}\sum\limits_{i = 1}^N {{c_i}\underbrace{{\int\limits_{j\textrm{th }ray} {\frac{{\partial {g_i}[{r _j}(s)]}}{{\partial \alpha }}ds} }}_{{T_{j,i}^{(\alpha )}}}} .$$

All the basis function coefficients and deflection data are collected into the vectors ${\textbf c} = \{ {c_i}\} _{i = 1}^N$ and ${{\mathbf \varepsilon }^{(\alpha )}} = \{ \varepsilon _j^{(\alpha )}\} _{j = 1}^K$, respectively, and ${{\textbf T}^{(\alpha )}}$ (defined in Eq. (10)) is formed for $i \in 1,2,\ldots ,N$, $j \in 1,2,\ldots ,K$, and $\alpha \in \{ x,y,z\}$. The result is a $3K \times N$ linear system:

$${\textbf Tc} = \left[ {\begin{array}{c} {{{\textbf T}^{(x )}}}\\ {{{\textbf T}^{(y )}}}\\ {{{\textbf T}^{(z )}}} \end{array}} \right]{\textbf c} = \left[ {\begin{array}{c} {{{\mathbf \varepsilon }^{(x )}}}\\ {{{\mathbf \varepsilon }^{(y )}}}\\ {{{\mathbf \varepsilon }^{(z )}}} \end{array}} \right].$$
where ${\textbf T}$ is the projection matrix. By solving this linear system, the coefficients of the RBF on each grid cell can be calculated. Then, the distribution of the refractive index can be obtained using Eq. (5). Through this method, as long as the chosen 3D RBF is differentiable, the 3D refractive index distribution can be obtained directly. No additional integration step or difference matrix is required because the differentiation is included in the calculation of the projection matrix ${\textbf T}$. Unfortunately, the ray path within the flow field is unknown. However, the perturbation caused by the flow field is very small, and the probe area is also small compared with the entire experimental domain. Therefore, the integration can be performed along the unperturbed ray path [32,35].

3.2 Calculation of the BOS projection matrix

To calculate the BOS projection matrix ${\textbf T}$, one must first know the ray trajectory equation corresponding to the sampling point on the image. Therefore, the pinhole model is employed to describe the camera imaging process. In the pinhole model, a 3D point ${(x,y,z,1)^T}$ in the world coordinate system is imaged as ${(u,v,1)^T}$ in the imaging plane by

$$\begin{array}{c} \varsigma \left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = {\textbf P}\left[ {\begin{array}{c} x\\ y\\ z\\ 1 \end{array}} \right],\\ {\textbf P} = {{\textbf K}_{3 \times 4}}\left[ {\begin{array}{cc} {{{\textbf R}_{3 \times 3}}}&{{{\textbf t}_{3 \times 1}}}\\ {{0^T}}&1 \end{array}} \right] = \left[ {\begin{array}{cccc} {{p_{11}}}&{{p_{12}}}&{{p_{13}}}&{{p_{14}}}\\ {{p_{21}}}&{{p_{22}}}&{{p_{23}}}&{{p_{24}}}\\ {{p_{31}}}&{{p_{32}}}&{{p_{33}}}&{{p_{34}}} \end{array}} \right]. \end{array}$$

Here, ${\textbf P}$ is the camera projection matrix, ${\textbf K}$ is the intrinsic matrix of the camera, ${\textbf R}$ is a rotation matrix, ${\textbf t}$ is the translation vector, and $\varsigma$ is a nonzero scale factor. After one obtains the camera projection matrix by camera calibration, the ray trajectory equation of the sampling point ${(u,v)^T}$ can be calculated by

$$\left\{ {\begin{array}{c} {({p_{11}} - {p_{31}}u)x + ({p_{12}} - {p_{32}}u)y + ({p_{13}} - {p_{33}}u)z = {p_{34}}u - {p_{14}}}\\ {({p_{21}} - {p_{31}}v)x + ({p_{22}} - {p_{32}}v)y + ({p_{23}} - {p_{33}}v)z = {p_{34}}v - {p_{24}}} \end{array}} \right..$$

Therefore, the points at which the ray enters and exits the probe area can be calculated, and the grid cells crossed by the ray can be determined sequentially. The grid cells crossed by the rays are illustrated in Fig. 3. Then, the weights of these cells can be calculated one by one using Eq. (10). However, a ray passes through only a small fraction of the grid cells along its path; thus, most of the elements of the matrix ${\textbf T}$ are zero, and ${\textbf T}$ is stored as a sparse matrix. The programs are written in the C++ programming language, and each nonzero element of ${\textbf T}$ is represented by a triple, in which two variables of int type store the position of the element and a variable of float type stores the weight. Therefore, the memory consumption can be significantly reduced.
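For illustration, a short C++ sketch of Eq. (13) follows: the sampling point $(u,v)$ defines two planes, and the ray direction is the cross product of their normals. A point on the ray can then be found by fixing one coordinate and solving the remaining $2 \times 2$ system. The function and variable names are assumptions, not the article's code.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Ray of sampling point (u, v) from the camera projection matrix P
// (row-major: p[r][c] = p_{(r+1)(c+1)} in Eq. (12)). Each line of Eq. (13)
// is a plane n.x = b; the ray direction is perpendicular to both normals.
void rayFromPixel(const double p[3][4], double u, double v,
                  Vec3& n1, double& b1, Vec3& n2, double& b2, Vec3& dir) {
    n1 = { p[0][0] - p[2][0]*u, p[0][1] - p[2][1]*u, p[0][2] - p[2][2]*u };
    b1 = p[2][3]*u - p[0][3];
    n2 = { p[1][0] - p[2][0]*v, p[1][1] - p[2][1]*v, p[1][2] - p[2][2]*v };
    b2 = p[2][3]*v - p[1][3];
    dir = { n1[1]*n2[2] - n1[2]*n2[1],      // n1 x n2
            n1[2]*n2[0] - n1[0]*n2[2],
            n1[0]*n2[1] - n1[1]*n2[0] };
    double len = std::sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
    for (double& c : dir) c /= len;         // normalize the direction
}
```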

Fig. 3. The grids crossed by the rays. For ease of viewing, only every fourth grid is drawn, and the grids are enlarged three times.

The ray is discretized in the calculation, and the discrete step can be set to $d/10$ [47]. A 3D RBF overlaps with the surrounding RBFs, and the overlap range depends on the coverage of the RBF. Therefore, the value of each RBF at a given point must be added to the values of the surrounding RBFs. To facilitate the calculation, the coverage of the RBF is generally set to an odd number of grid cells. Rays passing through the top or bottom of the probe area should not be used in the reconstruction: as shown in Fig. 2(b), because there may still be a flow field above or below the probe area, the path used to calculate the integral is incomplete. Thus, rays that pass through the top or bottom of the probe area are excluded.
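Putting the pieces together, the following hedged C++ sketch accumulates one row of the projection matrix of Eq. (10) by marching along an unperturbed ray with a step of $d/10$ and storing the nonzeros as triples, as described above. The helpers nearbyBasisIndices and basisCentroid are assumed grid utilities, not part of this article; gaussianRBFDeriv is the sketch given after Eq. (8).

```cpp
#include <array>
#include <unordered_map>
#include <vector>

using Vec3 = std::array<double, 3>;

// From the RBF sketch after Eq. (8):
double gaussianRBFDeriv(const Vec3& x, const Vec3& xi, double sigma, int alpha);

// Assumed grid utilities (not from the article):
std::vector<int> nearbyBasisIndices(const Vec3& p);  // RBFs whose coverage contains p
Vec3 basisCentroid(int i);                           // centroid x_i of the i-th RBF

struct Triplet { int row, col; float w; };           // sparse (row, col, weight) storage

struct Ray {
    Vec3 origin, dir;       // dir is a unit vector
    double sEnter, sExit;   // entry and exit distances of the probe area
};

// Accumulate row j of the alpha-block of T (Eq. (10)) for one ray.
void buildRow(const Ray& ray, int j, int alpha, double d, double sigma,
              double n0, std::vector<Triplet>& out) {
    const double ds = d / 10.0;                      // discrete step of d/10 [47]
    std::unordered_map<int, double> acc;             // basis index -> weight
    for (double s = ray.sEnter; s <= ray.sExit; s += ds) {
        Vec3 p = { ray.origin[0] + s * ray.dir[0],
                   ray.origin[1] + s * ray.dir[1],
                   ray.origin[2] + s * ray.dir[2] };
        for (int i : nearbyBasisIndices(p))          // overlapping RBFs at p
            acc[i] += gaussianRBFDeriv(p, basisCentroid(i), sigma, alpha) * ds;
    }
    for (const auto& [i, w] : acc)                   // store nonzeros as triples,
        out.push_back({ j, i, (float)(w / n0) });    // including the 1/n0 factor
}
```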

The displacement in the image is a 2D vector $(\Delta u,\Delta v)$, whereas the deviation angle used in Eq. (11) is a 3D vector $({\varepsilon _x},{\varepsilon _y},{\varepsilon _z})$; that is, the image displacement must be decomposed into the three directions of the world coordinate system, as given by Eq. (14).

$$\begin{array}{l} {\varepsilon _x} = {r_{11}}\Delta u + {r_{21}}\Delta v\\ {\varepsilon _y} = {r_{12}}\Delta u + {r_{22}}\Delta v.\\ {\varepsilon _z} = {r_{13}}\Delta u + {r_{23}}\Delta v \end{array}$$

Here, ${r_{ij}}$ is the element in the rotation matrix of the camera obtained by camera calibration.
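A direct C++ transcription of Eq. (14) might look as follows; it assumes the geometric scale factor of Eq. (2) has already been applied to the image displacement.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Eq. (14): world-frame deviation angles from the image-plane displacement
// (du, dv), using the rotation matrix R obtained by camera calibration.
Vec3 deviationFromDisplacement(const double R[3][3], double du, double dv) {
    return { R[0][0] * du + R[1][0] * dv,    // eps_x = r11*du + r21*dv
             R[0][1] * du + R[1][1] * dv,    // eps_y = r12*du + r22*dv
             R[0][2] * du + R[1][2] * dv };  // eps_z = r13*du + r23*dv
}
```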

3.3 Reconstruction

Equation (11) is commonly ill-posed in BOST. Moreover, the projection matrix ${\textbf T}$ is typically large; thus, direct matrix-inversion methods are computationally impractical, and iterative reconstruction algorithms are widely used for such problems. Liu et al. [37] showed that ART outperforms the simultaneous iterative reconstruction technique and the Landweber algorithm in BOS reconstruction. Therefore, the ART algorithm was used in the experiments. The iterative process of ART is expressed as

$${{\textbf c}^{(i,k + 1)}} = {{\textbf c}^{(i,k)}} + \lambda \frac{{{\varepsilon _i} - \left\langle {{{\textbf T}_i},{{\textbf c}^{(i,k)}}} \right\rangle }}{{\left\langle {{{\textbf T}_i},{{\textbf T}_i}^T} \right\rangle }}{{\textbf T}_i}^T.$$

Here, ${{\textbf c}^{(i,k)}}$ is the result updated by the i-th equation after the k-th iteration, and the initial guess can be filled with zero. In addition, ${\varepsilon _i}$ is the i-th component of the vector ${\mathbf \varepsilon }$, ${{\textbf T}_i}$ represents the i-th row of the projection matrix ${\textbf T}$, and $\lambda$ is the relaxation parameter.
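A minimal C++ sketch of one ART sweep implementing Eq. (15) over a sparsely stored ${\textbf T}$ follows; the row storage and names are illustrative assumptions.

```cpp
#include <cstddef>
#include <vector>

// One nonzero of a row of T.
struct Entry { int col; double w; };
using SparseRow = std::vector<Entry>;

// One full ART sweep of Eq. (15): for each ray equation i, project the
// current estimate c onto the hyperplane <T_i, c> = eps_i with relaxation
// parameter lambda.
void artSweep(const std::vector<SparseRow>& T, const std::vector<double>& eps,
              std::vector<double>& c, double lambda) {
    for (std::size_t i = 0; i < T.size(); ++i) {
        double dot = 0.0, norm2 = 0.0;
        for (const Entry& e : T[i]) {        // <T_i, c> and <T_i, T_i>
            dot   += e.w * c[e.col];
            norm2 += e.w * e.w;
        }
        if (norm2 == 0.0) continue;          // skip rays that miss the grid
        double scale = lambda * (eps[i] - dot) / norm2;
        for (const Entry& e : T[i])          // c <- c + scale * T_i^T
            c[e.col] += scale * e.w;
    }
}
```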

Using ART, it is relatively easy to incorporate prior knowledge into the reconstruction process. Total variation (TV) regularization [48] was originally presented for image denoising; it can suppress noise while retaining the edges and sharpness of the image [49]. TV regularization was used in BOST for the first time by Grauer et al. [35] and achieved excellent results. The results of ART often contain substantial noise; therefore, TV regularization is introduced into the ART reconstruction process. The TV value is expressed as

$$TV ({\textbf c}) = \sum {\sqrt {{{(\nabla {c_x})}^2} + {{(\nabla {c_y})}^2} + {{(\nabla {c_z})}^2}} } .$$

Here, $TV ({\textbf c})$ is the TV of ${\textbf c}$ in the 3D discrete form. The TV denoising of the ART result ${\textbf c}$ can be written as

$${{\textbf c}^\ast } = \mathop {\arg \min }\limits_{{{\textbf c}^{\prime}}} TV({{\textbf c}^{\prime}}) + \frac{\beta }{2}||{{\textbf c}^{\prime} - {\textbf c}} ||_2^2.$$

The result of ART is updated by TV regularization based on the split Bregman method [50] at each iteration, and the updated result is used as the initial value of the next ART iteration.
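The split Bregman solver itself is beyond the scope of a short example, so the following C++ sketch substitutes a simplified smoothed-TV gradient-descent step on the functional of Eq. (16); it is a stand-in for the actual split Bregman update [50], not the method used in this work.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Flat index into an L x W x H volume.
static inline long idx(int x, int y, int z, int L, int W) {
    return x + (long)L * (y + (long)W * z);
}

// One gradient-descent step on a smoothed version of the TV functional of
// Eq. (16); eps keeps the gradient magnitude away from zero.
void tvDescentStep(std::vector<double>& c, int L, int W, int H,
                   double step, double eps = 1e-8) {
    std::vector<double> g(c.size(), 0.0);
    for (int z = 0; z + 1 < H; ++z)
        for (int y = 0; y + 1 < W; ++y)
            for (int x = 0; x + 1 < L; ++x) {
                long v = idx(x, y, z, L, W);
                // Forward differences approximating (grad c)_x, _y, _z
                double dx = c[idx(x + 1, y, z, L, W)] - c[v];
                double dy = c[idx(x, y + 1, z, L, W)] - c[v];
                double dz = c[idx(x, y, z + 1, L, W)] - c[v];
                double mag = std::sqrt(dx * dx + dy * dy + dz * dz + eps);
                // Derivatives of this voxel's TV term w.r.t. each unknown
                g[v] -= (dx + dy + dz) / mag;
                g[idx(x + 1, y, z, L, W)] += dx / mag;
                g[idx(x, y + 1, z, L, W)] += dy / mag;
                g[idx(x, y, z + 1, L, W)] += dz / mag;
            }
    for (std::size_t i = 0; i < c.size(); ++i)
        c[i] -= step * g[i];                 // move toward lower TV
}
```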

The overall process of the 3D BOST is shown in Fig. 4. The detailed steps are listed below.

  • (1) Multicamera calibration
  • (2) Capturing the undistorted image and distorted image
  • (3) Extracting the displacements by a cross-correlation algorithm based on the image deformation and multigrid [51]
  • (4) Calculating the BOS projection matrix based on the 3D RBF and ray tracing
  • (5) Converting 2D displacements into 3D deviations
  • (6) Reconstruction based on ART and TV regularization

Fig. 4. Flow chart of 3D BOST: the blue rectangles represent some kind of computation, and the others represent the data or result.

4. Experiments and results

4.1 Numerical simulation

First, the feasibility of this method was validated with simulated data. Twelve simulated cameras were evenly distributed within a range of $165^\circ$. The distances from the cameras to the probe area and to the background plate were 600 mm and 1300 mm, respectively. The temperature field used is flame-like:

$$\begin{aligned} T(x,y,z) &= 303.15 + 1196.85{e^{ - {{\left( {\frac{{3{x^2} + 3{y^2}}}{{{{20}^2}}} + \frac{{3{z^2}}}{{{{80}^2}}} - 0.9} \right)}^2}}}\\ &+ 696.85{e^{ - {{\left( {\frac{{3{{({x - 25} )}^2} + 3{y^2}}}{{{{10}^2}}} + \frac{{3{z^2}}}{{{{40}^2}}} - 0.9} \right)}^2}}} + 696.85{e^{ - {{\left( {\frac{{3{{({x + 25} )}^2} + 3{y^2}}}{{{{10}^2}}} + \frac{{3{z^2}}}{{{{40}^2}}} - 0.9} \right)}^2}}}\\ &+ 696.85{e^{ - {{\left( {\frac{{3{x^2} + 3{{({y - 25} )}^2}}}{{{{10}^2}}} + \frac{{3{z^2}}}{{{{40}^2}}} - 0.9} \right)}^2}}} + 696.85{e^{ - {{\left( {\frac{{3{x^2} + 3{{({y + 25} )}^2}}}{{{{10}^2}}} + \frac{{3{z^2}}}{{{{40}^2}}} - 0.9} \right)}^2}}}. \end{aligned}$$

Four small flames evenly surround a large flame at a distance of 25 mm. The diameter and height of the large flame are 20 mm and 80 mm, respectively; the diameter and height of the small flames are 10 mm and 40 mm, respectively. The maximum temperature is ∼1500 K. The forward projection process can be simulated according to Eqs. (1) and (2); thus, the displacements in the imaging plane of each camera under the influence of the temperature field can be calculated by ray tracing. For example, the displacements in the fourth camera are shown in Fig. 5(a).
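For reference, Eq. (18) can be transcribed directly into C++ as follows (coordinates in millimetres); a sketch like this is useful for regenerating the simulated phantom.

```cpp
#include <cmath>

// One flame term of Eq. (18): a shell-like Gaussian centered at (cx, cy)
// with radial scale r and vertical scale h.
static double flameTerm(double x, double y, double z,
                        double cx, double cy, double r, double h) {
    double q = 3.0 * ((x - cx) * (x - cx) + (y - cy) * (y - cy)) / (r * r)
             + 3.0 * z * z / (h * h) - 0.9;
    return std::exp(-q * q);
}

// Eq. (18): simulated temperature field in kelvin.
double temperatureField(double x, double y, double z) {
    return 303.15
         + 1196.85 * flameTerm(x, y, z,   0.0,   0.0, 20.0, 80.0)  // large flame
         +  696.85 * flameTerm(x, y, z,  25.0,   0.0, 10.0, 40.0)  // four small
         +  696.85 * flameTerm(x, y, z, -25.0,   0.0, 10.0, 40.0)  // flames at
         +  696.85 * flameTerm(x, y, z,   0.0,  25.0, 10.0, 40.0)  // +/-25 mm
         +  696.85 * flameTerm(x, y, z,   0.0, -25.0, 10.0, 40.0);
}
```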

Fig. 5. Simulated displacements in fourth camera: (a) displacements with 0% Gaussian noise, (b) displacements with 10% Gaussian noise, and (c) displacements with 20% Gaussian noise.

In real experiments, errors caused by vibration, light fluctuation, displacement extraction, and image noise may influence the reconstruction. Therefore, to evaluate the influence of noise, we added 0%–20% Gaussian noise to the displacements. The mathematical expression for adding noise is as follows:

$$\Delta ^{\prime} = \Delta + GN\ast NL\ast MD.$$

Here, $GN$ is the Gaussian noise distributed between 0 and 1, $NL$ is the noise level, ranging from 0% to 20%, and $MD$ is the maximum displacement in the image. The displacements with different levels of Gaussian noise are shown in Fig. 5. The probe area measures $100 \times 100 \times 100$ mm and is divided into $100 \times 100 \times 100$ grid cells. In each imaging plane, $167 \times 167$ points were selected. Therefore, the size of the matrix ${\textbf T}$ is $1004004 \times 1000000$, and it is even larger in the real experiments below; hence, ${\textbf T}$ is stored as a sparse matrix to reduce memory consumption.
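A small C++ sketch of the noise model of Eq. (19) follows. The article describes GN only as Gaussian noise "distributed between 0 and 1"; the sketch assumes a zero-mean, unit-variance normal sample, which is one plausible reading of that description.

```cpp
#include <random>
#include <vector>

// Eq. (19): disp' = disp + GN * NL * MD, applied to every displacement.
void addNoise(std::vector<double>& disp, double noiseLevel, double maxDisp,
              unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> gn(0.0, 1.0);  // assumed N(0, 1)
    for (double& d : disp)
        d += gn(rng) * noiseLevel * maxDisp;
}
```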

To investigate the effect of the RBF coverage on the reconstruction results, the reconstruction was performed under different RBF coverages; the coverage of the RBF is determined by $\sigma$ in Eq. (6). The reconstruction quality is quantified by the mean relative error ($MRE$), given by

$$MRE = \frac{1}{N}\sum\limits_{i = 1}^N {\frac{{|{V_i^{\prime} - {V_i}} |}}{{{V_i}}}} ,$$
where $V^{\prime}$ and V are the reconstructed and true values of the field, respectively. The reconstruction results are shown in Fig. 6. Reconstruction was also performed using the method of Nicolas et al. (FDM) with the same grid number. The figure shows that a small RBF coverage or the FDM produces considerable noise, and larger errors appear at the bottom of the flame. The FDM reconstruction at the edges is unsatisfactory (i.e., the left of the figure); therefore, its $MRE$ is relatively large. Although the $MRE$s are comparable, a large coverage yields smoother results with fewer artifacts. However, the memory consumption also increases with the RBF coverage. Therefore, considering both reconstruction quality and memory consumption, the coverage of the RBF is generally selected as $7d$ or $9d$.
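The $MRE$ of Eq. (20) is straightforward to compute; a direct C++ transcription:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Eq. (20): mean relative error between reconstructed and true fields.
double meanRelativeError(const std::vector<double>& recon,
                         const std::vector<double>& truth) {
    double sum = 0.0;
    for (std::size_t i = 0; i < truth.size(); ++i)
        sum += std::fabs(recon[i] - truth[i]) / truth[i];
    return sum / truth.size();
}
```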

Fig. 6. Slices of reconstruction results. (a). Reconstruction result of the method of Nicolas et al. (FDM). The reconstruction at the edges is unsatisfactory (i.e., left of the figure), therefore, the error is relatively large. (b)∼(f). Reconstruction results of our method under different RBF coverages.

Slices of the reconstruction results are shown in Fig. 7. The figure shows that the BOS projection model can reconstruct the temperature field accurately. However, owing to the influence of the Gaussian noise, there is also considerable noise in the reconstruction results of the ART algorithm. Moreover, in a BOS experiment, the displacement is relatively small; thus, the reconstruction is easily influenced by noise. TV regularization can effectively suppress this noise and make the results smoother. Figure 8 shows the profiles of the reconstruction results: the results with TV regularization are smoother and have less noise, especially at the edge of the reconstruction area, where the noise is effectively reduced. The figure shows that TV regularization can constrain the results effectively at different noise levels. As shown in Fig. 9, the $MRE$ was calculated for noise levels from 0% to 20%. The $MRE$ under TV regularization is significantly reduced for both methods; however, the $MRE$ of our method increases more slowly as the noise level increases.

Fig. 7. Slices of reconstruction result of our method in the plane $z = 0$: the first row is the result of ART, and the second row is the result of ART with TV regularization; each column is the reconstruction result at a different noise level (NL).

Fig. 8. Profiles of slices presented in Fig. 7 along X = 0.

Fig. 9. $MRE$ under different noise levels. The x-axis is the noise level, and the y-axis is the $MRE$. The $MRE$ under TV regularization is significantly reduced.

4.2 Experimental setup and calibration

The experimental setup consisted of 15 AVT Guppy F-125B cameras mounted on an ${\sim} 160^\circ$ arc surrounding the hot air above a candle flame, as shown in Fig. 10. Each camera contained a Sony ICX445 1292 × 964 pixel sensor and was fitted with a lens with a focal length of 12 mm. All cameras were synchronized by an external trigger signal with a frequency of 30 Hz. Three background plates painted with a sine wave pattern were fixed in a semicircle approximately 750 mm away from the candle, and the distance from the candle to the cameras was approximately 650 mm. The background plates were illuminated using six light-emitting diode light sources.

Fig. 10. Experimental setup. Fifteen cameras are arranged around the hot air flow above the candle flame. The cameras numbered with green are for reconstruction, and the cameras numbered with blue are for validation. The background plates were illuminated using light-emitting diode light sources.

The main difficulty of this multicamera calibration is that not all cameras can view the calibration plate simultaneously. Therefore, the multicamera calibration was performed using a flexible multicamera calibration method with a rotating calibration plate [52]. The 15 cameras were divided into three groups of five. A chessboard calibration plate was placed in the probe area, as shown in Fig. 11(a). As the calibration plate rotated, all cameras simultaneously captured images of it. Within each group, the 3D world coordinates of the rotating calibration points were calculated by multiview stereo vision. Then, by using the overlapping calibration points of adjacent groups, the three groups of world coordinate systems were transferred to a unified world coordinate system. Finally, the positions and orientations of the cameras were calculated based on the pinhole model.

Fig. 11. (a) Calibration plate and rotation device and (b) 3D coordinates of rotating points of three groups in a unified world coordinate system.

The 3D coordinates of the rotating points of the three groups are shown in Fig. 11(b), and the calibration results are shown in Fig. 12. The reprojection errors of all cameras were within 0.3 pixels. The absolute mean error (AME) and standard deviation (Std) of the reprojection error along the u-axis were 0.02 and 0.0257 pixels, respectively, and the AME and Std along the v-axis were 0.025 and 0.033 pixels, respectively. After the intrinsic and extrinsic parameters of the cameras were obtained, the ray path of any point in the image could be calculated; four calculated ray paths are shown in Fig. 12(a).

Fig. 12. Calibration results: (a) positions and orientations of 15 cameras (the red, green, and blue lines in the middle represent the world coordinate system, and the yellow lines are the ray path of four sample points) and (b) reprojection error of 15 cameras.

4.3 Experimental results

By comparing the distorted and undistorted images, the displacements in the imaging plane were calculated using the cross-correlation algorithm [51]. As shown in Fig. 13, $126 \times 185$ sample points were selected in each image. The size of the probe area was $120 \times 120 \times 192$ mm, and it was divided into $200 \times 200 \times 320$ grid cells.

Fig. 13. Displacements of all cameras: the cyan cuboid is the projection of the probe area on the image, and the orange, green, and blue lines in the middle represent the projection of the world coordinate system.

The projection matrix was calculated according to the description in Section 3 and, for comparison, using Nicolas et al.'s method (FDM) [32]. The reconstruction was performed using ART with TV regularization. To validate the method, 12 cameras (green in Fig. 10) participated in the BOS reconstruction, and the other three cameras (blue in Fig. 10) were used for validation. After the reconstruction result was obtained, the reprojection displacements of the other three cameras were calculated according to Eqs. (1) and (2) by ray tracing. Comparisons between the measurement displacements calculated by the cross-correlation algorithm and the reprojection displacements calculated by ray tracing are shown in Fig. 14(a). The correlation coefficient (CC) was used to quantify the similarity of each pair of images; the CC of images A and B is given by

$$CC = \frac{{{\mathop{\rm cov}} (A,B)}}{{\sqrt {{\mathop{\rm var}} (A){\mathop{\rm var}} (B)} }}.$$

Here, ${\mathop{\rm cov}} (A,B)$ is the covariance of images A and B, and ${\mathop{\rm var}} (A)$ is the variance of image A. The CCs of the three image pairs were 0.94, 0.91, and 0.90, which are larger than the results of the FDM. The figure shows that the measurement displacements and reprojection displacements are similar to each other. We then conducted a more detailed analysis of the measurement and reprojection displacements, using the Euclidean distance to evaluate the error between them. As shown in Fig. 14(b), the positions with larger errors are clearly visible in the figure. The error of the FDM is larger in the lower part of the plume; compared with the FDM, our method produces smaller errors. In addition, the distribution of the reprojection errors is shown in Fig. 16. The AME and Std of the reprojection error along the u-axis are 0.049 and 0.080 pixels, and the AME and Std along the v-axis are 0.027 and 0.041 pixels, respectively. Only a few errors are relatively large, and 90% of the errors are less than 0.05 pixels. The comparisons between our method and Nicolas et al.'s method for different frames are listed in Table 1. Slices of the results are shown in Fig. 15. The figure reveals that a complicated structure can also be reconstructed. However, we notice petal-like artifacts in the surrounding area, similar to the results in Ref. [32]; in that work, although the “3D mask” improved the accuracy, the artifacts still existed. The number of petal-like artifacts is related to the number of projection angles in linear tomography [53]. In the results of the FDM, the refractive index is larger than ours in the lower part of the flow; this problem can be mitigated by using a “3D mask” [32], which is not employed in our method. In addition, the gradient of the refractive index is shown in Fig. 17. In this experiment, 50 frames of BOS images were captured at a frequency of 30 Hz, and the BOS reconstruction was performed using the proposed method. The reconstruction results for five different frames are shown in Fig. 18. The quantitative reprojection errors were calculated as described above; the details are listed in Table 1. The reprojection errors of the different frames are approximately the same; therefore, the stability of the method is verified.
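For completeness, Eq. (21) admits a direct C++ transcription over two displacement images stored as flat arrays:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Eq. (21): correlation coefficient between images A and B.
double correlationCoefficient(const std::vector<double>& a,
                              const std::vector<double>& b) {
    const std::size_t n = a.size();
    double ma = 0.0, mb = 0.0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;                         // image means
    double cov = 0.0, va = 0.0, vb = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        cov += (a[i] - ma) * (b[i] - mb);     // covariance
        va  += (a[i] - ma) * (a[i] - ma);     // variances
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    return cov / std::sqrt(va * vb);
}
```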

Fig. 14. (a). Comparisons between measurement displacements and reprojection displacements: the first row is the measurement displacements. The second and third rows are the reprojection displacements of our method and Nicolas et al.’s method (FDM), respectively (the rendering scalar is consistent with Fig. 13). (b). The Euclidean distance error between measurement displacements and reprojection displacements. The first row is the error of our method, and the second row is the error of Nicolas et al.’s method (FDM).

Fig. 15. Slices of one instantaneous reconstruction result: slices along the vertical direction from −17.7 mm to −2.7 mm, and slices along the horizontal direction from −47.7 mm to 6.3 mm. The first row is the result of our method, and the second row is the result of Nicolas et al.’s method (FDM).

Fig. 16. Distribution of errors between measurement displacements and reprojection displacements: (a) scatter plot of errors, (b) frequency of different errors along the u-axis direction, and (c) frequency of different errors along the v-axis direction.

Fig. 17. Gradient of refractive index corresponding to Fig. 15. The figure shows the gradients in three directions for the results of our method and that of Nicolas et al. (FDM).

Fig. 18. Three-dimensional refractive index distribution of different frames. More distributions of consecutive frames are shown in Visualization 1, and the distribution under different viewing directions is shown in Visualization 2.

Table 1. Reprojection errors of five frames. The values in parentheses are the results of Nicolas et al.’s method (FDM).

5. Conclusion

A discrete projection model based on a 3D RBF was proposed for BOST. Using this method, the 3D flow field distribution can be obtained directly, without an additional integration step or finite-difference matrix. First, the accuracy of the method was verified by simulation, and an RBF coverage of $7d$ or $9d$ was recommended for BOST based on the simulation. Then, the refractive index of the hot air flow above a candle flame was reconstructed using 12 cameras, and the reprojection displacements in the other three cameras were calculated by ray tracing. Comparison with the measurement displacements calculated by the cross-correlation algorithm demonstrated that the reprojection displacements closely matched the measurement displacements. The proposed method obtains accurate results and has excellent anti-noise ability, providing a feasible approach for flow field measurements. However, many rays need to be processed; thus, the calculation of the projection matrix is time consuming, requiring approximately 23 h in the experiments even though 15 CPU threads were used. Therefore, in future research, simplification of the BOS projection matrix to reduce the time consumption and memory usage is planned. In addition, in some experimental scenarios, vibrations, water mist, or light-source fluctuations may influence the experiment; these strongly affect the extraction of the BOS image displacements and may produce displacements where there should be none. In future research, we will focus on these problems.

Funding

National Natural Science Foundation of China (62175110).

Disclosures

The authors declare no conflicts of interest.

Data Availability

The data underlying the results presented are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. L. Couch, D. A. Kalin, and T. McNeal, “Experimental investigation of image degradation created by a high-velocity flow field,” in Proceedings of SPIE - The International Society for Optical Engineering (1991), pp. 417–423.

2. B. Lam and C. Guo, “Complete characterization of ultrashort optical pulses with a phase-shifting wedged reversal shearing interferometer,” Light: Sci. Appl. 7(1), 30 (2018). [CrossRef]  

3. J. M. Weisberger, B. F. Bathel, G. C. Herring, G. M. Buck, S. B. Jones, and A. A. Cavone, “Multi-point line focused laser differential interferometer for high-speed flow fluctuation measurements,” Appl. Opt. 59(35), 11180–11195 (2020). [CrossRef]  

4. W. SchöPf, J. C. Patterson, and A. Brooker, “Evaluation of the shadowgraph method for the convective flow in a side-heated cavity,” Exp. Fluids 21(5), 331–340 (1996). [CrossRef]  

5. M. J. Hargather and G. S. Settles, “Retroreflective shadowgraph technique for large-scale flow visualization,” Appl. Opt. 48(22), 4449–4457 (2009). [CrossRef]  

6. P. Krehl and S. Engemann, “August Toepler — The first who visualized shock waves,” Shock Waves 5(1-2), 1–18 (1995). [CrossRef]  

7. A. K. Agrawal, N. K. Butuk, S. R. Gollahalli, and D. Griffin, “Three-dimensional rainbow schlieren tomography of a temperature field in gas flows,” Appl. Opt. 37(3), 479–485 (1998). [CrossRef]  

8. P. Aleiferis, A. Charalambides, Y. Hardalupas, N. Soulopoulos, A. M. K. P. Taylor, and Y. Urata, “Schlieren-based temperature measurement inside the cylinder of an optical spark ignition and homogeneous charge compression ignition engine,” Appl. Opt. 54(14), 4566–4579 (2015). [CrossRef]  

9. N. P. van Hinsberg and T. Rösgen, “Density measurements using near-field background-oriented Schlieren,” Exp. Fluids 55(4), 1720 (2014). [CrossRef]  

10. G. S. Settles, Schlieren and shadowgraph techniques: visualizing phenomena in transparent media (Springer Science & Business Media, 2001).

11. M. R. Davis, “Measurements in a subsonic turbulent jet using a quantitative schlieren technique,” J. Fluid Mech. 46(4), 631–656 (1971). [CrossRef]  

12. W. L. Howes, “Rainbow schlieren and its applications,” Appl. Opt. 23(14), 2449–2460 (1984). [CrossRef]  

13. A. Schwarz, “Multi-tomographic flame analysis with a schlieren apparatus,” Meas. Sci. Technol. 7(3), 406–413 (1996). [CrossRef]  

14. A. Srivastava, K. Muralidhar, and P. K. Panigrahi, “Reconstruction of the concentration field around a growing KDP crystal with schlieren tomography,” Appl. Opt. 44(26), 5381–5392 (2005). [CrossRef]  

15. A. Z. Nazari, Y. Ishino, F. Ito, H. Kondo, and S. Nakao, “Quantitative Schlieren Image-Noise Reduction Using Inverse Process and Multi-Path Integration,” J. Flow Control. Meas. & Vis. 8(2), 25–44 (2020). [CrossRef]  

16. S. B. Dalziel, G. O. Hughes, and B. R. Sutherland, “Whole-field density measurements by ‘synthetic schlieren’,” Exp. Fluids 28(4), 322–335 (2000). [CrossRef]  

17. M. Raffel, C. Tung, H. Richard, Y. Yu, and G. E. A. Meier, “Background oriented stereoscopic schlieren (BOSS) for full-scale helicopter vortex characterization,” in The Millennium 9th International Symposium on Flow Visualization (2000), pp. 450.

18. G. S. Settles and M. J. Hargather, “A review of recent developments in schlieren and shadowgraph techniques,” Meas. Sci. Technol. 28(4), 042001 (2017). [CrossRef]  

19. H. Richard and M. Raffel, “Principle and applications of the background oriented schlieren (BOS) method,” Meas. Sci. Technol. 12(9), 1576–1585 (2001). [CrossRef]  

20. H. Richard, M. Raffel, M. Rein, J. Kompenhans, and G. E. A. Meier, “Demonstration of the applicability of a Background Oriented Schlieren (BOS) method,” in Laser Techniques for Fluid Mechanics (Springer, 2002), pp. 145–156.

21. F. Sourgen, J. Haertig, and C. Rey, “Comparison Between Background Oriented Schlieren Measurements (BOS) and Numerical Simulations,” in 24th AIAA Aerodynamic Measurement Technology and Ground Testing Conference (2004).

22. L. Venkatakrishnan and G. E. A. Meier, “Density measurements using the Background Oriented Schlieren technique,” Exp. Fluids 37(2), 237–247 (2004). [CrossRef]  

23. L. Venkatakrishnan, “Density Measurements in an Axisymmetric Underexpanded Jet by Background-Oriented Schlieren Technique,” AIAA J. 43(7), 1574–1579 (2005). [CrossRef]  

24. F. Sourgen, F. Leopold, and D. Klatt, “Reconstruction of the density field using the Colored Background Oriented Schlieren Technique (CBOS),” Opt. Lasers Eng. 50(1), 29–38 (2012). [CrossRef]  

25. F. Leopold, M. Ota, D. Klatt, and K. Maeno, “Reconstruction of the Unsteady Supersonic Flow around a Spike Using the Colored Background Oriented Schlieren Technique,” J. Flow Control. Meas. & Vis. 1(2), 69–76 (2013). [CrossRef]  

26. M. Ota, K. Hamada, and K. Maeno, “Quantitative 3D density measurement of supersonic flow by colored grid background oriented schlieren (CGBOS) technique,” in 27TH International Congress of the Aeronautical Sciences (2010), pp. 1182–1188.

27. M. Ota, K. Hamada, H. Kato, and K. Maeno, “Computed-tomographic density measurement of supersonic flow field by colored-grid background oriented schlieren (CGBOS) technique,” Meas. Sci. Technol. 22(10), 104011 (2011). [CrossRef]  

28. M. Ota, H. Kato, R. Sakamoto, and K. Maeno, Quantitative Measurement and Reconstruction of 3D Density Field by CGBOS (Colored Grid Background Oriented Schlieren) Technique (Springer Berlin Heidelberg, 2012).

29. M. Ota, K. Kurihara, K. Aki, Y. Miwa, T. Inage, and K. Maeno, “Quantitative density measurement of the lateral jet/cross-flow interaction field by colored-grid background oriented schlieren (CGBOS) technique,” J. Visualization 18(3), 543–552 (2015). [CrossRef]  

30. B. Atcheson, I. Ihrke, W. Heidrich, A. Tevs, D. Bradley, M. Magnor, and H.-P. Seidel, “Time-resolved 3d capture of non-stationary gas flows,” ACM Trans. Graph. 27(5), 1–9 (2008). [CrossRef]  

31. M. F. Zeb, M. Ota, and K. Maeno, “Quantitative Measurement of Heat Flow in Natural Heat Convection Using Color-Stripe Background Oriented Schlieren (CSBOS) Method,” Journal of JSEM 11, s141–s146 (2011). [CrossRef]  

32. F. Nicolas, V. Todoroff, A. Plyer, G. Le Besnerais, D. Donjat, F. Micheli, F. Champagnat, P. Cornic, and Y. Le Sant, “A direct approach for instantaneous 3D density field reconstruction from background-oriented schlieren (BOS) measurements,” Exp. Fluids 57(1), 13 (2016). [CrossRef]  

33. F. Nicolas, D. Donjat, O. Léon, G. Le Besnerais, F. Champagnat, and F. Micheli, “3D reconstruction of a compressible flow by synchronized multi-camera BOS,” Exp. Fluids 58(5), 46 (2017). [CrossRef]  

34. F. Nicolas, D. Donjat, A. Plyer, F. Champagnat, G. Le Besnerais, F. Micheli, P. Cornic, Y. Le Sant, and J. M. Deluc, “Experimental study of a co-flowing jet in ONERA’s F2 research wind tunnel by 3D background oriented schlieren,” Meas. Sci. Technol. 28(8), 085302 (2017). [CrossRef]  

35. S. J. Grauer, A. Unterberger, A. Rittler, K. J. Daun, A. M. Kempf, and K. Mohri, “Instantaneous 3D flame imaging by background-oriented schlieren tomography,” Combust. Flame 196, 284–299 (2018). [CrossRef]  

36. A. Aminfar, J. Cobian-Iñiguez, M. Ghasemian, N. Rosales Espitia, D. R. Weise, and M. Princevac, “Using Background-Oriented Schlieren to Visualize Convection in a Propagating Wildland Fire,” Combust. Sci. Technol. 192(12), 2259–2279 (2020). [CrossRef]  

37. H. Liu, H. Jianqing, L. Li, and W. Cai, “Volumetric imaging of flame refractive index, density, and temperature using background-oriented Schlieren tomography,” Sci. China Technol. Sci. 64(1), 98–110 (2021). [CrossRef]  

38. H. Liu, C. Shui, and W. Cai, “Time-resolved three-dimensional imaging of flame refractive index via endoscopic background-oriented Schlieren tomography using one single camera,” Aerosp. Sci. Technol. 97, 105621 (2020). [CrossRef]  

39. G. Meier, “Computerized background-oriented schlieren,” Exp. Fluids 33(1), 181–187 (2002). [CrossRef]  

40. S. J. Grauer and A. M. Steinberg, “Fast and robust volumetric refractive index measurement by unified background-oriented schlieren tomography,” Exp. Fluids 61(3), 80 (2020). [CrossRef]  

41. M. Raffel, H. Richard, and G. E. A. Meier, “On the applicability of background oriented optical tomography for large scale aerodynamic investigations,” Exp. Fluids 28(5), 477–481 (2000). [CrossRef]  

42. K. M. Hanson and G. W. Wecksung, “Local basis-function approach to computed tomography,” Appl. Opt. 24(23), 4028–4039 (1985). [CrossRef]  

43. M. Schweiger and S. R. Arridge, “Image reconstruction in optical tomography using local basis functions,” J. Electron. Imaging 12(4), 583–593 (2003). [CrossRef]  

44. C. Byrne, D. Gordon, and D. Heilper, “Models for biomedical image reconstruction based on integral approximation methods,” in 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI) (2012), pp. 70–73.

45. S. Ellis and A. J. Reader, “Kernelised EM image reconstruction for dual-dataset PET studies,” in 2016 IEEE Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop (NSS/MIC/RTSD) (2016), pp. 1–3.

46. A. Biguri, H. Towsyfyan, R. Boardman, and T. Blumensath, “Numerically robust tetrahedron-based tomographic forward and backward projectors on parallel architectures,” Ultramicroscopy 214, 113016 (2020). [CrossRef]  

47. E. Schairer, L. K. Kushner, and J. T. Heineck, “Measurements of Tip Vortices from a Full-Scale UH-60A Rotor by Retro-Reflective Background Oriented Schlieren and Stereo Photogrammetry,” (2013).

48. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60(1-4), 259–268 (1992). [CrossRef]  

49. D. Strong and T. Chan, “Edge-preserving and scale-dependent properties of total variation regularization,” Inverse Probl. 19(6), S165–S187 (2003). [CrossRef]  

50. T. Goldstein and S. Osher, “The Split Bregman Method for L1-Regularized Problems,” SIAM J. Imaging Sci. 2(2), 323–343 (2009). [CrossRef]  

51. F. Scarano, “Iterative image deformation methods in PIV,” Meas. Sci. Technol. 13(1), R1–R19 (2002). [CrossRef]  

52. H. J. Cai, Y. Song, Y. Q. Shi, Z. Cao, Z. Y. Guo, Z. H. Li, and A. Z. He, “Flexible multicamera calibration method with a rotating calibration plate,” Opt. Express 28(21), 31397–31413 (2020). [CrossRef]  

53. C. Wei, K. K. Schwarm, D. I. Pineda, and R. Mitchell Spearrin, “Physics-trained neural network for sparse-view volumetric laser absorption imaging of species and temperature in reacting flows,” Opt. Express 29(14), 22553–22566 (2021). [CrossRef]  

Supplementary Material (2)

Visualization 1: The reconstruction results of consecutive frames. The time interval between different frames is 33.3 ms.
Visualization 2: The display of one reconstruction result under different viewing directions.

