
Resolution and penetration depth of reflection-mode time-domain near infrared optical tomography using a ToF SPAD camera


Abstract

In a turbid medium such as biological tissue, near-infrared optical tomography (NIROT) can image the oxygenation, a highly relevant clinical parameter. To be an efficient diagnostic tool, NIROT has to provide high spatial resolution and depth sensitivity, fast acquisition, and ease of use. Since near-infrared light cannot fully penetrate many tissues, such tissues need to be measured in reflection mode, i.e., with the light emission and detection components placed on the same side. Thanks to recent advances in single-photon avalanche diode (SPAD) array technology, we have developed a compact reflection-mode time-domain (TD) NIROT system with a large number of channels, which is expected to substantially increase the resolution and depth sensitivity of the oxygenation images. The aim was to test this experimentally for our SPAD camera-empowered TD NIROT system. Experiments with one and two inclusions, i.e., optically dense spheres of 5 mm radius, immersed in a turbid liquid were conducted. The inclusions were placed at depths from 10 mm to 30 mm and moved across the field of view. In the two-inclusion experiment, two identical spheres were placed at a lateral distance of 8 mm. We also compared a short exposure time of 1 s, suitable for dynamic processes, with a long exposure of 100 s. Additionally, we imaged complex geometries inside the turbid medium, which represented structural elements of a biological object. The quality of the reconstructed images was quantified by the root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and Dice similarity. The two small spheres were successfully resolved up to a depth of 30 mm. We demonstrated robust image reconstruction even at 1 s exposure. Furthermore, the complex geometries were also successfully reconstructed. The results demonstrate the markedly enhanced performance of the SPAD camera-based NIROT system.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging of turbid media is highly relevant in many areas, and this is particularly true for imaging biological tissue for clinical and preclinical applications [1–4]. One clinical application of paramount importance is imaging the oxygenation of biological tissue. To be an efficient diagnostic tool, near-infrared optical tomography (NIROT) has to provide high spatial resolution and depth sensitivity, fast acquisition, and ease of use.

A NIROT instrument consists of light emitters (lasers or LEDs) and light detectors (cameras, photodiodes, or photomultipliers) [5]. Two geometrical arrangements are commonly used: transmission and reflection mode. In transmission mode, the tissue is placed between the sources and the detectors. However, in most applications the transmission measurement cannot yield enough information about contrast in depth [6]; in addition, many tissues cannot be penetrated by near-infrared light, i.e., only a limited number of tissues can be imaged, which decreases the clinical value. Such tissues therefore need to be measured in reflection mode, i.e., with the light emission and detection components placed on the same side. This has a substantially higher clinical relevance, because most areas of the human body are accessible [6].

To achieve higher resolution and penetration depth and reduced measurement time, many groups have turned to the time-domain (TD) modality, which provides rich information through the measurement of the photon time of flight (ToF) [3,7–10]. However, these NIROT systems were based on photomultiplier tubes, which led to large systems with a limited number of detectors [4] and hence a relatively low image quality. One obvious option to improve the image quality is to substantially increase the number of detectors [9]. Unfortunately, this is impossible for systems based on photomultiplier tubes. Silicon photomultipliers (SiPMs) have gained interest due to their large sensitive area, high fill factor, high time resolution, and low power consumption [8]. However, the spatial resolution of SiPM arrays is typically lower compared to single-photon avalanche diode (SPAD) arrays [11]. To address this problem, we have turned to novel SPAD array technology and designed arrays specifically for NIROT, which have several orders of magnitude more detectors than state-of-the-art systems available to date [12–14].

A previous study [9] demonstrated an enhanced performance of the SPAD-camera based approach in transmission mode and reconstructed the shape of a two-dimensional (2D) opaque object, i.e., black paper shapes embedded between two 25 mm diffusive slabs. Unfortunately, the study was limited by the prerequisite of precisely knowing the depth of the embedded opaque objects. In addition, the transmission-mode design impedes the transfer to clinical applications.

In contrast, we have developed a reflection-mode TD NIROT system with 1024 SPAD detectors [14–17], i.e., 32 times more than any previous system based on photomultipliers. It employs a 3D image reconstruction algorithm that requires no prior structural and/or depth information [9,18,19] about the object. The system has previously been tested in various solid phantom studies [16,20–22] and in an in vivo study [23]. These tests showed high imaging performance for a diffusive volume with inclusions at a depth of 15 mm. Here, we aim to systematically test the depth sensitivity and resolution of this TD NIROT system.

Previous studies [24–26] have investigated the depth sensitivity or resolution of NIROT using liquid phantom experiments. Liquid phantoms offer higher flexibility than solid phantoms. Therefore, we present a systematic characterization of the depth sensitivity and resolution of the novel TD NIROT system using a custom-made liquid phantom with movable inclusions.

2. Methods and materials

2.1 SPAD camera-based imaging system

The TD NIROT hardware includes two major parts: the imaging instrument and the imaging probe.

Similar to a medical ultrasound imaging device, the probe (Fig. 1(a)) is designed for the application site, i.e., the target’s surface. It consists of a custom-designed tube-shaped probe that arranges 11 light emission points equidistantly on a circle of $\sim 22 mm$ radius around the camera’s field of view (FoV). The probe has a bio-compatible silicone surface, and the tube structure shields the FoV from ambient light. The light re-emerging from the tissue in the FoV is projected by a NIR lens SY110M 1.68 mm F1.8 (Theia Technologies, USA) onto the custom-made $32\times 32$-pixel single-photon avalanche diode (SPAD) sensor [14,17]. Photon arrival times are transferred to a Xilinx Kintex 7 FPGA board (XEM7360, Opal Kelly, USA). The FPGA builds histograms of the photon arrival times for each pixel and transfers them to the control computer.
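
As an illustration of this histogramming step, the sketch below accumulates photon time tags into per-pixel histograms of 256 gates of 48.8 ps (the gate settings given in Section 2.3). The function and array names are ours; the actual processing runs in the FPGA firmware, not in Python.

```python
import numpy as np

# Illustrative sketch of the per-pixel histogramming done on the FPGA,
# assuming flat numpy arrays of photon arrival times (in ps) and pixel ids.
N_GATES = 256
GATE_WIDTH_PS = 48.8  # temporal resolution of one time gate

def build_histograms(arrival_times_ps, pixel_ids, n_pixels=32 * 32):
    """Accumulate photon arrival times into per-pixel TPSF histograms."""
    gates = np.floor(arrival_times_ps / GATE_WIDTH_PS).astype(int)
    valid = (gates >= 0) & (gates < N_GATES)
    hist = np.zeros((n_pixels, N_GATES), dtype=np.int64)
    # Unbuffered accumulation: repeated (pixel, gate) pairs are all counted.
    np.add.at(hist, (pixel_ids[valid], gates[valid]), 1)
    return hist
```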

Fig. 1. (a) Schematic depiction of the liquid phantom setup with the TD NIROT. Photos of (b) the letter phantoms and (c) the spherical inclusions next to a 1 euro coin. (d) 3D view of the finite element model used for image reconstruction: sensitivity map of the phase component (Eq. (1)) for the sources (yellow dots) and selected detectors (purple dots). Note that the volume is a cylindrical model of $\varnothing 90 \times 50 mm$. We made it smaller than the actual liquid phantom to speed up the image reconstruction. (e) Temporal data processing and image reconstruction. Eleven images of $32\times 32$ pixels, each pixel containing 256 time gates, were obtained. We selected the pixels within the field of view and the temporal windows where the signal is above the noise level. Note that the dark spots in the images are hot pixels. The selected temporal distributions were transformed into the frequency domain (FD), and the FD signal at 100 MHz was selected for the image reconstruction. The blue area marks the reconstructed inclusion and the yellow dots indicate the ground-truth shape of the letter B.

The imaging instrument consists of the remaining hardware. This includes the light source, a super-continuum laser SuperK Extreme EXR-15 (NKT, Denmark), which emits picosecond light pulses with a broad spectrum in the near infrared (NIR) region. An acousto-optic tunable filter (AOTF) selects specific wavelengths needed for spectral measurements. A fiber switch distributes the pulses to 11 optical fibers to guide the light to the probe and the tissue. The instrument also contains a laptop that controls the whole system. More details about the system’s hard- and software can be found in [15,16,20,23].

The arrangement of the 11 sources and the detectors yields a sensitivity that is symmetric about the central axis, as shown in Fig. 1(d). The sensitivity is defined as the sum of the absolute partial derivatives of the mean time of flight $\theta (s,d)$ with respect to the absorption $\mu _{a}$ over all sources $s$ and selected detectors $d$ (Eq. (1)). The values were normalized to the maximum value.

$$sensitivity = \sum _{s,d}\left| \dfrac{\partial \theta \left(s,d\right) }{\partial \mu_{a}}\right|.$$
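
As a hedged illustration, Eq. (1) can be evaluated from a precomputed Jacobian of the mean time of flight with respect to $\mu_a$; the array layout and function name below are assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def sensitivity_map(jacobian):
    """Eq. (1): sum of |d(mean ToF theta(s, d)) / d(mu_a)| over all
    source-detector pairs, normalized to its maximum. `jacobian` is assumed
    to be a precomputed array of shape (n_sources, n_detectors, n_nodes)."""
    sens = np.abs(jacobian).sum(axis=(0, 1))
    return sens / sens.max()
```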

2.2 Liquid phantom experiments

The measurements were conducted on a liquid phantom in reflection mode. The phantom was equipped with a holder movable along three axes, which enables free positioning of one or multiple inclusions [27]. It also had a mechanical feature to control the distance between two inclusions. The background optical properties were adjusted by adding Intralipid 20% solution (IL) (Fresenius Kabi AG, Bad Homburg, Germany) and India ink to distilled water. The optical properties were assessed with the commercial frequency-domain (FD) NIRS device Imagent (ISS Inc., Champaign, IL, USA).

2.2.1 One-inclusion measurement: sensitivity

We assessed the sensitivity in a series of measurements with one spherical PDMS inclusion with a radius of $5 mm$ (Fig. 1(c)) [20]. The inclusion was placed at 5 depths (from $10 mm$ to $30 mm$ in steps of $5 mm$) and 3 lateral distances from the center of the FoV (from $0 mm$ to $10 mm$ in steps of $5 mm$). In total, 15 positions were measured at a wavelength of $725 nm$ (Fig. 2). Since the sensitivity map (Fig. 1(d)) is centrally symmetric, the depth sensitivity can be extrapolated to the other directions in the whole volume. We used both a short acquisition ($1 s$ exposure time) and a long acquisition ($100 s$ exposure time) for the liquid phantom measurements. The optical properties of the liquid background and the inclusions (spheres) are listed in Table 1. The optical properties of the liquid are similar to those of the head of a preterm infant [28].

Fig. 2. Cross-sectional view (x = $45 mm$) of the liquid phantom for the experiment with one (a) and two (b) inclusions. For (a) the spherical silicone inclusion has a radius of $5 mm$ and was placed at 3 distances ($0 mm$, $5 mm$, $10 mm$) from the center and at 5 depths ($10 mm$, $15mm$, $20 mm$, $25 mm$ and $30 mm$). For (b) the two spherical inclusions were placed at a distance of $8 mm$ at the same 5 depths. The spheres in the figure are at a depth of $20 mm$. The sources are marked in yellow and detectors in purple as in Fig. 1(d).

Table 1. Optical properties of the phantoms

2.2.2 Two-inclusion measurements: resolution

The imaging resolution was tested on a liquid phantom containing two inclusions. Two identical spheres with a radius of $5 mm$ were placed at a lateral distance of $8 mm$ from each other, at 5 depths and 3 lateral positions (Fig. 2(b)), and measured at a wavelength of $689 nm$. The optical properties are listed in Table 1. Both the liquid mixture and the two spheres were newly made and have optical properties that differ from those of the previous phantom. We used both short ($1 s$) and long ($100 s$) exposure times.

2.2.3 Imaging complex structures

Real-world objects, either biological or technical, have much more complex shapes. As our intention was to challenge the TD NIROT, we placed non-trivial structures inside the liquid phantom, which can be compared to complex structures inside tissue, e.g., large blood vessels, small bones, brain ventricles, etc. Four silicone phantoms were made in the shapes of the letters B, O, R and L (Fig. 1(c)) with a 3D-printed mold. The strokes of the letters are cylindrical with a circular cross-section of only $\sim 1.5 mm$ radius. The dimensions of the letters are $\sim 22.5 mm \times 18 mm$. The optical properties of the four letter phantoms are $\mu _a=0.023 mm^{-1}$ and $\mu _s'=0.64 mm^{-1}$ at a wavelength of $\lambda = 725 nm$. The optical properties of the liquid were again those listed in Table 1. The letters were placed in the liquid mixture at a depth of $10 mm$ and measured with the imaging instrument one by one. The exposure time was $100 s$ per source. The reconstructed results were segmented and compared with the ground-truth shapes.

2.3 Image reconstruction

We used model-based image reconstruction. The forward problem of light-matter interaction was approximated by the diffusion equation (DE) [2]. The re-emitted photon density distribution $\phi (r,t)$ at position $r$ and time $t$ satisfies:

$$[-\nabla\cdot\kappa(r)\nabla+\mu_a(r)+\frac{1}{v}\frac{\partial}{\partial t}]\phi(r,t)=q(r,t),$$
with the air-tissue boundary described by Robin conditions [29]. The DE in frequency domain (FD) is written as
$$[-\nabla\cdot\kappa(r)\nabla+\mu_a(r)+\frac{i\omega}{v}]\Phi(r,\omega)=Q(r,\omega),$$
where $v$ is the speed of light in the medium, $\Phi (r,\omega )$ represents the fluence at position $r$ and frequency $\omega$ in the FD, and $q$ and $Q$ are the isotropic source terms in the TD and FD, respectively. $\kappa$ is the diffusion coefficient, with $\kappa = [3\mu _s']^{-1}$ for the TD DE [30] and $\kappa = [3(\mu _a+\mu _s')]^{-1}$ for the FD DE [31]. As illustrated in Fig. 1(e), the temporal data (256 time gates with a temporal resolution of 48.8 ps) at the 290 detectors within the FoV for all 11 sources, i.e., 3190 temporal distributions, were transformed into the frequency domain with the discrete Fourier transform [18]. We used only one frequency, $100 MHz$, to speed up the image reconstruction. The 3D image reconstruction was achieved by solving a least-squares minimization problem:
$$\boldsymbol{{\mu_a}}^*= arg \min_{\boldsymbol{\mu_a}} \{ \left\lVert{\Phi^S(\boldsymbol{\mu_a}) - \Phi^M} \right\rVert ^2_2+ \beta \left\lVert{\boldsymbol{\mu_a} - \boldsymbol{\mu_{a0}}}\right\rVert ^2_2 \},$$
where Tikhonov regularization with regularization parameter $\beta$ was applied to the minimization of the quadratic difference between the simulated data $\Phi ^S(\mu _a)$ and the measured data $\Phi ^M$. $\left \lVert {\cdot }\right \rVert ^2_2$ denotes the squared L2 norm. The optimal solution $\boldsymbol {{\mu _a}}^*$ was obtained by Newton's method [2]. We used a GPU-accelerated finite element method (FEM) model to solve the DE [32–34]. A cylindrical mesh of $\varnothing 90\times 50 mm$ with 316360 nodes was created to generate the forward results. To decrease the ill-posedness of the inverse problem, a coarser homogeneous mesh of 3441 nodes with the bulk optical properties was used as the initial guess for the image reconstruction. The coarser mesh also speeds up the optimization and reduces the memory burden. More details about the image reconstruction are described in [16].
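
For illustration, the two core numerical steps can be sketched as follows: reducing each time-gate histogram to the complex FD datum at 100 MHz, and applying one Tikhonov-regularized Gauss-Newton update for Eq. (4). This is a minimal sketch under stated assumptions (a dense real-valued Jacobian and stacked real/imaginary data), not the GPU-based NIRFASTer pipeline actually used; all names are ours.

```python
import numpy as np

DT_S = 48.8e-12   # width of one time gate [s]
F_MOD = 100e6     # modulation frequency used for reconstruction [Hz]

def tpsf_to_fd(tpsf, f=F_MOD, dt=DT_S):
    """Discrete Fourier transform of TPSFs (shape (..., 256)) at a single
    frequency f, yielding the complex FD data Phi used in Eqs. (3)-(4)."""
    k = np.arange(tpsf.shape[-1])
    return (tpsf * np.exp(-2j * np.pi * f * k * dt)).sum(axis=-1)

def gauss_newton_step(mu_a, mu_a0, phi_sim, phi_meas, J, beta):
    """One regularized Gauss-Newton update for Eq. (4). Complex FD data are
    stacked as [real; imag]; J is the corresponding real-valued Jacobian
    d(phi_sim)/d(mu_a) with shape (2 * n_measurements, n_nodes)."""
    r = np.concatenate([(phi_meas - phi_sim).real, (phi_meas - phi_sim).imag])
    H = J.T @ J + beta * np.eye(J.shape[1])   # Gauss-Newton Hessian + Tikhonov
    g = J.T @ r - beta * (mu_a - mu_a0)       # half the negative gradient
    return mu_a + np.linalg.solve(H, g)
```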

2.4 Evaluation metrics

To evaluate the performance of the image reconstruction, we applied three commonly used metrics: the root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and Dice similarity. We denote the reference image, i.e., the 3D distribution of absorption coefficients, as $\bf {x}$, where $x_i, i \in N = \left \{ 1, 2,\ldots, n\right \}$ is the value at the i-th node of an n-node mesh. $\bf {\hat x}$ and $\hat x_i$ represent the reconstructed image and its node values. The RMSE quantifies the error of the reconstruction: the smaller its value, the smaller the deviation of the reconstructed image from the ground truth.

$$RMSE = \sqrt {\dfrac{1}{n}\sum ^{n}_{i=1} (\hat{x}_i - x_i ) ^{2}} ,$$

The PSNR assesses the reconstructed image quality. The quality is better with a larger PSNR value.

$$PSNR = 10\log _{10} \dfrac{\max [ \bf{\hat x} ]^{2}}{ \dfrac{1}{n}\sum ^{n}_{i=1} (\hat{x}_i - x_i ) ^{2} },$$

The Dice similarity quantifies the accuracy of the reconstructed inclusion shape and position. The Dice index is 1 when the reconstructed result corresponds exactly to the ground truth. The reconstructed images were segmented using a threshold equal to half of the sum of the maximum and the background (median) value, as defined below Eq. (7).

$$Dice= \dfrac{2\left| S\left( {\bf x}\right) \cap S\left( {\bf {\hat {x}}}\right) \right| }{\left| S\left({\bf x}\right) \right| +\left| S\left({\bf \hat x}\right) \right|},$$
where $S(\bf {x})$ denotes the segmented image with the threshold defined as $\frac {1}{2} [ max ({\bf x} ) + med ({\bf x} ) ]$ and $|\cdot |$ denotes the cardinality of a set.
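
A compact numpy sketch of the three metrics, using the segmentation threshold from Eq. (7), is given below; the function names are ours and the arrays are assumed to hold the node-wise $\mu_a$ values on the same mesh.

```python
import numpy as np

def rmse(x_hat, x):
    """Eq. (5): root mean squared error over all mesh nodes."""
    return np.sqrt(np.mean((x_hat - x) ** 2))

def psnr(x_hat, x):
    """Eq. (6): peak signal-to-noise ratio in dB."""
    return 10 * np.log10(np.max(x_hat) ** 2 / np.mean((x_hat - x) ** 2))

def dice(x_hat, x):
    """Eq. (7): Dice similarity of the segmented images, using the
    threshold 0.5 * (max + median) for each image."""
    segment = lambda v: v >= 0.5 * (np.max(v) + np.median(v))
    a, b = segment(x), segment(x_hat)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```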

3. Results

3.1 One-inclusion measurement: depth sensitivity

The number of photons acquired by the SPAD camera was approximately $1.2\times 10^7$ photons per second at $725 nm$ and $1.0\times 10^7$ photons per second at $689 nm$. For the one-inclusion experiment, Fig. 3 and Fig. 4 show the two cross-sectional views and the one-dimensional distribution of the reconstructed $\mu _a$ at each depth of the inclusion for the $100 s$ exposure measurement. The same information is plotted for the $1 s$ exposure case in Fig. 5 and Fig. 6. For the depth of $25 mm$ at all 3 distances (a–c), the inclusion regions were more distinct from the background for the long exposure than for the $1 s$ case. The deepest inclusion was generally reconstructed shallower than its actual location, but this effect was smaller for the longer exposure time, i.e., the sphere was imaged closer to its actual location. The evaluation metrics, i.e., PSNR, RMSE and Dice similarity, are shown in Fig. 7. With only $1 s$ exposure, the metrics were comparable to those of the $100 s$ exposure. The Dice similarity was higher with the longer exposure time for a lateral distance of $10 mm$.

Fig. 3. Results of the one-inclusion measurement: Image reconstructions of the sphere are shown at all 15 positions. Data were measured with $100s$ exposure time. The green circle indicates the ground truth, i.e., the position of the sphere. The sphere is detected at all depths and the signal-to-noise ratio decreases with depth.

Fig. 4. One-dimensional diagram of the reconstructed absorption coefficient $\mu _a$ and contrast at 5 depths and 3 distances from the center for an exposure time of $100s$.

Fig. 5. The same results of the image reconstructions of the sphere at all 15 positions in the one-inclusion measurement, but for the 1s exposure time.

Fig. 6. One-dimensional comparison of the reconstructions at 5 depths and 3 distances from the center for the exposure time of $1s$.

Fig. 7. The evaluation metrics for the one-inclusion measurement and the different depths.

3.2 Two-inclusion measurements: resolution

The 3D images for the two-inclusion case are depicted in Fig. 8(a) ($100 s$) and Fig. 8(b) ($1 s$). The 1D distribution of $\mu _a$ is shown in Fig. 9(a) ($100 s$) and Fig. 9(b) ($1 s$). The two inclusions at depths of $10 mm$ to $25 mm$ were successfully resolved at the correct depth for both the short and the long exposure. The results for $10 mm$ showed more artifacts, which may be caused by the spheres being placed too close to the liquid-PDMS interface. At the depth of 30 mm, the two reconstructed inclusions appeared shallower than their actual locations. Nevertheless, the two inclusions were resolved.

Fig. 8. Image reconstruction results for the (a) 100s-exposure and (b) 1s-exposure experiments on a liquid phantom with two spheres located in the center of the FoV ($y = 45mm$ in the model) at depths of $10 mm$, $15 mm$, $20 mm$, $25 mm$ and $30 mm$. The green circles mark the true locations of the inclusions.

Fig. 9. One-dimensional distribution of reconstructed absorption coefficients $\mu _a$ across the inclusions at various depths (d = $10mm$ to $30 mm$ at a step of $5 mm$) for an exposure time of (a.1) $100s$ and (b.1) $1s$ compared with the ground truth (GT) values. (a.2) and (b.2) give an enlarged view for the reconstructed 1D distribution at the depth of 30 mm.

3.3 Reconstruction result of complex shapes

Figure 10 shows the segmented 3D image reconstructions of the letters. The letters B, O and L were correctly visualized, while R has a small deformation due to its more complex structure. Nevertheless, the locations of all letters were correctly recovered.

Fig. 10. Image reconstruction results for the letters: the yellow markings indicate the true shapes of the four letters.

4. Discussion

We demonstrated the enhanced quality of 3D images generated with the novel reflection-mode TD NIROT system in a series of liquid phantom measurements. The system was capable of detecting a small ischemia-mimicking inclusion placed at various depths and distances from the center of the FoV, i.e., depths from $10 mm$ to $30 mm$ and lateral distances from 0 to $10 mm$. It also resolved two identical inclusions at a distance of $8 mm$ at depths from $10 mm$ to $30 mm$. Moreover, complex structures in the shape of letters were successfully imaged (Fig. 10).

We have shown the ability of the system to detect and image in 3D inclusions of $5 mm$ radius (i.e., $\sim 0.5 cm^3$) located within a $9.5 cm^3$ volume-of-view (VoV). Furthermore, we demonstrated that two inclusions placed at a distance of $8 mm$ from each other were resolved at a depth of $30 mm$, i.e., imaged as two separate objects. Thus, we confirmed a spatial resolution of the system of at least $8 mm$ down to a depth of $30 mm$. To the best of our knowledge, this is the first published attempt to determine the resolution of a NIROT system within the whole VoV, in both lateral and longitudinal directions.

We compared the system performance at two exposure times, $1 s$ and $100 s$, to study the influence of the number of detected photons on the depth sensitivity and resolution. The performance of the fast acquisition in the one-inclusion measurement was comparable to that of the long-exposure measurement, especially for the central positions. We also noticed that in some cases, e.g., at a depth of $15 mm$ at the central position, the short-exposure measurement yielded better image quality than the longer exposure. This may be explained by a slight instability of the liquid phantom, which leads to tiny changes in the optical properties over time. This is an important aspect for in vivo measurements, where the optical properties depend on physiological processes that change continuously. Thus, for in vivo application it is certainly important to use short exposure times. The performance in the more difficult cases, i.e., larger lateral distances and deeper locations, improved more evidently with increased exposure time (Figs. 3–6). The mismatch in $\mu _s'$ between the liquid mixture and the sphere(s) (Table 1) influences the reconstruction results, given that the initial guess of $\mu _s'$ was set to the value of the liquid mixture [16]. Since such a mismatch may also occur in real tissue, our intention was to test such conditions. In the two-inclusion measurement, we noticed more artifacts at the shallow depth of $10 mm$ (Fig. 8). A possible source of these artifacts is the silicone interface between the liquid and the imaging probe, i.e., the mismatch of refractive indices and optical properties between the silicone membrane and the liquid. This influence is more prominent when the spheres are closer to the silicone interface.

Measurements on the liquid phantom enable a highly flexible placement of the inclusions compared to a solid phantom. However, this placement is not as stable as in a solid phantom and the positioning of the inclusions is less accurate, especially in the case of the letters. The letters were fixed by two or three fishing lines in the turbid liquid, but arguably could have tilted slightly due to the constant stirring of the liquid. Real biological tissues are highly heterogeneous and hypoxic regions have variable shapes. Letters have much more complex structures than commonly used simple shapes, e.g., spheres, cylinders or cubes. Therefore, we used them to mimic complex hypoxic regions in the tissue in order to validate the performance of the system for such a task.

It has been theoretically estimated that a reflection-mode system using TD data with a time window of 9.6 ns can reach a maximum depth of 6 cm [10]. However, this has never been shown experimentally. Previous experimental studies resolved two black spheres of $5 mm$ diameter at a lateral distance of $10 mm$ at a depth of $17.5 mm$ in reflection mode [35] and resolved 2D complex black shapes with a known object depth of $25 mm$ in transmission mode [9]. However, the extremely high absorption coefficients of the inclusions used in these experiments are unrealistic, not representative of physiological conditions, and also violate the Rytov approximation [2,36]. In contrast, we have used realistic optical properties. Another experimental study used realistic tissue optical properties, but failed to resolve deep inclusions due to a limited number of source-detector channels [24]. To the best of our knowledge, this is the first time that accurate 3D reconstructions of complex structures and of small inclusions at a depth of 25 mm to 30 mm have been achieved in a reflection-mode measurement without a priori information.

5. Conclusion

The TD NIROT system was systematically tested in a series of liquid phantom experiments. We have demonstrated that the system detects inclusions with high accuracy at locations up to a depth of $25 mm$ and up to $10 mm$ away from the center of the FoV. The system is also sensitive to inclusions at a depth of $30 mm$. The TD NIROT was capable of resolving two small spheres at a distance of $8 mm$ up to a depth of $30 mm$. A short exposure time of only $1 s$ is sufficient to yield a performance comparable to a long exposure time of $100 s$. In addition, we have shown that the system can image complex asymmetric objects with hollows, i.e., letters, with ischemia-mimicking optical properties. We have demonstrated that the rich information provided by the SPAD camera enhances depth sensitivity and resolution in a scattering medium in reflection mode, and enables relatively short exposure times. The combination of these features makes SPAD camera-based TD NIROT highly promising for real-world clinical application.

Funding

Universität Zürich (K-84302-02-01, MEDEF21-025); National Council of Science and Technology; Innovationspool of the University Hospital Zurich; Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (159490, 197079).

Acknowledgement

The authors would like to thank Lucius Miller and Ramon Mindel for their contributions to the control software and the experimental setups.

Disclosures

Martin Wolf: OxyPrem AG (I, P, S), Alexander Kalyanov: OxyPrem AG (E).

Data availability

Data presented in this paper are available from the authors upon reasonable request.

References

1. B. Chance, “Optical method,” Annu. Rev. Biophys. 20, 1–30 (1991). [CrossRef]  

2. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15(2), R41–R93 (1999). [CrossRef]  

3. Y. Yamada, H. Suzuki, and Y. Yamashita, “Time-domain near-infrared spectroscopy and imaging: a review,” Appl. Sci. 9(6), 1127 (2019). [CrossRef]  

4. R. J. Cooper, E. Magee, N. Everdell, S. Magazov, M. Varela, D. Airantzis, A. P. Gibson, and J. C. Hebden, “MONSTIR II: a 32-channel, multispectral, time-resolved optical tomography system for neonatal brain imaging,” Rev. Sci. Instrum. 85(5), 053105 (2014). [CrossRef]

5. H. Jiang, Diffuse Optical Tomography: Principles and Applications (Taylor and Francis, 2010).

6. A. Puszka, L. Hervé, A. Planat-Chrétien, A. Koenig, J. Derouard, and J.-M. Dinten, “Time-domain reflectance diffuse optical tomography with Mellin-Laplace transform for experimental detection and depth localization of a single absorbing inclusion,” Biomed. Opt. Express 4(4), 569–583 (2013). [CrossRef]  

7. A. Pifferi, D. Contini, A. D. Mora, A. Farina, L. Spinelli, and A. Torricelli, “New frontiers in time-domain diffuse optics, a review,” J. Biomed. Opt. 21(9), 091310 (2016). [CrossRef]  

8. A. Dalla Mora, L. Di Sieno, A. Behera, P. Taroni, D. Contini, A. Torricelli, and A. Pifferi, “The SiPM revolution in time-domain diffuse optics,” Nucl. Instrum. Methods Phys. Res., Sect. A 978, 164411 (2020). [CrossRef]

9. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13, 575 (2019). [CrossRef]  

10. A. D. Mora, D. Contini, S. Arridge, F. Martelli, A. Tosi, G. Boso, A. Farina, T. Durduran, E. Martinenghi, A. Torricelli, and A. Pifferi, “Towards next-generation time-domain diffuse optics for extreme depth penetration and sensitivity,” Biomed. Opt. Express 6(5), 1749 (2015). [CrossRef]  

11. F. Villa, F. Severini, F. Madonini, and F. Zappa, “SPADs and SiPMs arrays for long-range high-speed light detection and ranging (LiDAR),” Sensors 21(11), 3839 (2021). [CrossRef]

12. C. Bruschini, H. Homulle, I. M. Antolovic, S. Burri, and E. Charbon, “Single-photon avalanche diode imagers in biophotonics: review and outlook,” Light: Sci. Appl. 8(1), 87 (2019). [CrossRef]  

13. P. Bruza, A. Petusseau, A. Ulku, J. Gunn, S. Streeter, K. Samkoe, C. Bruschini, E. Charbon, and B. Pogue, “Single-photon avalanche diode imaging sensor for subsurface fluorescence Lidar,” Optica 8(8), 1126–1127 (2021). [CrossRef]  

14. S. Lindner, C. Zhang, I. M. Antolovic, J. M. Pavia, M. Wolf, and E. Charbon, “Column-parallel dynamic TDC reallocation in SPAD sensor module fabricated in 180 nm CMOS for near infrared optical tomography,” in Int. Image Sensor Workshop, (2017), pp. 86–89.

15. A. Kalyanov, J. Jiang, S. Lindner, L. Ahnen, A. Di Costanzo, J. Mata Pavia, S. Sanchez Majos, and M. Wolf, “Time domain near-infrared optical tomography with time-of-flight SPAD camera: the new generation,” Biophotonics Congress: Biomedical Optics Congress 2018 (2018).

16. J. Jiang, A. Di Costanzo Mata, S. Lindner, C. Zhang, E. Charbon, M. Wolf, and A. Kalyanov, “Image reconstruction for novel time domain near infrared optical tomography: towards clinical applications,” Biomed. Opt. Express 11(8), 4723–4734 (2020). [CrossRef]  

17. S. Lindner, C. Zhang, I. M. Antolovic, A. Kalyanov, J. Jiang, L. Ahnen, A. di Costanzo, J. M. Pavia, S. S. Majos, E. Charbon, and M. Wolf, “A novel 32 × 32, 224 Mevents/s time resolved SPAD image sensor for near-infrared optical tomography,” in Biophotonics Congress: Biomedical Optics Congress 2018 (Microscopy/Translational/Brain/OTS), (Optical Society of America, 2018), p. JTh5A.6.

18. J. Jiang, M. Wolf, and S. S. Majos, “Fast reconstruction of optical properties for complex segmentations in near infrared imaging,” J. Mod. Opt. 64(7), 732–742 (2017). [CrossRef]  

19. M. Zhang, K. M. S. Uddin, S. Li, and Q. Zhu, “Target depth-regularized reconstruction in diffuse optical tomography using ultrasound segmentation as prior information,” Biomed. Opt. Express 11(6), 3331–3345 (2020). [CrossRef]  

20. J. Jiang, A. Di Costanzo Mata, S. Lindner, E. Charbon, M. Wolf, and A. Kalyanov, “Dynamic time domain near-infrared optical tomography based on a SPAD camera,” Biomed. Opt. Express 11(10), 5470–5477 (2020). [CrossRef]

21. W. Ren, J. Jiang, A. Di Costanzo Mata, A. Kalyanov, J. Ripoll, S. Lindner, E. Charbon, C. Zhang, M. Rudin, and M. Wolf, “Multimodal imaging combining time-domain near-infrared optical tomography and continuous-wave fluorescence molecular tomography,” Opt. Express 28(7), 9860–9874 (2020). [CrossRef]  

22. J. Jiang, A. di Costanzo, S. Lindner, M. Wolf, and A. Kalyanov, “Tracking objects in a diffusive medium with time domain near infrared optical tomography,” in Biophotonics Congress: Biomedical Optics 2020 (Translational, Microscopy, OCT, OTS, BRAIN) (Optical Society of America, 2020), p. JTu3A.18.

23. J. Jiang, A. Di Costanzo Mata, S. Lindner, E. Charbon, M. Wolf, and A. Kalyanov, “2.5 Hz sample rate time-domain near-infrared optical tomography based on SPAD-camera image tissue hemodynamics,” Biomed. Opt. Express 13(1), 133–146 (2022). [CrossRef]

24. A. Puszka, L. Di Sieno, A. D. Mora, A. Pifferi, D. Contini, A. Planat-Chrétien, A. Koenig, G. Boso, A. Tosi, L. Hervé, and J. M. Dinten, “Spatial resolution in depth for time-resolved diffuse optical tomography using short source-detector separations,” Biomed. Opt. Express 6(1), 1–10 (2015). [CrossRef]  

25. J. Zouaoui, L. D. Sieno, L. Hervé, A. Pifferi, A. Farina, A. D. Mora, J. Derouard, and J.-M. Dinten, “Quantification in time-domain diffuse optical tomography using Mellin-Laplace transforms,” Biomed. Opt. Express 7(10), 4346–4363 (2016). [CrossRef]  

26. A. Puszka, L. Di Sieno, A. Dalla Mora, A. Pifferi, D. Contini, G. Boso, A. Tosi, L. Herve, A. Planat-Chretien, A. Koenig, and J. M. Dinten, “Time-resolved diffuse optical tomography using fast-gated single-photon avalanche diodes,” Biomed. Opt. Express 4(8), 1351–1365 (2013). [CrossRef]  

27. A. Kalyanov, J. Jiang, E. Russomanno, M. Ackermann, A. Di Costanzo Mata, R. Mindel, L. Miller, and M. Wolf, “Development and validation of liquid heterogeneous phantom for time domain near-infrared optical tomography,” Advances in Experimental Medicine and Biology, accepted (2022).

28. T. Arri, S. Muehlemann, M. Biallas, H. Bucher, and M. Wolf, “Precision of cerebral oxygenation and hemoglobin concentration measurements in neonates measured by near-infrared spectroscopy,” J. Biomed. Opt. 16(4), 047005 (2011). [CrossRef]  

29. S. R. Arridge and M. Schweiger, “Photon-measurement density functions. Part 2: finite-element-method calculations,” Appl. Opt. 34(34), 8026 (1995). [CrossRef]  

30. K. Furutsu and Y. Yamada, “Diffusion approximation for a dissipative random medium and the applications,” Phys. Rev. E 50(5), 3634–3640 (1994). [CrossRef]  

31. S. R. Arridge, “Photon-measurement density functions. Part I: Analytical forms,” Appl. Opt. 34(31), 7395–7409 (1995). [CrossRef]  

32. S. Wojtkiewicz, “Nirfaster,” Github, 2020, https://github.com/nirfaster/NIRFASTer.

33. M. Doulgerakis-Kontoudis, A. Eggebrecht, S. Wojtkiewicz, J. Culver, and H. Dehghani, “Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU,” J. Biomed. Opt. 22(12), 1 (2017). [CrossRef]  

34. H. Dehghani, M. E. Eames, P. K. Yalavarthy, S. C. Davis, S. Srinivasan, C. M. Carpenter, B. W. Pogue, and K. D. Paulsen, “Near infrared optical tomography using nirfast: Algorithm for numerical model and image reconstruction,” Commun. Numer. Meth. Engng. 25(6), 711–732 (2009). [CrossRef]  

35. T. Shimokawa, T. Kosaka, O. Yamashita, N. Hiroe, T. Amita, Y. Inoue, and M. aki Sato, “Hierarchical Bayesian estimation improves depth accuracy and spatial resolution of diffuse optical tomography,” Opt. Express 20(18), 20427–20446 (2012). [CrossRef]  

36. S. R. Arridge and J. C. Schotland, “Optical tomography: forward and inverse problems,” Inverse Probl. 25(12), 123010 (2009). [CrossRef]  
