
Convolutional sparse coding for compressed sensing photoacoustic CT reconstruction with partially known support

Open Access

Abstract

In photoacoustic tomography (PAT), imaging speed is an essential metric that is restricted by the pulse laser repetition rate and the number of channels on the data acquisition card (DAQ). Reconstructing the initial sound pressure distribution with fewer elements can significantly reduce hardware costs and the back-end acquisition burden. However, undersampling introduces artefacts into the photoacoustic image, degrading its quality. Dictionary learning (DL) has been utilised in various image reconstruction techniques, but such methods disregard the consistency of pixels in overlapping blocks. Therefore, we propose a compressive sensing (CS) reconstruction algorithm for circular-array PAT based on gradient-domain convolutional sparse coding (CSCGR). A small number of non-zero signal positions in the sparsely encoded feature maps are used as partially known support (PKS) in the reconstruction procedure. The CS-CSCGR-PKS-based reconstruction algorithm can use fewer ultrasound transducers for signal acquisition while maintaining image fidelity. We demonstrated the effectiveness of this algorithm in sparse imaging through imaging experiments on the mouse torso, brain, and human fingers. Reducing the number of array elements while ensuring imaging quality effectively reduces equipment hardware costs and improves imaging speed.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Photoacoustic imaging (PAI) technology has flourished in recent years [1–6]. PAT stimulates the photoacoustic signal in the imaging region with wide-field illumination and obtains an image of the target area through inverse reconstruction. It can quantify blood oxygen saturation by measuring the concentrations of deoxyhemoglobin (HbR) and oxyhemoglobin (HbO2). PAT combines high ultrasonic resolution and strong optical contrast in a single modality, providing a safe, efficient and inexpensive method for functional imaging with imaging depths of up to several centimetres [7]. PAT has been effectively applied to the imaging of in vivo vascular structures, functional hemodynamic imaging, and the visualization of breast [8,9] and brain tumors [10–12], among others. It holds tremendous promise for clinical diagnosis [13].

PAT's implementation modes include linear array, circular array, and convex array, among others, each with its own application-specific scenarios [14]. This article addresses the issue of high-quality imaging using a sparse circular array: a set of sparsely distributed ultrasonic transducers is positioned around the imaging target to detect the photoacoustic signal in the circular area, and the image is reconstructed using an inverse algorithm. In practical system applications, comprehensive information collection within the imaging area requires a dense distribution of array elements, and single-element scanning imaging systems require hundreds or even thousands of scans. In addition, dense sampling places greater demands on the data collection system, which substantially increases equipment costs. Sparse imaging has therefore become a research priority in array photoacoustic imaging: a small number of array elements is used to collect photoacoustic signals in the imaging area and reconstruct high-quality images. Under sparse sampling conditions, traditional methods such as the filtered back-projection (FBP) reconstruction algorithm generate under-sampling artifacts, such as streaking artifacts or grating lobes [15].

To recover the original signal or image from under-sampled measurements, numerous sparse reconstruction algorithms have been developed. Classic iterative reconstruction algorithms that can tackle sparse reconstruction problems include the algebraic reconstruction technique (ART) [16] and the simultaneous algebraic reconstruction technique (SART) [17], among others. In recent years, the theory of CS has evolved rapidly to recover the original signal from sparse measurements effectively and precisely [18]. Supported by rigorous mathematical proof, CS achieves a remarkable outcome by going beyond the Nyquist sampling limit [19,20]: during signal sampling, a small number of sampling points can achieve the same effect as complete sampling.

The DL-based image reconstruction algorithm is a significant branch of computer science that has produced numerous results [21–23]. However, most DL-based methods learn many shifted versions of the same features, which produces related artefacts in the reconstructed image, and the learned dictionary has a substantial effect on the imaging results [24]. Liu et al. proposed an advanced reconstruction model using the K-SVD dictionary learning technique and adapted it to a 3D PACT system [22]. The concept of convolutional sparse coding (CSC) was initially introduced by Zeiler et al. [26], and Brendt Wohlberg later proposed an efficient Fourier-domain method that addresses its high computational complexity [25]. Peng Bao et al. introduced CSC to CT reconstruction in 2019 [27]. Considering that the image is sparser in the gradient domain, we add a gradient-domain constraint to the model, which we refer to as CS-CSCGR. Recently, N. Vaswani et al. devised a modified-CS approach that uses an extra prior beyond sparsity: partially known support (PKS), that is, some known non-zero positions in the transformed sparse domain [28]. According to theoretical studies and applications in MRI, PKS can further reduce the number of measurements required for high-fidelity reconstruction, a considerable benefit that is also of great interest for PACT.

In this work, we introduce PKS into the CS-CSCGR reconstruction framework to enhance the reconstruction quality. Simulation and experimental data indicate that CS-CSCGR-PKS is superior to other image reconstruction algorithms: it achieves high-quality photoacoustic image reconstruction under sparse under-sampling and converges after a small number of iterations. These benefits of CS-CSCGR-PKS can enhance imaging speed, reduce device costs, and be utilized in a variety of biomedical applications.

2. Theoretical background

2.1 Penalized least squares for circular-array PA

The following equation describes the generation and propagation of photoacoustic signals during the photoacoustic imaging process [29].

$$\left( {{\nabla^2} - \frac{1}{{{c^2}}}\frac{{{\partial^2}}}{{\partial {t^2}}}} \right)p({r,t} )={-} {p_0}(r )\frac{{d\delta (t )}}{{dt}}$$
where $p({r,t} )$ refers to the photoacoustic pressure signal at the position r and time t, $\delta (t)$ is the pulsed laser excitation, ${p_0}(r )$ refers to the initial acoustic pressure, and c refers to the sound velocity. We can obtain Eq. (2) according to Eq. (1).
$$p({r,t} )= \frac{1}{{4\pi {c^2}}}\frac{\partial }{{\partial t}}\left[ {\frac{1}{{ct}}\int {dr^{\prime}{p_0}({r^{\prime}} )\delta \left( {t - \frac{{|{r - r^{\prime}} |}}{c}} \right)} } \right]$$
where $r^{\prime}$ represents the location of the area of interest.

The measurement matrix K of the ring array photoacoustic tomography system is shown below.

$$K{({m,t} )_{({i,j} )}} = \frac{1}{{2\pi c}}\delta \left( {t - \frac{{|{{r_{i,j}} - {r_m}} |}}{c}} \right),\textrm{ }m = 1,2 \cdots ,p;\textrm{ }t = s\varDelta t,\textrm{ s = 1,2,} \cdots \textrm{,}{q_s}$$
where ${r_{i,j}}$ denotes the coordinates of the region of interest, ${r_m}$ and p denote the position and the total number of the ultrasonic transducers, respectively, and ${q_s}$ denotes the number of sampling points of the photoacoustic signal. The basic CS reconstruction model is as follows [30,31].
$$\mathop {\arg \min }\limits_u ||{y - Ku} ||_2^2 + \beta R(u )$$
where y indicates the photoacoustic signal measurements, u indicates the photoacoustic image, K refers to the measurement matrix, $\beta $ refers to a hyperparameter, and $R(u )$ indicates the regularization term.
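For concreteness, the discrete system matrix K of Eq. (3) can be assembled by mapping each pixel's time of flight to the nearest time sample. The following Python sketch illustrates this under our own assumptions (nearest-sample binning of the Dirac delta, row ordering by transducer then time); the paper does not specify its discretization, so the function and parameter names are hypothetical.

```python
import numpy as np

def build_measurement_matrix(grid_xy, sensor_xy, c=1540.0, fs=25e6, n_samples=1024):
    """Minimal sketch of the circular-array system matrix K of Eq. (3).

    grid_xy   : (n_pix, 2) pixel coordinates of the region of interest [m]
    sensor_xy : (n_sens, 2) transducer coordinates on the ring [m]
    The Dirac delta is approximated by nearest-sample binning of the
    time of flight (an assumption, not the paper's exact scheme).
    """
    n_pix, n_sens = grid_xy.shape[0], sensor_xy.shape[0]
    K = np.zeros((n_sens * n_samples, n_pix))
    dt = 1.0 / fs
    for m in range(n_sens):
        dist = np.linalg.norm(grid_xy - sensor_xy[m], axis=1)  # |r_ij - r_m|
        s = np.rint(dist / (c * dt)).astype(int)               # time-of-flight sample index
        valid = s < n_samples
        K[m * n_samples + s[valid], np.where(valid)[0]] = 1.0 / (2 * np.pi * c)
    return K
```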

2.2 Convolutional sparse coding with partially known support

The CSC-based method considers the reconstructed image to be the sum of convolutions between a group of filters and the corresponding feature maps [27].

$$\min \sum\limits_{i = 1}^N {{{\|{{M_i}} \|}_1}} \textrm{ }s.t.\textrm{ }{\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2} < \varepsilon$$
where u indicates the photoacoustic image to be reconstructed, ${\ast} $ indicates the convolution operation, ${\{{{f_i}} \}_{i = 1,2, \ldots ,N}}$ indicates the filter set, ${\{{{M_i}} \}_{i = 1,2, \ldots ,N}}$ indicates the feature maps, and $\varepsilon $ refers to the noise level. The CSC-based compressive sensing reconstruction model avoids the block-aggregation artifacts of traditional DL methods.
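As a minimal illustration of the synthesis model in Eq. (5), the sketch below sums the convolutions of a set of filters with their feature maps; the function name and the circular boundary handling are our assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def csc_synthesis(filters, feature_maps):
    """Synthesis model of Eq. (5): u is approximated by sum_i f_i * M_i.

    filters      : list of small 2-D kernels f_i
    feature_maps : list of 2-D feature maps M_i with the image size
    """
    u = np.zeros_like(feature_maps[0], dtype=float)
    for f, M in zip(filters, feature_maps):
        u += convolve2d(M, f, mode="same", boundary="wrap")
    return u
```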

Recent studies have incorporated PKS into CS reconstruction models, in which the known non-zero positions in the sparse transform domain assist image reconstruction. CS-PKS has been effectively implemented in fields such as PAT and has demonstrated its ability to recover signals from fewer measurements.

Let ${M_i} \in {R^N}$ be the feature maps corresponding to filters ${f_i}$, and let the support be ${T_i} = \sup p({{M_i}} )= {T_{i0}} \cup \varDelta $, where ${T_{i0}} \subset \{{1,2, \ldots ,N} \}$ is the known part of ${T_i}$ and $\varDelta \subset \{{1,2, \ldots ,N} \}$ is the unknown part of ${T_i}$. The introduction of PKS constrains the feasible solutions to a smaller range, thus producing accurate reconstructions with fewer samples. The CS optimization problem with PKS is therefore as follows.

$$\min \sum\limits_{i = 1}^N {{{\|{{M_{i\varDelta }}} \|}_1}} \textrm{ }s.t.\textrm{ }{\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2} < \varepsilon$$
where ${M_{i\varDelta }}$ indicates the feature map based on the known support and $\varepsilon $ is the allowable noise level.

3. Method

Based on the CS and PKS theories described above, the CS-CSC and CS-CSC-PKS models for circular-array PAT are formulated as follows.

$$\mathop {\arg \min }\limits_{u,\{{{M_i}} \},\{{{f_i}} \}} \frac{1}{2}\|{y - Ku} \|_2^2 + \beta \left( {\frac{1}{2}\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2^2 + \lambda \sum\limits_{i = 1}^N {{{\|{{M_i}} \|}_1}} } \right)$$
$$\mathop {\arg \min }\limits_{u,\{{{M_i}} \},\{{{f_i}} \}} \frac{1}{2}\|{y - Ku} \|_2^2 + \beta \left( {\frac{1}{2}\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2^2 + \lambda \sum\limits_{i = 1}^N {{{\|{{M_{i\varDelta }}} \|}_1}} } \right)$$
where $\lambda $ is the regularization parameter.

It is preferable to impose gradient constraints on the feature maps rather than in the image domain. We therefore enhance the CS-CSC-PKS optimization problem by adding a gradient regularization term on the feature maps.

$$\mathop {\arg \min }\limits_{u,\{{{M_i}} \},\{{{f_i}} \}} \frac{1}{2}\|{y - Ku} \|_2^2 + \beta \left( \begin{array}{l} \frac{1}{2}\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2^2 + \lambda \sum\limits_{i = 1}^N {{{\|{{M_{i\varDelta }}} \|}_1}} \\ \textrm{ + }\frac{\tau }{2}\sum\limits_{i = 1}^N {\left\|{\sqrt {{{({{g_0} \ast {M_i}} )}^2} + {{({{g_1} \ast {M_i}} )}^2}} } \right\|}_2^2 \end{array} \right)$$
where ${g_0}$ and ${g_1}$ are the gradient-calculating filters along the x and y axes, respectively, and $\tau $ is a hyperparameter. In this work, the filters $\{{{f_i}} \}$ were trained on a fully sampled photoacoustic dataset, so Eq. (9) becomes the new problem:
$$\mathop {\arg \min }\limits_{u,\{{{M_i}} \}} \frac{1}{2}\|{y - Ku} \|_2^2 + \beta \left( \begin{array}{l} \frac{1}{2}\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2^2 + \lambda \sum\limits_{i = 1}^N {{{\|{{M_{i\varDelta }}} \|}_1}} \\ \textrm{ + }\frac{\tau }{2}\sum\limits_{i = 1}^N {\left\|{\sqrt {{{({{g_0} \ast {M_i}} )}^2} + {{({{g_1} \ast {M_i}} )}^2}} } \right\|}_2^2 \end{array} \right)$$
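To make the gradient-domain term of Eq. (10) concrete, the sketch below evaluates it for a stack of feature maps. The specific finite-difference filters are an assumption, since the paper does not state its choice of g0 and g1; note that the squared l2 norm of the pixel-wise gradient magnitude equals the sum of the two directional terms, which is the form used later in Eq. (16).

```python
import numpy as np
from scipy.signal import convolve2d

# Forward-difference filters along x and y (assumed; not specified in the paper).
g0 = np.array([[1.0, -1.0]])
g1 = np.array([[1.0], [-1.0]])

def gradient_penalty(feature_maps, tau):
    """Gradient-domain regularization term of Eq. (10)."""
    total = 0.0
    for M in feature_maps:
        gx = convolve2d(M, g0, mode="same", boundary="wrap")
        gy = convolve2d(M, g1, mode="same", boundary="wrap")
        total += np.sum(gx ** 2 + gy ** 2)   # ||sqrt(gx^2 + gy^2)||_2^2
    return 0.5 * tau * total
```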

We use the alternating direction method of multipliers (ADMM) to solve Eq. (10). The first step is to use a group of initialized feature maps $\{{\overline {{M_i}} } \}$ to obtain the intermediate reconstructed image u; Eq. (10) then becomes [27]:

$$\mathop {\arg \min }\limits_u \|{y - Ku} \|_2^2 + \beta \left\|{\sum\limits_{i = 1}^N {{f_i} \ast \overline {{M_i}} } - u} \right\|_2^2$$

The second step is to represent the reconstructed image u obtained in the previous step with the fixed filters $\{{{f_i}} \}$, i.e., to compute the feature maps $\{{{M_i}} \}$ by solving the following optimization problem.

$$\mathop {\arg \min }\limits_{\{{{M_i}} \}} \frac{1}{2}\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2^2 + \lambda \sum\limits_{i = 1}^N {{{\|{{M_{i\varDelta }}} \|}_1}} + \frac{\tau }{2}\sum\limits_{i = 1}^N {\left\|{\sqrt {{{({{g_0} \ast {M_i}} )}^2} + {{({{g_1} \ast {M_i}} )}^2}} } \right\|} _2^2$$

The definition of ${M_{i\varDelta }}$ in Eq. (10) is given by:

$${({{M_{i\varDelta }}} )_j} = \left\{ {\begin{array}{{c}} {{M_{i,j}}\qquad \quad\textrm{ }j \in \varDelta }\\ {0\qquad\qquad\textrm{ otherwise}} \end{array}} \right.$$

To solve Eq. (12), we introduce a matrix ${W_i}$ whose elements are set to 1 when the corresponding gradient-domain feature-map pixels belong to the unknown support $\varDelta $ in the ${({k - 1} )^{th}}$ iteration, and to 0 otherwise. Equation (12) is then rewritten as follows.

$$\mathop {\arg \min }\limits_{\{{{M_i}} \}} \frac{1}{2}\left\|{\sum\limits_{i = 1}^N {{f_i} \ast {M_i} - u} } \right\|_2^2 + \lambda \sum\limits_{i = 1}^N {{{\|{W_i^{k - 1} \odot {M_i}} \|}_1}} + \frac{\tau }{2}\sum\limits_{i = 1}^N {\left\|{\sqrt {{{({{g_0} \ast {M_i}} )}^2} + {{({{g_1} \ast {M_i}} )}^2}} } \right\|} _2^2$$

We refer to ${W_i}$ as the partial support matrix determined by ${T_{i0}}$, which can be obtained by thresholding: positions on the gradient-domain feature map whose values exceed a certain threshold are taken as the known support.

$$T_{i0}^{(k )} = \{{j:|{x_j^{(k )}} |> {\xi^{(k )}}} \}$$
where $x_j^{(k )}$ is the ${j^{th}}$ element of the feature map ${M_i}$, k is the iteration index, and $\xi $ is the threshold used to obtain ${W_i}$. The iterative restoration procedure works well for rapidly decaying feature maps $M_i^{(k )}$ when ${\xi ^{(k )}} = {{{{\|{M_i^{(k )}} \|}_\infty }} / {{\eta ^{(k )}}}}$, where $\eta $ is a hyperparameter. Next, we simplify the problem further.
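This threshold rule translates directly into a binary mask on each feature map. The sketch below is one possible implementation; the default value of η is arbitrary, as the paper treats it as a tunable hyperparameter.

```python
import numpy as np

def partial_support_mask(M, eta=10.0):
    """Build the mask W_i of Eq. (14) from the support rule of Eq. (15).

    Entries with |M| > xi = ||M||_inf / eta are treated as known support
    T_i0 (mask 0, excluded from the l1 penalty); the remaining entries
    form the unknown support Delta (mask 1).
    """
    xi = np.max(np.abs(M)) / eta
    W = np.ones_like(M, dtype=float)
    W[np.abs(M) > xi] = 0.0
    return W
```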

Let ${F_i}{M_i} = {f_i}\ast {M_i}$, $F = ({{F_1}{F_2} \cdots {F_N}} )$ and $M = {({{M_1}{M_2} \cdots {M_N}} )^T}$. Similarly, we can write ${G_l}{M_i} = {g_l} \ast {M_i}$ and rewrite the gradient term in Eq. (14) as:

$$\frac{\tau }{2}\sum\limits_{i = 1}^N {({\|{{G_0}{M_i}} \|_2^2 + \|{{G_1}{M_i}} \|_2^2} )} $$

Let $W = ({{W_1}\textrm{ }{W_2}\textrm{ } \cdots \textrm{ }{W_N}} )$ and introduce ${\Phi _l}$; Eq. (14) can then be rewritten as follows.

$$\mathop {\arg \min }\limits_M \frac{1}{2}\|{FM - u} \|_2^2 + \lambda {\|{W \odot M} \|_1} + \frac{\tau }{2}\|{{\Phi _0}M} \|_2^2 + \frac{\tau }{2}\|{{\Phi _1}M} \|_2^2$$
where
$${\Phi _l} = \left( {\begin{array}{ccc} {{G_l}}&0& \cdots \\ 0&{{G_l}}& \cdots \\ \vdots & \vdots & \ddots \end{array}} \right)$$

ADMM solves Eq. (17) through the introduction of an auxiliary variable B; the procedure is outlined below [32].

$$\mathop {\arg \min }\limits_{M,B} \frac{1}{2}\|{FM - u} \|_2^2 + \frac{\tau }{2}\|{{\Phi _0}M} \|_2^2 + \frac{\tau }{2}\|{{\Phi _1}M} \|_2^2 + \lambda {\|{W \odot B} \|_1},\textrm{ }s.t.\textrm{ }M = B$$
where ${\odot} $ is the Hadamard product. Eq. (19) is then solved by iterating the following updates.
$${M^{j + 1}} = \mathop {\arg \min }\limits_M \frac{1}{2}\|{FM - u} \|_2^2 + \frac{\tau }{2}\|{{\Phi _0}M} \|_2^2 + \frac{\tau }{2}\|{{\Phi _1}M} \|_2^2 + \frac{\rho }{2}\|{M - {B^j} + {C^j}} \|_2^2$$
$${B^{j + 1}} = \mathop {\arg \min }\limits_B \lambda {\|{W \odot B} \|_1} + \frac{\rho }{2}\|{{M^{j + 1}} - B + {C^j}} \|_2^2$$
$${C^{j + 1}} = {C^j} + {M^{j + 1}} - {B^{j + 1}}$$
where $\rho $ is a hyperparameter.
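The B- and C-updates have closed forms. The sketch below implements our reading of Eqs. (21) and (22): because the l1 term is masked by W, soft-thresholding acts only on the unknown-support entries, while known-support entries pass through unpenalized.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_b_update(M_new, C, W, lam, rho):
    """B-update of Eq. (21): threshold M + C at lam/rho where W == 1."""
    V = M_new + C
    return np.where(W > 0, soft_threshold(V, lam / rho), V)

def admm_c_update(C, M_new, B_new):
    """Scaled multiplier update of Eq. (22)."""
    return C + M_new - B_new
```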

The overall CS-CSCGR-PKS algorithm for circular-array PACT reconstruction is shown below.

[Algorithm 1: pseudocode of the CS-CSCGR-PKS reconstruction (figure boe-15-2-524-i001)]
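Since the pseudocode figure does not reproduce in text form, the following high-level Python sketch outlines the alternating structure we infer from Eqs. (11), (14), and (20)–(22). It reuses the helper functions sketched above (csc_synthesis, partial_support_mask, admm_b_update, admm_c_update), replaces the exact sub-problem solves with simple gradient steps, omits the τ gradient term in the M-update for brevity, and uses hypothetical parameter defaults; it is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def cs_cscgr_pks(y, K, filters, shape, beta=1.0, lam=0.01, rho=1.0,
                 eta=10.0, n_outer=50, n_admm=5, step=1e-3):
    """High-level sketch of the alternating CS-CSCGR-PKS loop.

    y : measured channel data (flattened), K : system matrix,
    filters : pre-trained kernels f_i, shape : image size (H, W).
    """
    u = np.zeros(shape)
    Ms = [np.zeros(shape) for _ in filters]   # feature maps M_i
    Bs = [np.zeros(shape) for _ in filters]   # ADMM splitting variables
    Cs = [np.zeros(shape) for _ in filters]   # scaled multipliers

    for _ in range(n_outer):
        # Image update, Eq. (11): one gradient step on the data + CSC terms
        # (a stand-in for an exact least-squares solve).
        synth = csc_synthesis(filters, Ms)
        grad_u = K.T @ (K @ u.ravel() - y) + beta * (u - synth).ravel()
        u = (u.ravel() - step * grad_u).reshape(shape)

        # Feature-map update, Eqs. (14) and (20)-(22), via a few ADMM iterations.
        for i, f in enumerate(filters):
            W = partial_support_mask(Ms[i], eta)              # PKS mask, Eq. (15)
            for _ in range(n_admm):
                # M-update (Eq. 20) as a gradient step; the tau term is omitted.
                resid = convolve2d(Ms[i], f, mode="same", boundary="wrap") - u
                g = convolve2d(resid, f[::-1, ::-1], mode="same", boundary="wrap")
                Ms[i] = Ms[i] - step * (g + rho * (Ms[i] - Bs[i] + Cs[i]))
                Bs[i] = admm_b_update(Ms[i], Cs[i], W, lam, rho)   # Eq. (21)
                Cs[i] = admm_c_update(Cs[i], Ms[i], Bs[i])         # Eq. (22)
    return u
```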

4. Results

To verify the benefits of the proposed method, we first perform experimental verification using simulation data. The simulation environment is configured with 256 array elements uniformly distributed around a 10-cm-diameter imaging area, a sampling frequency of 25 MHz, and a sound velocity of 1540 m/s. We utilize an open-source photoacoustic image database of the human brain. The dataset consists of ten distinct digital photoacoustic images of the human brain, as depicted in Fig. 1.

Fig. 1. Digital human brain data

We utilize all 256 array elements for full sampling, while the sparse case uses an undersampling factor of 8; sparse reconstruction therefore relies on the photoacoustic signals gathered by 32 array elements that are evenly dispersed around the perimeter of the imaging region. In this paper, the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and structural similarity (SSIM) are employed to evaluate the algorithm's performance. This dataset was first used to train the predetermined filters: after training on the photoacoustic digital images of the human brain, 32 trained filters are obtained, as shown in Fig. 2.
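The quantitative comparison can be reproduced with standard implementations. The sketch below computes the three metrics against the fully sampled label image using scikit-image; the tooling and normalization are assumptions, since the paper does not name its implementation.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(recon, reference):
    """PSNR, RMSE, and SSIM between a reconstruction and the label image,
    the three metrics reported in Tables 1-3. Inputs are co-registered
    2-D arrays on a common intensity scale."""
    rng = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, recon, data_range=rng)
    rmse = float(np.sqrt(np.mean((recon - reference) ** 2)))
    ssim = structural_similarity(reference, recon, data_range=rng)
    return psnr, rmse, ssim
```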

Fig. 2. Filters trained with photoacoustic digital human brain data set

We evaluate our approach against several widely used and successful methods, namely FBP, CS-DL, and CS-CSCGR. Figure 3 depicts both the original photoacoustic digital image of the human brain and the images restored by these techniques. The images produced by the FBP reconstruction technique contain a large number of artefacts, which makes them challenging to use in a practical setting. Clear photoacoustic images with a certain degree of detail restoration can be obtained with the CS-based sparse reconstruction techniques. Compared to CS-DL, both CS-CSCGR and CS-CSCGR-PKS improve the quality of the restored image, make the image sharper, and reduce speckle artefacts. It is difficult to tell the difference between CS-CSCGR and CS-CSCGR-PKS with the naked eye; nevertheless, the absolute difference images (depicted in Fig. 4) reveal that CS-CSCGR-PKS demonstrates superior performance.

Fig. 3. Reconstruction results of digital human brain images. (a) The reference image and the images reconstructed by (b) FBP, (c) CS-DL, (d) CS-CSCGR, (e) CS-CSCGR-PKS.

Fig. 4. Result of difference between reconstructed image and label image. (a) FBP, (b) CS-DL, (c) CS-CSCGR, (d) CS-CSCGR-PKS.

Figure 5 depicts the blue dashed region of Fig. 3(a) to illustrate the reconstruction details. The white arrows mark multiple minuscule blood vessels in the digital human brain. The algorithm presented in this article achieves the most precise reconstruction of these details: the FBP result is essentially unusable, and the microscopic blood vessels are nearly absent from the CS-DL reconstruction. Figures 5(d) and (e) show that both CS-CSCGR and CS-CSCGR-PKS have outstanding sparse reconstruction performance, preserving the majority of image details, with CS-CSCGR-PKS achieving superior imaging.

Fig. 5. Detail display of blue dashed frames in Fig. 3(a): (a) The reference image; reconstruction image based on (b) FBP, (c) CS-DL, (d) CS-CSCGR, (e) CS-CSCGR-PKS.

Simultaneously, we performed a quantitative analysis of the imaging outcomes. Table 1 displays the quantitative results for the digital photoacoustic images of the human brain; CS-CSCGR-PKS performs best on all three indicators. The number of iterations of the CS-based reconstruction algorithms is uniformly set to 50.


Table 1. Quantitative analysis of the results of digital human brain imaging

The small animal imaging system used in this article (Fig. 6) is assembled based on a programmable ultrasound imaging system (Prodigy, S-sharp, Hong Kong, China), and the system schematic is shown in Fig. 6. The system mainly consists of a pulse laser module, a multi-channel data acquisition module, and a signal processing and timing control module. The acquired initial sound pressure signals are converted into photoacoustic images to test the algorithm proposed in this article, and the algorithm was applied to photoacoustic imaging of the mouse torso to demonstrate its efficacy. Considering the comparatively deep penetration of 1064 nm light, we employ a 1064 nm laser to excite photoacoustic signals in subsequent studies. Using this laser at a repetition rate of 20 Hz, we measured a laser energy density on the tissue surface of 14.1 mJ/cm2, which is within the ANSI safety standard. An eight-way optical fiber divides the laser beam equally into eight beams, and annular irradiation of the imaging target area is achieved by optical path shaping. At the output end of each optical fiber, the laser beam is shaped into a narrow, elongated spot: each fiber-emitted beam is collimated into a parallel beam by a convex lens (Thorlabs LA1131-YAG, f = 50.8) and then converged in a single direction by a cylindrical lens (Thorlabs LG1131-LU, f150) to form a high-energy narrow beam spot. The circular device contains 256 array elements to collect photoacoustic signals, and the imaging area is a 10-cm-diameter circular region.

Fig. 6. The system schematic

A multi-wavelength tunable optical parametric oscillator (OPO) pulse laser (SpitLight-600, Innolas, Munich, Germany) was employed for photoacoustic excitation (Fig. 7(c)). Additionally, a mouse clamping mechanism was devised with the sole purpose of facilitating imaging of the mouse (Fig. 7(b)). To optimize the imaging quality, the cross-sectional plane of the ring-array photoacoustic signal acquisition must be aligned with the cross-sectional plane of the ring-shaped illumination path. A specialized motion controller enables axial displacement of the circular array probe relative to the imaging target, which simplifies alignment and three-dimensional imaging.

Fig. 7. Small Animal Photoacoustic Imaging System: (a) overall physical image of imaging system; (b) the mouse clamping mechanism; (c) optical parametric oscillator.

We apply the FBP, CS-DL, CS-CSCGR, and CS-CSCGR-PKS algorithms to data from 32 elements (1/8 of the total 256 elements), with the fully sampled photoacoustic mouse torso image as the label. The imaging results are depicted in Fig. 8. Figure 8(a) shows the position of the nude mouse to be imaged and the imaging cross-section. Figures 8(b) and 8(c) show the images restored by the FBP algorithm with 256 and 32 detection elements, respectively; under-sampling artifacts appear in Fig. 8(c). Figures 8(d)-(f) show the images reconstructed by the CS-based algorithms, which greatly suppress under-sampling artifacts and improve imaging quality. If a more accurate system matrix is used, the image reconstruction results of the CS methods can be further improved.
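For reference, the sketch below shows one way to extract the 32-element subset (every eighth transducer of the 256) and its channel data before the sparse reconstruction; the function name and the (sensors x time samples) data layout are assumptions.

```python
import numpy as np

def select_sparse_elements(y_full, n_sensors=256, n_samples=1024, keep=32):
    """Pick 'keep' evenly spaced transducers out of n_sensors (here 32 of 256)
    and return their channel data (flattened) and the element indices."""
    idx = np.linspace(0, n_sensors, keep, endpoint=False).astype(int)
    y = np.asarray(y_full).reshape(n_sensors, n_samples)[idx]
    return y.ravel(), idx
```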

Fig. 8. Reconstruction results of nude mouse torso: (a) the nude mouse; (b) the reference image and the images reconstructed by (c) FBP, (d) CS-DL, (e) CS-CSCGR, (f) CS-CSCGR-PKS.

Both the CS-CSCGR and CS-CSCGR-PKS reconstruction algorithms yield high-quality photoacoustic images. Nonetheless, the absolute difference images (illustrated in Fig. 9) demonstrate that our algorithm produces the highest-quality reconstruction.

Fig. 9. Result of difference between reconstructed image and label image. (a) FBP, (b) CS-DL, (c) CS-CSCGR, (d) CS-CSCGR-PKS.

To demonstrate the superiority of CS-CSCGR-PKS, the results of the mouse torso imaging are quantitatively analyzed in Table 2. CS-CSCGR-PKS achieved the best score on each of the three metrics, consistent with the photoacoustic digital imaging of the human brain. Comparing Fig. 9(a)-(d) shows that the photoacoustic image reconstructed by the algorithm proposed in this paper is the closest in pixel intensity distribution to the image reconstructed from the full ring array. Thus, the goal of achieving sparse-array imaging quality close to that of the full ring array has been met.


Table 2. Quantitative analysis of the results of the photoacoustic mouse torso imaging

The second in vivo experiment was conducted on blood vessels in the brain of a nude mouse. The imaging results are depicted in Fig. 10. Figure 10(a) shows the brain of the nude mouse. Figures 10(b) and 10(c) show the images restored by the FBP algorithm with 256 and 32 detection elements, respectively. Figures 10(d)-(f) show the images reconstructed by the CS-based algorithms. We also present the difference images relative to the original image (Fig. 11).

Fig. 10. Reconstruction results of brain blood vessels in nude mouse. (a) The reference image; (b) full sampling reconstruction of photoacoustic images and the images reconstructed based on 32 elements by (c) FBP, (d) CS-DL, (e) CS-CSCGR, (f) CS-CSCGR-PKS.

Fig. 11. Result of difference between reconstructed image and label image. (a) FBP, (b) CS-DL, (c) CS-CSCGR, (d) CS-CSCGR-PKS.

From Fig. 11, it can be observed that the mouse brain imaging results are similar to those of the mouse torso: the photoacoustic image reconstructed by the CS-CSCGR-PKS algorithm is closest to the label image. The absolute error images show that the algorithm proposed in this paper has an advantage in detail restoration and, compared to CS-CSCGR, requires fewer iteration steps to achieve the best imaging results.

We also conducted quantitative analysis on brain imaging of nude mouse. To demonstrate the superiority of CS-CSCGR-PKS, the results of mouse brain imaging are quantitatively analyzed in Table 3.


Table 3. Quantitative analysis of the results of the photoacoustic mouse brain imaging

We first calculate histograms of the difference images, as shown in Fig. 12, where the label is the image reconstructed from 256 elements. Figures 12(A)-(D) show the difference histograms between the label image and the images reconstructed from 32 transducer elements by FBP, CS-DL, CS-CSCGR, and CS-CSCGR-PKS, respectively. The error pixels of the image reconstructed by CS-CSCGR-PKS lie mostly in the range 0-0.1, indicating higher accuracy. From Figs. 12(E)-(H), we find that, for the images reconstructed with CS, the majority of the difference amplitudes lie within the lower ranges. Figures 12(I)-(L) show similar results.

Fig. 12. The amplitude histogram of the reconstructed image's difference image. (A)–(D) The distribution histogram of pixel error images in the digital human brain; (E)–(H) The distribution histogram of pixel error images in mouse torso; (I)–(L) The distribution histogram of pixel error images in the mouse brain.

The imaging of human joints is a significant application of circular-array photoacoustic imaging: it enables morphological, microvascular, and functional imaging of joints, as well as molecular imaging with the use of external contrast agents. We also apply our method to finger photoacoustic imaging. The laser beam is divided equally into eight beams by an eight-way optical fiber, and circular illumination of the imaging target area is achieved via optical path shaping. Photoacoustic signals are collected by the ring of ultrasonic transducers arranged around the finger. Using a laser with a wavelength of 1064 nm and a repetition rate of 20 Hz, we measured a laser energy density on the tissue surface of 15.2 mJ/cm2, which is within the ANSI safety standard. By installing a motor on the z-axis, the circular array can perform three-dimensional scanning. By scanning 100 cross-sections along the left middle finger, we obtained subcutaneous vascular images.

As in the photoacoustic digital human brain imaging, the FBP and CS-CSCGR-PKS methods are used to reconstruct the finger image of each slice. After processing all cross-sectional images, a three-dimensional image of the finger blood vessels is obtained by stacking all slices scanned along the finger axis. Figure 13(B) depicts a 3D image of the finger vasculature reconstructed using FBP with 256 elements. Figures 13(C) and (D) depict 3D vascular images of the finger reconstructed with FBP and CS-CSCGR-PKS using 32 elements (1/8 of the total 256 elements). To demonstrate the high-quality imaging capability of the CS-CSCGR-PKS method under sparse sampling conditions, we select two representative slices out of the one hundred to illustrate the reconstruction effect.

Fig. 13. Reconstructed images of the finger. (A) Photograph of the finger; (B)-(D) 3D images restored by FBP based on 256 elements, FBP based on 32 elements and CS-CSCGR-PKS based on 32 elements; (a)-(f) B-scan images along the white dashed lines in (B)–(D).

The above results demonstrate that the CS-CSCGR-PKS algorithm proposed in this article is effective. A local quantitative examination of these results follows.

Further, a local comparison is made by drawing lines across the cross-sectional image of the finger. Figure 14 shows that the amplitude details of the photoacoustic image restored by CS-CSCGR-PKS are closer to the target image. Figure 14(A) shows the initial sound pressure amplitude distribution along the blue dashed line, whereas Fig. 14(B) shows the distribution along the green dashed line. The results show that: (1) compared to FBP, images reconstructed by CS-CSCGR-PKS are of higher quality; (2) the CS-CSCGR-PKS reconstruction using only 32 transducers is essentially equivalent to the FBP image using data from all ultrasound elements.

Fig. 14. Local comparison of light absorption intensity along a defined straight line in the cross-section image of the finger. (A) The blue dashed line; (B) The green dashed line.

5. Discussion

In the field of rapid photoacoustic imaging, applying CS to sparse imaging is a highly efficient technique. To accomplish high-quality image restoration under sparse acquisition of photoacoustic signals in circular-array photoacoustic tomography systems, we have designed a novel and effective sparse reconstruction architecture for photoacoustic images. CS-CSCGR-PKS applies convolutional sparse coding in the gradient domain of images and introduces PKS. Compared to DL-based reconstruction algorithms, it avoids block-aggregation artefacts, and the gradient-domain constraint further suppresses artefact generation. To reduce the number of iterations, this article incorporates PKS into the algorithm, which accelerates convergence under partially known support conditions, further reduces artefacts, and yields higher-quality photoacoustic images. The algorithm restores high-quality photoacoustic images, as demonstrated by simulation and by in vivo mouse and human finger experiments, and its efficacy has been confirmed. These results demonstrate that CS-CSCGR-PKS has promising application prospects in photoacoustic sparse imaging. With the proposed algorithm, high-quality photoacoustic images can be obtained using a small number of array elements for photoacoustic signal acquisition, which effectively reduces the equipment's hardware cost.

The efficacy of this algorithm is validated using simulation data, mouse torso and brain imaging data, and human finger imaging data. The results demonstrate that under high sparsity, the photoacoustic image reconstructed using FBP lacks detail and contains many artefacts, making its practical application challenging. CS-based sparse reconstruction algorithms can recover details and reduce artefacts in the background. By conducting convolutional sparse coding in the gradient domain and adding prior support information, the proposed CS-CSCGR-PKS algorithm further improves the reconstruction. In this paper, the photoacoustic image reconstructed with full sampling serves as the label, while PSNR, RMSE, and SSIM serve as the indicators. To demonstrate the enhancement more intuitively, we also calculated the difference between the reconstructed image and the label image and performed statistical analysis. The results indicate that the photoacoustic image reconstructed using CS-CSCGR-PKS improves on every metric, restoring details and eliminating artefacts more effectively.

Our algorithm still has some limitations. In the experiments reported here, the filter and regularization parameters must be predetermined and remain fixed during algorithm execution, and no optimal method is presently available for choosing these regularization parameters. Identifying the algorithm's optimal parameters, for example through real-time optimization of the filter and regularization parameters, will therefore be the focus of future research. In addition, imaging speed is a significant indicator for PAT; this algorithm increases computational complexity and requires more time than FBP and other traditional reconstruction algorithms. In future research, we will investigate combining parallel computation with other technologies for improving imaging speed.

6. Conclusion

Compared to conventional image reconstruction algorithms, CS-based sparse reconstruction algorithms can reconstruct high-precision photoacoustic images from sparse photoacoustic signals. To accomplish high-quality sparse reconstruction of photoacoustic images, CS-CSCGR-PKS performs convolutional sparse coding in the gradient domain and adds prior support information. This helps lower equipment hardware costs, accelerate convergence, and enhance imaging speed, and it significantly advances the use of circular-array photoacoustic imaging for small animals and humans.

Funding

National Key Research and Development Program of China (2022YFC2402400); National Natural Science Foundation of China (Grant No.62275062).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data may be obtained from the authors upon reasonable request.

References

1. L. V. Wang and S. Hu, "Photoacoustic tomography: in vivo imaging from organelles to organs," Science 335(6075), 1458–1462 (2012). [CrossRef]

2. Z. Qin, Y. Liu, J. Chi, et al., "The sparse array elements selection in sparse imaging of circular-array photoacoustic tomography," J. Innovative Opt. Health Sci. 15(05), 2250030 (2022). [CrossRef]

3. P. Zhang, L. Li, L. Lin, et al., "High-resolution deep functional imaging of the whole mouse brain by photoacoustic computed tomography in vivo," J. Biophotonics 11(1), e201700024 (2018). [CrossRef]

4. L. V. Wang and J. Yao, "A practical guide to photoacoustic tomography in the life sciences," Nat. Methods 13(8), 627–638 (2016). [CrossRef]

5. G. Li, L. Li, L. Zhu, et al., "Multiview Hilbert transformation for full-view photoacoustic computed tomography using a linear array," J. Biomed. Opt. 20(6), 066010 (2015). [CrossRef]

6. X. Zhang, F. Ma, Y. Zhang, et al., "Sparse-sampling photoacoustic computed tomography: deep learning vs. compressed sensing," Biomedical Signal Processing and Control 71, 103233 (2022). [CrossRef]

7. L. V. Wang, "Prospects of photoacoustic tomography," Med. Phys. 35(12), 5758–5767 (2008). [CrossRef]

8. H. F. Zhang, K. Maslov, G. Stoica, et al., "Functional photoacoustic microscopy for high-resolution and noninvasive in vivo imaging," Nat. Biotechnol. 24(7), 848–851 (2006). [CrossRef]

9. X. Wang, Y. Pang, G. Ku, et al., "Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain," Nat. Biotechnol. 21(7), 803–806 (2003). [CrossRef]

10. A. A. Oraevsky, E. V. Savateeva, S. V. Solomatin, et al., "Optoacoustic imaging of blood for visualization and diagnostics of breast cancer," Proc. SPIE 4618, 81–94 (2002). [CrossRef]

11. M. L. Li, J. T. Oh, X. Xie, et al., "Simultaneous molecular and hypoxia imaging of brain tumors in vivo using spectroscopic photoacoustic tomography," Proc. IEEE 96(3), 481–489 (2008). [CrossRef]

12. A. De La Zerda, C. Zavaleta, S. Keren, et al., "Carbon nanotubes as photoacoustic molecular imaging agents in living mice," Nat. Nanotechnol. 3(9), 557–562 (2008). [CrossRef]

13. S. A. Ermilov, T. Khamapirad, A. Conjusteau, et al., "Laser optoacoustic imaging system for detection of breast cancer," J. Biomed. Opt. 14(2), 024007 (2009). [CrossRef]

14. N. Huynh, F. Lucka, E. Z. Zhang, et al., "Single-pixel camera photoacoustic tomography," J. Biomed. Opt. 24(12), 1 (2019). [CrossRef]

15. C. Zhang and Y. Wang, "Deconvolution reconstruction of full-view and limited-view photoacoustic tomography: a simulation study," J. Opt. Soc. Am. A 25(10), 2436–2443 (2008). [CrossRef]

16. R. Gordon, R. Bender, and G. T. Herman, "Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography," J. Theor. Biol. 29(3), 471–481 (1970). [CrossRef]

17. A. H. Andersen and A. C. Kak, "Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm," Ultrasonic Imaging 6(1), 81–94 (1984). [CrossRef]

18. S. Vilov, B. Arnal, E. Hojman, et al., "Super-resolution photoacoustic and ultrasound imaging with sparse arrays," Sci. Rep. 10(1), 4637 (2020). [CrossRef]

19. J. Provost and F. Lesage, "The application of compressed sensing for photo-acoustic tomography," IEEE Trans. Med. Imaging 28(4), 585–594 (2009). [CrossRef]

20. M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magn. Reson. Med. 58(6), 1182–1195 (2007). [CrossRef]

21. S. Govinahallisathyanarayana, B. Ning, R. Cao, et al., "Dictionary learning-based reverberation removal enables depth-resolved photoacoustic microscopy of cortical microvasculature in the mouse brain," Sci. Rep. 8(1), 985 (2018). [CrossRef]

22. F. Liu, X. Gong, L. V. Wang, et al., "Dictionary learning sparse-sampling reconstruction method for in-vivo 3D photoacoustic computed tomography," Biomed. Opt. Express 10(4), 1660–1677 (2019). [CrossRef]

23. M. Cao, J. Yuan, S. Du, et al., "Full-view photoacoustic tomography using asymmetric distributed sensors optimized with compressed sensing method," Biomedical Signal Processing and Control 21, 19–25 (2015). [CrossRef]

24. J. Tong, K. Li, W. Lin, et al., "Automatic lumen border detection in IVUS images using dictionary learning and kernel sparse representation," Biomedical Signal Processing and Control 66, 102489 (2021). [CrossRef]

25. B. Wohlberg, "Efficient convolutional sparse coding," in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 7173–7177.

26. M. D. Zeiler, D. Krishnan, G. W. Taylor, et al., "Deconvolutional networks," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2528–2535.

27. P. Bao, W. Xia, K. Yang, et al., "Convolutional sparse coding for compressed sensing CT reconstruction," IEEE Trans. Med. Imaging 38(11), 2607–2619 (2019). [CrossRef]

28. J. Meng, L. V. Wang, L. Ying, et al., "Compressed-sensing photoacoustic computed tomography in vivo with partially known support," Opt. Express 20(15), 16510–16523 (2012). [CrossRef]

29. Z. Guo, C. Li, L. Song, et al., "Compressed sensing in photoacoustic tomography in vivo," J. Biomed. Opt. 15(2), 021311 (2010). [CrossRef]

30. J. A. Jahanshahi, H. Danyali, and M. S. Helfroush, "Compressive sensing based multi-channel ECG reconstruction in wireless body sensor networks," Biomedical Signal Processing and Control 61, 102047 (2020). [CrossRef]

31. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]

32. S. Boyd, N. Parikh, E. Chu, et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning 3(1), 1–122 (2010). [CrossRef]
