## Abstract

In most photoacoustic tomography (PAT) reconstruction approaches, it is assumed that the receiving transducers have an omnidirectional response and can fully surround the region of interest. These assumptions are not satisfied in practice. To deal with these limitations, we present a novel deconvolution-based photoacoustic reconstruction with sparsity regularization (DPARS) technique. The DPARS algorithm is a semi-analytical reconstruction approach in which the projections of the absorber distribution derived from a deconvolution-based method are computed and used to generate a large linear system of equations. In these projections, computed over limited viewing angles, the directivity effect of the transducer is taken into account. The distribution of absorbers is computed using a sparse representation of absorber coefficients obtained from the discrete cosine transform. This sparse representation improves the numerical conditioning of the system of equations and reduces the computation time of the deconvolution-based approach by one order of magnitude relative to Tikhonov regularization. The algorithm has been tested in simulations and on two-dimensional and three-dimensional experimental data obtained with a conventional ultrasound transducer. The results show that DPARS, when evaluated using contrast-to-noise ratio and root-mean-square error, outperforms the conventional delay-and-sum (DAS) reconstruction method.

© 2017 Optical Society of America

## 1. Introduction

Photoacoustic (PA) imaging is a new and rapidly growing imaging modality that has numerous potential applications in the imaging of biological tissue [1–4]. The principle of PA imaging is based on the generation of ultrasound (US) waves by laser illumination of tissue. Short and controlled laser pulses illuminate the tissue. The absorbed energy results in a rapid temperature rise leading to pressure waves generated through thermoelastic expansion. The waves travel through tissue and are detected at specific locations by an US transducer array. The detected signals are analyzed to derive images of the distribution of laser energy absorption contrast in tissue.

Many of the exact PA tomography (PAT) reconstruction algorithms require full-view PA imaging systems that enclose the sample with unfocused sensors [5]. For an enclosed detection surface, the time reversal (TR) algorithm works well [6,7]. However, TR does not account for sensor directivity effects [8]. If the focused, conventional linear arrays typically available on clinical US machines could be used for PAT, a broad range of applications could be enabled by relatively simple modifications of conventional US machines. For example, PAT breast scanning could be accomplished by rotating an array of transducers around the breast, as in the automated breast US system presented in [9]. In prostate imaging, it is possible to capture, from limited angles, the acoustic waves that could be generated in the prostate through trans-urethral illumination [2]. Thus, a model-based PAT reconstruction method that works with physically focused transducers and relies only on limited-view acquisition would be very useful.

The conventional linear transducer array has been used in a few PA studies [10,11]. In one approach, a linear transducer is moved in a plane for a planar detection geometry; multiple two-dimensional (2-D) images are captured and combined to form a three-dimensional (3-D) tomographic image. However, the reconstructed multi-slice images may suffer from poor elevational resolution [12] because linear transducers receive signals from a limited volume and not just a plane [13,14]. For the same reason, the conventional Delay-and-Sum (DAS) method, a back-projection algorithm inspired by US imaging, cannot provide accurate images; DAS performs best when the sensors have an omni-directional response. In most cases, when a linear array of transducers is used, back-projection algorithms have been used for PA image reconstruction [4,12–16]. The back-projection algorithm is fast and reliable [17], but it is not model-based and may cause artifacts in the reconstructed images [18]. To reduce the artifacts, the method can be modified with deconvolution algorithms.

Deconvolution algorithms have the potential to include different constraints on the transducers, and many studies have shown the effectiveness of deconvolution-based PA image reconstruction. For example, a deconvolution algorithm has been used to remove the artifacts caused by the pulse width and by the transducer working bandwidth [19]; the algorithm has also been used for deblurring [20]. Generally, the spatial impulse response (SIR) or the acousto-electric impulse response (EIR) is employed in the deconvolution algorithms. Wang et al. [21] showed that PAT spatial resolution can be enhanced with impulse responses. In contrast to the SIR, finding the EIR is challenging [22].

A few model-based image reconstruction methods have been reported to compensate for the sensor directionality caused by the size and shape of the elements. Sensor directionality was investigated in [8]; the directionality of transducers can decrease spatial resolution. To reduce the directivity effect caused by the transducer size, a rectangular transducer of an ideal size was investigated in [23], and a 2-D Fourier-transform-based method was developed to remove the PA signals coming from outside the imaging area of the transducer. In general, the transducer surface is divided into a number of patches and superposition of the PA signals is used in the reconstruction [24]. Further improvements for 3-D models are described in [25]. However, these methods are not applicable to transducers whose directional response is caused by an external acoustic lens, as is the case with conventional US transducers.

Limited-view PAT was introduced in [26]. For a stable reconstruction, the angular view (or solid angular view in 3-D) of the region of interest (ROI) should be at least $\pi $ (or $2\pi $) [27]. For smaller angular views, limited-view PAT causes data-incompleteness artifacts in the reconstructed images. To remove such artifacts, compressed sensing (CS) and data-compression-based reconstruction algorithms have been suggested [28–30]. The CS algorithms are capable of image reconstruction from limited-view and noisy measurements [31,32], and in the presence of non-uniform and random laser illumination [33,34]. Data-compression-based PA algorithms find a sparse representation of the measured signals [35–37], and CS-based PA algorithms find a sparse representation of the PA images [32,38,39] with sparsifying transforms. For example, the Discrete Cosine Transform (DCT) was used in [40] to sparsify the measured data, which drastically reduced the computing time in their 2-D PA imaging study. While the sparse representation of the measured signals leads to a more efficient solution, it does not reduce the number of unknowns. The limited view and the directionality of the transducers are constraints that make the system of equations for PAT reconstruction rank-deficient. To the best of our knowledge, the rank-deficient PA system consisting of limited-view PAT with a focused US transducer has not been addressed. While the reconstructed image may not be sparse in its pixel representation, it has a sparse representation in the DCT domain [41]. The DCT, which is widely used in CS studies and has been employed in JPEG and MPEG systems for still and motion image compression [42], has not been used to address the rank deficiency in limited-view, focused-transducer PAT reconstruction.

In this paper, a novel deconvolution-based PA reconstruction with sparsity regularization (DPARS) is presented. We show that the method works with transducers that are physically focused in the elevational direction with an external acoustic lens, as in traditional linear transducers designed for conventional US (not PA) imaging. The transducer beam shape affects the reconstruction and should be taken into account during the reconstruction process. The distribution of the absorption coefficients is computed by solving a large linear system of equations derived from projections of the absorption coefficients; these projections account for the transducer directivity. In addition, we consider a limited-view PA imaging system that covers less than 180° (or 360° for 3-D) around the sample. To reduce the number of equations, as well as to improve numerical conditioning and computing time, we use a sparse DCT representation of the distribution of absorption coefficients. To the best of our knowledge, this is the first model-based PAT reconstruction method that accounts for both sensor directionality and a limited angle of view. We compare and report the performance of the proposed method against model-based and non-model-based algorithms. We presented the initial results of our inversion approach in [43]; further analyses and complete details are presented in this paper.

## 2. Materials and methods

In this paper, scalar parameters, vectors, and matrices are shown in normal, italic bold, and roman bold fonts, respectively.

#### 2.1 Theory

In PAT, we illuminate the entire ROI with an electromagnetic (EM) pulse of intensity $I(t)$. The laser pulse excites an initial acoustic pressure ${p}_{0}\left(r\right)$. The initial pressure is linearly related to a spatial EM absorber function $A(r)$ by ${p}_{0}\left(r\right)=\Gamma A(r)$ where $\Gamma $ is the Gruneisen parameter related to the physical properties of the absorbers in the object; $\Gamma ={c}^{2}\beta /{C}_{p}$ where *c* is the speed of sound, $\beta $ is the isobaric volume expansion coefficient, and ${C}_{p}$ is the specific heat. The initial pressure propagates through the medium. The PA pressure $p(r,t)$ at vector position $r$ and time *t* is described by the following non-homogeneous wave equation [44]:

$${\nabla}^{2}p\left(r,t\right)-\frac{1}{{c}^{2}}\frac{{\partial}^{2}p\left(r,t\right)}{\partial {t}^{2}}=-\frac{\beta}{{C}_{p}}\frac{\partial H\left(r,t\right)}{\partial t} \qquad (1)$$

where $H\left(r,t\right)=A\left(r\right)I\left(t\right)$ is the heating function.

Image reconstruction techniques are used to form 2-D and 3-D images using the PA signals detected at specific locations around the ROI to find the initial pressure source distribution ${p}_{0}\left(r\right)$. A solution to $p(r,t)$ of Eq. (1) can be obtained in the time domain as [45]:

$$p\left({r}^{i},t\right)=\frac{\beta}{4\pi {C}_{p}}\frac{\partial}{\partial t}{\displaystyle \int}\frac{A\left(r\right)I\left({t}^{\prime}\right)}{\left|r-{r}^{i}\right|}dr \qquad (2)$$

where ${r}^{i}$ is the position of transducer *i*, and ${t}^{\prime}=t-\left|r-{r}^{i}\right|/c$. The following formulation, from Eqs. (3) through (5), follows the deconvolution approach from [46], but uses a global coordinate system. According to the deconvolution algorithm, the PA pressure is the superposition of point-source responses. Using Eq. (2), the PA pressure from a point source located at position ${r}^{p}$ can be obtained at time $t$ as ${p}_{p}(t)$. Thus, the PA pressure can be calculated as follows

where *k* is a parameter corresponding to the physical properties of the point source being imaged, including $\Gamma $, *c*, and the initial location of the point source. As shown in Fig. 1, the integral of the absorber function $A(r)$ is computed over the sphere of radius $\left|r-{r}^{i}\right|$ centered at the position ${r}^{i}$.

The signal detected by US transducer *i*, ${p}_{d}^{i}\left(t\right)$, is the convolution of the pressure ${p}^{i}\left(t\right)$ and the impulse response of the transducer, $h(t)$. The generalization of Eq. (3) to the signals detected by the transducers will be

Let ${R}^{i}\left(t\right)$ be the preprocessed signal of an individual transducer *i*,

Transducer *i* is directional and weights the PA pressures according to the direction from which they arrive at the transducer. Let the function ${D}^{i}\left(r\right)$ indicate the directivity effect of transducer *i* for any location $r$ in the medium with respect to the transducer location, ${r}^{i}$. Then we can rewrite Eq. (6) as follows

The spatial distribution of absorbers is discretized by a mesh of equally sized cells, as shown in Fig. 2, where $\stackrel{\u2322}{A}$ is a vector of size ${N}_{p}$ representing the nodal values of the absorber distribution, or the image, in 2-D or 3-D space. We assume that the ROI fully encloses the region of the sample that receives significant illumination, i.e., that the incoming signals to the transducer from outside its imaging region are negligible. The preprocessed signal computed from the deconvolution algorithm for transducer *i*, ${R}^{i}\left(t\right)$, is sampled as ${R}^{i}\left({t}_{m}\right)$, *m* = $1,2,\dots ,M$. The integral in Eq. (7) can be estimated by summing the integrand values at a finite set of points: ${N}_{m}$ points, equally spaced on the surface ${S}^{i}\left(c{t}_{m}\right)$ equidistant from acoustic element *i*, are picked (shown with red cross signs in Fig. 2), and the values at the sampled points are assembled into a vector $\stackrel{\u2323}{A}$. We can write Eq. (7) in discretized form as follows

where *N* is the number of transducers, ${\stackrel{\u2323}{A}}_{\gamma}$ and ${\stackrel{\u2323}{D}}_{\gamma}^{i}$ are the values of the absorber function $A(r)$ and the directivity function ${D}^{i}(r)$ at the integration points shown by red cross signs in Fig. 2, and $\Delta S=2\pi {r}_{m}/{N}_{m}$ in the 2-D case and $\Delta S=4\pi {r}_{m}^{2}/{N}_{m}$ in the 3-D case, where ${r}_{m}=c{t}_{m}$. ${\stackrel{\u2323}{A}}_{\gamma}$ can be written with respect to the nodal values ${\widehat{A}}_{{\gamma}_{j}}$ using Eq. (9). Each integration point ${\stackrel{\u2323}{A}}_{\gamma}$ is surrounded by the nodes in the set ${N}_{s}(\gamma )$ of the image (the set ${N}_{s}(\gamma )$ has 4 members in the 2-D case and 8 in the 3-D case). For example, Fig. 2 shows one ${\stackrel{\u2323}{A}}_{\gamma}$ enclosed by four ${\widehat{A}}_{{\gamma}_{j}}$'s for a 2-D image. The ${\stackrel{\u2323}{A}}_{\gamma}$ value is expressed as a linear combination of the surrounding nodes ${\widehat{A}}_{{\gamma}_{j}}$ as follows
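The linear combination of surrounding nodes described here is standard bilinear (2-D) or trilinear (3-D) interpolation. A minimal sketch of the 2-D weights (Python; the grid spacing and test point are hypothetical):

```python
import numpy as np

def bilinear_weights(px, py, grid_dx=1.0, grid_dy=1.0):
    """Bilinear interpolation weights of a point with respect to the four
    surrounding nodes of a regular 2-D mesh (the Eq. (9)-style combination).

    Returns the indices (ix, iy) of the lower-left node and the four
    weights ordered [LL, LR, UL, UR]; the weights always sum to 1.
    """
    ix, iy = int(np.floor(px / grid_dx)), int(np.floor(py / grid_dy))
    fx, fy = px / grid_dx - ix, py / grid_dy - iy
    w = np.array([(1 - fx) * (1 - fy), fx * (1 - fy),
                  (1 - fx) * fy, fx * fy])
    return (ix, iy), w

# Integration point at (2.3, 4.75) on a unit-spaced mesh
(ix, iy), w = bilinear_weights(2.3, 4.75)
```

The same weights fill the row entries of the coefficient matrix for each integration point.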

By substituting Eq. (9) into Eq. (8), we obtain the following:

Here *i* is the transducer index, and ${N}_{s}(\gamma )$ is the set of neighbors of ${\stackrel{\u2323}{A}}_{\gamma}$. For example, in Fig. 2, the points in ${S}^{i}(c{t}_{m})$ are shown with red cross signs, and ${N}_{s}\left(\gamma \right)=\left\{15,16,21,22\right\}$ for $\gamma =4$. For each *i* and *m*, we obtain one linear equation with at most $4{N}_{m}$ (2-D) or $8{N}_{m}$ (3-D) unknowns. This equation fills row $m+\left(i-1\right)M$ of the coefficient matrix **F**, and the value ${R}^{i}\left({t}_{m}\right)$ fills the same row of the vector **R**. Considering the number of transducers/transducer locations, *N*, and the length of the sampled preprocessed signal of each transducer, *M*, we obtain a system of $N\times M$ linear equations as follows

$$\mathbf{F}\stackrel{\u2322}{A}=\mathbf{R} \qquad (15)$$

#### 2.2 Sparsity regularization

In a regularization process we add constraints to make the system determined and well-conditioned. When regularization is used, the number of measurements required for good image reconstruction may be far less than prescribed by Shannon's sampling theory [47]. As noted in Eq. (15), there are more unknowns than equations. Assumptions such as sparsity of the distribution of absorbers $A(r)$ can drastically reduce the number of unknowns. The main advantage of the DCT is its energy compaction property: in the DCT domain, the coefficients corresponding to the highest frequencies can be neglected to achieve dimensionality reduction [48]. As a result, in the process of solving the inverse problem, the number of unknowns and the memory usage are drastically reduced, and therefore the processing time and numerical conditioning are improved. The DCT has been used in different inverse problems, such as reconstructing subsurface geologic features [49] and reconstructing tissue elasticity [50]. A similar approach can be used in the PA inverse problem. The general formulation of the DCT for a 1-D signal $f({n}_{d})$ of length ${N}_{d}$ is defined as

$$\tilde{f}\left(k\right)={\displaystyle \sum_{{n}_{d}=0}^{{N}_{d}-1}}f\left({n}_{d}\right)\cos\left[\frac{\pi}{{N}_{d}}\left({n}_{d}+\frac{1}{2}\right)k\right],\qquad k=0,1,\dots ,{N}_{d}-1$$

We can make the transform orthonormal by multiplying the $\tilde{f}(0)$ term by $1/\sqrt{{N}_{d}}$, and the other terms by the scale factor $\sqrt{2/{N}_{d}}$. For signals in the 2-D or 3-D case, DCT coefficients can be calculated by composing the 1-D transform along each dimension.

The DCT, as a linear transformation, can be represented by matrices:

$$\stackrel{\u2322}{A}=\mathbf{T}\tilde{A} \qquad (17)$$

where **T** is the ${N}_{p}\times {N}_{p}$ inverse-DCT (synthesis) matrix and $\tilde{A}$ is the vector of DCT coefficients of the image.

In this method, we assume that most of the energy in the DCT domain is concentrated in the low-spatial-frequency portion of the domain. For example, a circular region in the low-frequency part of the domain can be selected. Therefore, we can take just a fraction of the low-frequency DCT coefficients to recover the original distribution of absorbers. This fraction is named the regularization parameter or cutoff ratio ${r}_{c}$.

Each column of the transform matrix represents a node of the spatial frequency domain. We select a subset of ${N}_{{p}^{\prime}}$ columns of **T** corresponding to the selected dominant coefficients and call the new truncated transform matrix ${\mathbf{T}}_{tr}$. If ${\tilde{A}}_{tr}$ is the truncated version of $\tilde{A}$ in the DCT domain, then:

$$\stackrel{\u2322}{A}\approx {\mathbf{T}}_{tr}{\tilde{A}}_{tr} \qquad (19)$$

Now we can rewrite Eq. (15) as

$$\mathbf{F}{\mathbf{T}}_{tr}{\tilde{A}}_{tr}=\mathbf{R} \qquad (20)$$

Thus, DCT sparsity regularization is applied to the PA inverse problem by replacing the absorber distribution $\stackrel{\u2322}{A}$ with ${T}_{tr}{\tilde{A}}_{tr}$. To solve the inverse problem stated in Eq. (20), we take $F{T}_{tr}$ as the coefficient matrix and solve for ${\tilde{A}}_{tr}$. We then reconstruct the image using Eq. (19). A schematic of the inversion approach is shown in the diagram of Fig. 3(b).
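The mechanics of this substitution can be sketched on a toy problem (Python/NumPy; the forward matrix below is a random stand-in for **F**, not the acoustic model):

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)

Np = 64     # number of image unknowns (toy size)
Npr = 16    # number of retained low-frequency DCT coefficients (N_p')
Neq = 40    # number of equations: fewer than Np, i.e. underdetermined in pixels

# Synthesis matrix T: column k is the k-th inverse-DCT basis vector
T = idct(np.eye(Np), norm='ortho', axis=0)
T_tr = T[:, :Npr]                     # keep low-frequency columns only

# Ground-truth image that is sparse in the DCT domain
coeffs_true = np.zeros(Np)
coeffs_true[:Npr] = rng.standard_normal(Npr)
A_true = T @ coeffs_true

F = rng.standard_normal((Neq, Np))    # toy stand-in for the forward operator F
R = F @ A_true                        # simulated measurements

# Underdetermined in pixel space, but well-posed for the truncated coefficients
coeffs_est, *_ = np.linalg.lstsq(F @ T_tr, R, rcond=None)
A_rec = T_tr @ coeffs_est             # reconstruct via the truncated synthesis
```

Because the truncated system has far fewer columns than the pixel-space system, the least-squares solve is both better conditioned and cheaper.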

It is well known in signal processing that a sharp cutoff in the Fourier domain introduces ringing in the spatial domain. Therefore, we propose that instead of a step change in the DCT domain (rectangular window) as described in Eq. (19), a smooth windowing function be applied as follows:
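The exact window of Eq. (21) is not reproduced in this excerpt; one plausible choice, assumed purely for illustration, is a raised-cosine taper parameterized by the cutoff ratio ${r}_{c}$ and decaying radius ${r}_{dec}$:

```python
import numpy as np

def dct_window(shape, r_c=0.3, r_dec=0.15):
    """Smooth low-pass window over a 2-D DCT coefficient grid.

    The weight is 1 up to normalized radial frequency r_c, then falls to 0
    over a raised-cosine transition of width r_dec. (This functional form
    is assumed for illustration; the paper's Eq. (21) may differ.)
    """
    ky, kx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         indexing='ij')
    r = np.hypot(ky / shape[0], kx / shape[1])   # normalized radial frequency
    w = np.ones(shape)
    taper = (r > r_c) & (r < r_c + r_dec)
    w[taper] = 0.5 * (1 + np.cos(np.pi * (r[taper] - r_c) / r_dec))
    w[r >= r_c + r_dec] = 0.0
    return w

W = dct_window((64, 64))   # multiply element-wise with the 2-D DCT coefficients
```

Replacing the rectangular mask with such a taper trades a slightly softer cutoff for strongly reduced ringing near sharp transitions.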

For the simulation study, we also compare our simulation results with those achieved by the commonly used Tikhonov regularization method [51]. To improve the conditioning of the problem, we penalize the norm of the absorber distribution, resulting in the following equation

$$\stackrel{\u2322}{A}={\left({\mathbf{F}}^{T}\mathbf{F}+\lambda \mathbf{I}\right)}^{-1}{\mathbf{F}}^{T}\mathbf{R}$$

where **I** is the identity matrix and $\lambda $ is the Tikhonov regularization parameter.
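A minimal sketch of this baseline (toy data; assuming the standard normal-equations form of Tikhonov regularization):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-conditioned forward operator (stand-in for the PAT matrix F)
F = rng.standard_normal((80, 60)) @ np.diag(np.logspace(0, -6, 60))
A_true = rng.standard_normal(60)
R = F @ A_true + 1e-4 * rng.standard_normal(80)   # noisy measurements

def tikhonov(F, R, lam):
    """Solve (F^T F + lam * I) A = F^T R."""
    n = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(n), F.T @ R)

A_reg = tikhonov(F, R, lam=1e-6)
residual = np.linalg.norm(F @ A_reg - R)
```

Unlike the DCT truncation, this keeps all ${N}_{p}$ unknowns, which is why it is roughly an order of magnitude slower on the full problem.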

We evaluate our approach using simulation and experimental data, as described in the next sections.

#### 2.3 Transducer calibration

Sensor directivity is important in the US transducer design [52,53]. The directivity function of the transducer *i*, ${D}^{i}\left(r\right)$ is defined as the PA signal detected by a transducer when a point source is at the arbitrary position $r$ normalized by the PA signal detected along the transducer element axis. The DPARS inversion algorithm takes into account the directivity information of the transducer, which can be obtained from a calibration procedure.

As shown in Fig. 5(a), we imaged a point source at different locations with respect to the transducer. A black bead with a diameter of 1 mm, illuminated by the laser, was used as the point source. The locations of the point source were specified in the axial/elevational/lateral (x,y,z) coordinate system attached to the transducer (see Fig. 5(a)). The transducer was moved with respect to the point source; with this approach, the light intensity received by the point source is constant. The value on the elevational axis represents the deviation of the point source from the main plane of the US probe. We imaged the point target over the axial range of 26 mm to 86 mm in increments of 5 mm and, for each axial value, over the elevational range of 0 to 14 mm in increments of 1 mm. Then, we used linear interpolation to obtain the calibration result at arbitrary axial and elevational values. Figures 5(b) and 5(d) show the maximum intensity detected by the transducer with respect to the location of the point source along the elevational and lateral directions, respectively. Figures 5(c) and 5(e) show the transducer calibration result after compensation for attenuation. Figure 6 shows images corresponding to three calibration experiments. For example, in Figs. 6(a) and 6(c) the point source was 14 mm out of the imaging plane; however, the point is still visible in the reconstructed images.
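The interpolation step can be sketched as follows (the Gaussian directivity map here is a synthetic stand-in for the measured bead-scan data; the grid matches the ranges and increments described above):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Calibration grid as described: axial 26-86 mm (step 5), elevational 0-14 mm (step 1)
axial = np.arange(26.0, 86.0 + 1e-9, 5.0)   # mm
elev = np.arange(0.0, 14.0 + 1e-9, 1.0)     # mm

# Stand-in for the measured directivity map (the real values come from the
# bead-scan calibration): a Gaussian fall-off in elevation, widening with depth
AX, EL = np.meshgrid(axial, elev, indexing='ij')
D_meas = np.exp(-(EL ** 2) / (2 * (0.05 * AX) ** 2))

D_interp = RegularGridInterpolator((axial, elev), D_meas)

# Directivity at an arbitrary (axial, elevational) position in mm
d = D_interp([[40.0, 3.5]])[0]
```

The interpolated map supplies the ${D}^{i}(r)$ values at the integration points during the DPARS assembly.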

#### 2.4 Simulation methods

We employed the open-source MATLAB toolbox k-Wave to calculate, in the time domain, the PA waves generated by a point source at a specific distance from the detector [54]. The PA wave response to a distribution of sources was computed by superposition. The intensities of the acoustic waves decrease as they propagate through the tissue toward the transducer locations.

To gain insight into 2-D and 3-D PAT, we ran two simulation studies. In a 2-D tomography study, a planar simulated ROI of 105 mm × 55 mm was selected, with a speed of sound of 1540 m/s and a distribution of absorbers comprising a circle and three points, Fig. 7. As shown in Fig. 7(a), the simulated receiving transducer array for 2-D PAT is rotated in the plane of the ROI. The simulated array includes 128 sensing elements over a 38 mm width, modeling a standard L14-5 linear transducer (Ultrasonix Medical Corporation, Richmond, BC) with a center frequency of 7.5 MHz. The sensing elements do not have an omni-directional response; rather, their directivity is modeled using measurements of the actual L14-5 transducer response (Fig. 7(b)). The main directions of the transducers are indicated with red arrows in Fig. 7(a). The PAT reconstruction has both a limited angle of view (120°) and a sparse set of views (three views at 60° from each other). The pixel size used in the reconstruction is 0.1 mm × 0.1 mm.

We also simulated the reconstruction of a single transversal plane of a 3-D PAT system that uses a rotating linear transducer array; we ran a circular tomography with 60 views over 60 degrees, as shown in Fig. 8(a). The simulated sensing elements are positioned on a circle while pointing towards the center. An ROI of 70 mm × 54 mm was selected in the transversal plane, with a distribution of absorbers that simulates a vessel structure and an inclusion, as shown in Fig. 8(a). We used the same pixel size and speed of sound as in the 2-D PAT simulation. Then, we added white Gaussian noise to the PA signals (Figs. 9 and 10).
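Scaling the noise to reach a target SNR can be sketched as follows (the toy trace is hypothetical):

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR (dB)."""
    rng = np.random.default_rng(rng)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

# Toy PA trace at roughly the transducer center frequency
t = np.linspace(0, 1e-5, 400)
clean = np.sin(2 * np.pi * 7.5e6 * t)
noisy = add_awgn(clean, snr_db=20, rng=0)
```

The same helper, applied per channel, yields the 20, 30, and 40 dB data sets used in the noise analysis.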

It is clear from the geometry of the transducer that the elevational lens will drastically affect the reconstruction. In our circular tomography simulation, we used the sensitivity of the transducer element shown in Fig. 8(b), which was obtained by moving a point source in the elevational direction of the L14-5 transducer at a distance of 30 mm from the transducer face. The directivity effect of the transducer element is more pronounced in the elevational direction (Fig. 8(b)) than in the lateral direction (Fig. 7(b)).

#### 2.5 Experimental set-up

The schematic of the PAT setup is shown in Fig. 3(a). A Q-switched Nd:YAG laser at 532 nm wavelength (Continuum, Inc., Santa Clara, CA, USA) with a pulse width of 5 ns and a repetition rate of 10 Hz was used to illuminate the sample. An optical parametric oscillator (OPO) from the same manufacturer was used to generate the 700 nm wavelength. The energy density of the laser light was kept below 40 mJ cm^{−2}. A laser burn-paper test showed that the laser beam is approximately circular, with a diameter of 2 cm. PA signals were detected by an L14-5 linear transducer array. The Ultrasonix SonixDAQ, a parallel data acquisition system, was used to record pre-beamformed radio-frequency (RF) digital data. The sampling rate was 40 MHz with 12-bit resolution, and the acquisition was performed for all 128 elements simultaneously. To perform tomography with a linear transducer, many research groups illuminate a rotating object; in a more realistic scenario, we instead pointed the laser beam at a fixed sample and moved the transducer. The sample and the transducer were kept inside a water bath for better acoustic coupling.

#### 2.6 Experimental 2-D and 3-D PAT

We fabricated a 2-D phantom by printing a circle and three points on a piece of paper (Fig. 11(a)) as in our simulation study. We embedded the paper between two layers of gelatin. We mounted the transducer on a rotary stage with its imaging plane aligned with the mid-plane of the 2-D phantom. As shown in Fig. 11(a), we recorded three PA data sets at 60° angle increments.

Prior to the experiment, we needed the geometrical information of the 2-D tomography set-up. Thus, we imaged a calibration phantom containing a point source. The phantom was fixed while we took images of the point source at different positions of the probe. Tracking the location of the point source across the images produces a circle whose center is the center of rotation of the 2-D tomography set-up.
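The center of rotation can be recovered with a linear least-squares (Kåsa) circle fit; the following sketch uses synthetic tracked positions standing in for the calibration images:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares (Kasa) circle fit: returns (cx, cy, radius).

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c linearly, with c = r^2 - cx^2 - cy^2.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Tracked point-source positions on a circular arc (synthetic stand-in),
# centered at the unknown rotation center (12, -5) with radius 30 (mm)
rng = np.random.default_rng(2)
theta = np.linspace(0, 2 * np.pi / 3, 20)      # limited arc, as in practice
x = 12.0 + 30.0 * np.cos(theta) + 0.01 * rng.standard_normal(20)
y = -5.0 + 30.0 * np.sin(theta) + 0.01 * rng.standard_normal(20)

cx, cy, r = fit_circle(x, y)
```

The fit works even on a limited arc, which matters because the probe covers less than a full revolution.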

For 3-D PAT, the L14-5 transducer can be maneuvered with any tracked scanning system [55,56]. In our experiment, the probe was mounted such that its lateral axis coincided with the rotation axis of a motorized rotary stage (Figs. 12(a) and 12(b)), and it was rotated on a cylindrical surface surrounding the 3-D ROI, with the US plane passing through the cylinder axis. A piece of pencil lead, shown in Fig. 12(a), was used as an absorber and mounted on a transparent fishing line. We recorded the PA signals through 140° in increments of 2°.

To obtain the geometrical information of the 3-D tomography set-up, we used a calibration phantom containing a piece of black thread. The thread was aligned vertically using a weight. The phantom was fixed while we took images of the thread at different positions of the probe. We then found two opposite probe positions that have the thread in-plane; the centerline of the two images indicates the rotational axis of the 3-D tomography set-up.

## 3. Simulation results

For a fair comparison with our DPARS reconstruction approach, we back-projected the preprocessed signals in the DAS reconstruction process.
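For reference, a minimal delay-and-sum back-projection (one-way time of flight, no apodization, interpolation, or directivity gating; the geometry below is a toy stand-in, not the experimental setup):

```python
import numpy as np

def das_reconstruct(signals, sensor_pos, pixels, c, fs):
    """Minimal 2-D delay-and-sum for PA data.

    signals: (n_sensors, n_samples) preprocessed RF traces
    sensor_pos, pixels: (n, 2) coordinates in metres
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(pixels))
    for s in range(n_sensors):
        # one-way time of flight from each pixel to this sensor
        dist = np.linalg.norm(pixels - sensor_pos[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        image[valid] += signals[s, idx[valid]]
    return image

# Tiny synthetic check: a single point source should back-project to a peak
c, fs = 1540.0, 40e6
sensors = np.column_stack([np.linspace(-19e-3, 19e-3, 16), np.zeros(16)])
src = np.array([2e-3, 30e-3])
signals = np.zeros((16, 4096))
for s, pos in enumerate(sensors):
    signals[s, int(round(np.linalg.norm(src - pos) / c * fs))] = 1.0

# A row of candidate pixels through the source depth
xs = np.linspace(-10e-3, 10e-3, 81)
pixels = np.column_stack([xs, np.full_like(xs, 30e-3)])
image = das_reconstruct(signals, sensors, pixels, c, fs)
```

Every pixel away from the source accumulates only incoherent samples, which is exactly the mechanism that produces the streak artifacts discussed later.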

For 2-D PAT, we have three views, each acquired with 128 elements, as described in Section 2.4 and illustrated in Figs. 7(a) and 7(b). The reconstructed images with the DPARS, FDPARS, and DAS methods are shown in Figs. 7(d)-7(f), respectively.

In the circular tomography, due to the sharp directivity effect of the transducer (Fig. 8(b)), the best result of the DAS method was achieved when we back-projected the transducer element signals only along the main direction of the transducer elements. This is equivalent to the assumption that the linear probe only receives signals from within the US plane. We then used linear interpolation to fill the gaps and form the DAS image. Model-based and non-model-based algorithms are employed for the PA reconstruction. The reconstructed images are shown in Figs. 8(e)-8(h) using DPARS, FDPARS, Tikhonov regularization, and DAS, respectively. Figure 8(d) shows the result of DPARS if we do not consider the directivity effect. Figure 9 shows the reconstructed images in the presence of noise. To compare the reconstruction methods, we calculated the contrast-to-noise ratio (CNR) and the RMS error relative to the gold standard of Fig. 8(c) or 7(c), corresponding to each method. Prior to these calculations, we need a comparable dynamic range for the reconstructed images. For this, we multiplied each image's pixel values by a factor $\alpha $ in order to minimize the error (${e}_{r}$) between the actual and reconstructed images:

$${e}_{r}={\left\Vert \alpha A-{A}^{0}\right\Vert}^{2},\qquad \alpha =\frac{{A}^{T}{A}^{0}}{{A}^{T}A}$$

where $A$ and ${A}^{0}$ are the vectorized reconstructed and actual images, respectively, and the superscript *T* represents transpose. This modification of the dynamic range is applied equally to all approaches. The RMS error was calculated using the following equation

$$RMS=\sqrt{\frac{1}{{N}_{p}}{\displaystyle \sum_{i=1}^{{N}_{p}}}{\left(\alpha {A}_{i}-{A}_{i}^{0}\right)}^{2}}$$

where $\alpha {A}_{i}$ and ${A}_{i}^{0}$ are the values of the scaled reconstructed image and the actual image at pixel *i*, respectively, and ${N}_{p}$ is the number of image pixels. To compute the CNR, we use Eq. (28):

$$CNR=\frac{\left|{m}_{i}-{m}_{b}\right|}{\sqrt{{\sigma}_{i}^{2}+{\sigma}_{b}^{2}}} \qquad (28)$$

where ${m}_{i}$ and ${m}_{b}$ are the means of the intensities and ${\sigma}_{i}$ and ${\sigma}_{b}$ are the standard deviations at the inclusion location and in the background, respectively.
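These evaluation steps can be sketched as follows (the closed-form $\alpha $ is the least-squares minimizer; the CNR denominator shown is one common convention and is assumed here):

```python
import numpy as np

def dynamic_range_factor(A, A0):
    """alpha minimizing ||alpha*A - A0||^2 (closed-form least squares)."""
    return (A @ A0) / (A @ A)

def rms_error(A, A0):
    alpha = dynamic_range_factor(A, A0)
    return np.sqrt(np.mean((alpha * A - A0) ** 2))

def cnr(image, inclusion_mask, background_mask):
    """Contrast-to-noise ratio; the sqrt(si^2 + sb^2) denominator is one
    common convention (assumed here)."""
    mi = image[inclusion_mask].mean()
    mb = image[background_mask].mean()
    si = image[inclusion_mask].std()
    sb = image[background_mask].std()
    return abs(mi - mb) / np.sqrt(si ** 2 + sb ** 2)

# Toy check: a reconstruction that is a scaled copy of the ground truth
A0 = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
A = 0.5 * A0
alpha = dynamic_range_factor(A, A0)
```

Applying the same $\alpha $ scaling to every method keeps the RMS comparison between DAS, Tikhonov, DPARS, and FDPARS fair.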

Table 1 compares the inversion approaches in terms of CNR and RMS error for the 2-D PAT simulation. For circular tomography, the DAS results have an RMS error and CNR of 0.18 and 0.92, respectively. These values can be compared with the values reported for the model-based regularization methods, Fig. 10. The RMS error and CNR of the Tikhonov regularization result, Fig. 8(g), are 0.16 and 1.72, respectively, whereas they are 0.16 and 2.16 for DPARS. Using FDPARS, the RMS error and CNR are further improved to 0.10 and 2.8, respectively.

We added white Gaussian noise to the data to achieve signal-to-noise ratios (SNRs) of 20, 30, and 40 dB. Wiener deconvolution was used as the deconvolution operator, with the noise-to-signal power ratio set to 0.1. The inverse problem was solved with different values of the regularization parameter; the results are shown in Fig. 10. Reconstructed images with DPARS, FDPARS, Tikhonov regularization, and DAS for an SNR of 20 dB are shown in Fig. 9.
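A sketch of the Wiener deconvolution step (frequency-domain form with a scalar noise-to-signal power ratio; the blur kernel and impulse train are toy stand-ins for the transducer impulse response and PA traces):

```python
import numpy as np

def wiener_deconvolve(measured, h, nsr=0.1):
    """Frequency-domain Wiener deconvolution with a scalar
    noise-to-signal power ratio nsr."""
    n = len(measured)
    H = np.fft.fft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(measured) * G))

# Toy check: blur an impulse train with a short Gaussian kernel
x = np.zeros(256)
x[[40, 90, 160]] = 1.0
h = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
y = np.convolve(x, h, mode='full')[:256]   # no wrap-around: impulses are far from the end
x_hat = wiener_deconvolve(y, h, nsr=0.01)
```

The `nsr` term suppresses frequencies where the kernel's spectrum is weak, which is what keeps the deconvolution stable at low SNR.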

## 4. Experimental results

In addition to the numerical simulations, we evaluated the inversion approach experimentally.

Figures 11(c), 11(d), and 11(f) show the reconstructed images using the DPARS, FDPARS, and DAS methods, respectively. The lateral-plane transducer calibration data were fed to the DPARS method. The circle is not fully visible in the DAS image, but the DPARS method provides a reasonable image. In addition, Table 2 reports the CNR and the RMS errors for each image reconstruction method. We also consider the linear system in the absence of regularization; Fig. 11(e) shows the result. That image has a CNR and RMS error of 1.27 and 0.2, respectively. Comparing these values with the values reported in Table 2 illustrates that sparsity regularization improves the image reconstruction in terms of both factors. The resolution was calculated based on the full width at half maximum (FWHM) of the point source. The resolution of the DPARS image was 0.41 mm, and that of the FDPARS image was 0.45 mm.

In 3-D PAT, the linear probe rotates around an axis parallel to the lateral direction of the probe. Figures 12(c) and 12(d) show the reconstructed images with the DPARS and DAS methods; both methods were applied in 3-D space. Figure 13 shows the transversal sections of the 3-D image at different lateral transducer coordinates. Table 2 also shows the CNR and RMS errors corresponding to the reconstruction approaches. The resolution of the DPARS and FDPARS images was calculated as 0.35 mm and 0.4 mm, respectively.

## 5. Discussion

Our work has considered realistic PAT scanning geometries in which directional PA sensors do not fully enclose the object. The sparse representation of the absorber distribution provided by the DCT makes our DPARS inversion approach an effective reconstruction method with good RMS and CNR figures and significantly lower artifacts than in DAS reconstructions. If we do not consider the directivity effect, DPARS is unable to recover the absorbers (Fig. 8(d)), and if we do not use sparsity regularization, the linear system of equations is rank deficient due to the lack of information caused by the directivity effect and the limited angle of view.

We also performed a noise analysis for the circular tomography simulation study; it demonstrates the robustness of the regularization methods. Figure 10 shows the optimized range of the cutoff ratio that determines the number of DCT coefficients used in the PAT reconstruction. With an SNR of 20 dB, the DAS reconstruction results in a CNR of 0.88 and an RMS error of 0.23, which are enhanced to 2.2 and 0.15, respectively, with FDPARS. Comparing these values with those in Fig. 10, DPARS outperforms DAS and Tikhonov regularization for a considerable range of cutoff ratios in the presence of noise; we set ${r}_{c}$ to 0.3 and ${r}_{dec}$ to 0.15 for all the images reconstructed with DPARS and FDPARS. Wiener deconvolution reduced the impact of noise on the inversion approaches. The optimal cutoff ratio ${r}_{c}$ and decaying radius ${r}_{dec}$ depend on the pattern of the image and the noise level, which we believe do not change significantly within a specific application. Therefore, once a range of acceptable cutoff ratios is found for a specific application using simulations, it does not have to be changed for different data sets of the same application. For higher-quality images at the cost of longer processing time, the number of DCT coefficients can always be increased.

In our reconstruction, we have assumed that the region receiving significant illumination is fully enclosed within the reconstruction ROI covered by the transducer. Otherwise, the probe detection surface will receive signals from outside the region considered for the reconstruction, and the reconstruction will contain artifacts. To deal with this problem, the transducer directivity can be used to define the ROI, with the assumption that absorbers outside this ROI do not produce significant PA waves.
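One way to sketch such a directivity-defined ROI is to keep only pixels for which a standard far-field element-directivity model (a sinc pattern for a rectangular element, as in [52]) exceeds a threshold. The element width, wavelength, grid, and threshold below are assumed values for illustration, not the paper's parameters:

```python
import numpy as np

def directivity(theta, a, lam):
    """Far-field directivity of an element of width a at wavelength lam:
    |sinc(a sin(theta) / lam)| with the normalized sinc (textbook model)."""
    return np.abs(np.sinc(a * np.sin(theta) / lam))

# Candidate pixels in front of an element at the origin facing +z
z, x = np.meshgrid(np.linspace(1e-3, 30e-3, 60),
                   np.linspace(-15e-3, 15e-3, 60), indexing="ij")
theta = np.arctan2(x, z)                 # angle from the element axis
a, lam = 0.3e-3, 0.3e-3                  # element width, wavelength (assumed)
roi = directivity(theta, a, lam) > 0.5   # keep pixels the element "sees" well
print("ROI fraction:", roi.mean())
```

In a multi-position scan the masks from all transducer positions would be combined, so a pixel belongs to the ROI if at least one element responds to it strongly.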

For 2-D PAT, due to the weaker directivity effect within the imaging plane of the linear probe, we expected better performance from the DAS reconstruction. As Figs. 7(f) and 11(f) show, however, DAS artifacts appear as lines parallel to the face of the transducer; three lines, corresponding to the three positions of the transducer, are visible.

The DPARS algorithm provides an image with fewer artifacts and considerably higher CNR. In our experimental 2-D tomography, we obtained CNR and RMS values close to those of the simulation study. The artifact in the DPARS image was similar to that in the simulation study, appearing as a low-frequency wave. DAS, however, could not recover the circle completely. Noise, non-uniformity of the laser beam, and errors in the assumed geometry of the detection surface may all affect the reconstruction accuracy. We used the same detection surface in the reconstruction process for both DAS and DPARS, so the geometrical errors are identical for both methods. These will be studied in greater detail in the future, but our first results indicate that sparsity regularization enables us to compensate for such errors.

Generally, an ideal sharp cutoff window is applied to the DCT coefficients. As a result, ringing artifacts appear near sharp transitions in the DPARS images. All the simulation and experimental results of FDPARS show that these ringing artifacts are reduced by windowing in the DCT domain. The images improve mostly in terms of CNR, because ringing effects are most severe near step discontinuities.
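The effect of replacing the ideal cutoff with a smooth window can be sketched in 1-D: truncating the DCT of a step with a sharp window produces Gibbs-like overshoot, while letting the window decay over a radius ${r}_{dec}$ suppresses it. The raised-cosine taper below is one plausible choice of window, not necessarily the exact FDPARS window:

```python
import numpy as np

def dct1(n):
    """Orthonormal 1-D DCT-II matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    C[0] *= 1 / np.sqrt(2)
    return C

n, r_c, r_dec = 256, 0.3, 0.15
C = dct1(n)
step = (np.arange(n) >= n // 2).astype(float)   # sharp edge, like an absorber boundary
coef = C @ step

r = np.arange(n, dtype=float)
sharp = (r <= r_c * n).astype(float)            # ideal cutoff window
taper = 0.5 * (1 + np.cos(np.pi * (r - r_c * n) / (r_dec * n)))
taper[r <= r_c * n] = 1.0                       # flat passband
taper[r > (r_c + r_dec) * n] = 0.0              # fully attenuated tail

rec_sharp = C.T @ (coef * sharp)                # reconstruction with ideal cutoff
rec_taper = C.T @ (coef * taper)                # reconstruction with tapered window
print("overshoot, sharp cutoff :", rec_sharp.max() - 1)
print("overshoot, tapered window:", rec_taper.max() - 1)
```

Running this shows the sharp window overshooting the step noticeably while the tapered window stays close to it, which mirrors the CNR improvement observed for FDPARS near step discontinuities.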

In the DPARS reconstruction of the inclusion and vessel structure of Fig. 8(e), we had a linear system of equations with 10,904 unknowns and 8,168 equations. For such an under-determined problem, some form of regularization is needed. Sparsity regularization helps improve the conditioning of the problem by reducing the number of unknowns to 741. The condition number of the system after sparsity regularization is 40, while it was 123.7 for the Tikhonov-regularized system. Sparsity regularization also improves the processing time. We performed our processing in Matlab® on a standard PC (Intel® Core i7-4770 CPU @ 3.4 GHz, 32 GB RAM). The computing time required to assemble the linear system, Eq. (15), was 17 s. This time depends on the efficiency of the code and can be reduced. The computing time required to solve the resulting system of equations was 19 s with DCT sparsity regularization, versus 228 s with Tikhonov regularization.
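The reduction in unknowns amounts to solving the system in the retained DCT coefficients rather than in the pixel values. The following sketch contrasts the conditioning of the two approaches on a small synthetic problem; the random system matrix, sizes, and damping parameter are arbitrary choices for illustration, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)

def dct1(n):
    """Orthonormal 1-D DCT-II matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    C[0] *= 1 / np.sqrt(2)
    return C

m, n, n_s = 80, 120, 25          # equations, unknowns, retained DCT coefficients
A = rng.standard_normal((m, n))  # stand-in for the under-determined system matrix
D_s = dct1(n).T[:, :n_s]         # columns = low-frequency DCT basis vectors

c_true = rng.standard_normal(n_s)
x_true = D_s @ c_true            # absorber map that is sparse in the DCT domain
b = A @ x_true

# Sparsity-regularized: solve the reduced (m x n_s) system for the coefficients
M = A @ D_s
c_hat, *_ = np.linalg.lstsq(M, b, rcond=None)
x_sparse = D_s @ c_hat

# Tikhonov: keep all n unknowns and damp the normal equations
lam = 1e-2
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print("cond, reduced system :", np.linalg.cond(M))
print("cond, Tikhonov system:", np.linalg.cond(A.T @ A + lam * np.eye(n)))
print("RMS error, sparse    :", np.sqrt(np.mean((x_sparse - x_true) ** 2)))
```

Because the reduced matrix is tall and well conditioned, the least-squares solve is both faster and more stable than damping the full rank-deficient system, consistent with the timing and condition-number figures above.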

We started the 3-D PAT with a small sample to assess the feasibility of the reconstruction approach. DPARS provided an image with better CNR and RMS error. For such 3-D experiments, the large number of data acquisitions and the uncertainty in the transducer location could cause artifacts in the reconstructed images. Nevertheless, DPARS yields images with considerably higher resolution than DAS, as shown in Fig. 12.

## 6. Conclusion

We presented DPARS, a new model-based, deconvolution-based PA reconstruction method that employs sparsity regularization. We have demonstrated that the method works with limited-angle views and in the presence of sensor directivity. The method creates images by solving a large linear system of equations whose number of unknowns is reduced through a sparse representation of the distribution of absorbers. Sparsity regularization enhances the numerical conditioning and shortens the computing time required to generate an image. The DPARS algorithm is categorized as a semi-analytical method.

There are many ways in which our sparsity regularization approach could be improved in future work. For example, instead of using a disk in the frequency domain as the sparsity pattern, prior knowledge of the absorber distribution, provided, for example, by the DAS image, can be used to select a customized sparsity pattern. We will pursue applications of our method in future work to breast and prostate imaging. Another topic to investigate is the use of DPARS for data acquired with a linear probe at a single position.

## Funding

Natural Sciences and Engineering Research Council of Canada (NSERC) Collaborative Health Research Projects (CHRP 446576-13); Canadian Institutes of Health Research (CIHR) Collaborative Health Research Projects (CPG-127772); C.A. Laszlo Chair in Biomedical Engineering.

## Acknowledgments

We would like to thank Analogic Corporation as well as Dr. M. Honarvar and Mr. L. Pan for their help.

## References and links

**1. **C. Sim, H. Kim, H. Moon, H. Lee, J. H. Chang, and H. Kim, “Photoacoustic-based nanomedicine for cancer diagnosis and therapy,” J. Control. Release **203**, 118–125 (2015). [CrossRef] [PubMed]

**2. **M. A. Lediju Bell, X. Guo, D. Y. Song, and E. M. Boctor, “Transurethral light delivery for prostate photoacoustic imaging,” J. Biomed. Opt. **20**(3), 036002 (2015). [CrossRef] [PubMed]

**3. **Y.-S. Chen, W. Frey, S. Kim, K. Homan, P. Kruizinga, K. Sokolov, and S. Emelianov, “Enhanced thermal stability of silica-coated gold nanorods for photoacoustic imaging and image-guided therapy,” Opt. Express **18**(9), 8867–8878 (2010). [CrossRef] [PubMed]

**4. **A. Agarwal, S. W. Huang, M. O’Donnell, K. C. Day, M. Day, N. Kotov, and S. Ashkenazi, “Targeted gold nanorod contrast agent for prostate cancer detection by photoacoustic imaging,” J. Appl. Phys. **102**(6), 064701 (2007). [CrossRef]

**5. **C. Li and L. V. Wang, “Photoacoustic tomography and sensing in biomedicine,” Phys. Med. Biol. **54**(19), R59–R97 (2009). [CrossRef] [PubMed]

**6. **J. Xia and L. V. Wang, “Small-Animal Whole-Body Photoacoustic Tomography: A Review,” IEEE Trans. Biomed. Eng. **61**(5), 1380–1389 (2014). [CrossRef] [PubMed]

**7. **Y. Xu and L. V. Wang, “Time reversal and its application to tomography with diffracting sources,” Phys. Rev. Lett. **92**(3), 033902 (2004). [CrossRef] [PubMed]

**8. **B. T. Cox and B. E. Treeby, “Effect of sensor directionality on photoacoustic imaging: a study using the k-wave toolbox,” in *BiOS* (International Society for Optics and Photonics, 2010), paper 75640I.

**9. **R. Z. Azar, C. Leung, T. K. Chen, K. Dickie, J. Dixon, K.-K. Chan, and L. Pelissier, “An automated breast ultrasound system for elastography,” in 2012 IEEE International Ultrasonics Symposium (IEEE, 2012), pp. 1–4. [CrossRef]

**10. **S. Ma, S. Yang, and H. Guo, “Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction,” J. Appl. Phys. **106**(12), 123104 (2009). [CrossRef]

**11. **H. K. Zhang, M. A. L. Bell, X. Guo, H. J. Kang, and E. M. Boctor, “Synthetic-aperture based photoacoustic re-beamforming (SPARE) approach using beamformed ultrasound data,” Biomed. Opt. Express **7**(8), 3056–3068 (2016). [CrossRef] [PubMed]

**12. **J. Kang, E. K. Kim, G. R. Kim, C. Yoon, T. K. Song, and J. H. Chang, “Photoacoustic imaging of breast microcalcifications: A validation study with 3-dimensional ex vivo data and spectrophotometric measurement,” J. Biophotonics **8**(1-2), 71–80 (2013). [PubMed]

**13. **R. A. Kruger, W. L. Kiser Jr, D. R. Reinecke, and G. A. Kruger, “Thermoacoustic computed tomography using a conventional linear transducer array,” Med. Phys. **30**(5), 856–860 (2003). [CrossRef] [PubMed]

**14. **D. W. Yang, D. Xing, S. H. Yang, and L. Z. Xiang, “Fast full-view photoacoustic imaging by combined scanning with a linear transducer array,” Opt. Express **15**(23), 15566–15575 (2007). [CrossRef] [PubMed]

**15. **L. Pan, R. Rohling, P. Abolmaesumi, S. Salcudean, and S. Tang, “Differentiating fatty and non-fatty tissue using photoacoustic imaging,” in *SPIE BiOS* (International Society for Optics and Photonics, 2014), pp. 894354–894356.

**16. **S. Ma, S. Yang, and H. Guo, “Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction,” J. Appl. Phys. **106**(12), 123104 (2009). [CrossRef]

**17. **D. Yang, D. Xing, H. Gu, Y. Tan, and L. Zeng, “Fast multielement phase-controlled photoacoustic imaging based on limited-field-filtered back-projection algorithm,” Appl. Phys. Lett. **87**(19), 194101 (2005). [CrossRef]

**18. **A. Rosenthal, D. Razansky, and V. Ntziachristos, “Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography,” IEEE Trans. Med. Imaging **29**, 1275–1285 (2010).

**19. **N. A. Rejesh, H. Pullagurla, and M. Pramanik, “Deconvolution-based deblurring of reconstructed images in photoacoustic/thermoacoustic tomography,” J. Opt. Soc. Am. A **30**(10), 1994–2001 (2013). [CrossRef] [PubMed]

**20. **J. Prakash, A. S. Raju, C. B. Shaw, M. Pramanik, and P. K. Yalavarthy, “Basis pursuit deconvolution for improving model-based reconstructed images in photoacoustic tomography,” Biomed. Opt. Express **5**(5), 1363–1377 (2014). [CrossRef] [PubMed]

**21. **K. Wang, S. A. Ermilov, R. Su, H.-P. Brecht, A. A. Oraevsky, and M. A. Anastasio, “An imaging model incorporating ultrasonic transducer properties for three-dimensional optoacoustic tomography,” IEEE Trans. Med. Imaging **30**(2), 203–214 (2011). [CrossRef] [PubMed]

**22. **Q. Sheng, K. Wang, T. P. Matthews, J. Xia, L. Zhu, L. V. Wang, and M. A. Anastasio, “A constrained variable projection reconstruction method for photoacoustic computed tomography without accurate knowledge of transducer responses,” IEEE Trans. Med. Imaging **34**(12), 2443–2458 (2015). [CrossRef] [PubMed]

**23. **K. P. Köstli and P. C. Beard, “Two-dimensional photoacoustic imaging by use of Fourier-transform image reconstruction and a detector with an anisotropic response,” Appl. Opt. **42**(10), 1899–1908 (2003). [CrossRef] [PubMed]

**24. **K. Mitsuhashi, K. Wang, and M. A. Anastasio, “Investigation of the far-field approximation for modeling a transducer’s spatial impulse response in photoacoustic computed tomography,” Photoacoustics **2**(1), 21–32 (2014). [CrossRef] [PubMed]

**25. **D. Queirós, X. L. Déan-Ben, A. Buehler, D. Razansky, A. Rosenthal, and V. Ntziachristos, “Modeling the shape of cylindrically focused transducers in three-dimensional optoacoustic tomography,” J. Biomed. Opt. **18**(7), 076014 (2013). [CrossRef] [PubMed]

**26. **Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. **31**(4), 724–733 (2004). [CrossRef] [PubMed]

**27. **G. Paltauf, R. Nuster, and P. Burgholzer, “Weight factors for limited angle photoacoustic tomography,” Phys. Med. Biol. **54**(11), 3303–3314 (2009). [CrossRef] [PubMed]

**28. **J. Meng, L. V. Wang, L. Ying, D. Liang, and L. Song, “Compressed-sensing photoacoustic computed tomography in vivo with partially known support,” Opt. Express **20**(15), 16510 (2012). [CrossRef]

**29. **K. J. Francis, P. Rajalakshmi, and S. S. Channappayya, “Distributed compressed sensing for photo-acoustic imaging,” in 2015 IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), pp. 1513–1517. [CrossRef]

**30. **H. Gao, J. Feng, and L. Song, “Limited-view multi-source quantitative photoacoustic tomography,” Inverse Probl. **31**(6), 065004 (2015). [CrossRef]

**31. **M. Cao, J. Yuan, S. Du, G. Xu, X. Wang, P. L. Carson, and X. Liu, “Full-view photoacoustic tomography using asymmetric distributed sensors optimized with compressed sensing method,” Biomed. Signal Process. Control **21**, 19–25 (2015). [CrossRef]

**32. **X. Liu, D. Peng, W. Guo, X. Ma, X. Yang, and J. Tian, “Compressed Sensing Photoacoustic Imaging Based on Fast Alternating Direction Algorithm,” Int. J. Biomed. Imaging **2012**, 206214 (2012). [CrossRef] [PubMed]

**33. **S. Bu, Z. Liu, T. Shiina, K. Kondo, M. Yamakawa, K. Fukutani, Y. Someda, and Y. Asao, “Model-based reconstruction integrated with fluence compensation for photoacoustic tomography,” IEEE Trans. Biomed. Eng. **59**(5), 1354–1363 (2012). [CrossRef] [PubMed]

**34. **D. Liang, H. F. Zhang, and L. Ying, “Compressed-sensing photoacoustic imaging based on random optical illumination,” Int. J. Funct. Inform. Personal. Med. **2**(4), 394 (2009). [CrossRef]

**35. **N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Patterned interrogation scheme for compressed sensing photoacoustic imaging using a Fabry Perot planar sensor,” in *SPIE BiOS* (International Society for Optics and Photonics, 2014), p. 894327.

**36. **P. Burgholzer, M. Sandbichler, F. Krahmer, T. Berer, and M. Haltmeier, “Sparsifying transformations of photoacoustic signals enabling compressed sensing algorithms,” in *SPIE BiOS*, A. A. Oraevsky and L. V. Wang, eds. (International Society for Optics and Photonics, 2016), p. 970828.

**37. **S. Tzoumas, A. Rosenthal, C. Lutzweiler, D. Razansky, and V. Ntziachristos, “Spatiospectral denoising framework for multispectral optoacoustic imaging based on sparse signal representation,” Med. Phys. **41**(11), 113301 (2014). [CrossRef] [PubMed]

**38. **Z. Guo, C. Li, L. Song, and L. V. Wang, “Compressed sensing in photoacoustic tomography in vivo,” J. Biomed. Opt. **15**(2), 021311 (2010). [CrossRef] [PubMed]

**39. **J. Provost and F. Lesage, “The application of compressed sensing for photo-acoustic tomography,” IEEE Trans. Med. Imaging **28**, 585–594 (2009).

**40. **Y. Zhang, Y. Wang, and C. Zhang, “Efficient discrete cosine transform model-based algorithm for photoacoustic image reconstruction,” J. Biomed. Opt. **18**(6), 066008 (2013). [CrossRef] [PubMed]

**41. **M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. **58**(6), 1182–1195 (2007). [CrossRef] [PubMed]

**42. **Y. Yang, N. P. Galatsanos, and A. K. Katsaggelos, “Regularized reconstruction to reduce blocking artifacts of block discrete cosine transform compressed images,” IEEE Trans. Circ. Syst. Video Tech. **3**(6), 421–432 (1993). [CrossRef]

**43. **H. Moradi, S. Tang, and S. E. Salcudean, “Deconvolution based photoacoustic reconstruction for directional transducer with sparsity regularization,” in *SPIE BiOS*, A. A. Oraevsky and L. V. Wang, eds. (International Society for Optics and Photonics, 2016), paper 97082D.

**44. **M. Xu and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. **71**(1), 016706 (2005). [CrossRef] [PubMed]

**45. **R. A. Kruger, P. Liu, Y. R. Fang, and C. R. Appledorn, “Photoacoustic ultrasound (PAUS)--reconstruction tomography,” Med. Phys. **22**(10), 1605–1609 (1995). [CrossRef] [PubMed]

**46. **Y. Wang, D. Xing, Y. Zeng, and Q. Chen, “Photoacoustic imaging with deconvolution algorithm,” Phys. Med. Biol. **49**(14), 3117–3124 (2004). [CrossRef] [PubMed]

**47. **E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Probl. **23**(3), 969–985 (2007). [CrossRef]

**48. **E. Bingham and H. Mannila, “Random projection in dimensionality reduction: Applications to image and text data,” in Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’01 (ACM Press, 2001), pp. 245–250. [CrossRef]

**49. **B. Jafarpour, V. K. Goyal, D. B. McLaughlin, and W. T. Freeman, “Transform-domain sparsity regularization for inverse problems in geosciences,” Geophysics **74**(5), R69–R83 (2009). [CrossRef]

**50. **M. Honarvar, R. S. Sahebjavaher, S. E. Salcudean, and R. Rohling, “Sparsity regularization in dynamic elastography,” Phys. Med. Biol. **57**(19), 5909–5927 (2012). [CrossRef] [PubMed]

**51. **A. N. Tikhonov and V. Y. Arsenin, *Solutions of Ill-Posed Problems* (Winston, 1977).

**52. **S.-C. Wooh and Y. Shi, “Optimum beam steering of linear phased arrays,” Wave Motion **29**(3), 245–265 (1999). [CrossRef]

**53. **E. J. Gottlieb, J. M. Cannata, C.-H. Hu, and K. K. Shung, “Development of a high-frequency (>50 MHz) copolymer annular-array ultrasound transducer,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control **53**(5), 1037–1045 (2006). [CrossRef] [PubMed]

**54. **B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. **15**(2), 021314 (2010). [CrossRef] [PubMed]

**55. **H. Zhang and F. Aalamifar, “Feasibility study of robotically tracked photoacoustic computed tomography,” in *SPIE Medical Imaging* (SPIE, 2015).

**56. **H. J. Kang, M. A. L. Bell, X. Guo, and E. M. Boctor, “Spatial Angular Compounding of Photoacoustic Images,” IEEE Trans. Med. Imaging **35**(8), 1845–1855 (2016). [CrossRef] [PubMed]