Photoacoustic imaging has been a focus of clinical research owing to its ability to visualize deep tissue with optical absorption contrast. However, several technical challenges remain before this technique finds its place in the clinic. One of these is the occurrence of reflection artifacts, which may lead to image misinterpretation. Here we propose a new method that uses multiple wavelengths to identify and remove reflection artifacts. By imaging the sample at multiple wavelengths, the spectral response of the features in the photoacoustic image is obtained. We assume that the spectral response of a reflection artifact correlates better with the image feature of its corresponding real absorber than with other features in the image. Based on this, reflection artifacts can be identified and removed. We experimentally demonstrate the potential of this method for real-time identification and correction of reflection artifacts in photoacoustic images, in phantoms as well as in vivo, using a handheld photoacoustic imaging probe.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
In the last decade, significant progress has been made in translating photoacoustic imaging (PAI) into the clinic . This technique uses the photoacoustic (PA) effect, in which materials absorb short-pulsed light and generate ultrasound (US) waves. The US waves can be detected using US transducers to reconstruct the absorbing structures. Since US waves in tissue experience orders of magnitude weaker scattering than light, PAI provides optical absorption contrast and can image deeper than purely optical imaging techniques, at ultrasonic resolution. Exploiting these properties, current research is investigating clinical applications such as imaging of breast cancer [2, 3], rheumatoid arthritis [4, 5], and atherosclerosis . Additionally, multispectral PAI strengthens the advantages of this technique for screening and monitoring human diseases, for instance by examining the oxygen saturation of hemoglobin (sO2) in a lesion  or characterizing different tissues .
Recent research has focused on developing compact and affordable PAI systems. Integration of the laser source, especially a low-cost laser source , into a commercial handheld US probe has been proposed for clinical use of PAI [7–12]. However, one of the major drawbacks of using a linear US transducer array is the occurrence of reflection artifacts (RAs) due to its limited view angle. As photoacoustically generated US waves propagate in all directions, the waves propagating away from the transducer array can be reflected back towards it by acoustic heterogeneities such as bone and tendon, causing RAs in the acquired photoacoustic image. The RAs appear at larger depths than the real absorbers, leading to misinterpretation of the acquired images. For clinical usage, real-time correction of RAs in PAI is of fundamental importance.
RAs are also called in-plane artifacts, one of two types of artifacts (clutter) in PAI. The other type is out-of-plane artifact . The term “plane” represents the imaging plane defined by the US transducer array. Since the laser beam excites a large volume, absorbers which are not in the imaging plane absorb the light and generate signals. If the out-of-plane sensitivity of the transducers is high enough, these absorbers appear in the acquired image resulting in out-of-plane artifacts (direct out-of-plane artifacts). If there is an acoustic reflector located underneath these out-of-plane absorbers, out-of-plane RAs (indirect out-of-plane artifacts) can be present in the acquired image . In this work, we aim to tackle RAs (in-plane artifacts).
Several methods for reducing RAs have been presented [14–18]. Deformation Compensated Averaging (DCA)  employs tissue deformation for de-correlating the artifact by slightly palpating the tissue. This technique requires a well-trained operator and sufficient tissue deformation, and only works for easily deformable tissue. Localized vibration tagging (LOVIT) , introduced by Jaeger, uses a similar principle as DCA, but applies an acoustic radiation force (ARF) aimed at the artifact in the focal region of the ultrasonic beam instead of tissue palpation. This is a promising approach to overcome the disadvantages of DCA. However, it can only reduce artifacts based on the deformation of tissue in the US focal region, which limits the real-time capability and raises safety challenges. Recently, LOVIT has been further improved by using multiple foci . Another method exploits acoustic tissue information by inversion of a linear scattering model using plane-wave US measurements . This method has to match PA and US measurements and requires numerous plane-wave angles, limiting its real-time performance. Schwab then introduced an advanced interpolation approach to significantly reduce the required number of plane waves in a linear scattering medium . Allman introduced a convolutional neural network to remove RAs of point-like sources with high accuracy . However, since the network is trained with simulated data, its accuracy might be negatively affected in in vivo situations.
Previously Singh introduced a method, photoacoustic-guided focused ultrasound (PAFUSion), using focused ultrasound or synthetic backpropagation to mimic PA sources and thus identify the RAs [17, 21, 22]. This method can efficiently reduce the RAs; however, it has several limitations: mimicking the PA source is limited by the angular aperture of the US probe; numerous additional US images are needed, challenging real-time artifact reduction; the PA sources (skin, blood vessels) must be perpendicular to the imaging plane, which requires a demanding alignment effort; and the PA signal from the source and the signal mimicked by US must match each other in amplitude and frequency content, which might negatively affect the accuracy of the method.
In this paper, we propose a new method that exploits multispectral PAI to identify and remove RAs. By imaging at multiple wavelengths, the PA spectral responses of the features in the acquired image can be obtained. Our method is based on the assumption that RAs correlate better with the image features of their corresponding original absorbers than with other features, exposing them as suspicious artifacts. In addition, RAs appear at larger depths and have weaker signals than the original image feature. Combining these criteria allows the RAs to be revealed and removed.
To test the method, a handheld probe with integrated diode lasers was used for PAI. These diode lasers emit light at 4 wavelengths (808, 915, 940 and 980 nm). We performed experiments in phantoms and in vivo. Results show that this is a promising method for correcting RAs, potentially in real-time.
2.1. Photoacoustic imaging
PAI is an imaging technique that uses pulsed laser irradiation to generate US waves, which result from pressure changes due to thermal expansion and relaxation. The generated initial pressure is described as [23–25]:
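Under the usual assumptions of thermal and stress confinement, the initial pressure is commonly written in terms of the Grüneisen parameter Γ, the optical absorption coefficient μa, and the local light fluence Φ; a standard form consistent with the description above is:

```latex
p_0(\mathbf{r}) = \Gamma \, \mu_a(\mathbf{r}) \, \Phi(\mathbf{r})
```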
Light propagating in the tissue is scattered and absorbed. Since scattering and absorption are strongly dependent on the wavelength, light at different wavelengths reaches different depths [26, 27]. Therefore, the light fluence inside the tissue depends on both the excitation wavelength and the position.
The absorption coefficient, μa(λ), is a wavelength-dependent optical property of the absorber. The generated initial pressure, p0, can thus be rewritten as a function of the excitation wavelength and the local position:
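With the wavelength dependence of both the absorption coefficient and the fluence made explicit, a standard form of this relation is:

```latex
p_0(\lambda, \mathbf{r}) = \Gamma \, \mu_a(\lambda, \mathbf{r}) \, \Phi(\lambda, \mathbf{r})
```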
2.2. Reflection artifacts in photoacoustic imaging
Figure 1 illustrates the principle of RAs in PAI. A part of the generated US waves (blue) is reflected at the acoustic reflector, as seen in Fig. 1(a). The reflected US waves (red) propagating back to the detector resemble a virtual acoustic source, the so-called RA, located at a larger depth. Figure 1(b) is a reconstructed PA image of a phantom representing this situation. The phantom was made of a black thread placed above a plastic petri dish lid, with demi-water as an acoustic coupling medium. An RA at a larger depth is clearly visible in this image.
Consider a clinical scenario in which a few blood vessels are located above a tumor. The tumor can reflect the US signals generated in the blood vessels, causing RAs surrounding the tumor. This can negatively affect the ability to assess tumors based on the oxygen saturation of hemoglobin [3, 28].
The principle of our method is based on Eq. (2) where the pixel value in the acquired PA image represents the generated initial pressure as a function of the local light fluence and the excitation wavelength. Exploiting this principle with multi-wavelength PAI, a sequence of PA images with multiple wavelengths of light is obtained. As all images are of the same region of interest (ROI), they show the same structure of the sample. Studying the changes of the pixel values reveals the spectral responses of absorbers in the images. Our method relies on the following two assumptions:
- 1. Absorbers with identical optical properties located at different positions give different spectral responses due to different local light fluence.
- 2. Both direct and reflected PA signals convey the optical properties of the source.
If the above two assumptions are fulfilled, the spectral response of RAs is identical to the spectral response of their source (real absorber) and two identical absorbers will not be misidentified as one reflection artifact of the other. The first assumption is discussed further at the end of this section.
Figure 2 shows the flowchart of the method. Images of the sample are acquired at multiple wavelengths. Of the acquired images, the one giving the strongest signal is selected for segmentation, which detects image features. The features extracted in the segmentation step are applied to all other images to obtain their spectral responses. This information is then used in the RA correction step to identify and remove RAs. The corrected image, which is still a segmented image, is further processed through the de-segmentation step to recover the shape of the remaining features, giving the final corrected image.
In the segmentation step, an automatic segmentation algorithm based on Sobel edge detection  is implemented to detect image features. The Sobel edge threshold is computed automatically in Matlab.
Figure 3 illustrates this segmentation method. Figure 3(a) is a sample PA image representing two blood vessels (upward blue arrows) and their reflections (downward yellow arrows). Ideally, these features are segmented properly. However, applying a threshold can lead to over-thresholding or under-thresholding. In the case of over-thresholding, weak edges cannot be detected, resulting in feature loss. Figure 3(b) shows an over-thresholding case where the bottom features are not detected. In contrast, Fig. 3(c) shows an under-thresholding case, in which several features are detected as one single feature.
We avoid over-thresholding by choosing half of the threshold calculated by Matlab. As a consequence, under-thresholding might occur. We therefore further process the image with a peak-process: we find all peaks in the image and then set a band of pixels surrounding each peak to zero. Under-thresholding is significantly reduced after this step, giving separate features, as seen in Fig. 3(d). This might lead to over-segmentation, in which one absorber is segmented into a number of features. However, since these features are parts of one absorber, they share the spectral response of that absorber.
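A rough sketch of this segmentation in Python with NumPy/SciPy is given below. The halved Sobel threshold (approximated here by half the RMS gradient) and the peak-process are our approximations of the Matlab implementation; all function and parameter names are ours.

```python
import numpy as np
from scipy import ndimage

def segment_features(img, peak_margin=2):
    """Sketch of the paper's segmentation: Sobel edges with a halved
    threshold, a peak-process to split merged features, then
    connected-component labelling. Threshold choice and margins are
    assumptions, not the paper's exact values."""
    # Sobel gradient magnitude
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad = np.hypot(gx, gy)
    # Matlab computes an automatic edge threshold; we approximate it
    # with the RMS gradient, then halve it to avoid over-thresholding
    thr = 0.5 * np.sqrt(np.mean(grad ** 2))
    mask = grad > thr
    # Peak-process: keep local maxima of the image inside the mask and
    # zero a small band of pixels around each peak to split merged features
    peaks = (img == ndimage.maximum_filter(img, size=5)) & mask
    ring = ndimage.binary_dilation(peaks, iterations=peak_margin) & ~peaks
    mask = mask & ~ring
    # Label the remaining connected regions as image features
    labels, n = ndimage.label(mask)
    return labels, n
```

The pixels zeroed around each peak are the ones restored later in the de-segmentation step, so no real signal is permanently lost.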
To obtain the features' spectral responses, the features detected in the segmentation step are applied to all other images. For each feature, the maximum pixel value is taken in each of the images, giving that feature's spectral response. The spectral responses of all features are then compared to each other using the Pearson correlation coefficient :
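For two spectral responses X = (x1, …, xN) and Y = (y1, …, yN), with N the number of wavelengths (here N = 4), the Pearson correlation coefficient has the standard form:

```latex
\rho_{X,Y} = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}
                  {\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}}
```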
In the RA identification and removal step, a threshold, ρth, is applied to separate high correlation coefficients from the rest. Features whose spectral responses correlate above ρth are grouped together as suspicious RAs. In addition, RAs appear at a larger depth and have a weaker signal than the corresponding real absorber due to the longer propagation path and attenuation. An extra condition is therefore used to identify RAs: an RA must be at least a minimum distance Δdmin deeper than its real absorber, as described at the end of this section. Features in each group are analyzed based on these conditions to identify and remove RAs.
RAs are removed by setting the pixel values of the RA features to zero. Figure 4(a) is the corrected version of Fig. 3(a); features 8, 9, and 11 detected in Fig. 3(d) are removed. However, this corrected image is still a segmented image. To recover the shape of the remaining features, the de-segmentation step is applied: all pixels surrounding the remaining peaks that were removed in the peak-process are restored, giving the final corrected image, seen in Fig. 4(b).
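The identification rule described above (correlation above ρth, at least Δdmin deeper, and weaker signal) can be sketched as follows. The function and argument names are ours, and the per-feature depths and amplitudes are assumed to be available from the segmentation step.

```python
import numpy as np

def identify_ras(spectra, depths, amps, rho_th=0.95, d_min=1.7):
    """Flag feature j as a reflection artifact of feature i when their
    spectral responses correlate above rho_th, j lies at least d_min
    (mm) deeper than i, and j has a weaker amplitude.
    spectra: (n_features, n_wavelengths) array of spectral responses.
    Returns a boolean array marking suspected RAs."""
    n = len(spectra)
    corr = np.corrcoef(spectra)          # pairwise Pearson coefficients
    is_ra = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if (corr[i, j] > rho_th
                    and depths[j] >= depths[i] + d_min
                    and amps[j] < amps[i]):
                is_ra[j] = True          # j is a suspected RA of i
    return is_ra
```

Removal then amounts to zeroing the pixels of every flagged feature before de-segmentation.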
Segmentation benefits image feature analysis; however, a properly segmented image might not always be obtained. Therefore, another approach without segmentation is also implemented for analyzing the images. In this approach, each pixel of the image is considered as an object for studying the spectral responses. Specifically, the spectral responses of all pixels, rather than of features, are correlated with each other. The analysis based on the correlation coefficient is the same as that used for segmented features.
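A minimal sketch of this segmentation-free variant, assuming the multi-wavelength data is stacked as (wavelengths, height, width); note that the full pixel-pair correlation matrix becomes very large for realistic image sizes, which is why memory and vectorization matter here.

```python
import numpy as np

def pixelwise_correlation(stack):
    """Segmentation-free variant: treat every pixel as its own feature.
    stack has shape (n_wavelengths, H, W); the result [p, q] is the
    Pearson correlation between the spectral responses of pixels p and
    q (row-major flattened order)."""
    n_wl, h, w = stack.shape
    flat = stack.reshape(n_wl, h * w)        # one column per pixel
    return np.corrcoef(flat, rowvar=False)   # (H*W, H*W) matrix
```

Each row of the resulting matrix is the correlation map of one pixel, from which RAs can be flagged with the same depth and amplitude conditions as before.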
A comparison of the method with and without segmentation will be presented in the experimental results section, and further discussed in the discussion section.
In a highly scattering medium, the light fluence in a local region can be nearly homogeneous; thus two identical absorbers in that region might have the same spectral response, giving a correlation coefficient exceeding ρth. In that case, assumption 1 no longer holds. To avoid this, a minimum distance in depth between two features, Δdmin, is used as an extra condition to assess whether one feature is an RA of the other or whether they are two separate absorbers with the same spectral response. The value of Δdmin is related to the value of ρth. In other words, Δdmin defines a region below a PA image feature in which no other image features are assessed as its RAs. In addition, if one absorber is over-segmented, one segment could otherwise be considered an RA of another; this is also avoided by Δdmin. To determine Δdmin, measurements were performed; they will be reported in the experimental results section and discussed further in the discussion section.
Experiments were carried out using a handheld PAI probe, depicted in Fig. 5. The handheld probe is connected to a commercial ultrasound scanner MyLabOne (Esaote Europe BV, The Netherlands) for the acquisition of US and PA images. The scanner can acquire data at a maximum sampling frequency of 50 MHz with 12 bit digitization. This device was used in research mode so that raw data could be acquired in an external PC for offline processing.
The US transducer array in the handheld probe has a center frequency of 7.5 MHz with a bandwidth of 100%. It comprises 128 elements with a pitch of 0.3 mm. For our study the central 64 elements were used. Diode lasers integrated into the handheld probe emit light at 4 different wavelengths (808, 915, 940, and 980 nm) at a repetition rate of up to 10 kHz.
Table 1 presents the specifications of the diode lasers working at a repetition rate of 1 kHz. In addition, the output beam angles are 53.1, 55.6, 47.8, and 50.3 degrees at 808, 915, 940, and 980 nm, respectively, because the diode lasers are placed in different stacks and a prism is mounted at the light output. These differences add to the variation in light fluence between the wavelengths.
Offline processing of data was done on a PC (Intel Core i7 3.41 GHz, 8 GB of RAM) running Matlab R2016b.
5. Experimental results
To demonstrate the feasibility of the method, we performed experiments in phantoms as well as in vivo. The PA image reconstruction was done using a Fourier transform based reconstruction algorithm .
In each experiment, 4 laser pulses at 4 different wavelengths followed by 1 US pulse were sent repeatedly, 100 times. 4 PA images at the 4 wavelengths and 1 US image were then acquired by averaging the signals over 100 pulses. The diode lasers were run at a repetition rate of 1 kHz. The US image was used to verify the locations of the absorbers and the corresponding RAs.
A phantom was made of two black threads, with a diameter of 200-250 μm, and a petri dish lid (Greiner Bio-One GmbH, Germany), with a thickness of 750 μm, as an acoustic reflector, seen in Fig. 6(a). A schematic drawing of a cross-section of the phantom is shown in Fig. 6(b). The lid was positioned underneath one black thread as an acoustic reflector. The phantom was fixated on a mount (CP02/M, Thorlabs, Germany) in a solution of 3.5% Intralipid 20% (Fresenius Kabi, The Netherlands) in demi-water. This solution served as an acoustic coupling medium as well as an optically scattering background. The reduced scattering coefficient of the solution was estimated as μs′ = 6 cm−1 at a wavelength of 900 nm based on . Figure 6(c) shows a combined PA and US image illustrating a cross-section of this phantom. The gray-color part is the US image, showing the two surfaces of the lid which reflect US waves. The hot-color part is the PA image, in which the two black threads are visualized at the expected positions relative to the lid. Underneath the lid, some more features are visible. As there were no absorbers underneath the lid, these features are RAs of the black thread above the lid. In the PA image, there is a long “tail” below the absorber above the lid, which is probably a reconstruction artifact.
The 4 PA images of the ROI corresponding to the 4 wavelengths are shown in Fig. 6(d). The intensity of the reflections was not strong, possibly because the acoustic reflectivity of the petri dish lid in this coupling medium was not high. In the US image, by contrast, the US transducer array generated higher pressures than the PA signals, resulting in a high intensity of the reflector in the acquired US image. The image acquired at 940 nm had the strongest signal; therefore this image was selected for segmentation.
Figure 7(a) shows the segmented image acquired at 940 nm with numbered features (see also Fig. 15, Appendix 1). The spectral responses of several features are shown in Fig. 7(b) (each spectral response was normalized to its maximum value).
The two black threads, made of the same material, have identical optical absorption properties. Nevertheless, Fig. 7(b) shows an observable difference in the spectral responses of these two absorbers (feature 57 for one thread and features 24 and 26 for the other). This matches the first assumption described in the method section. On the other hand, the spectral responses of feature 57 and its RAs, features 54 and 58, are nearly identical. In other words, the optical properties of the absorber were conserved in the US waves in spite of reflection, confirming the second assumption.
Mutual correlations of the obtained spectral responses are calculated to quantify their similarity. Table 2 shows the correlation coefficients of these responses. The correlation coefficients between feature 57 and features 52, 54, 58, and 64 are high (close to 1), and these features appear at larger depths than feature 57 with lower signals, revealing that they are RAs of feature 57. The correlation coefficients of feature 57 with features 24 and 26 are 0.937 and 0.848, respectively. A threshold between 0.94 and 0.97 would thus be sufficient to separate features 24 and 26 from features 52, 54, 58, and 64. Feature 51, which is likely also an RA of feature 57, however, has a low correlation coefficient. This can be explained by the intensity of feature 51 being close to the background level, which strongly affects its spectral response and thus the correlation.
In this experiment, ρth = 0.95 was applied, identifying features 52, 54, 58, and 64 as RAs. These features were subsequently removed from the image, giving a corrected segmented image, Fig. 8(a). This corrected image was then de-segmented to obtain the final corrected image, seen in Fig. 8(b). Feature 51 was not removed; however, as this feature is close to the noise level, it does not considerably affect the interpretation of the image. Figure 8(c) shows a comparison of the acquired PA image and the final corrected image.
5.2. In vivo
We also assessed the method with in vivo experiments. Our experiments focused on fingers, where bones lie close beneath the skin, giving clear RAs. Fingers were placed ~7 mm underneath the probe and demi-water was used as the US coupling medium.
Figure 9 shows a cross-section of a healthy volunteer's finger imaged in vivo; Fig. 9(d) is a photo depicting the experimental configuration. The acquired PA and US images are presented in Figs. 9(a) and 9(b), respectively. The US image is compared with Fig. 9(c), adapted from , revealing the periosteum and bone. The features beneath the periosteum and bone in the PA image are therefore disclosed as RAs of the skin and blood vessels.
The finger was imaged at the 4 wavelengths, and the acquired images were processed and analyzed in the same way as described in the phantom section. The processed image with numbered features (see also Fig. 16, Appendix 2) and several spectral responses are shown in Fig. 10. Features 56 and 63, 77 and 62, and 32 and 50 have similar spectral responses (the correlation coefficient of each of these pairs is higher than 0.99), validating the second assumption.
The spectral responses of features 32 and 56 are observably different from each other as they represent different chromophores (melanin and blood). Interestingly, features 56 and 77 representing two blood vessels also have different spectral responses. This might be a result of different sO2 in these blood vessels, or of different local spectra of the excitation light.
It is worth noting that feature 50, which is an RA of the skin, has a higher intensity than features 18 and 124, which are two blood vessels. Increasing the segmentation threshold to remove feature 50 would therefore also remove features 18 and 124, leading to the loss of real features.
Figure 11 shows acquired and corrected images where all RAs were identified and removed.
ρth = 0.95 was used in this experiment (see Data File 2 for a complete correlation coefficient table).
5.3. Minimum vertical distance
As mentioned in the method section, in a region having nearly homogeneous light fluence, two identical absorbers with a vertical distance (along depth) of less than Δdmin might have the same spectral responses, giving a correlation exceeding ρth, which violates assumption 1 formulated in the method section. The method, therefore, might misidentify and miscorrect one absorber as an RA of the other.
To evaluate the correlation as a function of the vertical distance Δd, we performed several experiments comparing the spectral responses of two identical absorbers in a medium mimicking tissue optical properties. The medium was a solution of 2% Intralipid 20%, with an estimated μs′ = 3.5 cm−1 at a wavelength of 900 nm , a 10−4 volume fraction of India ink (Royal Talens, The Netherlands), and a 4×10−4 volume fraction of black ecoline (Royal Talens, The Netherlands) in Milli-Q water. The absorption coefficient of this solution (without Intralipid) was 0.599, 0.456, 0.534, and 0.776 cm−1 at 808, 915, 940, and 980 nm, respectively, measured using a photo-spectrometer (UV-2600, Shimadzu, The Netherlands). Two black suture wires (USP 3/0, diameter of 0.24 mm, Vetsuture Nylon, The Netherlands) mimicked two identical absorbers. These two wires were embedded in the solution; one was fixated and the other was attached to a motorized translation stage (MTS50A-Z8, Thorlabs, Germany) to adjust the vertical distance between the two wires.
At each distance Δd, PA images were acquired using the four wavelengths. The images were then processed to calculate the correlation coefficient of the spectral responses of the two wires. The measurement was repeated 4 times at each distance. The average, maximum, and minimum values are shown in Fig. 12(a), presenting the behavior of the correlation coefficient with distance. Figure 12(b) shows the spectral responses of the two wires for a measurement at Δd = 0.68 mm and a measurement at Δd = 2.1 mm. The correlation coefficients in these two measurements were 0.9916 and 0.9126, respectively.
For a correlation coefficient threshold of 0.95, Δdmin is 1.7 mm. The vertical error bars in Fig. 12(a) are large. The reason might be that the correlation coefficient of the spectral responses was calculated from only 4 wavelengths. For the Pearson correlation coefficient, 3 data points can already give a meaningful coefficient; however, a small number of data points widens the confidence interval .
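To illustrate how few data points widen the interval, the standard Fisher z-transform approximation of the 95% confidence interval (a textbook formula, not part of the paper's processing) with n = 4 spans almost the entire range of possible coefficients:

```python
import numpy as np

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a Pearson coefficient
    via the Fisher z-transform; the standard error is 1/sqrt(n - 3),
    so small n gives a very wide interval."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)

lo, hi = fisher_ci(0.95, n=4)   # lo is about -0.13, hi is about 0.999
```

With a measured coefficient of 0.95 and only 4 wavelengths, the interval even includes negative correlations, consistent with the large spread observed in Fig. 12(a).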
5.4. Comparison of the method with and without segmentation
Two approaches for analyzing acquired images were mentioned in the method section, with and without segmentation.
For the method without segmentation, the spectral response of each pixel of the image is compared to those of all other pixels. Figure 13(a) shows again the in vivo image presented in the in vivo section. Figure 13(b) highlights a pixel (the top red pixel) in the skin. The spectral response of this pixel is compared to that of every pixel at least 2 mm below it, giving the correlation coefficient map; red indicates a correlation coefficient above 0.95. Figure 13(c) depicts the identified RAs (yellow pixels, except the considered pixel) of the PA signal in the considered pixel.
Figure 14 shows the results using the two approaches. The top images are an acquired PA image, the corrected image with segmentation, and the corrected image without segmentation of the phantom. The bottom images are images in vivo in the same order.
In the corrected images without segmentation, the absorber in the bottom left corner of the phantom is visibly shrunk. This could be due to the small number of data points for the Pearson correlation coefficient.
The “tail” of the absorber in the top right corner of the phantom is likely a reconstruction artifact. A part of it was removed in the corrected image without segmentation. The reason is that the removed pixels in the “tail” were identified as RAs of the absorber.
The threshold for the method without segmentation was the same as for the method with segmentation (0.95). However, Δdmin was set to 2 mm to compensate for the size of the features.
In in vivo imaging, the RAs of blood vessels look like excess vessels and could be erroneously recognized as angiogenesis or hyper-vascularization, hallmarks of various diseases such as cancer or rheumatoid arthritis. For users with little experience of RAs, misdiagnosis might occur. Simple solutions such as thresholding or limiting the imaging depth may be able to remove RAs in cases where the RAs are not accompanied by real image features in the same amplitude or depth range. If such image features exist, however, these simple solutions are not appropriate.
Compared to previously reported methods for reducing RAs, the proposed method offers significant advantages. First of all, the method works automatically and performing it does not require experience or training of the users, as is the case with DCA or PAFUSion in which the users have to hold the probe perpendicularly to the acoustic reflectors. Secondly, no US image is needed. Acquiring US images with multiple plane wave angles is more time consuming and comes at a higher processing expense. Thirdly, as the method does not need US images to detect RAs, matching features between PA and US images (as in PAFUSion) is not necessary resulting in detected RAs being completely removed in this proposed method. Fourthly, unlike deep learning, the method does not require training with various generated PA distribution sizes and geometries, and acoustic characteristic of the sample which might be unknown in in vivo imaging. Finally, the proposed method enhances the advantages of multispectral PAI.
Out-of-plane artifacts (direct and indirect out-of-plane artifacts) can appear in the acquired PA images, especially artifacts caused by the skin. PAFUSion does not work for indirect out-of-plane artifacts. In contrast, they can be treated by the proposed method if the direct out-of-plane artifacts also appear in the image. However, neither method can handle direct out-of-plane artifacts; another method is needed for these. Our future work will focus on a complete method combining the proposed method with a new method for out-of-plane artifacts.
The proposed method exploits the variation of the light distribution between different wavelengths. In other words, the local illumination spectrum needs to differ between depths for the method to work. In a scenario where the absorption and scattering spectra are flat, resulting in a similar illumination spectrum at all depths, this method will not work. However, this is unlikely in clinical imaging, where the absorption coefficient varies with wavelength and tissue type .
Δdmin, which defines a region below an original PA feature in which the method will not correct its RAs, is the main limitation of this method. In our experiments demonstrating the method, Δdmin was 1.7 mm, corresponding to a correlation coefficient threshold of 0.95. However, this value is not a critical limitation in clinically relevant scenarios. In in vivo imaging, with different layers of tissue such as skin and muscle, the optical heterogeneity would be stronger, resulting in a smaller Δdmin than in this study. Additionally, Δdmin can be further reduced by selecting wavelengths at which the light fluence varies strongly with depth.
The proposed method uses PA information from PA images acquired at multiple wavelengths. In principle, one PA image can be reconstructed from one laser pulse per excitation wavelength; therefore, the variability between laser pulses is not a limiting factor for the principle of the proposed method. In our experiments, to improve the signal-to-noise ratio, we averaged over 100 laser pulses per excitation wavelength. In this case, pulse-to-pulse variation in beam shape and pointing stability might impede the performance of our method. However, our results show that this is not the case for our probe.
In this method, the pixel values in the acquired image are considered to represent the generated initial pressure. However, the reconstruction algorithm can itself produce image artifacts and thus affect the method. We did not observe a strong effect of reconstruction artifacts in our work. Nevertheless, an appropriate reconstruction algorithm should be chosen.
The method might not be able to identify an RA if it appears at the same position as another absorber or another RA. In that case, they appear as one single feature, and the spectral response of this feature will not correlate highly with that of the RA's real absorber. Another approach, such as PAFUSion, is then needed to identify the RAs.
A small number of data points might affect the Pearson correlation coefficient . The relation between the number of wavelengths and the efficacy of our method will be investigated in future work.
Our method is a post-processing method; compared to pure PAI, it therefore adds computation and reduces the frame rate. However, for images with a size of 512 × 64 pixels (~25 mm depth), correction with and without segmentation takes ~0.2 seconds and ~2-3 seconds in Matlab, respectively, showing the potential of the method to run in real-time. For the method without segmentation, if the memory of the computing device is sufficient (~12 GB in this case), the data can be vectorized, reducing the computation time significantly. Since the imaging depth capability of this handheld probe is ~10 mm, the size of the images can be reduced, resulting in a less expensive calculation. In addition, a GPU can be utilized for further acceleration of the method, especially without segmentation.
The proposed method can identify and remove in-plane reflection artifacts (RAs) in PAI by exploiting the local light fluence variation between multiple wavelengths. Experiments in phantoms and in vivo were performed as a proof of concept, using four wavelengths: 808, 915, 940 and 980 nm. The results show the potential of the method for correcting RAs in real time without the need for separate ultrasound images. In addition, the experiments were carried out using a compact PAI system suitable for clinical use, demonstrating the practical applicability of this method for medical use.
Appendix 1 Phantom’s segmented image
Appendix 2 Segmented in vivo image
Funding
European Union’s Horizon 2020 research and innovation programme (CVENT, H2020-731771).
Acknowledgments
The authors would like to thank David Thompson and Bart Fischer, University of Twente, for their discussions. The authors are also grateful to Tom Knop and Wilma Petersen, University of Twente, for their help with phantom preparation.
Disclosures
The authors declare that there are no conflicts of interest related to this article.
References and links
2. M. Heijblom, D. Piras, F. M. van den Engh, M. van der Schaaf, J. M. Klaase, W. Steenbergen, and S. Manohar, “The state of the art in breast imaging using the Twente Photoacoustic Mammoscope: results from 31 measurements on malignancies,” Eur. Radiol. 26(11), 3874–3887 (2016). [CrossRef] [PubMed]
3. M. Toi, Y. Asao, Y. Matsumoto, H. Sekiguchi, A. Yoshikawa, M. Takada, M. Kataoka, T. Endo, N. Kawaguchi-Sakita, M. Kawashima, E. Fakhrejahani, S. Kanao, I. Yamaga, Y. Nakayama, M. Tokiwa, M. Torii, T. Yagi, T. Sakurai, K. Togashi, and T. Shiina, “Visualization of tumor-related blood vessels in human breast by photoacoustic imaging system with a hemispherical detector array,” Sci. Rep. 7(1), 41970 (2017). [CrossRef] [PubMed]
4. P. J. van den Berg, K. Daoudi, H. J. Bernelot Moens, and W. Steenbergen, “Feasibility of photoacoustic/ultrasound imaging of synovitis in finger joints using a point-of-care system,” Photoacoustics 8, 8–14 (2017). [CrossRef] [PubMed]
5. J. Jo, G. Xu, M. Cao, A. Marquardt, S. Francis, G. Gandikota, and X. Wang, “A functional study of human inflammatory arthritis using photoacoustic imaging,” Sci. Rep. 7(1), 15026 (2017). [CrossRef] [PubMed]
7. K. Daoudi, P. J. van den Berg, O. Rabot, A. Kohl, S. Tisserand, P. Brands, and W. Steenbergen, “Handheld probe integrating laser diode and ultrasound transducer array for ultrasound/photoacoustic dual modality imaging,” Opt. Express 22(21), 26365–26374 (2014). [CrossRef] [PubMed]
8. J. J. Niederhauser, M. Jaeger, R. Lemor, P. Weber, and M. Frenz, “Combined ultrasound and optoacoustic system for real-time high-contrast vascular imaging in vivo,” IEEE Trans. Med. Imaging 24(4), 436–440 (2005). [CrossRef] [PubMed]
9. C. Kim, T. N. Erpelding, L. Jankovic, M. D. Pashley, and L. V. Wang, “Deeply penetrating in vivo photoacoustic imaging using a clinical ultrasound array system,” Biomed. Opt. Express 1(1), 278–284 (2010). [CrossRef] [PubMed]
10. C. Haisch, K. Eilert-Zell, M. M. Vogel, P. Menzenbach, and R. Niessner, “Combined optoacoustic/ultrasound system for tomographic absorption measurements: possibilities and limitations,” Anal. Bioanal. Chem. 397(4), 1503–1510 (2010). [CrossRef] [PubMed]
11. K. Sivasubramanian, V. Periyasamy, R. A. Dienzo, and M. Pramanik, “Hand-held, clinical dual mode ultrasound - photoacoustic imaging of rat urinary bladder and its applications,” J. Biophotonics 11(5), e201700317 (2018). [CrossRef] [PubMed]
12. K. Sivasubramanian, V. Periyasamy, and M. Pramanik, “Non-invasive sentinel lymph node mapping and needle guidance using clinical handheld photoacoustic imaging system in small animal,” J. Biophotonics 11(1), e201700061 (2018). [CrossRef] [PubMed]
13. M. K. A. Singh, V. Parameshwarappa, E. Hendriksen, W. Steenbergen, and S. Manohar, “Photoacoustic-guided focused ultrasound for accurate visualization of brachytherapy seeds with the photoacoustic needle,” J. Biomed. Opt. 21(12), 120501 (2016). [CrossRef] [PubMed]
14. M. Jaeger, L. Siegenthaler, M. Kitz, and M. Frenz, “Reduction of background in optoacoustic image sequences obtained under tissue deformation,” J. Biomed. Opt. 14(5), 054011 (2009). [CrossRef] [PubMed]
15. M. Jaeger, J. C. Bamber, and M. Frenz, “Clutter elimination for deep clinical optoacoustic imaging using localised vibration tagging (LOVIT),” Photoacoustics 1(2), 19–29 (2013). [CrossRef] [PubMed]
16. H.-M. Schwab, M. F. Beckmann, and G. Schmitz, “Photoacoustic clutter reduction by inversion of a linear scatter model using plane wave ultrasound measurements,” Biomed. Opt. Express 7(4), 1468–1478 (2016). [CrossRef] [PubMed]
17. M. K. A. Singh and W. Steenbergen, “Photoacoustic-guided focused ultrasound (PAFUSion) for identifying reflection artifacts in photoacoustic imaging,” Photoacoustics 3(4), 123–131 (2015). [CrossRef]
18. D. Allman, A. Reiter, and M. A. L. Bell, “Photoacoustic Source Detection and Reflection Artifact Removal Enabled by Deep Learning,” IEEE Trans. Med. Imaging 37(6), 1464–1477 (2018). [CrossRef] [PubMed]
19. T. Petrosyan, M. Theodorou, J. Bamber, M. Frenz, and M. Jaeger, “Fast scanning wide-field clutter elimination in epi-optoacoustic imaging using comb-LOVIT,” in 2017 IEEE International Ultrasonics Symposium (IUS) (IEEE, 2017), p. 1. [CrossRef]
20. H.-M. Schwab and G. Schmitz, “An advanced interpolation approach for photoacoustic clutter reduction based on a linear plane wave scatter model,” in 2016 IEEE International Ultrasonics Symposium (IUS) (IEEE, 2016), pp. 1–4. [CrossRef]
21. M. K. A. Singh, M. Jaeger, M. Frenz, and W. Steenbergen, “Photoacoustic reflection artifact reduction using photoacoustic-guided focused ultrasound: comparison between plane-wave and element-by-element synthetic backpropagation approach,” Biomed. Opt. Express 8(4), 2245–2260 (2017). [CrossRef] [PubMed]
22. M. K. A. Singh, M. Jaeger, M. Frenz, and W. Steenbergen, “In vivo demonstration of reflection artifact reduction in photoacoustic imaging using synthetic aperture photoacoustic-guided focused ultrasound (PAFUSion),” Biomed. Opt. Express 7(8), 2955–2972 (2016). [CrossRef] [PubMed]
23. V. E. Gusev and A. A. Karabutov, “Laser optoacoustics,” NASA STI/Recon Technical Report A, 93 (1991).
24. A. Oraevsky and A. Karabutov, “Optoacoustic tomography,” Biomedical Photonics Handbook 34, 1–34 (2003).
25. A. A. Oraevsky, S. L. Jacques, and F. K. Tittel, “Measurement of tissue optical properties by time-resolved detection of laser-induced transient stress,” Appl. Opt. 36(1), 402–415 (1997). [CrossRef] [PubMed]
26. A. Bashkatov, E. Genina, V. Kochubey, and V. Tuchin, “Optical properties of human skin, subcutaneous and mucous tissues in the wavelength range from 400 to 2000 nm,” J. Phys. D Appl. Phys. 38(15), 2543–2555 (2005). [CrossRef]
27. V. V. Tuchin, Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnosis (SPIE Press, Bellingham, 2007).
28. M.-L. Li, J.-T. Oh, X. Xie, G. Ku, W. Wang, C. Li, G. Lungu, G. Stoica, and L. V. Wang, “Simultaneous molecular and hypoxia imaging of brain tumors in vivo using spectroscopic photoacoustic tomography,” Proc. IEEE 96(3), 481–489 (2008). [CrossRef]
29. I. Sobel and G. Feldman, “A 3x3 isotropic gradient operator for image processing, presented at a talk at the Stanford Artificial Project,” in Pattern Classification and Scene Analysis, R. Duda and P. Hart, eds. (John Wiley & Sons, 1968), pp. 271–272.
30. J. Benesty, J. Chen, Y. Huang, and I. Cohen, “Pearson correlation coefficient,” in Noise Reduction in Speech Processing (Springer, 2009), pp. 1–4.
31. M. Jaeger, S. Schüpbach, A. Gertsch, M. Kitz, and M. Frenz, “Fourier reconstruction in optoacoustic imaging using truncated regularized inverse k-space interpolation,” Inverse Probl. 23(6), S51–S63 (2007). [CrossRef]
33. R. Putz and R. Pabst, Sobotta: Atlas of Human Anatomy (Elsevier, Urban & Fischer, 2006).
34. D. G. Bonett and T. A. Wright, “Sample size requirements for estimating Pearson, Kendall and Spearman correlations,” Psychometrika 65(1), 23–28 (2000). [CrossRef]