Optica Publishing Group

Hidden phase-retrieved fluorescence tomography

Open Access

Abstract

Fluorescence tomography is a well-established methodology able to provide structural and functional information on the measured object. At optical wavelengths, the unpredictable scattering of light is often considered a problem to overcome rather than a feature to exploit. Advances in disordered photonics have shed new light on the possibilities offered by opaque materials, treating them as autocorrelation lenses able to create images and focus light. In this Letter, we propose tomography through disorder, introducing a modified Fourier-slice theorem, the cornerstone of computed tomography, aiming to reconstruct a three-dimensional fluorescent sample hidden behind an opaque curtain.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Optics in disordered media has become an active field of research in recent years, thanks to advances in light-shaping techniques and to a deeper understanding of the scattering process [1]. Imaging [2,3] and focusing [4] through opaque curtains have been achieved, paving the road toward innovative applications. Among others, they have enabled the use of multimode fiber bundles [5], the on-purpose design of random devices that shape a focus with desired features [6], and disordered-metasurface engineering [7], up to the establishment of quantum-secure cryptography protocols [8]. In all these cases, the speckle statistics play a fundamental role and can be controlled, for example, by inducing speckle amorphity [9,10] or by retaining information about an object obscured behind a diffuser [2]. Imaging based on speckle correlation is a well-established technique imported from astronomy, where it was used to observe bright objects through a turbulent atmosphere [11]. Phase retrieval (PR) algorithms play a major role in this context, and their exploitation has led to a number of imaging applications [12,13]. There is a vast literature on retrieving objects hidden behind a scattering curtain, with remarkable results including the reconstruction of two-dimensional amplitude and phase masks [14]. More recent studies have approached three-dimensional hidden imaging in different ways: resolving sources at different depths by accounting for the speckle-magnification effect [15], characterizing the diffuser’s point-source response and reconstructing via cross correlations [16], or solving a large-scale inverse problem [17].

In this Letter, we address the problem of tomographic reconstruction through disorder. Classical computed tomography (CT) relies on the observation of a sample at different angles (the sinogram), reconstructing its three-dimensional distribution by solving an inverse problem [18]. In projection geometries, the method mostly relies on the Fourier-slice theorem, a mathematical tool that links a measured projection to a planar section of the Fourier space. So far, CT has been applied probing samples with x rays [19], at optical wavelengths [20], and beyond. At optical wavelengths, however, scattering is usually seen as an obstacle. In the following, we present a tomographic method that deals with the light scattered off an opaque glass, based on the inversion of the sample’s autocorrelation sinogram via a PR approach. To this end, we extend the concept behind the Fourier-slice theorem to process autocorrelation measurements rather than direct projections. As a matter of fact, the autocorrelation can be estimated even if the sample is obscured behind an opaque layer, under the so-called memory-effect regime [21]. The basic principle behind this effect is that the speckle pattern generated by a single point source is invariant under translations. If the object lies within the region where the effect holds, this generates, in practice, a random superposition of the object’s projections that globally shares the same autocorrelation as the object itself [2,3]. Since we want to reconstruct a three-dimensional object distribution, we assume the object to be smaller than both the transverse and the axial memory-effect region, as in Ref. [15].

We approach the problem experimentally by hiding a fluorescent three-dimensional sample $ {\cal O} $ behind an opaque diffuser and recording the transmitted intensity field at different angular views, as in fluorescence optical projection tomography (OPT) [20]. Rather than forming images with an objective lens, we observe the speckles generated by the light propagating through an unknown ground-glass diffuser. The setup for incoherent speckle tomography is sketched in Fig. 1. A three-dimensional fluorescent object $ {\cal O} $, absorbing at $ {\lambda _{{\rm abs}}} = 450\;{\rm nm} $ and emitting at $ {\lambda _{{\rm em}}} = 525\;{\rm nm} $ (obtained from Microscopy Education FluorRef-Green), is obscured behind an optical diffuser (Thorlabs, ground glass, grit 220). This opaque curtain is placed in front of a camera (Hamamatsu ORCA flash 4.0, pixel size 6.5 µm) at distance $ {d_1} = 30\;{\rm mm} $ and coupled with a bandpass green filter (Brightline, 542/27 nm). We illuminate the sample with a blue LED at $ {\lambda _{{\rm ill}}} = 450 \pm 18\;{\rm nm} $ (Thorlabs, M450LP1), whose output is relayed by 20 mm and 50 mm focal-length lenses to illuminate the sample uniformly. The portion of the fluorescence that propagates through the diffuser reaches the camera after being scrambled by the diffuser itself. Under these conditions, the object is not directly recognizable, and a speckle pattern forms on the camera sensor. The 14-bit dynamic range of the camera is adequate to resolve the speckle distribution over a diffused background [3], with a typical exposure time of 1 s. The object is mounted on a rotation stage (Physik Instrumente M-037) at distance $ {d_2} = 33\;{\rm cm} $ from the diffuser and is rotated through 360° in steps of 2°. A sequence of speckle patterns is thus obtained as a function of the rotation angle $ \varphi $, with magnification $ M = {d_1}/{d_2} \approx 0.09 $ [21], hence $ 1\;{\rm px} = 72.2\;{\unicode{x00B5}{\rm m}} $.
The recorded patterns exhibit low contrast and distortion toward the corners [22]; we corrected them by dividing each pattern by its low-pass version (Gaussian kernel with $ \sigma = 50\;{\rm px} $). Alternatively, Zernike polynomials could be taken into account [22] for a more accurate wavefront correction.
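This envelope correction can be sketched in a few lines (a minimal sketch assuming NumPy/SciPy; `sigma=50` matches the kernel quoted above, while the function and variable names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flatten_envelope(pattern, sigma=50):
    """Divide a raw speckle image by its Gaussian low-pass version,
    removing the slowly varying background envelope while keeping
    the fine speckle fluctuations."""
    envelope = gaussian_filter(pattern.astype(float), sigma)
    return pattern / np.maximum(envelope, 1e-12)
```

After this division, the low-frequency content of the image is approximately flat, so the subsequent autocorrelation is dominated by genuine speckle fluctuations rather than by vignetting.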


Fig. 1. Sketch of the experimental setup for the imaging of a three-dimensional fluorescent object $ {\cal O} $ obscured behind an opaque diffuser (D) and a narrowband filter (F). The fluorescence is excited with a LED (L), and a camera (C) detects the scrambled wavefront. The setup is simple and contains neither lenses nor objectives, relying instead on speckle autocorrelation properties.


Let us denote by $ {S_\varphi }( {x,y} ) $ the speckle pattern produced by the hidden object $ {\cal O} $ rotated to a given angle $ \varphi $. In Fig. 2(a), we show the corrected speckle patterns acquired at angles $ \varphi = \{{ 0^ \circ },{90^ \circ },{180^ \circ },{360^ \circ }\} $. Although the patterns are visually different, it is difficult to identify the object rotation. Since the memory effect holds also along the axial direction [15], each recorded pattern is the convolution of the object’s projection $ {O_\varphi } $ with the point spread function (PSF) of the diffuser, which in our case is unknown [2]. Thus, each speckle pattern may be written as $ {S_\varphi } = {O_\varphi }*{{\rm PSF}_\varphi } $, where $ * $ indicates the convolution operator. This condition is called isoplanatism: moving the point source through the object does not substantially alter $ {{\rm PSF}_\varphi } $. Since $ {{\rm PSF}_\varphi } $ is the random response to a point source located at the object plane at angle $ \varphi $, its autocorrelation is a sharply peaked function, $ {{\rm PSF}_\varphi } \star {{\rm PSF}_\varphi } \approx \delta $ [2,3]. This implies that if we take the autocorrelation of $ {S_\varphi } $, we have

$$\begin{split}{S_\varphi } \star {S_\varphi } & = \left( {{O_\varphi }*{{{\rm PSF}}_\varphi }} \right) \star \left( {{O_\varphi }*{{{\rm PSF}}_\varphi }} \right)\\ & = \left( {{O_\varphi } \star {O_\varphi }} \right)*\left( {{{{\rm PSF}}_\varphi } \star {{{\rm PSF}}_\varphi }} \right) = {O_\varphi } \star {O_\varphi }.\end{split}$$

Here, $ {{\rm PSF}_\varphi } $ may change with the angle, as long as its autocorrelation remains close to a $ \delta $-function.
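The identity above is easy to verify numerically. In the following sketch (toy sizes and names of our choosing), a two-point “projection” is convolved with a random speckle-like PSF; the side peaks of the object’s autocorrelation survive in the autocorrelation of the scrambled pattern:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Toy projection O: two point emitters offset by (10, 20) pixels.
obj = np.zeros((64, 64))
obj[20, 20] = 1.0
obj[30, 40] = 1.0

psf = rng.random((256, 256))                   # stand-in for the unknown speckle PSF
speckle = fftconvolve(psf, obj, mode='same')   # S = O * PSF

def autocorr(a):
    """Mean-removed autocorrelation, computed as a convolution
    with the doubly flipped array (zero shift at the center)."""
    a = a - a.mean()
    return fftconvolve(a, a[::-1, ::-1], mode='same')

ac = autocorr(speckle)   # approximates the object autocorrelation plus noise
```

The dominant peak sits at zero shift, and twin side peaks at shifts of about (10, 20) pixels reproduce the object’s autocorrelation, even though `speckle` itself shows no visible trace of the two emitters.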


Fig. 2. (a) Speckle patterns acquired rotating the object at angle $ \varphi $. $ {S_\varphi } $ is corrected by its low-pass envelope to resolve speckle fluctuations. (b) Autocorrelation sinogram $ {X_\varphi } $ calculated for each $ {S_\varphi } $. To visualize the sinogram, we slice the stack $ {X_\varphi } $ vertically (red dot) and horizontally (blue dot), showing the results in the corresponding dot-labeled images on the right.


This lets us estimate the autocorrelation of the object’s projection directly from the seemingly random signal $ {S_\varphi } $. The calculation exploits the ergodic property of speckle patterns, averaging over different subwindows of size $ 128 \times 128\;{\rm px} $ sliding within the same camera frame. The number of speckles framed in a single image should be maximized in order to increase the accuracy of the ensemble-averaged autocorrelation; for this reason, we kept the diffuser at distance $ {d_1} = 33.5\;{\rm mm} $ from the camera sensor. The autocorrelation was calculated using the correlation theorem,
$${X_\varphi }\left( {\xi ,\varepsilon } \right) \equiv \left\langle {{S_\varphi } \star {S_\varphi }} \right\rangle = \left\langle {{{\cal F}^{ - 1}}\{ {{\left\| {{\cal F}\{ {S_\varphi }\left( {x,y} \right)\} } \right\|}^2}\} } \right\rangle ,$$
where the angle brackets $ \langle {\ldots} \rangle $ denote the ensemble average over different windows within the same speckle image $ {S_\varphi } $. Here, $ ( {\xi ,\varepsilon } ) $ are the shift coordinates in 2D. In this way, we obtain the autocorrelation sinogram $ {X_\varphi } $, reported in Fig. 2(b) and rendered in Fig. 3(a). Here we can see the autocorrelations $ {X_\varphi } $ computed via Eq. (2) for each speckle pattern $ {S_\varphi } $ in Fig. 2(a). Slicing the stack as a function of the angle, horizontally (blue dot) and vertically (red dot), lets us visualize the autocorrelation sinogram, in which two rotating sidelobes are clearly observable, whereas no rotation is directly visible in the speckle stack $ {S_\varphi } $.
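Equation (2) with the window averaging can be sketched as follows (a minimal sketch; the 128 px window matches the text, while the step size and names are our own choices):

```python
import numpy as np

def windowed_autocorrelation(speckle, win=128, step=64):
    """Ensemble-averaged autocorrelation, Eq. (2): average the power
    spectrum |F{S}|^2 over sliding sub-windows, then invert."""
    h, w = speckle.shape
    power = np.zeros((win, win))
    n = 0
    for i in range(0, h - win + 1, step):
        for j in range(0, w - win + 1, step):
            sub = speckle[i:i + win, j:j + win].astype(float)
            sub -= sub.mean()                    # suppress the DC pedestal
            power += np.abs(np.fft.fft2(sub))**2
            n += 1
    ac = np.fft.ifft2(power / n).real
    return np.fft.fftshift(ac)                   # zero shift at the center
```

Averaging the power spectra before the inverse transform is equivalent to averaging the autocorrelations of the windows, which is how the ergodic ensemble average $\langle\ldots\rangle$ is realized in practice.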

Fig. 3. Volume (a), 3D rendering of the autocorrelation sinogram $ {X_\varphi } $ previously shown in Fig. 2(b). Gray arrow (1), inversion of $ {X_\varphi } $ via SIRT. Volume (b), result of the SIRT inversion of $ {X_\varphi } $, giving rise to the reconstruction of the autocorrelation $ {{\cal X}^ \bullet } = {{\cal R}^{ - 1}}\{ {X_\varphi }\} $. Gray arrow (2), 3D phase retrieval algorithm. Volume (c), output of the phase-retrieved fluorescence reconstruction after the recovery of the phase associated with the autocorrelation (operation 2).


Before discussing the use of the autocorrelation sinogram $ {X_\varphi } $, it is worth introducing the concept of the Radon transform in CT. Given the vastness of the topic, we refer the reader to Ref. [18] for more detailed insights on CT. First of all, we define the Radon transform as the operator that projects a function $ {\cal O} $ (the object we want to measure) along a given set of observation angles $ \varphi $, as $ {\cal R}\{ {\cal O}\} = {O_\varphi } $. In our case, $ {O_\varphi } $ would be the direct images of the object $ {\cal O} $ observed at angle $ \varphi $. More generally, $ {O_\varphi } $ is the (forward) projection sinogram, which can be back-projected to rebuild the object distribution via the inverse Radon transform, $ {{\cal R}^{ - 1}}\{ {O_\varphi }\} = {\cal O} $. Applying $ {{\cal R}^{ - 1}} $ implies the solution of an inverse problem, normally addressed via filtered back-projection (FBP) or algebraic reconstruction algorithms [18]. Among the latter, we rely on the simultaneous iterative reconstruction technique (SIRT) for every sinogram inversion; a study of which inversion method performs best, however, is well beyond the scope of the present Letter.
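The Letter does not specify its SIRT implementation; the sketch below is our own rotation-based version of the forward/inverse Radon pair for 2D parallel-beam projections, with an illustrative relaxation factor and a non-negativity constraint:

```python
import numpy as np
from scipy.ndimage import rotate

def forward(img, angles):
    """Parallel-beam Radon transform: rotate, then sum along rows."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def backproject(sino, angles, shape):
    """Approximate adjoint: smear each projection and rotate back."""
    out = np.zeros(shape)
    for p, a in zip(sino, angles):
        out += rotate(np.tile(p, (shape[0], 1)), -a, reshape=False, order=1)
    return out

def sirt(sino, angles, shape, n_iter=15, relax=0.1):
    """SIRT: update the estimate with the backprojected residual of
    *all* projections simultaneously at each iteration."""
    rec = np.zeros(shape)
    norm = len(angles) * shape[0]          # rough scale of A^T A
    for _ in range(n_iter):
        resid = sino - forward(rec, angles)
        rec += relax * backproject(resid, angles, shape) / norm
        rec = np.clip(rec, 0, None)        # non-negativity constraint
    return rec
```

Unlike FBP, which filters and back-projects once, SIRT iteratively reduces the residual between the measured and the re-projected sinogram, which is why it tolerates noisy or incomplete angular data better.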

In the following, we prove that the same principles apply to the projections’ autocorrelations, i.e., that $ {\cal R}\{ {\cal X}\} = {X_\varphi } $ and $ {{\cal R}^{ - 1}}\{ {X_\varphi }\} = {\cal X} $. This allows us to estimate the three-dimensional autocorrelation $ {\cal X}( {\xi ,\varepsilon ,\zeta } ) $ of the hidden object by inverting the autocorrelation sinogram $ {X_\varphi } $ [rendered in Fig. 3(a)]. Here, $ \zeta $ denotes the shift coordinate along the $ z $ axis. Without loss of generality, we prove this at angle $ \varphi = 0 $, but the argument extends trivially to every rotation angle. We call $ {O_{\varphi = 0}} = O $ the projection along $ z $ of the volumetric object distribution $ {\cal O} $: $ O( {x,y} ) = \int {\cal O}( {x,y,z} ){\rm d}z $. Formally, we have to show that the autocorrelation $ {\cal X} = {\cal O} \star {\cal O} $ projected at $ \varphi = 0 $ equals the autocorrelation of the object’s projection, $ {X_{\varphi = 0}} = O \star O $. We do this by proving the validity of the Fourier-slice theorem [18] also in the autocorrelation space. Let us Fourier transform $ {\cal X} $ and slice it through $ {k_z} = 0 $,

$$\begin{split}{\cal F}\{ {\cal X}{\} |_{{k_z} = 0}}& = {\left\| {\int {\cal O}\left( {x,y,z} \right){e^{ - i2\pi \left( {x{k_x} + y{k_y}} \right)}}{\rm d}x {\rm d}y {\rm d}z} \right\|^2}\\& = {\left\| {\int O\left( {x,y} \right){e^{ - i2\pi \left( {x{k_x} + y{k_y}} \right)}}{\rm d}x {\rm d}y} \right\|^2},\end{split}$$
which corresponds to the Fourier slice at $ \varphi = 0 $. We are then simply left with
$${\cal F}\{ {\cal X}{\} |_{{k_z} = 0}} = {\left\| {{\cal F}\{ O\} } \right\|^2} = {\cal F}\{ X\} ,$$
which reproduces the Fourier-slice theorem for the autocorrelation function. Together with Eq. (2), this guarantees that the autocorrelation $ {X_\varphi } $ calculated from the speckle pattern at any angle is the projection of the three-dimensional autocorrelation $ {\cal X} $ of the hidden object; thus, $ {{\cal R}^{ - 1}}\{ {X_\varphi }\} $ allows its reconstruction. Given Eq. (4), we accomplish the sinogram inversion using a standard SIRT algorithm [gray arrow labeled (1) in Fig. 3]. This lets us obtain the tomographic object autocorrelation $ {{\cal X}^ \bullet } $ [rendered in Fig. 3(b)]. To go from the autocorrelation back to a reconstruction of the object $ {{\cal O}^ \bullet } $, we have to solve another inverse problem. Since we have a valid estimate of the tomographic $ {{\cal X}^ \bullet } $, we can calculate $ {\cal M} $, the Fourier magnitude of the object, via the Wiener–Khinchin theorem as ${\cal M}\left( {{k_x},{k_y},{k_z}} \right) \equiv \parallel {\cal F}\{ {{\cal O}^ \bullet }\} \parallel = \sqrt {{\cal F}\{ {{\cal X}^ \bullet }\} } .$
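Equations (3) and (4) state that the $ {k_z} = 0 $ slice of the 3D Fourier transform equals the 2D transform of the $ z $ projection, which carries over to autocorrelations through the squared modulus. This can be checked numerically on a toy volume (NumPy sketch; sizes and names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object: a handful of bright voxels in a 32^3 volume.
obj = np.zeros((32, 32, 32))
obj[rng.integers(8, 24, 20), rng.integers(8, 24, 20), rng.integers(8, 24, 20)] = 1.0

proj = obj.sum(axis=2)                       # projection O along z

# Fourier-slice theorem: the k_z = 0 slice of F{obj} equals F{proj} ...
F_slice = np.fft.fftn(obj)[:, :, 0]
assert np.allclose(F_slice, np.fft.fft2(proj))

# ... hence the squared moduli agree too, which is Eq. (4):
# F{X}|_{kz=0} = ||F{O}||^2 = F{X_phi}, by the Wiener-Khinchin theorem.
assert np.allclose(np.abs(F_slice)**2, np.abs(np.fft.fft2(proj))**2)
```

The equality is exact because the 3D FFT factorizes along axes: evaluating it at $ {k_z} = 0 $ is the same as first summing over $ z $.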

Only the phase information associated with $ {\cal M} $ is missing to obtain the reconstruction of the hidden object $ {{\cal O}^ \bullet } $. To recover it, we feed this estimate to a PR algorithm [11], starting from a random phase $ {\Phi _0}( {{k_x},{k_y},{k_z}} ) $. The structure of the autocorrelation is closely related to the structure of the object and could be cleverly used as additional prior information [23]. PR consists of a four-step iteration in which we do the following:

  • 1. Fourier transform the object’s estimation $ {{\cal O}_i} $: ${\cal F}\{ {{\cal O}_i}\} = \parallel {\cal F}\{ {{\cal O}_i}\} \parallel {e^{i{\Phi _i}}}$
  • 2. Replace the modulus with the measured one $ {\cal M} $: ${{\cal G}^\prime _i} = {\cal M}\frac{{{\cal F}\{ {{\cal O}_i}\} }}{{\parallel {\cal F}\{ {{\cal O}_i}\} \parallel }} = {\cal M}{e^{i{\Phi _i}}}$
  • 3. Inverse Fourier transform the previous quantity: ${{\cal O}^\prime _i} = {{\cal F}^{ - 1}}\{ {{\cal G}^\prime _i}\} $
  • 4. Form the object estimate $ {{\cal O}_{i + 1}} $ by applying different criteria in the region $ \gamma $ where $ {{\cal O}^\prime _i} $ does not satisfy the object constraints (realness and positivity). We use two different methods in combination, the hybrid input–output (HIO) and the error reduction (ER),
    $${\rm HIO} : {{\cal O}_{i + 1}} = \left\{ {\begin{array}{*{20}{l}}{{{{\cal O}^\prime }_i}}&\quad{{\rm if}\,\left( {x,y,z} \right) \notin \gamma }\\{{{\cal O}_i} - \beta {{{\cal O}^\prime }_i}}&\quad{{\rm if}\,\left( {x,y,z} \right) \in \gamma }\end{array}} \right.,$$
    $${\rm ER} : {{\cal O}_{i + 1}} = \left\{ {\begin{array}{*{20}{c}}{{{{\cal O}^\prime }_i}}&\quad{{\rm if}\,\left( {x,y,z} \right) \notin \gamma }\\0&\quad{{\rm if}\,\left( {x,y,z} \right) \in \gamma }\end{array}} \right..$$
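A compact sketch of this four-step loop, with the HIO and ER updates of Eqs. (5) and (6) (a support-free variant enforcing only realness and positivity; the function signature and defaults are ours):

```python
import numpy as np

def phase_retrieve(M, n_hio=300, n_er=100, beta=0.8, seed=0):
    """Recover an object from its Fourier magnitude M via HIO then ER.
    Object constraints: real and non-negative values."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, M.shape)        # random Phi_0
    obj = np.fft.ifftn(M * np.exp(1j * phase)).real
    for it in range(n_hio + n_er):
        G = np.fft.fftn(obj)                          # step 1
        G = M * np.exp(1j * np.angle(G))              # step 2: replace modulus
        obj_p = np.fft.ifftn(G).real                  # step 3
        bad = obj_p < 0                               # region gamma (step 4)
        if it < n_hio:                                # HIO update, Eq. (5)
            nxt = obj_p.copy()
            nxt[bad] = obj[bad] - beta * obj_p[bad]
        else:                                         # ER update, Eq. (6)
            nxt = np.where(bad, 0.0, obj_p)
        obj = nxt
    return np.clip(obj, 0, None)
```

Since the recovered object is degenerate up to translation and axis flips, its quality is best judged through the $ \epsilon $ metric on Fourier magnitudes rather than by direct pixelwise comparison.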

We start with the HIO method (typically 1000 iterations, feedback coefficient $ \beta = 0.8 $), followed by ER (500 iterations) to lower the reconstruction noise of the output image. The process is schematically represented by the gray arrow (2) in Fig. 3, where we processed a volume of $ {128^3} $ voxels. The quality of the reconstruction is assessed by computing the distance $ { \epsilon _i} = \parallel {\cal M} - \parallel {\cal F}\{ {{\cal O}_i}\} \parallel \parallel $. To finalize the result, we average the 10 reconstructions with the lowest $ \epsilon $ out of 100 trials, forming the final reconstruction $ {{\cal O}^ \bullet }( {x,y,z} ) $ rendered in Fig. 3(c). For comparison, we image the object directly with a telecentric objective lens (Computar TEC-55, 55 mm $ f/2.8 - 32 $), set to $ f/22 $ with a camera exposure time of 0.05 s. This configuration results in an effective magnification of $ M^\prime = 0.2 $ and a pixel size of $ 1\;{\rm px} = 32.5\;{\unicode{x00B5}{\rm m}} $. We perform a fluorescence OPT measurement, rotating the object as in the hidden experiment but here directly acquiring the object projections. We centered the rotation axis of the resulting projection sinogram by aligning the opposite angular projections at 0° and 180°. To create the ground-truth reconstruction, this sinogram was inverted using SIRT, as in the previous case. This reconstruction is rendered in the box of Fig. 1 and used as a reference in Figs. 4(a) and 4(c).
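The final best-of-trials averaging can be sketched as follows (the selection metric is the $ \epsilon $ defined above; the helper name is ours):

```python
import numpy as np

def average_best(recs, M, k=10):
    """Average the k reconstructions whose Fourier magnitudes are
    closest (in the epsilon sense) to the measured magnitude M."""
    errs = [np.linalg.norm(M - np.abs(np.fft.fftn(r))) for r in recs]
    best = np.argsort(errs)[:k]
    return np.mean([recs[i] for i in best], axis=0)
```

Averaging only the lowest-error trials discards runs that stagnated in local minima, at the cost of some residual blurring from the translation/flip degeneracy of the individual solutions.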


Fig. 4. (a) Frontal view (at $ \varphi = 0^ \circ $) of the ground-truth reconstruction of the object $ {\cal O} $ measured with a standard OPT approach. The location of the fluorescent signal is color encoded to assess the depth at which the signal is emitted. (b) Tomography of the same sample hidden behind the diffuser. The two “legs” are reconstructed at the correct depths. (c) Tomographic section of the ground truth along the dashed line of panel (a). (d) Tomographic section of the retrieved volume (b) at the same height as the dashed line of panel (a). Note the correct location of the two fluorescent objects. The object extends approximately 31.1 mm along the horizontal axis.


The three-dimensional object was chosen to have features at different depths, as shown in the ground-truth panel of Fig. 4(a), where different depths are color encoded. Compared against Fig. 4(a), the reconstruction shown in Fig. 4(b) correctly retrieves the shape of the hidden object as well as its depth-dependent features. To observe a transverse section of the object through the dashed line of Fig. 4(a), we slice the ground-truth and retrieved volumes at the same height. These tomographic sections are displayed in Fig. 4(c) for the ground truth and in Fig. 4(d) for the hidden reconstruction, confirming that features at different depths are correctly resolved. We note that the reconstruction converged to a higher intensity for the bigger element of the sample. This could be due to noise in the calculation of the autocorrelations, which adds a peaked contribution around the zero-shift region (the central part of the autocorrelation) and unbalances the PR reconstruction. Disentangling the noise contribution from the sinogram $ {X_\varphi } $ may facilitate quantitative reconstructions, as discussed in Ref. [24] in the context of audio signals. The method was tested hiding a different sample behind four diffusers (Thorlabs, grits: 120, 220, 600, 1500), and it always exhibited similar performance (Fig. S1 in [25]).

One of the advantages of our protocol is that it does not require prior knowledge of the position of the rotation axis: working in the shift space avoids misalignment, since $ {X_\varphi } $ always rotates around its own center regardless of the object’s actual position [Fig. 2(b)]. This guarantees an accurate estimate of $ {{\cal X}^ \bullet } $, independently of the absolute positioning of the object behind the diffuser. Reconstructing the object’s projections angle by angle, instead, would yield independent, randomly positioned images, potentially mirrored with respect to each other: PR algorithms have, in fact, infinitely many degenerate solutions, invariant under spatial translation and axis flip [11]. This would lead to a wrong reconstruction even when feeding the retrieval of the projection $ O_\varphi ^ \bullet $ with the previous $ O_{\varphi - 1}^ \bullet $, since the reconstructed sinogram would still need accurate alignment before Radon inversion. Unlike other methods [16,17], we do not require characterization of the diffuser and may thus tolerate temporal changes without updating any calibration. Our method may nevertheless be improved by testing other approaches to the inverse Radon transform [26], reconstructing an autocorrelation that behaves better with different PR implementations (such as oversampling smoothness [27] or shrink-wrap [23]). So far, we have shown that tomography of a three-dimensional hidden object is a concrete possibility, thanks to a modified version of the Fourier-slice theorem that applies to autocorrelations. Unlike conventional OPT approaches, the proposed method reconstructs a perfectly aligned volumetric image without knowledge or calibration of the rotation axis [28] and without any lens system. This may establish PR protocols as promising tools in the field of optics, potentially exploitable in other projection-based CT applications.

Funding

H2020 Marie Skłodowska-Curie Actions (799230); Horizon 2020 Framework Programme (871124).

Acknowledgment

The authors thank Prof. Antonio Pifferi for the scientific and logistic support.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. D. S. Wiersma, Nat. Photonics 7, 188 (2013). [CrossRef]  

2. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, Nature 491, 232 (2012). [CrossRef]  

3. O. Katz, P. Heidmann, M. Fink, and S. Gigan, Nat. Photonics 8, 784 (2014). [CrossRef]  

4. I. M. Vellekoop, A. Lagendijk, and A. Mosk, Nat. Photonics 4, 320 (2010). [CrossRef]  

5. T. Čižmár and K. Dholakia, Nat. Commun. 3, 1027 (2012). [CrossRef]  

6. D. Di Battista, D. Ancora, H. Zhang, K. Lemonaki, E. Marakis, E. Liapis, S. Tzortzakis, and G. Zacharakis, Optica 3, 1237 (2016). [CrossRef]  

7. M. Jang, Y. Horie, A. Shibukawa, J. Brake, Y. Liu, S. M. Kamali, A. Arbabi, H. Ruan, A. Faraon, and C. Yang, Nat. Photonics 12, 84 (2018). [CrossRef]  

8. S. A. Goorden, M. Horstmann, A. P. Mosk, B. Škorić, and P. W. Pinkse, Optica 1, 421 (2014). [CrossRef]  

9. D. Di Battista, D. Ancora, M. Leonetti, and G. Zacharakis, Appl. Phys. Lett. 109, 121110 (2016). [CrossRef]  

10. D. Di Battista, D. Ancora, G. Zacharakis, G. Ruocco, and M. Leonetti, Opt. Express 26, 15594 (2018). [CrossRef]  

11. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, IEEE Signal Process. Mag. 32(3), 87 (2015). [CrossRef]  

12. E. Edrei and G. Scarcelli, Optica 3, 71 (2016). [CrossRef]  

13. T. Wu, O. Katz, X. Shao, and S. Gigan, Opt. Lett. 41, 5003 (2016). [CrossRef]  

14. K. Lee and Y. Park, Nat. Commun. 7, 13359 (2016). [CrossRef]  

15. Y. Okamoto, R. Horisaki, and J. Tanida, Opt. Lett. 44, 2526 (2019). [CrossRef]  

16. S. Mukherjee, A. Vijayakumar, M. Kumar, and J. Rosen, Sci. Rep. 8, 1 (2018). [CrossRef]  

17. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, Optica 5, 1 (2018). [CrossRef]  

18. A. C. Kak, M. Slaney, and G. Wang, Med. Phys. 29, 107 (2002). [CrossRef]  

19. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE, 2003), Vol. 114.

20. J. Sharpe, Annu. Rev. Biomed. Eng. 6, 209 (2004). [CrossRef]  

21. I. Freund, M. Rosenbluh, and S. Feng, Phys. Rev. Lett. 61, 2328 (1988). [CrossRef]  

22. M. Liao, D. Lu, W. He, G. Pedrini, W. Osten, and X. Peng, Appl. Opt. 58, 473 (2019). [CrossRef]  

23. S. Marchesini, H. He, H. N. Chapman, S. P. Hau-Riege, A. Noy, M. R. Howells, U. Weierstall, and J. C. Spence, Phys. Rev. B 68, 140101 (2003). [CrossRef]  

24. G. Farahani, EURASIP J. Audio Speech Music Proc. 2017, 13 (2017). [CrossRef]  

25. D. Ancora, D. Di Battista, A. M. Vidal, S. Avtzi, G. Zacharakis, and A. Bassi (2020). https://doi.org/10.6084/m9.figshare.11901927.v1 [CrossRef]  

26. A. K. Trull, J. van der Horst, L. J. van Vliet, and J. Kalkman, Appl. Opt. 57, 1874 (2018). [CrossRef]  

27. J. A. Rodriguez, R. Xu, C.-C. Chen, Y. Zou, and J. Miao, J. Appl. Crystallogr. 46, 312 (2013). [CrossRef]  

28. D. Ancora, D. Di Battista, G. Giasafaki, S. E. Psycharakis, E. Liapis, J. Ripoll, and G. Zacharakis, Sci. Rep. 7, 11854 (2017). [CrossRef]  
