## Abstract

We demonstrate the feasibility of *bidirectional* image transmission through a *physically thick* scattering medium within its memory effect range by digital optical phase conjugation. We show that this bidirectional transmission is not simply a consequence of optical reciprocity. We observe that when the spatial light modulator (the device performing the digital optical phase conjugation) is relayed to the *middle plane* of the medium, the memory effect is fully exploited and the transmitted images therefore have the maximum field of view (FOV). Furthermore, we show that the FOV can be expanded *n* times by performing *n* wavefront measurements.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Image transmission through scattering media has received significant interest in recent years [1–6]. In most applications, the space behind or inside the scattering media is physically inaccessible or hardly accessible, for instance, tissues and organs beneath the skin or the space inside a dust storm. In general, image transmission through scattering media can be classified into two categories according to the transmission direction: if an unknown object is hidden behind the media and its image is delivered to the other side of the media, we call it imaging [1,6]; on the other hand, if a known object is placed on the accessible side of the media and its image is delivered to the inaccessible side, we call it projecting or patterning [2–5], which is useful for applications such as optogenetics [7,8], laser operation and photo-thermal therapy [9,10], where precise control over light is demanded. While performing imaging and projecting simultaneously is straightforward in transparent media, implementing bidirectional image transmission through scattering media is quite a challenge due to multiple light scattering [11]. To tackle this challenge, wavefront shaping [12–15] was proposed, which uses spatial light modulators (SLMs) to control input or output wavefronts to suppress scattering. Hillman et al. [3] proposed a projecting method that uses interferometry to record the transmitted wavefront of a coherently illuminated object placed behind a thin turbid layer; an SLM is then employed to produce the phase-conjugated wavefront, which generates the same pattern as the object when propagated back to the object plane. However, the pattern is generated on the same side of the turbid layer as the reference object, which makes this an *invasive* approach. Besides, whenever the reference object changes, the recording process has to be repeated. Katz et al. [1] used an SLM between the turbid layer and a charge-coupled device (CCD) to compensate the diffused wavefront originating from a point source behind the layer, such that the diffused wave could be corrected and focused onto the CCD. In this way, an object in the vicinity of the reference point source can be directly imaged onto the CCD thanks to the angular correlation, a.k.a. the memory effect [16,17].

Though the above approaches began to address image transmission through scattering media, to the best of our knowledge, the scattering samples used in these experiments were thin turbid layers with thicknesses of less than 100 μm [1–4]. However, in many applications the scattering media are much thicker, for example biological tissues, dust storms and fogs. Bearing this in mind, in this paper we adapt Katz’s method to scattering media up to 1 mm thick and demonstrate image transmission from both sides of the media. In Katz’s setup, the SLM was relayed to *the surface* of the turbid layer to maximize the field of view (FOV). However, in our experiments we find that for a *thick* medium, the relay plane of the SLM should be at the *middle* of the medium to fully exploit the memory effect range and thus maximize the FOV. Different from Katz’s method, which searches for the compensative phase map (to be displayed on the SLM) through an iterative optimization process [15], we acquire the map by directly measuring [18] the diffused wavefront originating from the point source behind the media, which is more efficient. Furthermore, we demonstrate that by performing *n* wavefront measurements with the point source placed at *n* different positions, the FOV can be expanded *n* times by using a compensative phase map synthesized from the *n* measured wavefronts.

In the remainder of this paper, we first provide a theoretical analysis of the process of image transmission through a thick medium in both directions and derive the optimal position of the SLM’s relay plane. Then we experimentally demonstrate the bidirectional image transmission using chicken breast tissue, and measure the FOV when the SLM is relayed to different cross-sections of the tissue. Finally, we present our proposed method to expand the FOV beyond the memory effect range.

## 2. Theoretical analysis

We hereby provide a theoretical analysis of how digital optical phase conjugation enables bidirectional image transmission through thick scattering media in the presence of the memory effect. We adapt the analytical method used in our previous work [2] on holographic projecting through thin turbid layers with coherent light. For *thick scattering* media, however, the shift-shift type [19] and the generalized memory effect [20] have to be considered, and these are the key factors determining the optimal position of the SLM’s relay plane. Light propagation in the two directions, i.e., imaging (Fig. 1(b)) and projecting (Fig. 1(c)), is analyzed in sections 2.2 and 2.3, respectively. In the following, we first give the expression of the memory effect for thick scattering media.

#### 2.1 The memory effect

Consider an arbitrary one-dimensional complex scalar field *A*(*x*) (*x* is the coordinate) incident on a scattering medium that has a transmission function T. The transmitted field *B*(*x*) on its output surface can be denoted by

$$B(x)=\mathrm{T}\left(A(x)\right). \tag{1}$$

If the scattering medium is *infinitely* thin, the memory effect can be expressed as

$$\mathrm{T}\left(A(x){e}^{\text{i}{k}_{x}x}\right)=c({k}_{x})B(x){e}^{\text{i}{k}_{x}x}+R(x), \tag{2}$$

where ${k}_{x}$ is the projection of the wave vector $\overrightarrow{k}$ along the *x* axis, ${e}^{\text{i}{k}_{x}x}$ is a phase ramp that introduces a tilt of $\Delta \theta ={k}_{x}/k$ (*k* is the amplitude of $\overrightarrow{k}$) to the incident field *A*(*x*), $c({k}_{x})$ is the correlation coefficient between the tilted output and the original output *B*(*x*), which decreases as the tilt angle *∆θ* increases, and *R*(*x*) is a random field representing the uncorrelated part of the tilted output field (the weight of *R*(*x*) increases as *∆θ* increases).
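As a numerical illustration of Eq. (2), the sketch below models the medium as a thin random phase screen followed by a short free-space propagation (a toy model, not the tissue of our experiments; the grid, distance and beam parameters are arbitrary choices) and estimates the correlation coefficient $c({k}_{x})$ for increasing tilts:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx, lam = 4096, 1e-6, 532e-9              # samples, grid pitch (m), wavelength (m)
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi / lam

def propagate(field, z):
    """1-D angular-spectrum free-space propagation over a distance z."""
    fx = np.fft.fftfreq(N, dx)
    kz = np.sqrt(k**2 - (2 * np.pi * fx) ** 2)   # every grid frequency propagates here
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * z))

# toy medium: thin random phase screen, output plane 200 um behind it
screen = np.exp(2j * np.pi * rng.random(N))
T = lambda field: propagate(field * screen, 200e-6)

A = np.exp(-x**2 / (2 * (100e-6) ** 2))      # incident Gaussian field A(x)
B = T(A)                                     # original output B(x)

def c(kx):
    """|correlation| between T(A exp(i kx x)) and B(x) exp(i kx x), cf. Eq. (2)."""
    tilted = T(A * np.exp(1j * kx * x))
    ref = B * np.exp(1j * kx * x)
    return abs(np.vdot(ref, tilted)) / (np.linalg.norm(ref) * np.linalg.norm(tilted))

c_zero, c_small, c_large = c(0.0), c(5e-4 * k), c(5e-2 * k)
```

As expected, the correlation is unity at zero tilt and decays as the tilt angle grows.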

However, if the scattering medium has a *finite* thickness *L*, Eq. (2) should be modified to [20]

$$\mathrm{T}\left(A(x){e}^{\text{i}{k}_{x}x}\right)=c({k}_{x})B(x-\Delta \theta \alpha L){e}^{\text{i}{k}_{x}x}+R(x), \tag{3}$$

i.e., the output field *B* will not only be tilted by *∆θ* but also shifted by $\Delta \theta \alpha L$ (we will show later that $\alpha =0.5$). To simplify the derivations in sections 2.2 and 2.3, we wish to eliminate this coordinate shift and make Eq. (3) take the same form as Eq. (2). To do so, we introduce an auxiliary field *B*’(*x*), which is related to *B*(*x*) by

$$B\text{'}(x)={\mathrm{F}}_{\alpha L}^{-1}\left(B(x)\right),$$

where ${\mathrm{F}}_{\alpha L}$ denotes free-space propagation over the distance $\alpha L$; that is, *B*’(*x*) is the back-propagated field of *B*(*x*). It is easy to demonstrate that ${\mathrm{F}}_{\alpha L}$ has the following property (up to a constant phase factor):

$${\mathrm{F}}_{\alpha L}^{-1}\left(B(x-\Delta \theta \alpha L){e}^{\text{i}{k}_{x}x}\right)=B\text{'}(x){e}^{\text{i}{k}_{x}x}.$$

Applying ${\mathrm{F}}_{\alpha L}^{-1}$ to both sides of Eq. (3) and defining $\mathrm{T}\text{'}={\mathrm{F}}_{\alpha L}^{-1}\mathrm{T}$ thus yields $\mathrm{T}\text{'}\left(A(x){e}^{\text{i}{k}_{x}x}\right)=c({k}_{x})B\text{'}(x){e}^{\text{i}{k}_{x}x}+R\text{'}(x)$, where $R\text{'}(x)={\mathrm{F}}_{\alpha L}^{-1}(R(x))$ is still a random field; this takes the same form as Eq. (2).
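The tilt-to-shift conversion underlying Eq. (3) can be checked in the same spirit: under the paraxial approximation, propagating a field tilted by *∆θ* over a distance *z* shifts its envelope by $\Delta \theta \cdot z$. A minimal sketch with arbitrarily chosen beam parameters:

```python
import numpy as np

N, dx, lam = 2048, 1e-6, 532e-9
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi / lam

def propagate(field, z):
    """1-D angular-spectrum free-space propagation over a distance z."""
    fx = np.fft.fftfreq(N, dx)
    kz = np.sqrt(k**2 - (2 * np.pi * fx) ** 2)
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * z))

A = np.exp(-x**2 / (2 * (50e-6) ** 2))   # Gaussian envelope
kx = 0.05 * k                            # tilt of ~0.05 rad (~2.9 deg)
z = 500e-6
B = propagate(A * np.exp(1j * kx * x), z)

# intensity centroid of the propagated beam vs. the predicted walk-off (kx/k) * z
centroid = np.sum(x * np.abs(B) ** 2) / np.sum(np.abs(B) ** 2)
expected = (kx / k) * z                  # 25 um of transverse shift
```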

#### 2.2 Imaging

We now analyze the mechanism of image transmission from behind the scattering medium. Before starting the mathematical derivation, we first give a general description of the imaging process. As the first step of our approach, we acquire the compensative phase map to be displayed on the SLM by wavefront measurement (Fig. 1(a)). We place a point source (laser-generated) behind the medium and use phase-shifting interferometry [21] (the interferometer is not shown in Fig. 1 for simplicity; see Fig. 2 for the detailed setup) to measure the transmitted diffused wavefront on the SLM plane. The conjugated phase map of the measured wavefront is then displayed on the SLM to compensate this diffused wavefront into a plane wave, and a lens then focuses the plane wave onto a CCD. In this way, the point source is imaged onto the CCD (its image is the focal spot). Then we replace the point source with an object (represented by the smiling face in Fig. 1(b)) and illuminate it with spatially incoherent light (an LED). The light from the object can be seen as a collection of spherical waves emitted from different points of the object, which are focused to different points on the CCD due to the memory effect, forming the image of the object.

In the wavefront measurement step, the input field *I* on the left surface of the medium is a spherical wave denoted by ${E}_{s}$. According to Eq. (1), the speckle field ${E}_{\mathrm{med}}$ on the output surface (right surface) of the medium can be expressed as $\mathrm{T}({E}_{s})$, and according to Eq. (9), the auxiliary (back-propagated) field for ${E}_{\mathrm{med}}$ is $E{\text{'}}_{\mathrm{med}}=\mathrm{T}\text{'}\left({E}_{s}\right)$. If we relay the SLM to the cross-section of the medium (the orange dotted rectangular box in Fig. 1) that is $\alpha L$ away from the right surface of the medium (hereinafter the distance between the SLM’s relay plane and the right surface of the medium is denoted ${z}_{r}$), the field received by the SLM is also $E{\text{'}}_{\mathrm{med}}$. We measure this field using interferometry and then impose its conjugated phase map $\mathrm{T}{\text{'}}^{*}\left({E}_{s}^{*}\right)$ on the SLM.

After obtaining this compensative phase map and imposing it on the SLM, we are ready to perform the imaging of the object shown in Fig. 1(b). Towards this end, we replace the point source with the target object and illuminate it with spatially incoherent light (LED). The input field *I* on the left surface of the medium can be expressed as

$$I={\displaystyle {\sum}_{x\text{'}}U(x\text{'})\mathrm{exp}\left(\frac{ik}{2d}{(x-x\text{'})}^{2}\right)}, \tag{10}$$

where $x\text{'}$ and *x* are the coordinates on the object plane and the input plane, $U(x\text{'})$ is the amplitude distribution of the object, and *d* is the distance between the object plane and the input plane. Equation (10) can be rewritten as

$$I={E}_{s}{\displaystyle {\sum}_{x\text{'}}U(x\text{'})\mathrm{exp}\left(\frac{ik}{2d}x{\text{'}}^{2}\right)\mathrm{exp}\left(-ik\frac{x\text{'}}{d}x\right)},$$

where ${E}_{s}=\mathrm{exp}(\frac{ik}{2d}{x}^{2})$ is the on-axis spherical wave and $\mathrm{exp}(-ik\frac{x\text{'}}{d}x)$ is a tilting factor (or phase ramp; the tilting angle is $\frac{x\text{'}}{d}$). We denote $k\frac{x\text{'}}{d}$ by ${k}_{x\text{'}}$, which represents the projection of the wave vector $k$ on the *x*-axis. Then we have $I={E}_{s}{\displaystyle {\sum}_{x\text{'}}U(x\text{'})\mathrm{exp}(\frac{ik}{2d}x{\text{'}}^{2})\mathrm{exp}(-i{k}_{x\text{'}}x)}$, which is a collection of spherical waves propagating towards different directions (Fig. 1(b)), with $U(x\text{'})\mathrm{exp}(\frac{ik}{2d}x{\text{'}}^{2})$ denoting the weight of each component. According to the memory effect (Eq. (8)) and the assumption that the medium is linear (T and $\mathrm{T}\text{'}$ are linear functions), the field on the SLM plane is
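The factorization used in the rewriting above, $\mathrm{exp}(\frac{ik}{2d}{(x-x\text{'})}^{2})={E}_{s}\mathrm{exp}(\frac{ik}{2d}x{\text{'}}^{2})\mathrm{exp}(-ik\frac{x\text{'}}{d}x)$, is elementary algebra and can be verified numerically (the values of *d* and $x\text{'}$ below are arbitrary):

```python
import numpy as np

lam = 532e-9
k = 2 * np.pi / lam
d = 0.1                                  # object-to-input-plane distance (m)
x = np.linspace(-1e-3, 1e-3, 1001)       # input-plane coordinate
xp = 2e-4                                # one object point x'

lhs = np.exp(1j * k * (x - xp) ** 2 / (2 * d))   # spherical wavelet from x'
Es = np.exp(1j * k * x ** 2 / (2 * d))           # on-axis spherical wave E_s
rhs = Es * np.exp(1j * k * xp ** 2 / (2 * d)) * np.exp(-1j * k * xp * x / d)
```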

$${E}_{\mathrm{SLM}}={\displaystyle {\sum}_{x\text{'}}U(x\text{'})\mathrm{exp}\left(\frac{ik}{2d}x{\text{'}}^{2}\right)\left[c({k}_{x\text{'}})E{\text{'}}_{\mathrm{med}}(x){e}^{-\text{i}{k}_{x\text{'}}x}+R(x)\right]}.$$

The conjugated phase map displayed on the SLM compensates each correlated component $E{\text{'}}_{\mathrm{med}}(x){e}^{-\text{i}{k}_{x\text{'}}x}$ into a plane wave ${e}^{-\text{i}{k}_{x\text{'}}x}$, and L2 focuses these plane waves onto different points of the CCD, forming the image of the object *O*. Thereby, the amplitude of the image on the CCD is given by $c(\frac{k}{f}x\text{'}\text{'})U(-\frac{d}{f}x\text{'}\text{'})+\left|R(x\text{'}\text{'})\right|$, where $x\text{'}\text{'}$ is the coordinate on the image plane, *f* is the focal length of L2, $\frac{f}{d}$ is the magnification rate of the image, $\left|R(x\text{'}\text{'})\right|$ is the background noise of the image, and the width of the correlation function $c(\frac{k}{f}x\text{'}\text{'})$ is the FOV of the image.

In the above derivation, the SLM’s relay plane is placed at ${z}_{r}=\alpha L$ to eliminate the transverse coordinate shift of the diffused wavefronts with different directions, so that the compensative phase map can match each diffused wavefront. If the SLM is not relayed to this plane, the compensation effectiveness will decrease and thus the FOV will shrink. According to the theory of Osnabrugge et al. [20], the coefficient $\alpha $ is 1, and thus the SLM should be relayed to the left surface of the media. However, in our experiments, we have found that the optimal relay plane of the SLM was at the middle of the medium, therefore $\alpha $ should be 0.5 (detailed in Section 3).

#### 2.3 Projecting

Next, we proceed to analyze the mechanism of image transmission in the opposite direction, a.k.a. the projecting. Similarly, we first give a general description of the projecting process before starting the derivation. After the conjugated phase map is acquired, we put an object at the focal plane of L2 (Fig. 1(c)) and illuminate it with an LED source. Light from different points of the object is collimated by L2 into plane waves with different directions, and then modulated by the SLM into conjugated waves with different directions. The conjugated waves impinge onto the medium at different angles and thus yield output spherical waves converging to different points, forming the projection of the object.

In the derivation of the imaging process, we assumed ${z}_{r}=\alpha L$, but here we take ${z}_{r}$ as a variable. Thus, in the first step, the measured wavefront on the SLM plane is ${\mathrm{F}}_{{z}_{r}}^{-1}\mathrm{T}({E}_{s}^{}(x))$, and the compensative phase map to be displayed on the SLM should be ${\mathrm{F}}_{{z}_{r}}^{-1*}{\mathrm{T}}^{*}({E}_{s}^{*}(x))$. In the projecting step, as mentioned above, the wavefront *I* received by the SLM can be seen as a series of plane waves towards different directions, therefore it can be expressed as $I={\displaystyle {\sum}_{x\text{'}}U(x\text{'}){e}^{-\text{i}{k}_{x\text{'}}x}}$, where $x\text{'}$ and *x* are the coordinates on the object plane and the SLM plane, $U(x\text{'})$ is the transmission function of the object and ${k}_{x\text{'}}=k\frac{x\text{'}}{f}$. After being modulated by the SLM, *I* becomes ${\sum}_{x\text{'}}U(x\text{'}){e}^{-\text{i}{k}_{x\text{'}}x}{\mathrm{F}}_{{z}_{r}}^{-1*}{\mathrm{T}}^{*}({E}_{s}^{*}(x))$. This field is then relayed to the plane inside the medium that is ${z}_{r}$ away from the right surface, so the field ${E}_{\mathrm{med}}$ on the right surface of the medium should be its back-propagated field:

$${E}_{\mathrm{med}}={\mathrm{F}}_{{z}_{r}}^{-1}\left({\displaystyle {\sum}_{x\text{'}}U(x\text{'}){e}^{-\text{i}{k}_{x\text{'}}x}{\mathrm{F}}_{{z}_{r}}^{-1*}{\mathrm{T}}^{*}({E}_{s}^{*}(x))}\right).$$

Applying the memory effect to ${E}_{\mathrm{med}}$, the output field *O* on the left surface of the medium is a set of time-reversed spherical waves, which converge to points at the distance *d* and thus form the projection of the object. The amplitude of the projection is given by $c\left(-{k}_{x\text{'}},\frac{{k}_{x\text{'}}}{k}{z}_{r}\right)U(\frac{f}{{z}_{r}-\alpha L-d}x\text{'}\text{'})\text{+}\left|R\left(x\text{'}\text{'}\right)\right|$, where $x\text{'}\text{'}$ is the coordinate on the projection plane. The angular FOV of the projection is determined by the width of the correlation function $c\left({k}_{x\text{'}},-\frac{{k}_{x\text{'}}}{k}{z}_{r}\right)$ (as a function of the variable ${k}_{x\text{'}}$). Note that the shape of the correlation function is affected by ${z}_{r}$; thereby, the FOV can be optimized by choosing an appropriate ${z}_{r}$. According to the theory of Osnabrugge et al. [20], the optimal ${z}_{r}$ should be $0.5L$ (this has been verified by our experiments, see section 3). Substituting ${z}_{r}=0.5L$ and $\alpha =0.5$ into Eq. (17), we obtain the correlation function of maximum width and hence the maximum FOV.

Till now, we have derived how the imaging and projecting are constructed through thick scattering media using digital optical phase conjugation. This will be validated by experiments in the next section.

## 3. Experiments

We now experimentally demonstrate our method. First, we show the bidirectional image transmission (imaging and projecting) using chicken breast slices as the scattering media. Second, we study the influence of the position of the SLM’s relay plane on the FOV of the transmitted images using different kinds of scattering media, and find the optimal relay plane that maximizes the FOV. Last but not least, we expand the FOV beyond the memory effect range by performing multiple wavefront measurements.

#### 3.1 Bidirectional image transmission

Figure 2 depicts our experimental setup, in which the scattering medium was a 1 mm thick chicken breast slice with a reported mean free path (MFP) of 43 μm, an anisotropy factor (*g*) of 0.965, and a transport mean free path of 1.2 mm [23]. As illustrated in Fig. 1, our experiments include three steps: 1) wavefront measurement; 2) imaging; 3) projecting.

In the first step, we used a 532 nm laser (green arrows) to characterize the tissue. Light from the laser source was split into two beams: the sample beam and the reference beam. The sample beam was coupled into a single-mode fiber, and the output port of the fiber was used as the reference point source, which has a 3.5 μm diameter (the mode field diameter of the fiber) and was placed 100 mm away from the tissue. The spherical wave emitted from the point source was scrambled by the tissue into a speckle field and then imaged onto the surface of a reflective SLM (1080 × 1920 pixels, 6.5 μm pixel pitch, 255 gray levels, LETO, Holoeye, Germany) by a commercial camera lens (L1, Micro-Nikkor 105 mm f/2.8, Nikon, Japan) with a magnification of 3×. The object plane of L1 was at the middle of the medium to maximize the FOV. The reference beam was expanded by a beam expander into a near-plane wave with a 33 mm diameter ($1/{e}^{2}$) before interfering with the sample beam. The interference pattern on the SLM surface was captured by CCD1 through another commercial lens (L3, Micro-Nikkor 105 mm f/2.8, Nikon, Japan). Then a 4-step phase-shifting interferometry method [21] was implemented to extract the diffused wavefront of the sample beam from 4 interference patterns captured at 4 different phase delays (0, π/2, π, 3π/2) between the two arms. The phase delay was provided by a liquid crystal variable retarder (LCVR, LCC1113-A, Thorlabs, USA) placed in the reference arm. Then the conjugation of the diffused wavefront (a small region is shown in Fig. 3(c)) was applied on the SLM to offset this diffused wavefront.
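The 4-step reconstruction can be sketched as follows; the interferograms here are synthetic (simulated with an assumed background *a* and modulation *b*, not measured data), and the phase is recovered as $\mathrm{arctan2}({I}_{3\pi /2}-{I}_{\pi /2},{I}_{0}-{I}_{\pi })$:

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, (64, 64))    # unknown diffused-wavefront phase
a, b = 1.0, 0.8                               # background level and fringe modulation

# interferograms at the four reference phase delays 0, pi/2, pi, 3pi/2
I0, I1, I2, I3 = (a + b * np.cos(phi + delta)
                  for delta in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2))

# 4-step phase-shifting reconstruction: I0 - I2 = 2b cos(phi), I3 - I1 = 2b sin(phi)
phi_rec = np.arctan2(I3 - I1, I0 - I2)
```

The conjugated map to display on the SLM would then simply be the negative of `phi_rec`.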

Next, we performed the imaging experiment using a ‘butterfly’ object (Fig. 3(a)). We replaced the point source with the object and illuminated the object with an LED source (M530L3, Thorlabs, USA) which has a center wavelength of 530 nm and a bandwidth of 20 nm. Since we only measured and compensated the 532 nm wavelength, the other wavelengths in the LED spectrum would add background noise to the image [24]. Therefore, we employed a 532 nm filter with a 1 nm bandwidth between the LED and the object to block these wavelengths. The diffused wavefront (red arrows in Fig. 2) emerging from the scattering medium was modulated by the SLM into a series of plane waves propagating towards different directions and then focused to different points on CCD2 by L2 (achromatic doublet, 100 mm focal length), forming a clear image of the object (Fig. 3(f)). The FOV was confined to the center part of the object due to the limited memory effect range of the slice.

Lastly, we performed the projecting experiment by exchanging the positions of CCD2 and the object (along with the LED and the filter). In this case, the direction of the light path was opposite to that in the imaging experiment (marked with red arrows in Fig. 2). As in the imaging experiment, the SLM was relayed to the middle of the medium to maximize the FOV of the projection. Figure 3(g) shows the projection captured by CCD2. Compared to our previous approach for holographic projection through turbid layers using coherent light [2], the projection obtained with the new approach is devoid of speckle artifacts thanks to the use of incoherent light.

Till now, we have demonstrated the feasibility of *bidirectional* image transmission through *thick* scattering media by digital optical phase conjugation. Next, we address the issue of the optimal relay plane of the SLM. In the above experiments, the image and projection were captured when the SLM was relayed to the middle plane of the medium. For a qualitative comparison, we also captured the image (Fig. 3(d)) and projection (Fig. 3(e)) of the object when the SLM was relayed to the right surface of the medium, which is, to the best of our knowledge, the case for all previous imaging or projecting experiments based on the memory effect and wavefront shaping [1–4,25–27]. An obvious FOV improvement can be observed from Figs. 3(d) and 3(e) to Figs. 3(f) and 3(g). However, this alone does not establish that the middle plane of the medium is the best relay plane; next, we perform a quantitative study of the influence of the position of the SLM’s relay plane on the FOV.

#### 3.2 Optimal relay plane of the SLM

Now, we study the influence of the position of the SLM’s relay plane on the FOV of the transmitted images and find the optimal relay plane. To measure the FOV in the imaging configuration, we used the reference point source as the object and recorded the intensity of its image (a spot) on CCD2 as the point source was scanned along the *x*-axis. The measured FOV curves for ${z}_{r}$ = 0, 0.5 and 1 mm (the zero point of ${z}_{r}$ is on the right surface of the medium and its positive direction points to the left) are shown in Fig. 4(a). Here the *x*-axis was converted into an angular coordinate by dividing it by the distance (100 mm) between the point source and the medium. The change of the curve width (full width at 1/5 maximum and at 1/2 maximum) with ${z}_{r}$ (in 0.05 mm steps) is plotted in Fig. 4(c) (blue curve for the 1/5 width and orange curve for the 1/2 width). Here, the 1.38 refractive index [28] of the chicken breast tissue was taken into account when calculating ${z}_{r}$.

To measure the FOV in the projecting configuration, we time-reversed the diffused wavefront emitted from the reference point source by modulating the reference beam into the phase-conjugated (time-reversed) wave, which travels back along the path of the diffused wave and focuses onto the point source. We scanned the time-reversed focus along the *x*-axis by superimposing a ramp phase map onto the conjugated one displayed on the SLM and recorded the intensity of the focus at each scanning point. The measured FOV curves for ${z}_{r}$ = 0, 0.5 and 1 mm are shown in Fig. 4(b). We plotted the change of the curve width with ${z}_{r}$ in Fig. 4(c) (green curve for the 1/5 width and purple curve for the 1/2 width). We observe that in both the imaging and projecting situations, the optimal ${z}_{r}$ that maximizes the FOV is $0.55L$, and the maximum FOV (1/5 width) is about 1.5 times larger than that for ${z}_{r}$ = 0 or *L*. To investigate whether the thickness of the scattering medium affects the optimal ${z}_{r}$, we repeated the above measurements for a 2 mm thick chicken breast slice. The change of the FOV with ${z}_{r}$ is plotted in Fig. 4(d), which shows a similar optimal ${z}_{r}$ ($0.5L$).
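The width metric used in Fig. 4 (full width at 1/2 or 1/5 of the maximum) can be extracted from a measured FOV curve as sketched below; `width_at_fraction` is a hypothetical helper written for illustration, not code from the actual experiment:

```python
import numpy as np

def width_at_fraction(x, y, frac):
    """Full width of a single-peaked curve y(x) at frac * max(y), with linear
    interpolation of the two level crossings for sub-sample accuracy."""
    y = np.asarray(y, dtype=float)
    level = frac * y.max()
    idx = np.flatnonzero(y >= level)
    lo, hi = idx[0], idx[-1]

    def cross(j_out, j_in):
        # interpolate between a point below the level (j_out) and one above (j_in)
        return x[j_out] + (level - y[j_out]) * (x[j_in] - x[j_out]) / (y[j_in] - y[j_out])

    left = x[lo] if lo == 0 else cross(lo - 1, lo)
    right = x[hi] if hi == len(y) - 1 else cross(hi + 1, hi)
    return right - left

# demo on a Gaussian with sigma = 2: the 1/2-max width is 2*sigma*sqrt(2 ln 2)
xg = np.linspace(-10.0, 10.0, 4001)
yg = np.exp(-xg**2 / (2 * 2.0**2))
w_half = width_at_fraction(xg, yg, 0.5)
w_fifth = width_at_fraction(xg, yg, 0.2)
```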

Different kinds of scattering samples were also tested and showed results similar to those of the chicken breast tissue (see the Appendix). Based on these results, we make the following observations. i) The optimal relay plane of the SLM that maximizes the FOV of the images and projections is around the middle plane of the scattering media, independent of the MFP, *g* and the thickness of the media. This agrees well with our theoretical expectation in section 2: ${z}_{r}=0.5L$. The ± 10% variation of the optimal ${z}_{r}$ in the experimental results is attributed to measurement errors and to the macroscopic nonuniformity of the samples’ refractive index distribution (as distinguished from the microscopic nonuniformity of the refractive index, i.e., the difference between the refractive indices of the scattering particles and the background), which may be caused by the preparation of the samples. ii) The ratio between the FOV for ${z}_{r}=0.5L$ and that for ${z}_{r}=0$ (or ${z}_{r}=L$) decreases as the medium thickness increases.

#### 3.3 FOV enlargement by multiple wavefront measurements

Though we have optimized the FOV by relaying the SLM to the middle of the media, the FOV is still confined by the memory effect (Figs. 3(f) and 3(g)). Here, we propose a method to expand the FOV beyond the memory effect range. Since a single measurement of the diffused wavefront originating from a point source behind a scattering medium gives us optical access (imaging and projecting) to the vicinity of that point source, multiple wavefront measurements for different point sources should enlarge the accessible region. We now perform experiments to validate this idea.

Assume the conjugations of the measured wavefronts are $\mathrm{exp}(i{\phi}_{0})$, $\mathrm{exp}(i{\phi}_{1})$, $\mathrm{exp}(i{\phi}_{2})$, …, and $\mathrm{exp}(i{\phi}_{n})$, where ${\phi}_{n}$ is the phase distribution of the *n*th wavefront. Then the compensative phase map to be displayed on the SLM is given by the synthesized map

$${W}_{s}={\displaystyle {\sum}_{n}\mathrm{exp}(i{\phi}_{n})\mathrm{exp}(i{k}_{n}x)}, \tag{19}$$

where $\mathrm{exp}(i{k}_{n}x)$ is a phase ramp whose slope ${k}_{n}$ is proportional to the displacement between the *n*th point source and the *0*th point source (numbering of the point sources is determined at will). Now, let us see how the synthesized phase map ${W}_{s}$ compensates the diffused wavefront. In the imaging step, light from parts of the object that are beyond the compensation range covered by ${W}_{0}=\mathrm{exp}(i{\phi}_{0})$ will be compensated by some other component in ${W}_{s}$, while the remaining components will generate background noise in the image. The function of the phase ramp attached to the active component is to direct the compensated wavefront to the corresponding region of the image plane, to avoid overlap of the images from different parts of the object. Similarly, in the projecting situation, light from abaxial parts of the object will impinge onto the SLM at large angles beyond the memory effect range, and the function of the phase ramp is to pull this light back to the optical axis. In our experiments, we measured five diffused wavefronts with the point source placed at five different positions arranged in a ‘bottom left, bottom right, top left, top right, and center’ manner (Fig. 5(a)) behind a 1 mm thick chicken breast slice. Then we used the five conjugated phase maps and the synthesized map to perform the imaging and projecting experiments. Figures 5(b)–5(g) (Figs. 5(h)–5(m)) show the images (projections) of the ‘butterfly’ object when the SLM displayed the five conjugated phase maps and the synthesized map, respectively.

It can be observed that a single conjugated phase map only ‘lights up’ a small region of the object in the vicinity of the corresponding point source, while the synthesized phase map can ‘light up’ a five times larger region spanned by the five point sources. It is worth noting that the distances between the point sources should be chosen appropriately: if they are too small, the FOVs for the individual point sources will overlap with each other, while if they are too large, some regions between the point sources will remain dark.

## 4. Conclusion and discussion

We have demonstrated the feasibility of bidirectional image transmission through thick scattering media within the memory effect range by digital optical phase conjugation. Potential applications include biomedical diagnosis and therapy, optical trapping, and imaging in low-visibility weather such as fog and rain. The FOV of the delivered images can be optimized by relaying the SLM to the middle plane of the media, and can be expanded beyond the memory effect range by performing multiple wavefront measurements.

It may appear that the bidirectional image transmission is simply a consequence of optical reciprocity. This is true in free space, but not quite in scattering media, because the mechanisms determining the optimal relay planes of the SLM in the two directions are not the same, even though the optimal relay planes coincide, as stated in the theory part. In the imaging situation, the optimal relay plane is determined, at root, by the transverse shift experienced by the transmitted field on the output surface of the media when the input field is tilted by a certain amount. In the projecting situation, it is determined by the combination of tilting and shifting of the input field that maximizes the correlation between the input and output fields.

In the proposed method (section 3.3) to expand the FOV, the synthesized wavefront ${W}_{s}$ (Eq. (19)) is a complex function with both amplitude and phase distributions, but our SLM is phase-only, meaning it can only control the phase of light. Therefore, the amplitude of ${W}_{s}$ has to be discarded and only the phase distribution is used as the compensative phase map. In consequence, the compensation effectiveness decreases compared to a single conjugated phase map, and the contrast of the image has to be compromised for a larger FOV.
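A minimal sketch of this synthesis and of the phase-only truncation, with hypothetical wavefront phases and ramp slopes; the overlap of the phase-only map with each conjugated component stays well above the random-background level but well below unity, which is the contrast penalty described above:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
x = np.arange(N)

# hypothetical conjugated wavefront phases for five point-source positions
phis = [rng.uniform(-np.pi, np.pi, N) for _ in range(5)]
# hypothetical ramp slopes steering each compensated beam to its own image region
slopes = [0.0, 0.1, -0.1, 0.2, -0.2]

# complex sum of the conjugated wavefronts, each carrying its phase ramp (cf. Eq. (19))
Ws = sum(np.exp(1j * (phi + s * x)) for phi, s in zip(phis, slopes))

# a phase-only SLM cannot display |Ws|: keep only the phase of Ws
slm_map = np.angle(Ws)

# overlap of the phase-only map with each conjugated component
effs = [abs(np.mean(np.exp(1j * (slm_map - phi - s * x))))
        for phi, s in zip(phis, slopes)]
```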

In practical applications where the scattering media are dynamic, one cycle of the wavefront measurement and compensation has to be completed within the decorrelation time of the media. The cycling rate is determined by both the CCD’s (CCD1 in Fig. 2) framerate ${f}_{c}$ and the SLM’s framerate ${f}_{s}$, since the wavefront measurement and compensation are implemented by the CCD and SLM, respectively. The speed of wavefront measurement is ${f}_{c}/4$ for in-line interferometry [21,29] and ${f}_{c}$ for off-axis interferometry [30,31], and the speed of wavefront compensation is ${f}_{s}$. Therefore, the overall cycling rate is determined by the smaller of ${f}_{s}$ and ${f}_{c}/4$ (or ${f}_{c}$).
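The rate bookkeeping above can be condensed into a tiny helper; `cycling_rate` is a hypothetical function written for illustration, not part of the experimental code:

```python
# f_c: camera framerate (Hz), f_s: SLM framerate (Hz)
def cycling_rate(f_c, f_s, inline=True):
    # in-line phase-shifting interferometry needs 4 frames per wavefront,
    # off-axis interferometry recovers the wavefront from a single frame
    measure = f_c / 4 if inline else f_c
    return min(measure, f_s)
```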

The laser-generated point source and the reference arm in our setup are not indispensable for characterizing the scattering media: a previously reported method for digital optical phase conjugation of fluorescence [32] can be employed instead.

## Appendix

We here study the influence of the position of the SLM’s relay plane on the FOV of the transmitted images using silica microsphere samples, which were created by dispersing silica microspheres into 10% gelatin water solutions. The refractive indices of the microspheres and the background (gel) are 1.45 and 1.33, respectively. We created two different samples using microspheres of 1 µm and 2.5 µm diameters, which according to Mie theory yield *g* values of 0.95 and 0.98, respectively. The concentrations of the microspheres were controlled to $1.35\times {10}^{-2}/\mu {\mathrm{m}}^{3}$ and $6\times {10}^{-4}/\mu {\mathrm{m}}^{3}$ for the 1 µm and 2.5 µm samples, respectively, to obtain a 0.1 mm MFP for both samples (the product of the concentration and the scattering cross-section of the microsphere should be equal for the two samples to guarantee equal MFPs). Figure 6 shows the relationship between the FOV (1/5 width) and ${z}_{r}$ for samples with different *g* and *L*. It can be observed that for all samples, the maximum FOV is reached around ${z}_{r}=0.5L$ (with a $\pm 10\%$ variation). This further confirms our statement that the optimal relay plane of the SLM should be at the middle of the media (see section 3.2).
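The equal-MFP condition can be made concrete with the relation $\mathrm{MFP}=1/(\rho {\sigma}_{s})$; note that the cross-sections below are back-computed from the quoted concentrations and the 0.1 mm target, rather than taken from a Mie calculation:

```python
# scattering mean free path: MFP = 1 / (rho * sigma_s), with rho the particle
# number density and sigma_s the scattering cross-section of one sphere
def mfp_um(rho_per_um3, sigma_um2):
    return 1.0 / (rho_per_um3 * sigma_um2)

rho_1um, rho_2p5um = 1.35e-2, 6e-4      # concentrations quoted above (spheres/um^3)
target_mfp = 100.0                       # 0.1 mm expressed in um

# cross-sections implied by the common MFP target (assumed, not Mie results)
sigma_1um = 1.0 / (rho_1um * target_mfp)      # ~0.74 um^2
sigma_2p5um = 1.0 / (rho_2p5um * target_mfp)  # ~16.7 um^2
```

Equal products $\rho {\sigma}_{s}$ for the two samples then guarantee equal MFPs, as stated in the text.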

## Funding

National Key Research and Development Program (2016YFC0100602).

## References

**1. **O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics **6**(8), 549–553 (2012).

**2. **M. Qiao, H. Liu, G. Pang, and S. Han, “Non-invasive three-dimension control of light between turbid layers using a surface quasi-point light source for precorrection,” Sci. Rep. **7**(1), 9792 (2017).

**3. **T. R. Hillman, T. Yamauchi, W. Choi, R. R. Dasari, M. S. Feld, Y. Park, and Z. Yaqoob, “Digital optical phase conjugation for delivering two-dimensional images through turbid media,” Sci. Rep. **3**(1), 1909 (2013).

**4. **J. Ryu, M. Jang, T. J. Eom, C. Yang, and E. Chung, “Optical phase conjugation assisted scattering lens: variable focusing and 3D patterning,” Sci. Rep. **6**(1), 23494 (2016).

**5. **E. N. Leith and J. Upatnieks, “Holographic imagery through diffusing media,” J. Opt. Soc. Am. **56**(4), 523 (1966).

**6. **S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. **104**(10), 100601 (2010).

**7. **K. Deisseroth, “Optogenetics,” Nat. Methods **8**(1), 26–29 (2011).

**8. **O. Yizhar, L. E. Fenno, T. J. Davidson, M. Mogri, and K. Deisseroth, “Optogenetics in neural systems,” Neuron **71**(1), 9–34 (2011).

**9. **I. H. El-Sayed, X. Huang, and M. A. El-Sayed, “Selective laser photo-thermal therapy of epithelial carcinoma using anti-EGFR antibody conjugated gold nanoparticles,” Cancer Lett. **239**(1), 129–135 (2006).

**10. **D. P. O’Neal, L. R. Hirsch, N. J. Halas, J. D. Payne, and J. L. West, “Photo-thermal tumor ablation in mice using near infrared-absorbing nanoparticles,” Cancer Lett. **209**(2), 171–176 (2004).

**11. **H. C. Van de Hulst, *Multiple light scattering: tables, formulas, and applications* (Elsevier, 2012).

**12. **R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics **9**(9), 563–571 (2015).

**13. **H. Yu, J. Park, K. Lee, J. Yoon, K. Kim, S. Lee, and Y. Park, “Recent advances in wavefront shaping techniques for biomedical applications,” Curr. Appl. Phys. **15**(5), 632–641 (2015).

**14. **P. Lai, L. Wang, J. W. Tay, and L. V. Wang, “Photoacoustically guided wavefront shaping for enhanced optical focusing in scattering media,” Nat. Photonics **9**(2), 126–132 (2015).

**15. **I. M. Vellekoop, “Feedback-based wavefront shaping,” Opt. Express **23**(9), 12189–12206 (2015).

**16. **S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. **61**(7), 834–837 (1988).

**17. **I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. **61**(20), 2328–2331 (1988).

**18. **Y. M. Wang, B. Judkewitz, C. A. Dimarzio, and C. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. **3**(1), 928 (2012).

**19. **B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. **11**(8), 684–689 (2015).

**20. **G. Osnabrugge, R. Horstmeyer, I. N. Papadopoulos, B. Judkewitz, and I. M. Vellekoop, “Generalized optical memory effect,” Optica **4**(8), 886–892 (2017). [CrossRef]

**21. **I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. **22**(16), 1268–1270 (1997). [CrossRef] [PubMed]

**22. **B. Judkewitz, Y. M. Wang, R. Horstmeyer, A. Mathy, and C. Yang, “Speckle-scale focusing in the diffusive regime with time-reversal of variance-encoded light (TROVE),” Nat. Photonics **7**(4), 300–305 (2013). [CrossRef] [PubMed]

**23. **W.-F. Cheong, S. A. Prahl, and A. J. Welch, “A review of the optical properties of biological tissues,” IEEE J. Quantum Electron. **26**(12), 2166–2185 (1990). [CrossRef]

**24. **F. van Beijnum, E. G. van Putten, A. Lagendijk, and A. P. Mosk, “Frequency bandwidth of light focused through turbid media,” Opt. Lett. **36**(3), 373–375 (2011). [CrossRef] [PubMed]

**25. **C.-L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express **18**(20), 20723–20731 (2010). [CrossRef] [PubMed]

**26. **G. Ghielmetti and C. M. Aegerter, “Scattered light fluorescence microscopy in three dimensions,” Opt. Express **20**(4), 3744–3752 (2012). [CrossRef] [PubMed]

**27. **I. M. Vellekoop and C. M. Aegerter, “Scattered light fluorescence microscopy: imaging through turbid layers,” Opt. Lett. **35**(8), 1245–1247 (2010). [CrossRef] [PubMed]

**28. **S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. **58**(11), R37–R61 (2013). [CrossRef] [PubMed]

**29. **W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. U.S.A. **98**(20), 11301–11305 (2001). [CrossRef] [PubMed]

**30. **E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. **38**(34), 6994–7001 (1999). [CrossRef] [PubMed]

**31. **E. Cuche, P. Marquet, and C. Depeursinge, “Spatial filtering for zero-order and twin-image elimination in digital off-axis holography,” Appl. Opt. **39**(23), 4070–4075 (2000). [CrossRef] [PubMed]

**32. **I. M. Vellekoop, M. Cui, and C. Yang, “Digital optical phase conjugation of fluorescence in turbid tissue,” Appl. Phys. Lett. **101**(8), 081108 (2012). [CrossRef] [PubMed]