Sampling requirements and adaptive spatial averaging for incoherent digital holography

Open Access

Abstract

Incoherent digital holography (IDH) enables passive 3D imaging under spatially incoherent light; however, the reconstructed images are seriously affected by detector noise. Herein, we derive theoretical sampling requirements for IDH to reduce this noise via simple postprocessing based on spatial averaging. The derived theory provides a significant insight that the sampling requirements vary depending on the recording geometry. By judiciously choosing the number of pixels used for spatial averaging based on the proposed theory, noise can be reduced without losing spatial resolution. We then experimentally verify the derived theory and show that the associated adaptive spatial averaging technique is a practical and powerful way of improving 3D image quality.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Incoherent digital holography (IDH) is a technique for recording holograms of 3D objects illuminated by spatially incoherent light sources and numerically reconstructing images at arbitrary propagation distances [1]. Unlike traditional digital holography, which uses lasers [2], IDH’s recording process is based on self-interference. Thus, holograms can be created using incoherent light sources such as sunlight, fluorescent lights, halogen lamps, and light-emitting diodes (LEDs). This feature enables us to create holographic 3D cameras that use ambient light [3,4] and perform fluorescence microscopy [5–7] and radiometric temperature measurement [8], none of which are possible with traditional digital holography. IDH has thus expanded holography’s potential imaging and measurement applications. It also offers unique imaging features, such as super-resolution due to violation of the Lagrange invariant [9–11] and infinite depth-of-focus (DOF) with rotational shear [12–14]. However, owing to the low contrast of IDH holograms, the reconstructed images are more seriously affected by detector noise than those produced by traditional digital holography. This is a known technical difficulty with implementing IDH.

Temporal averaging has been used to deal with this issue and eliminate detector noise [15]. This involves capturing a sequence of holograms and then averaging them to suppress noise. This simple and effective method has been applied in many studies [16,17]. However, this method is time consuming and results in reduced temporal resolution.

Based on the analogy between time and space in optics assuming that the system is ergodic, spatial averaging or digital pixel binning should effectively reduce noise without reducing temporal resolution. Although most researchers would agree with this idea, spatial averaging for IDH has not yet been reported. With spatial averaging, each pixel’s value is mathematically averaged with those of several adjacent pixels. Of course, this does have the side effect of reducing the spatial resolution, as is well known in image sensor applications. However, IDH requires optics, such as lenses, concave mirrors, or spatial light modulators (SLMs), to introduce the radial or axial shear required to encode 3D object information into self-interference holograms [1]. Introducing such optics necessarily causes the beams or holograms to be spatially band limited prior to sampling with an image sensor. In addition, for 3D objects that must be captured over a wide depth range, the effects of spatial-frequency filtering will vary depending on the depth. Thus, there may be a mismatch between the spatial-frequency bandwidths of the image sensor and the beams passing through the optics. In other words, the image sensor may oversample the optical field or hologram, in which case we can effectively implement spatial averaging without losing spatial resolution.

In this paper, we derive theoretical sampling requirements for IDH that should enable noise to be reduced by spatial averaging without suffering the side effect of spatial resolution loss. The resulting theory reveals the relation between the spatial-frequency bandwidths of the image sensor and the beams passing through the optical components, yielding the significant insights that the sampling requirement varies depending on the recording geometry and that the image sensor oversamples holograms of 3D objects located in a certain depth range. Based on this theory, we then propose an adaptive spatial averaging method. By choosing appropriate numbers of pixels for spatial averaging depending on the recording geometry, it is possible to reduce noise while avoiding averaging’s side effect. The theoretical and experimental investigations described here can thus provide a simple and powerful way of reconstructing high-quality images via IDH.

This paper is organized as follows. In Section 2, we briefly introduce the basic principle behind IDH. In Section 3, we derive IDH’s sampling requirements, based on wave propagation and the Nyquist interval, and thereby reveal the relation between the image sensor and the beams passing through the optics. In Section 4, we describe an adaptive spatial averaging method based on this theory. In Section 5, we evaluate our method experimentally by recording a USAF resolution chart to confirm our theoretical findings and demonstrate the method’s effectiveness. Finally, we present our conclusions in Section 6.

2. Incoherent digital holography

In this section, we briefly review the basic principle behind IDH before we derive its sampling requirements in Section 3. Figure 1 illustrates a typical recording geometry for IDH based on common-path interferometry, which is commonly referred to as Fresnel incoherent correlation holography (FINCH) [5,6,15]. Although the following description and discussion are mainly based on FINCH, due to its simplicity and robustness, they can also be applied to other setups, such as Michelson or Mach-Zehnder interferometers [3,14].

Fig. 1. Recording geometry for incoherent digital holography.

In IDH, the 3D object to be captured can be regarded as being composed of many mutually incoherent point sources. Accordingly, the hologram can be regarded as being formed by the incoherent summation of the individual point-source holograms. For simplicity, the following theoretical description therefore describes the recording process for a single point source centered on the optical axis. Note that IDH requires high temporal coherence to create holograms, so we assume the point-source light is monochromatic, with wavelength $\lambda$. Starting from the point source, the beam propagates a distance $z_s$ before being incident on a lens of focal length $f_o$. The transmitted beam’s complex amplitude distribution after passing through the lens can be represented mathematically based on Fresnel diffraction, as follows:

$$u_o(\vec{r}) = C_0 \exp \left\{ \frac{i\pi}{\lambda} \left( \frac{f_o-z_s}{z_s f_o} \right) \left|\vec{r} \right|^2 \right\},$$
where $\vec {r}$ is a 2D position vector perpendicular to the optical axis and $C_0$ is a constant. In the following equations, the $C_n$ $(n=1, 2, 3,\ldots )$ are also used as constants. The optical field represented by Eq. (1) then propagates a distance $z_l$ before reaching an SLM, which acts as a bifocal lens and yields two spherical beams. The SLM introduces defocus phases with focal lengths $f_{d1}$ and $f_{d2}$, respectively, in the two beams, and their complex amplitude distributions are
$$u_1(\vec{r}) = C_1 \exp \left\{ \frac{i\pi}{\lambda} \left( \frac{f_{d1}-z_d}{z_d f_{d1}} \right) \left|\vec{r} \right|^2 \right\},$$
$$u_2(\vec{r}) = C_2 \exp \left\{ \frac{i\pi}{\lambda} \left( \frac{f_{d2}-z_d}{z_d f_{d2}} \right) \left|\vec{r} \right|^2 \right\},$$
where
$$z_d = \frac{z_s f_o + z_l f_o - z_l z_s}{f_o - z_s}.$$
Note that $f_{d1}$ and $f_{d2}$ must be different if we are to encode the 3D object’s depth information into a hologram. The SLM generally introduces these two focal lengths by designing phase patterns [18,19] or leveraging the birefringence properties of liquid crystals [20,21]. Alternatively, polarization diffraction optical elements can be used [22]. Next, the phase-modulated beams propagate a distance $z_h$ to the image sensor, at which point the spherical beams’ complex amplitudes can be represented as
$$u_1'(\vec{r}) = C_3 \exp \left\{ \frac{i\pi}{\lambda} \left( \frac{f_{d1}-z_d}{f_{d1} z_d + f_{d1} z_h - z_h z_d } \right) \left|\vec{r} \right|^2 \right\},$$
$$u_2'(\vec{r}) = C_4 \exp \left\{ \frac{i\pi}{\lambda} \left( \frac{f_{d2}-z_d}{f_{d2} z_d + f_{d2} z_h - z_h z_d } \right) \left|\vec{r} \right|^2\right\}.$$
These beams interfere with each other, creating a self-interference hologram on the sensor:
$$\left| u_1' + u_2' \right| ^2 = B + C_5 \exp \left\{ \frac{i\pi}{\lambda z_r} \left|\vec{r} \right|^2 \right\} + \mathrm{c.c.},$$
where $B$ denotes the hologram’s bias component, the reconstruction distance $z_r$ is
$$z_r = \frac{(z_h z_d - f_{d1} z_d - f_{d1} z_h) (z_h z_d - f_{d2} z_d - f_{d2} z_h)}{(f_{d1} - f_{d2}) z_d^2},$$
and “c.c.” denotes the complex conjugate of the second term. The hologram described by Eq. (7) is then captured digitally by the image sensor. During the reconstruction process, the first and third (or second) terms in Eq. (7) appear as bias and twin images, which reduce the quality of the numerically reconstructed images. However, phase-shifting [1,23–25], off-axis [19,26–28], and compressive sensing [29,30] techniques can be used to extract only the desired second (or third) term. From the extracted complex amplitude distribution, it is possible to numerically refocus or reconstruct a 3D image of the object via numerical backpropagation based on Fresnel diffraction or angular spectrum methods [31].
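To make the recording geometry concrete, the following minimal Python sketch (our own illustration, not the authors’ code) evaluates Eqs. (4) and (8); the function and parameter names are our assumptions, and any consistent length unit may be used.

```python
def z_d(z_s, z_l, f_o):
    """Effective point-source distance seen from the SLM plane, Eq. (4)."""
    return (z_s * f_o + z_l * f_o - z_l * z_s) / (f_o - z_s)

def z_r(z_s, z_l, z_h, f_o, f_d1, f_d2):
    """Reconstruction distance of the self-interference hologram, Eq. (8).
    For the flat-surface case f_d2 -> infinity (cf. Section 5.1), a very
    large f_d2 serves as a numerical stand-in for the algebraic limit."""
    zd = z_d(z_s, z_l, f_o)
    t1 = z_h * zd - f_d1 * zd - f_d1 * z_h
    t2 = z_h * zd - f_d2 * zd - f_d2 * z_h
    return t1 * t2 / ((f_d1 - f_d2) * zd ** 2)
```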

3. Sampling requirements for incoherent digital holography

Based on Fig. 1 and the equations presented in Section 2, we can derive theoretical sampling requirements for IDH. The hologram described by Eq. (7) can be rewritten as

$$\left| u_1' + u_2' \right| ^2 = B + 2 C_5 \cos \left\{ \frac{\pi}{\lambda z_r} \left|\vec{r} \right|^2 \right\}.$$
This corresponds to a Gabor zone plate of focal length $z_r$. To appropriately digitize this with an image sensor without aliasing, the pixel pitch must satisfy the sampling theorem. According to this theorem, the Nyquist interval $p_N$ is given by
$$p_N = \frac{\lambda z_r}{2\left| \vec{r}\right|_{\mathrm{max}}},$$
where $\left | \vec {r}\right |_{\mathrm {max}}$ denotes the maximum position vector, corresponding to the radius of the Gabor zone plate. As shown in the inset to Fig. 1, IDH holograms are limited in spatial extent due to vignetting by the optics, such as the lens and SLM, which are needed for 3D imaging [1]. Note that this inevitably restricted hologram diameter is one of the main differences between IDH and traditional digital holography, which does not require such optics to create holograms. Denoting the hologram diameter by $D_h$, Eq. (10) can be rewritten as
$$p_N = \frac{\lambda z_r}{D_h}.$$
This is simply proportional to the reciprocal of the Gabor zone plate’s numerical aperture, and correspondingly to IDH’s lateral spatial resolution, which is given by $\delta \propto 1.22 \lambda z_r(D_h M_T)^{-1}$ [32,33], where $M_T$ is the transverse magnification. To analytically and/or numerically evaluate the sampling requirements in more detail, we define the diameter $D_h$ in terms of parameters such as the object’s depth position, the focal lengths of the optics, and the distances between them, as described below.

Denoting the lens diameter by $D_o$, the diameter $D_l$ of the beam incident on the SLM can be represented as follows, based on similar triangles determined by the curvature of the spherical phase in Eq. (1):

$$D_l = \left| 1+ \frac{f_o - z_s}{z_s f_o} z_l \right| D_o.$$
Note that we ignore the fluctuation of the diameter due to diffraction. In practice, the SLM causes vignetting with an effective width of $D_{slm}$, so the beam diameter immediately after the SLM is determined by
$$D_{h0} = \mathrm{min} \left\{ D_l, D_{slm} \right\},$$
where $\mathrm {min} \left \{ \cdots \right \}$ yields the minimum term in the set. The SLM modulates the beam to generate two spherical beams $u_1\left (\vec {r}\right )$ and $u_2\left (\vec {r}\right )$, as expressed by Eqs. (2) and (3), which then propagate to the image sensor while converging or diverging, depending on their defocus phases. Similarly to Eq. (12), the diameters of the beams reaching the image sensor are
$$D_{h1} = \left| 1+ \frac{f_{d1} - z_d}{z_d f_{d1}} z_h \right| D_{h0},$$
$$D_{h2} = \left| 1+ \frac{f_{d2} - z_d}{z_d f_{d2}} z_h \right| D_{h0}.$$
The area within which the two beams overlap determines the hologram’s shape and size, and its effective width $D_h$. Again, the image sensor may cause vignetting, with an effective width of $D_{sensor}$, so the hologram diameter is determined by
$$D_{h} = \mathrm{min} \left\{ D_{h1}, D_{h2}, D_{sensor} \right\}.$$
Note that SLMs and image sensors are generally rectangular, so we can define $D_{slm}$ and $D_{sensor}$ in three ways, in terms of their vertical, horizontal, and diagonal widths. By substituting Eqs. (8) and (16) into Eq. (11) and referring to the actual recording geometry parameters, we can numerically evaluate the sampling conditions, or the Nyquist interval $p_N$, for IDH. To avoid undersampling the hologram, the image sensor’s pixel pitch $p$ should satisfy the condition
$$p \le p_N.$$
Once we have determined the IDH recording setup, all the system parameters are fixed, except for the recording distance $z_s$, which varies depending on the 3D object’s actual depth position. Because $p_N$ is a function of $z_s$, the required pixel pitch also changes depending on the depth position. In 3D imaging via IDH, the objects to be captured can be located at a range of depths, so the image sensor may oversample holograms for objects at certain depths, enabling us to adopt spatial averaging without losing spatial resolution. In Section 4, we take advantage of this fact to suppress noise using an adaptive spatial averaging method.
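The following sketch chains Eqs. (12)–(16) into Eq. (11) to evaluate $p_N$ for a given recording distance, reusing the `z_d` and `z_r` helpers sketched in Section 2; it is an illustration under our own naming, not the authors’ implementation.

```python
def nyquist_interval(z_s, z_l, z_h, f_o, f_d1, f_d2,
                     D_o, D_slm, D_sensor, wavelength):
    """Nyquist interval p_N of Eq. (11) for a single on-axis point source."""
    zd = z_d(z_s, z_l, f_o)
    zr = z_r(z_s, z_l, z_h, f_o, f_d1, f_d2)
    # Beam diameter at the SLM, Eq. (12), clipped by the SLM aperture, Eq. (13)
    D_l = abs(1 + (f_o - z_s) / (z_s * f_o) * z_l) * D_o
    D_h0 = min(D_l, D_slm)
    # Diameters of the two spherical beams at the sensor, Eqs. (14) and (15)
    D_h1 = abs(1 + (f_d1 - zd) / (zd * f_d1) * z_h) * D_h0
    D_h2 = abs(1 + (f_d2 - zd) / (zd * f_d2) * z_h) * D_h0
    # Hologram diameter, Eq. (16); the fringe period depends on |z_r|
    D_h = min(D_h1, D_h2, D_sensor)
    return wavelength * abs(zr) / D_h
```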

Note that the above derivation for $p_N$ is based on a point source with a temporally coherent light. In practice, IDH utilizes spectrally band-limited light from a temporally incoherent light source, passed through a bandpass filter. The use of such incoherent light smears the outer regions of holograms [34,35], reducing the diameter $D_h$. Consequently, in practical applications, we can use larger pixel pitches $p$ than the theoretical $p_N$ value given by Eq. (11).

4. Adaptive spatial averaging

There are several potentially effective ways of implementing spatial averaging for IDH. However, because the focus of this study is to verify the effectiveness of the theoretical results and rigorously evaluate them experimentally (see Section 5), we adopted the simple averaging process described below.

Figure 2 illustrates our adaptive spatial averaging method, based on the theory derived in Section 3. After capturing a hologram with the image sensor, each pixel is locally averaged with $n \times n$ $(n\in \mathbb {N})$ adjacent pixels as follows:

$$I'(i', j') = \sum_{i=(i'-1)n+1}^{i'n} \sum_{j=(j'-1)n+1}^{j'n} \frac{I(i,\;j)}{n^2},$$
where $I(i, j)$ and $I'(i', j')$ are the original and averaged holograms, respectively. Here $(i, j)$ and $(i', j')$ denote pixel indices. This averaging process reduces the number of pixels in the hologram by a factor of $n \times n$. This is inconvenient for comparing the quality and resolution of the images reconstructed with and without averaging, so we upsampled the averaged holograms by a factor of $n \times n$ using a nearest neighbor algorithm to maintain the same number of pixels before and after averaging. Although this increased the number of pixels, it also introduced nonnegligible aliasing, which led to artifacts in the images reconstructed via numerical backpropagation. To mitigate this, we applied a spatial lowpass filter with a window size of $n^{-1} n_x \times n^{-1} n_y$, where $n_x$ and $n_y$ are the numbers of hologram pixels in the horizontal and vertical directions. Note that this lowpass filter’s bandwidth was equivalent to that for the previous spatial averaging process.
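A minimal NumPy sketch of this averaging, upsampling, and filtering pipeline, assuming hologram dimensions divisible by $n$ (the function names are ours):

```python
import numpy as np

def spatial_average(hologram, n):
    """n x n block averaging, Eq. (18)."""
    ny, nx = hologram.shape
    return hologram.reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))

def upsample_nearest(averaged, n):
    """Nearest-neighbor upsampling by n in each direction."""
    return np.repeat(np.repeat(averaged, n, axis=0), n, axis=1)

def lowpass(field, n):
    """Fourier-domain window of size (ny/n) x (nx/n), matching the
    bandwidth of the preceding n x n averaging step."""
    ny, nx = field.shape
    F = np.fft.fftshift(np.fft.fft2(field))
    cy, cx = ny // 2, nx // 2
    hy, hx = ny // (2 * n), nx // (2 * n)
    mask = np.zeros((ny, nx))
    mask[cy - hy:cy + hy, cx - hx:cx + hx] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(F * mask))

# Pipeline: average, upsample back to the original grid, then lowpass
# (take .real if the input hologram is a real-valued intensity pattern).
```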

Fig. 2. Illustration of adaptive spatial averaging.

The above averaging process essentially emulates capturing the hologram using an image sensor with pixel pitch $np$. If the recording system’s actual pixel pitch is $p$, the sampling requirement given by Eq. (17) then becomes

$$n \le \frac{p_N}{p}.$$
This implies that when $2p \le p_N$, we can perform adaptive spatial averaging without losing spatial resolution.
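In code, the adaptive choice of $n$ then reduces to the largest integer satisfying Eq. (19); a one-line hypothetical helper:

```python
import math

def choose_n(p_N, p):
    """Largest averaging factor with n * p <= p_N, Eq. (19)."""
    return max(1, math.floor(p_N / p))
```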

5. Experiments

5.1 Noise reduction

Although the effectiveness of applying temporal averaging in IDH has been investigated in detail [15], that of spatial averaging has yet to be reported and hence its ability to reduce noise remains unknown. Before verifying the theoretical results derived in Section 3, we first evaluate spatial averaging’s effect on IDH image quality by comparing it with temporal averaging. In this experiment, we used a standard USAF resolution target with clear bars on an opaque background as the object, and recorded it using the setup shown in Fig. 3. Here, the SLM is operated in a reflection configuration, but the setup is otherwise essentially the same as that shown in Fig. 1. During the recording process, the object was back-illuminated by incoherent LED light with a center wavelength of 625 nm and a bandwidth of 18 nm. This light was collected by a lens of focal length $f_o =$ 300 mm, and a 10-nm bandpass filter centered at 633 nm was applied to enhance the light’s temporal coherence. To generate two spherical beams with different defocus phases of focal lengths $f_{d1}$ and $f_{d2}$, we used a phase-only SLM with 1408 $\times$ 1058 pixels and a pixel pitch of 10.4 $\mu$m, together with a set of two linear polarizers [20].

Fig. 3. Recording setup for the image quality and spatial resolution experiments.

The first polarizer’s transmission axis was oriented at 45° to the alignment direction of the liquid crystals in the SLM. The SLM displayed a 1024 $\times$ 1024-pixel phase pattern for the defocus phase of focal length $f_{d1} =$ 400 mm, which modulated only the incident beam’s horizontal linear polarization. In contrast, the beam’s vertical linear polarization reflected off the SLM’s practically flat surface, i.e., $f_{d2}\rightarrow \infty$. The modulated polarization beams were then incident on the second polarizer, which was aligned parallel to the first. This allowed the orthogonally polarized beams to interfere with each other, thereby creating a hologram. This was captured using a CMOS camera with 2048 $\times$ 2048 pixels and a pixel pitch of 6.5 $\mu$m.

To eliminate the bias and twin images, we introduced phase shifts of 0, $\pi /2$, $\pi$, and $3\pi /2$ rad with the SLM in turn, to apply a four-step phase-shifting technique [25,36]. The effective diameters of the lens, SLM, and camera were $D_o=$ 12 mm, $D_{slm}=$ 10.65 mm, and $D_{sensor}=$ 13.3 mm, respectively. The resolution chart was located at $z_s =$ 400 mm, the lens and SLM were separated by $z_l =$ 100 mm, and the SLM and camera were separated by $z_h =$ 260 mm. Here, we made no attempt to optimize the recording setup to achieve super-resolution IDH [10,11]. The setup was designed to capture the whole target without vignetting.

Figure 4(a) shows the image reconstructed without any form of averaging. This includes the whole USAF resolution chart used in the experiment, and consists of 2048 $\times$ 2048 pixels, the same number as were captured by the image sensor. To investigate the effect of changing $n$ on image quality, Figs. 4(b)–4(d) show the results of performing spatial averaging with $n=$ 2, 4, and 8, respectively. Since the numbers of image sensor pixels are divisible by these values, the images produced by spatial averaging and nearest-neighbor upscaling (Section 4) had the same numbers of pixels as the original image, without the need for zero-padding or cropping. This is helpful for comparing the image quality in detail.

Fig. 4. Reconstruction results (a) without averaging, with spatial averaging over (b) 2 $\times$ 2-, (c) 4 $\times$ 4-, and (d) 8 $\times$ 8-pixel blocks, and with temporal averaging over (e) 4, (f) 16, and (g) 64 holograms.

As can be seen from Figs. 4(a)–4(d), spatial averaging qualitatively improved the image quality. To evaluate the image quality more quantitatively, we also calculated the contrast ratio (CR) and speckle contrast (SC) [37]. The CR measures the brightness of images relative to their backgrounds, and is defined by

$$\mathrm{CR} = \frac{\mu_b}{\mu_d},$$
where $\mu _b$ and $\mu _d$ are the mean intensity values in bright and dark areas, respectively. A higher CR implies higher image quality. Here, we chose the 140 $\times$ 140-pixel bright and dark areas indicated by the solid red and dotted blue boxes in Fig. 4.

In contrast, the SC, or coefficient of variance, estimates the spatial fluctuations in image intensity, and is defined by

$$\mathrm{SC} = \frac{\sigma_b}{\mu_b},$$
where $\sigma _b$ is the standard deviation of the intensity over the bright area. Ideally, the intensity distribution over the chosen bright area of the chart should be unity, so a smaller SC is desirable.
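Both metrics are straightforward to compute from user-selected regions; a sketch assuming NumPy arrays, with boolean masks (or index slices) marking the bright and dark areas:

```python
import numpy as np

def contrast_ratio(img, bright, dark):
    """CR of Eq. (20): mean intensity in the bright area over the dark area."""
    return img[bright].mean() / img[dark].mean()

def speckle_contrast(img, bright):
    """SC of Eq. (21): standard deviation over mean in the bright area."""
    return img[bright].std() / img[bright].mean()
```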

The CR and SC values are shown under the reconstructed images in Fig. 4, and are plotted as stars in Fig. 5. Note that in Fig. 5, we regard spatial averaging over 2 $\times$ 2-, 4 $\times$ 4-, and 8 $\times$ 8-pixel blocks as being comparable to temporal averaging over 4, 16, and 64 holograms. Figures 4 and 5 show that using spatial averaging can improve both the CR and SC, and that larger values of $n$ can improve image quality further. Although we used a 2D transmissive object (the USAF resolution chart) for this evaluation, spatial averaging’s effectiveness is not limited to such objects: in Appendix A, we further demonstrate its effectiveness by recording both 2D and 3D reflective objects.

Fig. 5. Image quality comparison for temporal and spatial averaging, in terms of the (a) contrast ratio and (b) speckle contrast.

For temporal averaging, we captured a total of 64 sets of 4 phase-shifted holograms without modifying the recording setup, then subsequently averaged them. Figures 4(e)–4(g) show typical reconstructed images after temporal averaging with 4, 16, and 64 holograms, respectively. Again, we calculated the CR and SC of each image, and the results are given in Fig. 4. In addition, Fig. 5 plots the changes in the CR and SC with the number of holograms used for temporal averaging. From Figs. 4 and 5, we see that temporal averaging can also provide a steady improvement in image quality, in agreement with the results of a previous study [15].

Figure 5 indicates that, when averaging over 4 holograms, spatial and temporal averaging yield similar CRs and SCs. However, as the number of holograms increases, they show different improvements in CR and SC. Spatial averaging is slightly inferior to temporal averaging in terms of the SC, but it performs better in terms of the CR. Our ergodic noise assumption thus becomes less valid as the number of holograms increases, possibly due to non-stationary spatiotemporal fluctuations in the light and/or noise.

Despite the slight differences in performance between spatial and temporal averaging, the above quantitative and qualitative comparisons show that they can both provide comparable image quality improvements. Remarkably, spatial averaging can reduce noise without requiring us to capture any additional holograms, but higher $n$ values may affect the spatial resolution. In the following subsection, we evaluate this potential side effect of spatial averaging. In addition, we show experimentally that, as our theoretical results suggest, we can overcome this side effect under certain conditions.

5.2 Modulation transfer function

In this section, we discuss and evaluate the sampling conditions needed to reduce noise via spatial averaging without losing spatial resolution. In addition, we use the modulation transfer function (MTF) to evaluate the spatial resolutions of the reconstructed images and thereby confirm both our theoretical results and the effectiveness of adaptive spatial averaging.

Figure 6 plots the relation between the Nyquist interval $p_N$ and the recording distance $z_s$, based on substituting the parameters of our recording setup (Section 5.1) into Eq. (11). In Appendix B, we verify this curve by evaluating the periods of holograms created from a point source. At points on the curve where the pixel pitch $p$ is smaller than $p_N$, the image sensor is oversampling the hologram, and vice versa. Note that, although we have only plotted $p_N$ over a 0–1000 mm range in Fig. 6, the curve is practically convex. The minimum Nyquist interval is 8.64 $\mu$m at a distance of $z_s =$ 786 mm, where the two spherical beams’ diameters are perfectly matched [6,10,32]. Therefore, in our recording setup (Fig. 3), the image sensor’s 6.5 $\mu$m pixel pitch means the holograms are always oversampled, regardless of the object’s depth position.

Fig. 6. Nyquist interval as a function of the object’s recording distance. The solid squares show the sampling pitches with and without spatial averaging.

In the above experiment (Section 5.1), the USAF resolution chart was positioned at $z_s=$ 400 mm. In this case, as shown in Fig. 6, spatial averaging with $n=$ 2 may not lead to a loss of spatial resolution because the corresponding effective pixel size $np=$ 13 $\mu$m is below the Nyquist interval curve. In contrast, for spatial averaging with $n=$ 4 and 8, the effective pixel sizes are $np=$ 26 and 52 $\mu$m, respectively, which lie above the curve and thus may degrade the spatial resolution.
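As a usage sketch, plugging the Section 5.1 parameters into the `nyquist_interval` helper from Section 3 reproduces the curve of Fig. 6 and allows checks like the above; here $f_{d2}\rightarrow\infty$ is approximated by a very large focal length, and the quoted minimum (8.64 $\mu$m at $z_s =$ 786 mm) comes from the text rather than being guaranteed by this illustration.

```python
import numpy as np

# Units: mm; wavelength 633 nm. f_d2 = 1e9 mm stands in for infinity.
params = dict(z_l=100.0, z_h=260.0, f_o=300.0, f_d1=400.0, f_d2=1e9,
              D_o=12.0, D_slm=10.65, D_sensor=13.3, wavelength=633e-6)
z_s = np.linspace(320.0, 1000.0, 500)      # avoid the z_s = f_o singularity
p_N = np.array([nyquist_interval(z, **params) for z in z_s])

# Effective pitches n*p for p = 6.5 um, checked against p_N at z_s = 400 mm
for n in (1, 2, 4, 8):
    print(n, n * 6.5e-3 <= np.interp(400.0, z_s, p_N))
print(z_s[p_N.argmin()], 1e3 * p_N.min())  # per the text: ~786 mm, ~8.64 um
```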

To verify the above prediction, we calculated the MTFs of the reconstructed images. MTFs are generally evaluated by estimating the contrast of the bars in the reconstructed USAF resolution chart images. However, the low quality of these images, particularly Fig. 4(a), caused serious contrast fluctuations that would have made it difficult to evaluate the MTF correctly this way. To avoid this issue and measure the quality more robustly, we first reduced the noise as far as possible via temporal averaging with 64 holograms (Section 5.1) before applying spatial averaging with $n=$ 2, 4, and 8. Figure 7 shows the final reconstructed images, together with magnified views to enable a qualitative comparison of the changes in spatial resolution.

Fig. 7. Images reconstructed (a) without averaging and with (b) 2 $\times$ 2-, (c) 4 $\times$ 4-, and (d) 8 $\times$ 8-pixel spatial averaging after temporal averaging with 64 holograms.

The theoretical spatial resolution of images reconstructed via IDH, as determined by the Rayleigh criterion with lateral magnification [11,32,33], is 31.24 $\mu$m, and the corresponding spatial-frequency bandwidth limit is 16.01 lp/mm. The Rayleigh criterion is generally valid for incoherent imaging. In the case of IDH, however, even though the recording process is based on incoherent light, the reconstruction process is based on coherent imaging. In coherent imaging, the spatial resolution of the reconstructed images can fluctuate depending on the relative phases of light from adjacent points [31]. Consequently, when recording a complicated object (such as the USAF resolution chart), the practical resolution may be slightly different from the Rayleigh criterion. Thus, we estimated the bar contrast up to 20.16 lp/mm (group 4, element 3), beyond the Rayleigh criterion of 16.01 lp/mm, so as to adequately investigate the recording setup’s full spatial-frequency response.
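A hypothetical helper for this kind of bar-contrast estimate: the modulation of an intensity profile sampled across a three-bar USAF element, evaluated at each spatial frequency to build up an MTF curve.

```python
import numpy as np

def bar_modulation(profile):
    """(I_max - I_min) / (I_max + I_min) for a line profile across the bars."""
    profile = np.asarray(profile, dtype=float)
    return (profile.max() - profile.min()) / (profile.max() + profile.min())
```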

Figure 8 shows the resulting MTFs. Ideally, IDH MTFs should exhibit top-hat responses [32]. In our experiment, as this figure shows, optical component aberrations and low temporal coherence reduced the contrast of the high-frequency components. Despite this effect, however, comparing the MTFs is still informative for verifying spatial averaging’s effect on the spatial resolution. There is a relatively small difference between the MTFs for $n=$ 1 and 2, implying no loss of spatial resolution due to 2 $\times$ 2 spatial averaging. In contrast, spatial averaging with $n=$ 4 and 8 reduces the contrast of the high-frequency components, due to the relatively large effective pixel pitches involved. These results clearly support the theoretical prediction inferred from Fig. 6. Moreover, as our initial experiment confirmed (Fig. 4), 2 $\times$ 2 spatial averaging can improve image quality, indicating that it can reduce noise without losing spatial resolution, in accordance with our theoretical results.

Fig. 8. Modulation transfer function comparison. The broken lines are to guide the eye, while the solid vertical line shows the theoretical limit determined by the Rayleigh criterion.

In the above experiment, the object was always positioned at $z_s=$ 400 mm. As mentioned previously, if the object were placed at $z_s=$ 786 mm, where the two spherical beams overlap perfectly, the corresponding Nyquist interval would be 8.64 $\mu$m. In that case, applying spatial averaging with $n \ge 2$ to data from our image sensor (6.5 $\mu$m pixel pitch) would reduce the spatial resolution, although it would still suppress noise; in Appendix C, we experimentally evaluate the CR, SC, and MTF for this case. However, either increasing or decreasing the distance relative to $z_s=$ 786 mm would increase the Nyquist interval (see Fig. 6 and Appendix B), enabling us to apply spatial averaging with a larger number of pixels. Moreover, as mentioned in Section 3, using low-temporal-coherence light tends to yield large Nyquist intervals. This means that, in practical 3D imaging scenarios, spatial averaging could possibly be applied without losing spatial resolution even when the object is located at $z_s=$ 786 mm. Consequently, as long as the image sensor’s pixel pitch is less than the minimum Nyquist interval, it should be possible to implement spatial averaging without losing spatial resolution.

In addition, the Nyquist interval curve shown in Fig. 6 is specific to our experimental setup (Fig. 3). By modifying the setup appropriately, we could potentially control properties such as the depth plane of the minimum Nyquist interval and the curve’s slope. For a given imaging scenario, we should thus be able to optimize the Nyquist interval curve so as to reconstruct high-quality images, but we leave a more detailed investigation of this point as a topic for future work. Adaptive spatial averaging only requires a simple postprocessing step depending on the recording geometry. Thus, it could be a practical and powerful way to improve IDH image quality.

Additionally, optical scanning holography operating in incoherent mode can also create incoherent holograms for reconstructing high-quality 3D images [38,39]. However, this technique requires illuminating objects with a specific illumination pattern shaped like a Fresnel zone plate. Moreover, raster scanning of either the object or the illumination pattern is required, making the recording process time consuming. Although the scanning burden can be mitigated using compressive optical scanning holography [40] or compressed sensing [41], the recording time remains long compared with IDH. In contrast, IDH requires neither a specific illumination pattern nor raster scanning. Therefore, IDH combined with the proposed spatial averaging method is useful for reconstructing high-quality 3D images and should stimulate further IDH applications.

6. Conclusion

In this study, we have, for the first time, derived theoretical sampling requirements for IDH with the aim of using spatial averaging to reduce noise effectively. We have also verified these theoretical results experimentally. The theory related to the sampling requirement reveals the relation between the spatial-frequency bandwidths of the image sensor and the optical field passing through the optics, thereby yielding the significant insight that the sampling requirement varies depending on the recording geometry. Specifically, for objects located within a certain depth range, there is a mismatch between the spatial-frequency bandwidths of the optical field and the image sensor.

These theoretical findings led us to a useful spatial averaging strategy that can reduce noise without losing spatial resolution, as verified by our experiments. We therefore believe that such an adaptive spatial averaging method could play an important role in reconstructing high-quality 3D images via IDH in practice. In addition, our theoretical results could guide the design of suitable recording systems for IDH based on the desired spatial resolution, noise level, and imaging scenario. Although we have mainly investigated the feasibility of spatial averaging, combining adaptive spatial averaging with temporal averaging could further improve 3D image quality.

Appendix A: Recording reflective objects with spatial averaging

In this Appendix, to further evaluate the feasibility of spatial averaging for IDH, we record holograms of several reflective 2D and 3D objects, namely a Japanese coin, a euro coin, metal plates, and dice, as shown in Figs. 9(a)–9(d). These images were captured by modifying our original recording setup (Fig. 3) to use the SLM as a tube lens. The coins shown in Figs. 9(a) and 9(b) were recorded individually. To simulate a 3D object, the two metal plates shown in Fig. 9(c) were placed in different axial planes. In addition, we also used four dice as one combined 3D object, as shown in the defocused images in Fig. 9(d). To record the reflective objects, we used the illumination configuration shown in Fig. 9(e). Here, the objects were illuminated with the same LED as was used in the original setup (Fig. 3). The reflected beam was collected by the lens, from where the optical field propagated to the recording setup and the hologram was captured.

Fig. 9. Recording reflective objects, namely (a) a Japanese coin, (b) a euro coin, (c) two metal plates, and (d) four dice. (e) Illumination setup.

Figures 10–13 show images reconstructed with and without using spatial averaging (Section 4), performed using 2 $\times$ 2-, 4 $\times$ 4-, or 8 $\times$ 8-pixel blocks. As in our previous experiments (Section 5.1), we evaluated the image quality in terms of the CR. Note that we did not consider the SC here, because it presumes a uniform intensity distribution, whereas our reflective objects had rough surfaces. The CRs are shown above the reconstructed images in Figs. 10–13. Spatial averaging improved the CRs of all the reflective objects, and they increased further as the number of pixels used for averaging increased. This demonstrates that spatial averaging can help to improve image quality for reflective 2D and 3D objects as well as transmissive ones. Note that the DOF or defocus behaves differently in IDH than in conventional imaging, as confirmed by Figs. 9, 12, and 13. In IDH, the DOF is relatively large, which is one of its known drawbacks [10,25].

Fig. 10. Images of a Japanese coin reconstructed (a) without averaging and with (b) 2 $\times$ 2-, (c) 4 $\times$ 4-, and (d) 8 $\times$ 8-pixel spatial averaging.

Fig. 11. Images of a euro coin reconstructed (a) without averaging and with (b) 2 $\times$ 2-, (c) 4 $\times$ 4-, and (d) 8 $\times$ 8-pixel spatial averaging.

Fig. 12. Images of two metal plates, reconstructed by focusing on a rear plane (a) without averaging and with (b) 2 $\times$ 2-, (c) 4 $\times$ 4-, and (d) 8 $\times$ 8-pixel spatial averaging, and by focusing on a front plane (e) without averaging and with (f) 2 $\times$ 2-, (g) 4 $\times$ 4-, and (h) 8 $\times$ 8-pixel spatial averaging.

Fig. 13. Images of four dice, reconstructed by focusing on a rear plane (a) without averaging and with (b) 2 $\times$ 2-, (c) 4 $\times$ 4-, and (d) 8 $\times$ 8-pixel spatial averaging, and by focusing on a front plane (e) without averaging and with (f) 2 $\times$ 2-, (g) 4 $\times$ 4-, and (h) 8 $\times$ 8-pixel spatial averaging.

Appendix B: Verification of the theoretical Nyquist interval

In this Appendix, we verify the theoretically derived curve shown in Fig. 6. Figure 14(a) shows the optical setup used in this experiment. This is essentially the same as our previous recording system (Fig. 3), but we used a He-Ne laser with a wavelength of 633 nm to create an ideal point source with high temporal coherence. By moving lens L2, we could generate a point source in an arbitrary depth plane. We therefore moved the lens in steps to create point sources located at $z_s=$ 100–600 mm in 100 mm intervals, capturing a hologram for each.

Fig. 14. (a) Experimental setup for estimating the holograms’ minimum half periods. (b) Nyquist interval versus recording distance. The squares and circles show the half periods measured via simulation and experiment, respectively.

Figure 15 shows the holograms captured at each depth plane, together with magnified views. Here, the hologram’s period increases as the recording distance $z_s$ decreases, and the half periods should match the theoretically predicted Nyquist intervals. We therefore estimated the minimum half period of each hologram; the results are shown in Fig. 15 and plotted as solid circles in Fig. 14(b). As a reference, we also simulated the creation of point-source holograms numerically, based on the angular spectrum method [31] and the recording process described in Section 2. The calculated holograms are shown in Fig. 15, and their half periods are plotted as solid squares in Fig. 14(b).
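For completeness, a minimal angular spectrum propagator in the spirit of [31]; the grid handling and names are our own simplifications, not the code used to generate Fig. 15.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` via the angular
    spectrum method; evanescent components are discarded."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```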

Fig. 15. Holograms of a point source, comparing the experimental results (left column) with the theoretical ones (right column).

As Fig. 14(b) shows, the simulated and experimental results are reasonably consistent with the theoretical curve. Note, however, that the beams’ spatial extents were slightly different in the experiment than in the simulation, possibly due to a difference in the point source’s diameter. Reducing the source’s diameter in the simulation would have enabled the results to more closely match the experiment, but achieving this while maintaining reasonable computational accuracy using the angular spectrum method would have required substantially more computer memory and/or computation time. To avoid such increased computational requirements, we conducted the simulations so as to create the visible outer fringes shown in Fig. 15.

From Figs. 14(b) and 15, the minimum half periods measured via simulation and experiment are both slightly larger than the theoretical curve. This difference may be due to the assumption made when calculating the theoretical beam diameter, namely that it simply obeys a similar triangles rule. However, in practice, diffraction must cause slight changes in the beam diameter with propagation distance. Comparing the simulated and experimental results at $z_s =$ 100 mm, the half period is larger in the experiment than in the simulation. This may be caused by aberrations, as evidenced by the deformation of the holograms in Fig. 15.

Despite these minor discrepancies, these results demonstrate the accuracy of the theoretical curve, which provides an intuitive understanding of the recording system’s Nyquist interval.

Appendix C: Spatial averaging at perfectly overlapping condition

In this Appendix, we experimentally evaluate the effect of spatial averaging under the condition where the two beams perfectly overlap. As discussed theoretically via Fig. 6 in Section 5.2, applying spatial averaging with $n \ge 2$ using our setup at $z_s=$ 786 mm leads to reduced spatial resolution, albeit with suppressed noise. To verify this prediction, we captured holograms of the object located at $z_s=$ 786 mm. Notably, we used a 1-nm bandpass filter in the experimental setup to prevent the hologram diameter from being reduced by low temporal coherence [34,35]. Figures 16(a)–16(d) show the reconstructed images, with the corresponding CR and SC values given beneath them. These results show that applying spatial averaging with larger values of $n$ can improve both the CR and SC, even under the condition where the two beams perfectly overlap. In contrast, the MTF is affected by spatial averaging, as shown in Fig. 16(e): spatial averaging obviously reduces the contrast of the high-frequency components. These results clearly support the above theoretical prediction, inferred from Fig. 6. When we used the image sensor with a 6.5-$\mu$m pixel pitch in this setup, spatial averaging for the reconstruction of an object located at $z_s=$ 786 mm reduced the spatial resolution. However, either increasing or decreasing the distance relative to $z_s=$ 786 mm would increase the Nyquist interval, enabling us to apply spatial averaging without losing spatial resolution.

Fig. 16. Experimental verification under the perfect overlapping condition. Images reconstructed (a) without averaging and with (b) 2 $\times$ 2-, (c) 4 $\times$ 4-, and (d) 8 $\times$ 8-pixel spatial averaging. (e) Modulation transfer function comparison.

References

1. J. Rosen, A. Vijayakumar, M. Kumar, M. R. Rai, R. Kelner, Y. Kashter, A. Bulbul, and S. Mukherjee, “Recent advances in self-interference incoherent digital holography,” Adv. Opt. Photonics 11(1), 1–66 (2019). [CrossRef]  

2. G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in three-dimensional imaging,” Adv. Opt. Photonics 4(4), 472–553 (2012). [CrossRef]  

3. M. K. Kim, “Full color natural light holographic camera,” Opt. Express 21(8), 9636–9642 (2013). [CrossRef]  

4. K. Choi, K.-I. Joo, T.-H. Lee, H.-R. Kim, J. Yim, H. Do, and S.-W. Min, “Compact self-interference incoherent digital holographic camera system with real-time operation,” Opt. Express 27(4), 4818–4833 (2019). [CrossRef]  

5. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2(3), 190–195 (2008). [CrossRef]  

6. N. Siegel, V. Lupashin, B. Storrie, and G. Brooker, “High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers,” Nat. Photonics 10(12), 802–808 (2016). [CrossRef]  

7. X. Quan, M. Kumar, O. Matoba, Y. Awatsuji, Y. Hayasaki, S. Hasegawa, and H. Wake, “Three-dimensional stimulation and imaging-based functional optical microscopy of biological cells,” Opt. Lett. 43(21), 5447–5450 (2018). [CrossRef]  

8. M. Imbe, “Radiometric temperature measurement by incoherent digital holography,” Appl. Opt. 58(5), A82–A89 (2019). [CrossRef]  

9. X. Lai, S. Zeng, X. Lv, J. Yuan, and L. Fu, “Violation of the Lagrange invariant in an optical imaging system,” Opt. Lett. 38(11), 1896–1898 (2013). [CrossRef]  

10. J. Rosen and R. Kelner, “Modified Lagrange invariants and their role in determining transverse and axial imaging resolutions of self-interference incoherent holographic systems,” Opt. Express 22(23), 29048–29066 (2014). [CrossRef]  

11. X. Lai, S. Xiao, Y. Guo, X. Lv, and S. Zeng, “Experimentally exploiting the violation of the Lagrange invariant for resolution improvement,” Opt. Express 23(24), 31408–31418 (2015). [CrossRef]  

12. D. Weigel, H. Babovsky, A. Kiessling, and R. Kowarschik, “Widefield microscopy with infinite depth of field and enhanced lateral resolution based on an image inverting interferometer,” Opt. Commun. 342, 102–108 (2015). [CrossRef]  

13. K. Watanabe and T. Nomura, “Spatially incoherent Fourier digital holography by four-step phase-shifting rotational shearing interferometer and its image quality,” Opt. Rev. 24(3), 351–360 (2017). [CrossRef]  

14. T. Nobukawa, Y. Katano, T. Muroi, K. Kinoshita, and N. Ishii, “Bimodal incoherent digital holography for both three-dimensional imaging and quasi-infinite–depth-of-field imaging,” Sci. Rep. 9(1), 3363 (2019). [CrossRef]  

15. B. Katz, D. Wulich, and J. Rosen, “Optimal noise suppression in Fresnel incoherent correlation holography (FINCH) configured for maximum imaging resolution,” Appl. Opt. 49(30), 5757–5763 (2010). [CrossRef]  

16. Y. Wan, T. Man, and D. Wang, “Incoherent off-axis Fourier triangular color holography,” Opt. Express 22(7), 8565–8573 (2014). [CrossRef]  

17. J. Rosen, V. Anand, M. Rai, S. Mukherjee, and A. Bulbul, “Review of 3D imaging by coded aperture correlation holography (COACH),” Appl. Sci. 9(3), 605 (2019). [CrossRef]  

18. B. Katz, J. Rosen, R. Kelner, and G. Brooker, “Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM),” Opt. Express 20(8), 9109–9121 (2012). [CrossRef]  

19. X. Quan, O. Matoba, and Y. Awatsuji, “Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings,” Opt. Lett. 42(3), 383–386 (2017). [CrossRef]  

20. G. Brooker, N. Siegel, V. Wang, and J. Rosen, “Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy,” Opt. Express 19(6), 5047–5062 (2011). [CrossRef]  

21. G. Brooker, N. Siegel, J. Rosen, N. Hashimoto, M. Kurihara, and A. Tanabe, “In-line FINCH super resolution digital holographic fluorescence microscopy using a high efficiency transmission liquid crystal GRIN lens,” Opt. Lett. 38(24), 5264–5267 (2013). [CrossRef]  

22. K. Choi, J. Yim, and S.-W. Min, “Achromatic phase shifting self-interference incoherent digital holography using linear polarizer and geometric phase lens,” Opt. Express 26(13), 16212–16225 (2018). [CrossRef]  

23. Y. Wan, T. Man, F. Wu, M. K. Kim, and D. Wang, “Parallel phase-shifting self-interference digital holography with faithful reconstruction using compressive sensing,” Opt. Lasers Eng. 86, 38–43 (2016). [CrossRef]  

24. T. Tahara, T. Kanno, Y. Arai, and T. Ozawa, “Single-shot phase-shifting incoherent digital holography,” J. Opt. 19(6), 065705 (2017). [CrossRef]  

25. T. Nobukawa, T. Muroi, Y. Katano, N. Kinoshita, and N. Ishii, “Single-shot phase-shifting incoherent digital holography with multiplexed checkerboard phase gratings,” Opt. Lett. 43(8), 1698–1701 (2018). [CrossRef]  

26. J. Hong and M. K. Kim, “Single-shot self-interference incoherent digital holography using off-axis configuration,” Opt. Lett. 38(23), 5196–5199 (2013). [CrossRef]  

27. C. M. Nguyen, D. Muhammad, and H.-S. Kwon, “Spatially incoherent common-path off-axis color digital holography,” Appl. Opt. 57(6), 1504–1509 (2018). [CrossRef]  

28. C. M. Nguyen and H.-S. Kwon, “Common-path off-axis incoherent Fourier holography with a maximum overlapping interference area,” Opt. Lett. 44(13), 3406–3409 (2019). [CrossRef]  

29. J. Weng, D. C. Clark, and M. K. Kim, “Compressive sensing sectional imaging for single-shot in-line self-interference incoherent holography,” Opt. Commun. 366, 88–93 (2016). [CrossRef]  

30. T. Man, Y. Wan, F. Wu, and D. Wang, “Self-interference compressive digital holography with improved axial resolution and signal-to-noise ratio,” Appl. Opt. 56(13), F91–F96 (2017). [CrossRef]  

31. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company, 2005).

32. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249–26268 (2011). [CrossRef]  

33. N. Siegel, J. Rosen, and G. Brooker, “Reconstruction of objects above and below the objective focal plane with dimensional fidelity by FINCH fluorescence microscopy,” Opt. Express 20(18), 19822–19835 (2012). [CrossRef]  

34. X. Lai, Y. Zhao, X. Lv, Z. Zhou, and S. Zeng, “Fluorescence holography with improved signal-to-noise ratio by near image plane recording,” Opt. Lett. 37(13), 2445–2447 (2012). [CrossRef]  

35. P. Bouchal and Z. Bouchal, “Concept of coherence aperture and pathways toward white light high-resolution correlation imaging,” New J. Phys. 15(12), 123002 (2013). [CrossRef]  

36. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22(16), 1268–1270 (1997). [CrossRef]  

37. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company, 2006).

38. Y. S. Kim, T. Kim, S. S. Woo, H. Kang, T.-C. Poon, and C. Zhou, “Speckle-free digital holographic recording of a diffusely reflecting object,” Opt. Express 21(7), 8183–8189 (2013). [CrossRef]  

39. J.-P. Liu, T. Tahara, Y. Hayasaki, and T.-C. Poon, “Incoherent digital holography: A review,” Appl. Sci. 8(1), 143 (2018). [CrossRef]  

40. P. W. M. Tsang, J.-P. Liu, and T.-C. Poon, “Compressive optical scanning holography,” Optica 2(5), 476–483 (2015). [CrossRef]  

41. A. C. S. Chan, K. K. Tsia, and E. Y. Lam, “Subsampled scanning holographic imaging (SuSHI) for fast, non-adaptive recording of three-dimensional objects,” Optica 3(8), 911–917 (2016). [CrossRef]  
