## Abstract

This paper reports a fast method for generating a 2048x2048 digital Fresnel hologram at a rate of over 100 frames per second. Briefly, the object wave of an image is nonuniformly sampled and generated on a wavefront recording plane (WRP) that is close to the object scene. The sampling interval at each point on the WRP image is then modulated according to the depth map. Subsequently, the WRP image is converted into a hologram. The hologram generated with our proposed method, which is referred to as the warped WRP (WWRP) hologram, is capable of presenting a 3-D object at a higher speed than existing methods.

© 2015 Optical Society of America

## 1. Introduction

The exploration of fast generation of digital holograms has been an area of interest in the past two decades, with the ultimate objective of generating holograms of three-dimensional (3-D) object scenes at video rates (25 to 30 frames per second). Numerous research works have been conducted to simplify the computationally intensive hologram generation process. For example, moderate reductions in computation time have been achieved with look-up tables [1–3], virtual windows [4], multi-rate filters [5], and patch models [6]. There are also techniques that utilize hardware devices to speed up some of the core processes [7–9]. The fastest approach attained is probably the wavefront recording plane (WRP) method [10,11]. Briefly, the object wave on a virtual 2-D WRP that is close to the object scene is derived. For each object point in the scene, only a small zone of the diffraction fringe pattern is determined. Subsequently, the WRP is converted into the hologram. For a sparse object scene (i.e., a limited number of object points), a hologram can be generated at a high frame rate. However, the computation time increases proportionally with the number of object points, restricting the generation of holograms to small, or coarsely sampled, object images. In this paper we propose a fast algorithm for hologram generation that is independent of the number of object points. The algorithm only involves a pair of re-sampling processes and four fast Fourier transform (FFT) operations. The hologram generated with our proposed method, which will be presented in the following sections, is capable of preserving the depth information of a dense 3-D object scene.

## 2. Generating the warped wavefront recording plane (WWRP) hologram

Our proposed method for generating the WWRP hologram comprises four stages, and the following terminology is adopted. The source 3-D object is modeled as a 3-D surface, with the intensity and depth of each object point represented by the planar image $I\left(x,y\right)$ and the depth map $D\left(x,y\right)$, respectively. The hologram and the WRP are denoted by $H\left(x,y\right)$ and $W\left(x,y\right)$. We assume that $I\left(x,y\right)$, $D\left(x,y\right)$, $W\left(x,y\right)$, and $H\left(x,y\right)$ are identical in size, comprising $X$ columns and $Y$ rows of pixels. Each pixel has a dimension of $\delta \times \delta $.

#### Stage 1: re-sampling (pre-warping) the object intensity image

Since $I\left(x,y\right)$, $D\left(x,y\right)$, $W\left(x,y\right)$, and $H\left(x,y\right)$ are digital images, the default sampling interval is uniform in both the horizontal and the vertical directions. In this stage a new image ${I}_{1}\left(x,y\right)$, which is referred to as the ‘pre-warped image,’ is obtained by sampling pixels from the original image according to the depth map. In other words, the sampling interval of the pre-warped image could be nonuniform. The rationale, as well as the criteria of pixel mapping between $I\left(x,y\right)$ and ${I}_{1}\left(x,y\right)$, will be explained later in this paper. For the time being, we simply interpret ${I}_{1}\left(x,y\right)$ as a modified version of the original image.

#### Stage 2: generation of the WRP

In the 2nd stage, a WRP is generated from the pre-warped image ${I}_{1}\left(x,y\right)$. The WRP is a hypothetical plane that is placed at a close distance ${z}_{o}$ from, and parallel to, ${I}_{1}\left(x,y\right)$. The object wave on the WRP is given by

$$W\left(x,y\right)={I}_{1}\left(x,y\right)\ast h\left(x,y;{z}_{o}\right), \tag{1}$$

where $\ast$ denotes 2-D convolution, $h\left(x,y;{z}_{o}\right)$ is the free-space impulse response, and $\lambda$ is the wavelength of the optical beam.

Assuming ${z}_{o}\gg \delta $, which is generally true in practice, the free-space impulse response $h\left(x,y;{z}_{o}\right)$ can be approximated as

$$h\left(x,y;{z}_{o}\right)\approx \mathrm{exp}\left[j\frac{\pi }{\lambda {z}_{o}}\left({x}^{2}+{y}^{2}\right)\right]. \tag{2}$$

#### Stage 3: re-sampling (warping) the WRP

In this stage, we shall explain how the depth map $D\left(x,y\right)$ can be incorporated into the WRP with our proposed method. We note that, owing to the close proximity between the WRP and the object image, each object point only affects a small neighboring region on the WRP. We further assume that the depth map is generally smooth, so that within a small neighborhood of an object point the depth value is practically constant. The depth of each object point within the region can be set by changing the sampling interval at the corresponding region on the WRP. To illustrate this, we consider the simple scenario of a small region $R$ centered at $\left({x}_{o},{y}_{o}\right)$ on the WRP. The diffraction fringe pattern in $R$ is mainly contributed by object points that are close to the region, all with almost the same depth. Object points that are farther away have less effect and are neglected. If the sampling interval in $R$ is increased by a factor $a$, both the WRP and its corresponding image ${I}_{1}\left(x,y\right)$ are scaled by the same amount. The modified WRP signal is obtained by scaling the coordinates of $W\left(x,y\right)$ within $R$, with the factor $a$ determined by the local value of the depth map $D\left(x,y\right)$.
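Under the Fresnel approximation of the free-space impulse response, the effect of the local scaling can be seen from the quadratic-phase identity (a sketch, dropping constant amplitude and phase factors):

```latex
h(ax, ay; z_o)
  = \exp\!\left[\,j\frac{\pi a^{2}\left(x^{2}+y^{2}\right)}{\lambda z_o}\right]
  = h\!\left(x, y; \frac{z_o}{a^{2}}\right)
```

Hence stretching the coordinates of $R$ by the factor $a$ is equivalent to evaluating the same fringe pattern at a modified effective propagation distance, which is how the local depth value can be absorbed into the sampling interval.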

Generalizing the above principle to the entire WRP, a revised matrix of sampling intervals ${S}_{1}\left(x,y\right)$, one for each pixel location and taking into account the corresponding depth value, is derived as in Eq. (8). After all the elements in ${S}_{1}\left(x,y\right)$ are deduced, a new image ${W}_{1}\left(x,y\right)$, which is referred to as the warped WRP, is generated by re-locating each pixel into its new position according to the revised sampling intervals. This can be described with a point-mapping operation as

$${W}_{1}\left(x,y\right)=W\left({p}_{x;y},{q}_{x;y}\right), \tag{9}$$

where the warped coordinates ${p}_{x;y}$ and ${q}_{x;y}$ are derived from ${S}_{1}\left(x,y\right)$ as given in Eqs. (10) and (11).

Equation (9) shows that the depth information $D\left(x,y\right)$ has been incorporated into the WRP image ${W}_{1}\left(x,y\right)$. However, ${W}_{1}\left(x,y\right)$ is contributed by the pre-warped image ${I}_{1}\left({p}_{x;y},{q}_{x;y}\right)$ instead of $I\left(x,y\right)$. To preserve the original image, we simply generate the pre-warped image with the inverse mapping given in Eq. (12), so that the warping in Eq. (9) restores the pixels of $I\left(x,y\right)$ to their original positions.
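The warping itself reduces to pixel re-addressing. Since the exact sampling-interval matrix ${S}_{1}\left(x,y\right)$ is defined by the paper's Eq. (8), the sketch below substitutes a hypothetical per-pixel scale factor ($s = 1 + D/{z}_{o}$) purely to illustrate the memory-addressing nature of the point-mapping operations:

```python
import numpy as np

def warp_by_depth(image, depth, z_o):
    """Re-locate each pixel with a depth-dependent local scale factor.
    Pure nearest-neighbour indexing: no arithmetic on pixel values.
    NOTE: s = 1 + depth / z_o is an illustrative stand-in, not the
    sampling-interval matrix S1(x, y) derived in the paper."""
    ny, nx = image.shape
    cy, cx = ny / 2.0, nx / 2.0
    ys, xs = np.mgrid[0:ny, 0:nx]
    s = 1.0 + depth / z_o                     # hypothetical per-pixel scale
    # Inverse mapping: each output pixel fetches its source pixel.
    px = np.clip(np.round(cx + (xs - cx) / s).astype(int), 0, nx - 1)
    qy = np.clip(np.round(cy + (ys - cy) / s).astype(int), 0, ny - 1)
    return image[qy, px]

# A zero depth map gives s = 1 everywhere, i.e. the identity mapping:
img = np.arange(16.0).reshape(4, 4)
same = warp_by_depth(img, np.zeros((4, 4)), 0.1)
```

Because the operation is only array indexing, it is essentially computation-free compared with evaluating fringe patterns per object point.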

#### Stage 4: converting WRP to a hologram

In the final stage of our proposed method, the WRP is converted into a hologram $H\left(x,y\right)$ that is positioned at a distance ${z}_{h}$ from the WRP. This can be accomplished by convolving the warped WRP image with the free-space impulse response, as given by

$$H\left(x,y\right)={W}_{1}\left(x,y\right)\ast h\left(x,y;{z}_{h}\right). \tag{13}$$

To speed up the calculation, the convolution in Eqs. (1) and (13) is realized in the spectral domain with a pair of fast Fourier transform (FFT) operations, as given in Eqs. (14) and (15). The 2-D FFTs are separable, so the 1-D transforms along ‘*y*’ and ‘*x*’, respectively, can be evaluated independently of each other. As such, both of these processes can be realized in a parallel fashion, and the processes in Table 1 can be computed in less than 5 ms with a graphical processing unit (GPU). The re-sampling processes in Eqs. (9) and (12) are simply memory-addressing operations and are basically computation-free in practice. In the above evaluation, extra time is involved in transferring the image data between the computing device and the source/destination units. However, these additional overheads are not directly related to our proposed method and hence are not included as a part of the computation loading.
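As a concrete illustration of the spectral-domain realization, the sketch below propagates an image to a parallel plane by circular convolution with a Fresnel-approximated kernel using NumPy FFTs. This is a minimal sketch under stated assumptions: constant amplitude and phase factors are dropped, wrap-around (circular-convolution) artifacts are not padded away, and the function name `fresnel_propagate` is ours, not the paper's.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Propagate a field by distance z via spectral-domain convolution
    with the Fresnel-approximated impulse response. One forward/inverse
    FFT pair acts on the field; the kernel FFT can be precomputed and
    cached for a fixed z."""
    ny, nx = field.shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    xx, yy = np.meshgrid(x, y)
    # Quadratic-phase (Fresnel) kernel, centered, then shifted so its
    # origin sits at index (0, 0) for circular convolution.
    h = np.exp(1j * np.pi * (xx ** 2 + yy ** 2) / (wavelength * z))
    return np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(np.fft.ifftshift(h)))

# Example with the paper's optical parameters on a smaller grid:
img = np.random.rand(256, 256)                   # stand-in intensity image
wrp = fresnel_propagate(img, 650e-9, 7e-6, 0.1)  # z_o = 0.1 m
```

The conversion of the warped WRP to the hologram is the same operation with the propagation distance set to ${z}_{h}$.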

## 3. Experimental results

Our proposed method is evaluated with a pair of 3-D models. Each model is represented by an intensity image and a depth map, as shown in Figs. 2(a)-2(d). The depth map shows the relative distance, with the nearest and the farthest distances from the view-point represented in black and white intensity, respectively. The first model ‘A’ is a wedge geometry (with depth increasing progressively from left to right) carrying a highly textured image, while the second model ‘B’ is a cone bearing the texture of a grid image, with the tip of the cone nearest to the hologram. The depth range of both models is $\left[0,0.02m\right]$.

The size of the object image, the WRP, and the hologram are assumed to be identical, each composed of $2048\times 2048$ pixels. The wavelength of the optical beam, the pixel size $\delta $ of the hologram, and the distance between the WRP and $I\left(x,y\right)$ (i.e., ${z}_{o}$) are set to $650nm$, $7\mu m$, and $0.1m$, respectively. For each model, the following steps are conducted. First, Eq. (8) is applied to generate the sampling interval matrix ${S}_{1}\left(x,y\right)$ from the depth map in each case. After ${S}_{1}\left(x,y\right)$ is determined, Eqs. (10) and (11) are employed to generate the revised sampling intervals, which are used to derive the pre-warped image ${I}_{1}\left(x,y\right)$. Equation (14) is then applied to convert ${I}_{1}\left(x,y\right)$ into the WRP image $W\left(x,y\right)$, from which a warped WRP ${W}_{1}\left(x,y\right)$ is generated with Eq. (9). Subsequently, Eq. (15) is applied to convert the warped WRP into the hologram $H\left(x,y\right)$, which is separated by a distance of ${z}_{h}=0.3m$ from the WRP. To evaluate the hologram generated with our proposed method, we have computed the numerically reconstructed images at three selected focal planes positioned at $0.1m$, $0.11m$, and $0.12m$ from the WRP (i.e., at ${z}_{h}+0.1m=0.4m$, ${z}_{h}+0.11m=0.41m$, and ${z}_{h}+0.12m=0.42m$ from the hologram). The results are shown in Figs. 3(a)-3(c) and Figs. 4(a)-4(c). For model ‘A’, we observe that when the focal plane is at $0.4m$, the textural patterns on the left side of the reconstructed image in Fig. 3(a) (the side closer to the hologram) are clearer than the rest of the image. The clear region moves to the middle in Fig. 3(b), and to the right in Fig. 3(c), when the focal distances are changed to $0.41m$ and $0.42m$, respectively. Similar results are attained for model ‘B’. In Fig. 4(a), the part of the grid pattern that is closest to the hologram is clearly reconstructed at $0.4m$.
The mid-section of the cone is clear at the reconstruction distance of $0.41m$ in Fig. 4(b), while the bottom section is reconstructed with clarity at $0.42m$ in Fig. 4(c). The above observations become even more apparent when the images are zoomed in. These evaluations show that the hologram generated by our proposed method is capable of preserving the depth information, as well as the intensity, of the source object.
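The refocusing described above can be emulated numerically. The sketch below back-propagates a hologram with the conjugate Fresnel kernel and returns the intensity at each chosen plane; the smaller grid, the random stand-in hologram, and the name `reconstruct` are ours, chosen to keep the example quick to run.

```python
import numpy as np

def reconstruct(hologram, wavelength, pitch, z):
    """Numerically refocus a hologram at distance z by circular
    convolution with the conjugate Fresnel impulse response."""
    ny, nx = hologram.shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    xx, yy = np.meshgrid(x, y)
    h = np.exp(-1j * np.pi * (xx ** 2 + yy ** 2) / (wavelength * z))
    field = np.fft.ifft2(np.fft.fft2(hologram) * np.fft.fft2(np.fft.ifftshift(h)))
    return np.abs(field) ** 2   # reconstructed intensity

# Refocus at the three planes used in the evaluation (0.40 m to 0.42 m):
holo = np.random.rand(256, 256)   # stand-in for the 2048x2048 hologram
planes = [reconstruct(holo, 650e-9, 7e-6, z) for z in (0.40, 0.41, 0.42)]
```

Regions of the scene whose depth matches the chosen plane appear sharp, while the rest defocus, which is the behavior reported in Figs. 3 and 4.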

## 4. Conclusion

In this paper, we have proposed a fast method for the generation of Fresnel holograms that only involves two re-sampling processes and four FFT operations. Compared with existing methods that are based on the WRP framework, our proposed method has the following advantages. First, the initial WRP is generated directly from a planar image instead of from individual object points. As such, the process can be realized swiftly with a pair of FFT operations, and the computation time is independent of the number of object points. Second, the depth information at each point of the object scene is incorporated into the initial WRP by adjusting the local sampling intervals; the amount of arithmetic calculation involved in the re-sampling process is insignificant compared with computing the WRP fringe patterns for individual object points. Third, there is no need to reserve a large look-up table to store pre-computed WRP fringe patterns. Fourth, the hologram is capable of representing a dense object scene without the need to down-sample the intensity image, hence preserving favorable quality in reconstructed images that contain highly textured content. Our evaluation has demonstrated the generation of a 2048x2048 hologram, representing an image scene of similar size comprising complicated textures, in less than 10 ms (i.e., over 100 frames per second). On the downside, the re-sampling process in Eq. (12) imposes a certain degradation on the source image, but as shown in the experimental results, the effect is not prominent for a depth range of 0.02 m. For a wider depth range, which involves a higher degree of re-sampling, the degradation will become progressively more obvious. The speed performance of our proposed method is about the same as that of [11] in the generation of a hologram for $3\times {10}^{4}$ object points (based on the GPU adopted in our work), and the image quality is also similar.
However, as the number of object points increases, the number of parallel threads becomes insufficient to handle concurrent processing of all the object points, owing to the limited number of parallel processors in the GPU. As a result, the hologram generation task has to be conducted sequentially in multiple rounds, lowering the overall computation speed. In our method, the speed is fixed for a given hologram size and independent of the number of object points.

## References and links

**1. **S.-C. Kim, J. M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express **20**(11), 12021–12034 (2012). [CrossRef] [PubMed]

**2. **S.-C. Kim and E.-S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. **48**(6), 1030–1041 (2009). [CrossRef] [PubMed]

**3. **S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. **47**(19), D55–D62 (2008). [CrossRef] [PubMed]

**4. **T. Yamaguchi, G. Okabe, and H. Yoshikawa, “Real-time image plane full-color and full-parallax holographic video display system,” Opt. Eng. **46**(12), 125801 (2007). [CrossRef]

**5. **P. W. M. Tsang, J.-P. Liu, W. K. Cheung, and T.-C. Poon, “Fast generation of Fresnel holograms based on multirate filtering,” Appl. Opt. **48**(34), H23–H30 (2009). [CrossRef] [PubMed]

**6. **H. Sakata and Y. Sakamoto, “Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space,” Appl. Opt. **48**(34), H212–H221 (2009). [CrossRef] [PubMed]

**7. **K. Murano, T. Shimobaba, A. Sugiyama, N. Takada, T. Kakue, M. Oikawa, and T. Ito, “Fast computation of computer-generated hologram using Xeon Phi coprocessor,” Comput. Phys. Commun. **185**(10), 2742–2757 (2014). [CrossRef]

**8. **A. Sugiyama, N. Masuda, M. Oikawa, N. Okada, T. Kakue, T. Shimobaba, and T. Ito, “Acceleration of computer-generated hologram by greatly reduced array of processor element with data reduction,” Opt. Eng. **53**(11), 113104 (2014). [CrossRef]

**9. **T. Shimobaba, T. Ito, N. Masuda, Y. Ichihashi, and N. Takada, “Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL,” Opt. Express **18**(10), 9955–9960 (2010). [CrossRef] [PubMed]

**10. **T. Shimobaba, N. Okada, T. Kakue, N. Masuda, Y. Ichihashi, R. Oi, K. Yamamoto, and T. Ito, “Computer holography using wavefront recording method,” in *Digital Holography and Three-Dimensional Imaging*, OSA Technical Digest (online), OSA, paper DTu1A.2 (2013).

**11. **T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express **18**(19), 19504–19509 (2010). [CrossRef] [PubMed]