Abstract

This paper reports a fast method for generating a 2048×2048 digital Fresnel hologram at a rate of over 100 frames per second. Briefly, the object wave of an image is nonuniformly sampled and generated on a wavefront recording plane (WRP) that is close to the object scene. The sampling interval at each point on the WRP image is then modulated according to the depth map. Subsequently, the WRP image is converted into a hologram. The hologram generated with our proposed method, which is referred to as the warped WRP (WWRP) hologram, is capable of presenting a 3-D object at a higher speed than existing methods.

© 2015 Optical Society of America

1. Introduction

The exploration of fast generation of digital holograms has been an area of interest over the past two decades, with the ultimate objective of generating holograms of three-dimensional (3-D) object scenes at video rates (25 to 30 frames per second). Numerous research works have been conducted to simplify the computationally intensive hologram generation process. For example, moderate reductions in computation time have been achieved with look-up tables [1–3], virtual windows [4], multi-rate filters [5], and patch models [6]. There are also techniques that utilize hardware devices to speed up some of the core processes [7–9]. The fastest approach attained is probably the wavefront recording plane (WRP) method [10,11]. Briefly, the object wave on a virtual 2-D WRP that is close to the object scene is derived. For each object point in the scene, only a small zone of diffraction fringe patterns is determined. Subsequently, the WRP is converted into the hologram. For a sparse object scene (i.e., a limited number of object points), a hologram can be generated at a high frame rate. However, the computation time increases proportionally with the number of object points, restricting the method to a small or coarsely sampled object image. In this paper we propose a fast algorithm for hologram generation that is independent of the number of object points. The algorithm only involves a pair of re-sampling operations and four fast Fourier transform (FFT) operations. The hologram generated with our proposed method, which will be presented in the following sections, is capable of preserving the depth information of a dense 3-D object scene.

2. Generating the warped wavefront recording plane (WWRP) hologram

Our proposed method for generating the WWRP hologram comprises four stages, and the following terminology is adopted. The source 3-D object is modeled as a 3-D surface, with the intensity and depth of each object point represented by the planar image I(x,y) and the depth map D(x,y), respectively. The hologram and the WRP are denoted by H(x,y) and W(x,y). We assume that I(x,y), D(x,y), W(x,y), and H(x,y) are identical in size, comprising X columns and Y rows of pixels. Each pixel has a dimension of δ×δ.

Stage 1: re-sampling (pre-warping) the object intensity image

Since I(x,y), D(x,y), W(x,y), and H(x,y) are digital images, the default sampling interval is uniform in both the horizontal and the vertical directions. In this stage a new image I1(x,y), referred to as the ‘pre-warped image,’ is obtained by sampling pixels from the original image according to the depth map. In other words, the sampling interval of the pre-warped image can be nonuniform. The rationale, as well as the criteria of pixel mapping between I(x,y) and I1(x,y), will be explained later in this paper. For the time being, we simply regard I1(x,y) as a modified version of the original image.

Stage 2: generation of the WRP

In the second stage, a WRP is generated from the pre-warped image I1(x,y). The WRP is a hypothetical plane placed at a close distance zo from, and parallel to, I1(x,y). The object wave on the WRP is given by

$$W(x,y) = I_1(x,y) \ast h(x,y;z_o). \tag{1}$$
Assuming zo>>δ, which is generally true in practice, the free-space impulse response h(x,y;zo) can be approximated as

$$h(x,y;z_o) = \exp\!\left[\,i\pi\left(x^2\delta^2 + y^2\delta^2\right)\big/\left(\lambda z_o\right)\right]. \tag{2}$$
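As an illustration (not part of the original formulation), the impulse response of Eq. (2) can be sampled on a discrete grid as in the minimal Python sketch below; the function name and the centred-grid convention are our assumptions, and the parameter values are taken from Section 3.

```python
import numpy as np

def impulse_response(size, delta, wavelength, z):
    """Sampled free-space impulse response h(x,y;z) of Eq. (2).

    x and y are pixel indices measured from the grid centre, so the
    physical coordinates are x*delta and y*delta.
    """
    n = np.arange(size) - size // 2
    x, y = np.meshgrid(n, n)
    return np.exp(1j * np.pi * (x**2 + y**2) * delta**2 / (wavelength * z))

# Parameter values of Section 3: 650 nm wavelength, 7 um pixels, zo = 0.1 m.
h = impulse_response(2048, 7e-6, 650e-9, 0.1)
```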

Stage 3: re-sampling (warping) the WRP

In this stage, we explain how the depth map D(x,y) can be incorporated into the WRP with our proposed method. We note that, due to the close proximity between the WRP and the object image, each object point only affects a small neighboring region on the WRP. We further assume that the depth map is generally smooth, so that within a small neighborhood of an object point the depth value is practically constant. The depth of each object point within the region can then be modified by changing the sampling interval at the corresponding region on the WRP. To illustrate this, consider a simple scenario of a small region R centered at (xo,yo) on the WRP. The diffraction fringe pattern in R is mainly contributed by object points that are close to the region, with almost the same depth. Object points that are farther away have less effect and are neglected. Suppose the sampling interval in R is increased by a factor a; both the WRP and its corresponding image I1(x,y) will then be scaled by the same amount. The modified WRP signal is given by

$$W'(x,y)\big|_{(x,y)\in R} = W(x',y') = I_1(x',y') \ast h(x',y';z_o), \tag{3}$$
where a ≥ 1, x'=x/a, and y'=y/a. The last term h(x',y';zo) in Eq. (3) can be expressed as
$$h(x',y';z_o) = \exp\!\left[\frac{i2\pi}{\lambda}\,\frac{x'^2\delta^2 + y'^2\delta^2}{2 z_o}\right] = \exp\!\left[\frac{i2\pi}{\lambda}\,\frac{x^2\delta^2 + y^2\delta^2}{2 a^2 z_o}\right] = h(x,y;a^2 z_o). \tag{4}$$
Rewriting Eq. (3), we have
$$W'(x,y)\big|_{(x,y)\in R} = I_1(x/a,\,y/a) \ast h(x,y;a^2 z_o). \tag{5}$$
Equation (5) indicates that, due to the stretching of the sampling interval, the effective depth of the object points corresponding to the diffraction patterns in the region R has been relocated to a new value a²zo. At the same time, the original source image I(x,y) has been changed to I1(x',y'). However, the original image can be preserved if I1(x',y') is set to I(x,y) in Eq. (5). This is the principle behind deriving the pre-warped image in Stage 1. Referring back to Eq. (5), if the depth of the object scene covered by the region R is to be increased from zo to (zo+D(xo,yo)), the sampling interval in R has to be increased by a factor a so that
$$a^2 z_o = z_o + D(x_o,y_o) \;\;\Rightarrow\;\; a = \sqrt{1 + D(x_o,y_o)/z_o}, \tag{6}$$
which is equivalent to changing the standard uniform sampling interval to a new value b that depends on the depth map D, as given by
$$b = 1/a = 1\big/\sqrt{1 + D(x_o,y_o)/z_o}. \tag{7}$$
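As a quick numerical check (using the parameter values of Section 3, zo = 0.1 m, and the maximum depth of 0.02 m adopted in our experiments):

$$a = \sqrt{1 + 0.02/0.1} = \sqrt{1.2} \approx 1.095, \qquad b = 1/a \approx 0.913,$$

i.e., over the depth range considered, the sampling interval deviates from unity by less than 10%.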
Generalizing the above principle to the entire WRP, a revised matrix of sampling intervals for each pixel location, taking into account the corresponding depth value, is derived as
$$S_1(x,y) = 1\big/\sqrt{1 + D(x,y)/z_o}. \tag{8}$$
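Equation (8) amounts to a single element-wise operation per pixel. A minimal sketch (ours, assuming a numpy implementation with the depth map in the same length units as zo):

```python
import numpy as np

def sampling_intervals(depth_map, z_o):
    """Per-pixel sampling intervals S1(x,y) of Eq. (8)."""
    return 1.0 / np.sqrt(1.0 + depth_map / z_o)

# Example: a wedge depth map rising from 0 to 0.02 m, left to right.
D = np.tile(np.linspace(0.0, 0.02, 2048), (2048, 1))
S1 = sampling_intervals(D, z_o=0.1)   # values fall in (0.91, 1.0]
```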
After all the elements in S1(x,y) are deduced, a new image W1(x,y), referred to as the warped WRP, is generated by relocating each pixel to its new position according to the revised sampling intervals. This can be described with a point-mapping operation as
$$W_1(x,y) = W\!\left(p_{x;y},\, q_{x;y}\right), \tag{9}$$
where
$$p_{x;y} = \mathrm{Rn}\!\left[\sum_{m=1}^{x} S_1(m,y)\right] = \mathrm{Rn}\!\left[p_{x-1;y} + S_1(x,y)\right], \tag{10}$$
$$q_{x;y} = \mathrm{Rn}\!\left[\sum_{n=1}^{y} S_1(x,n)\right] = \mathrm{Rn}\!\left[q_{x;y-1} + S_1(x,y)\right], \tag{11}$$
and Rn[A] is the “round” operation that returns the nearest integer of a real number A (for example, Rn[1.2]=1 and Rn[1.8]=2). We illustrate the formation of the first few pixels along a single row at y=y0 of W1(x,y) in Fig. 1, based on a uniform sampling interval of, say, 0.6 (i.e., S1(x,y0)=0.6). The same principle can be easily extended to 2-D sampling with non-uniform sampling intervals. From the figure, it can be seen that each sample in W1(x,y0) is mapped from one of the samples in W(x,y0).
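The warping of Eqs. (9)–(11) reduces to cumulative sums, rounding, and a gather. The sketch below is ours and assumes numpy arrays indexed [row, column], i.e., [y, x]; the shift to 0-based indices and the clipping of indices to the array bounds are our added details, which Eqs. (10) and (11) leave implicit.

```python
import numpy as np

def warp_indices(S1):
    """New sample positions of Eqs. (10) and (11).

    cumsum along axis=1 runs over x for each fixed row y (Eq. (10)),
    and along axis=0 over y for each fixed column x (Eq. (11)).
    np.rint plays the role of the Rn[.] rounding operator.
    """
    p = np.rint(np.cumsum(S1, axis=1)).astype(int) - 1   # 0-based indices
    q = np.rint(np.cumsum(S1, axis=0)).astype(int) - 1
    p = np.clip(p, 0, S1.shape[1] - 1)
    q = np.clip(q, 0, S1.shape[0] - 1)
    return p, q

def warp(W, p, q):
    """Warped WRP of Eq. (9): W1(x,y) = W(p,q), a pure gather operation."""
    return W[q, p]
```

For the uniform example S1 = 0.6, the cumulative sums 0.6, 1.2, 1.8, 2.4, ... round to 1, 1, 2, 2, ..., reproducing the mapping of Fig. 1. Each output pixel fetches exactly one input pixel, which is why this re-sampling is essentially a memory-addressing operation.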


Fig. 1 Example showing a single row of pixels in W1(x,y) mapped from samples in W(x,y).


Equation (9) shows that the depth information D(x,y) has been incorporated into the WRP image W1(x,y). However, W1(x,y) is derived from the pre-warped image I1(px;y,qx;y) instead of I(x,y). To preserve the original image, we simply generate the pre-warped image as

$$I_1\!\left(p_{x;y},\, q_{x;y}\right) = I(x,y). \tag{12}$$
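Under the same assumptions as the previous sketch, the pre-warped image of Eq. (12) is the corresponding scatter operation; how collisions (two source pixels mapped to the same target) are resolved is our simplification.

```python
import numpy as np

def prewarp(I, p, q):
    """Pre-warped image of Eq. (12): I1(p,q) = I(x,y).

    Source pixels are scattered to the positions that the later warping
    stage (Eq. (9)) will gather from, so that the warped WRP reproduces
    the original intensity image.
    """
    I1 = np.zeros_like(I)
    I1[q, p] = I   # scatter; colliding writes simply overwrite
    return I1
```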

Stage 4: converting the WRP into a hologram

In the final stage of our proposed method, the WRP is converted into a hologram H(x,y) positioned at a distance zh from the WRP. This is accomplished by convolving the warped WRP image with the free-space impulse response, as given by

$$H(x,y) = W_1(x,y) \ast h(x,y;z_h). \tag{13}$$
To speed up the calculation, the convolution in Eqs. (1) and (13) is realized in the spectral domain based on a pair of Fast Fourier Transform (FFT) operations as
$$W(x,y) = I_1(x,y) \ast h(x,y;z_o) = \mathrm{IFFT}\big[\mathrm{FFT}[I_1(x,y)] \cdot \mathrm{FFT}[h(x,y;z_o)]\big], \tag{14}$$
and
$$H(x,y) = W_1(x,y) \ast h(x,y;z_h) = \mathrm{IFFT}\big[\mathrm{FFT}[W_1(x,y)] \cdot \mathrm{FFT}[h(x,y;z_h)]\big], \tag{15}$$
where FFT[·] and IFFT[·] denote the forward and the inverse FFT, respectively. An evaluation of the computation loading involved in our proposed method is given in Table 1, and we explain the evaluation as follows. As the FFTs of the free-space impulse response functions can be pre-computed, Eqs. (14) and (15) can be realized with four FFT operations. Next, in Eq. (8), the sampling array S1(x,y) can be deduced from D(x,y) with a small look-up table at a negligible amount of computation. Locating the new sample positions (px;y, qx;y) only involves two additions per pixel according to Eqs. (10) and (11). Moreover, in the computation of px;y and qx;y, each row and each column (indexed by ‘y’ and ‘x’, respectively) can be evaluated independently of the others. As such, both of these processes can be realized in a parallel fashion, and the processes in Table 1 can be computed in less than 5 ms with a graphics processing unit (GPU). The re-sampling processes in Eqs. (9) and (12) are simply memory-addressing operations and are basically computation-free in practice. In the above evaluation, additional time is required for transferring the image data between the computing device and the source/destination units. However, these overheads are not directly related to our proposed method and hence are not included as part of the computation loading.
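For concreteness, the four stages can be strung together as below. This sketch is ours; it reuses the helper functions from the earlier sketches and pre-computes the kernel spectra once, so that each frame costs the four transform operations (two forward FFTs and two inverse FFTs) counted above.

```python
import numpy as np

def fft_convolve(img, kernel_spectrum):
    """Spectral-domain convolution, as in Eqs. (14) and (15)."""
    return np.fft.ifft2(np.fft.fft2(img) * kernel_spectrum)

# Kernel spectra (FFTs of Eq. (2)) are pre-computed once and reused per frame.
H_ZO = np.fft.fft2(impulse_response(2048, 7e-6, 650e-9, 0.1))  # image -> WRP
H_ZH = np.fft.fft2(impulse_response(2048, 7e-6, 650e-9, 0.3))  # WRP -> hologram

def wwrp_hologram(I, D, z_o=0.1):
    """End-to-end WWRP pipeline: Stages 1-4 of Section 2."""
    S1 = sampling_intervals(D, z_o)    # Eq. (8)
    p, q = warp_indices(S1)            # Eqs. (10) and (11)
    I1 = prewarp(I, p, q)              # Stage 1, Eq. (12)
    W = fft_convolve(I1, H_ZO)         # Stage 2, Eq. (14)
    W1 = warp(W, p, q)                 # Stage 3, Eq. (9)
    return fft_convolve(W1, H_ZH)      # Stage 4, Eq. (15)
```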


Table 1. Evaluation of computation loading of our proposed method

3. Experimental results

Our proposed method is evaluated with a pair of 3-D models. Each model is represented by an intensity image and a depth map, as shown in Figs. 2(a)-2(d). The depth map shows the relative distance, with the nearest and the farthest distances from the viewpoint represented in black and white, respectively. The first model ‘A’ is a wedge geometry (progressively increasing depth from left to right) with a highly textured image, while the second model ‘B’ is a cone textured with a grid image, with the tip of the cone nearest to the hologram. The depth range of both models is [0, 0.02 m].
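The depth maps of the two models can be synthesized as follows. This is our reading of Figs. 2(b) and 2(d) (a linear ramp for the wedge; depth rising radially from the cone tip at the centre), offered only to make the test geometry concrete.

```python
import numpy as np

def wedge_depth(size=2048, d_max=0.02):
    """Model 'A': depth increasing linearly from 0 (left) to d_max (right)."""
    return np.tile(np.linspace(0.0, d_max, size), (size, 1))

def cone_depth(size=2048, d_max=0.02):
    """Model 'B': tip of the cone (centre) nearest, i.e. depth 0,
    rising linearly with radius to d_max at the edge."""
    n = np.arange(size) - size // 2
    x, y = np.meshgrid(n, n)
    r = np.sqrt(x**2 + y**2) / (size // 2)
    return d_max * np.clip(r, 0.0, 1.0)
```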


Fig. 2 (a) Intensity image of model ‘A’, (b) Depth map of model ‘A’, (c) Intensity image of model ‘B’, (d) Depth map of model ‘B’.


The sizes of the object image, the WRP, and the hologram are assumed to be identical, each composed of 2048×2048 pixels. The wavelength of the optical beam, the pixel size δ of the hologram, and the distance zo between the WRP and I(x,y) are set to 650 nm, 7 μm, and 0.1 m, respectively. For each model, the following steps are conducted. First, Eq. (8) is applied to generate the sampling interval matrix S1(x,y) from the depth map. After S1(x,y) is determined, Eqs. (10) and (11) are employed to generate the revised sampling positions, which are used to derive the pre-warped image I1(x,y) through Eq. (12). Equation (14) is then applied to convert I1(x,y) into the WRP image W(x,y), from which the warped WRP W1(x,y) is generated with Eq. (9). Subsequently, Eq. (15) is applied to convert the warped WRP into the hologram H(x,y), which is separated by a distance of zh=0.3 m from the WRP. To evaluate the hologram generated with our proposed method, we have computed the numerically reconstructed images at three selected focal planes positioned at 0.1 m, 0.11 m, and 0.12 m from the WRP (i.e., zh+0.1 m = 0.4 m, zh+0.11 m = 0.41 m, and zh+0.12 m = 0.42 m from the hologram). The results are shown in Figs. 3(a)-3(c) and Figs. 4(a)-4(c). For model ‘A’, we observe that when the focal plane is at 0.4 m, the textural patterns on the left side of the reconstructed image in Fig. 3(a) (the side closer to the hologram) are clearer than the rest of the image. The clear region moves to the middle in Fig. 3(b), and to the right in Fig. 3(c), when the focal distances are changed to 0.41 m and 0.42 m, respectively. Similar results are attained for model ‘B’. In Fig. 4(a), the part of the grid pattern that is closest to the hologram is clearly reconstructed at 0.4 m. The mid-section of the cone is clear at the reconstruction distance of 0.41 m in Fig. 4(b), while the bottom section is reconstructed with clarity at 0.42 m in Fig. 4(c). The above observations become even more apparent when the images are zoomed in. These evaluations show that the hologram generated by our proposed method is capable of preserving the depth information, as well as the intensity, of the source object.
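The numerical reconstructions can be simulated by back-propagating the hologram field, e.g., by convolving with the conjugate impulse response. This is a standard simulation step sketched under our own assumptions (reusing impulse_response from the earlier sketch), not a procedure spelled out in the text.

```python
import numpy as np

def reconstruct(H, z, size=2048, delta=7e-6, wavelength=650e-9):
    """Intensity of the field back-propagated a distance z from the hologram."""
    kernel = np.conj(impulse_response(size, delta, wavelength, z))
    field = np.fft.ifft2(np.fft.fft2(H) * np.fft.fft2(kernel))
    return np.abs(field) ** 2

# Focal planes of Figs. 3 and 4: 0.4 m, 0.41 m, and 0.42 m from the hologram.
# planes = [reconstruct(H, z) for z in (0.40, 0.41, 0.42)]
```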


Fig. 3 (a)-(c). Reconstructed images of the WWRP hologram of the wedge model ‘A’ at focused distances of 0.4m, 0.41m, and 0.42m from the hologram, respectively.



Fig. 4 (a)-(c). Reconstructed images of the WWRP hologram of the cone model ‘B’ at focused distances of 0.4m, 0.41m, and 0.42m from the hologram, respectively.


4. Conclusion

In this paper, we have proposed a fast method for the generation of Fresnel holograms that involves only two re-sampling processes and four FFT operations. Compared with existing methods based on the WRP framework, our proposed method has the following advantages. First, the initial WRP is generated directly from a planar image instead of from individual object points. As such, the process can be realized swiftly with a pair of FFT operations, and the computation time is independent of the number of object points. Second, the depth information at each point of the object scene is incorporated into the initial WRP by adjusting the local sampling intervals; the amount of arithmetic calculation involved in the re-sampling process is insignificant compared with computing the WRP fringe patterns for individual object points. Third, there is no need to reserve a large look-up table to store pre-computed WRP fringe patterns. Fourth, the hologram is capable of representing a dense object scene without down-sampling the intensity image, hence preserving favorable quality in reconstructed images that contain rich textural content. Our evaluation has demonstrated the generation of a 2048×2048 hologram, representing an image scene of similar size comprising complicated textures, in less than 10 ms (i.e., over 100 frames per second). On the downside, the re-sampling process in Eq. (12) imposes a certain degradation on the source image, but as shown in the experimental results, the effect is not prominent for a depth range of 0.02 m. For a wider depth range, which involves a higher degree of re-sampling, the degradation will become progressively more obvious. The speed of our proposed method is about the same as that of [11] in the generation of a hologram for 3×10^4 object points (based on the GPU adopted in our work), and the image quality is also similar. However, as the number of object points increases, the number of parallel threads becomes insufficient for concurrent processing of all the object points, owing to the limited number of parallel processors in the GPU. As a result, the hologram generation task has to be conducted sequentially in multiple rounds, lowering the overall computation speed. In our method, the speed is fixed for a given hologram size and is independent of the number of object points.

References and links

1. S.-C. Kim, J. M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012).

2. S.-C. Kim and E.-S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009).

3. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008).

4. T. Yamaguchi, G. Okabe, and H. Yoshikawa, “Real-time image plane full-color and full-parallax holographic video display system,” Opt. Eng. 46(12), 125801 (2007).

5. P. W. M. Tsang, J.-P. Liu, W. K. Cheung, and T.-C. Poon, “Fast generation of Fresnel holograms based on multirate filtering,” Appl. Opt. 48(34), H23–H30 (2009).

6. H. Sakata and Y. Sakamoto, “Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space,” Appl. Opt. 48(34), H212–H221 (2009).

7. K. Murano, T. Shimobaba, A. Sugiyama, N. Takada, T. Kakue, M. Oikawa, and T. Ito, “Fast computation of computer-generated hologram using Xeon Phi coprocessor,” Comput. Phys. Commun. 185(10), 2742–2757 (2014).

8. A. Sugiyama, N. Masuda, M. Oikawa, N. Okada, T. Kakue, T. Shimobaba, and T. Ito, “Acceleration of computer-generated hologram by greatly reduced array of processor element with data reduction,” Opt. Eng. 53(11), 113104 (2014).

9. T. Shimobaba, T. Ito, N. Masuda, Y. Ichihashi, and N. Takada, “Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL,” Opt. Express 18(10), 9955–9960 (2010).

10. T. Shimobaba, N. Okada, T. Kakue, N. Masuda, Y. Ichihashi, R. Oi, K. Yamamoto, and T. Ito, “Computer holography using wavefront recording method,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (OSA, 2013), paper DTu1A.2.

11. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010).
