Abstract

We propose a novel, fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of wavefront recording planes is proposed, which employs look-up tables in order to reduce the computational complexity of the field calculations. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method attributes a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

© 2015 Optical Society of America

1. Introduction

Holography is a technique for recording and reconstructing the complex-valued wave field of light (both amplitude and phase) by means of diffraction and interference phenomena. The physical process of light propagation can also be modelled and simulated on a computer, allowing for the generation of computer-generated holograms (CGHs). CGHs have the advantage that they eliminate the need for a physical holographic recording set-up; therefore, they are not subject to noise or optical aberrations, and moreover they allow the representation of synthetic objects. However, generating holograms for 3D objects requires significant amounts of computation time, especially when occlusions are taken into consideration. Therefore, many solutions have been proposed over the years for accelerating this process. These algorithms can essentially be divided into two categories: polygonal methods, which represent the virtual objects in the scene using meshes consisting of triangles [1, 2], and point cloud methods, which describe objects as a collection of self-luminous points [3]. We will focus on the latter. Various algorithms can be found in the literature for accelerating the generation of CGHs. For example: holographic stereograms [4], wavefront recording plane methods (WRP) [5–9], look-up tables with pre-calculated distributions of the complex amplitude [6, 10, 11] and methods using band-limited double-step Fresnel diffraction [9, 12, 13].

Moreover, occlusions are a challenging issue to be handled in digital holography. There are two main types of occlusions: self-occlusions, in which some areas of the object are occluded by other areas of the same object, and mutual occlusions, in which one object is partially or fully occluded by another object. Different occlusion handling methods in holography have been suggested, using inverse orthographic projection with low angular sampling [14], ray-casting [15], ray-wavefront conversion [16] or accurate depth cues [4]. However, point cloud occlusion has not been extensively researched, except for a method with interpolation kernels for points [3], which applies culling to points that lie within the radius of each interpolation kernel.

In this paper, we propose a method for computer-generated holograms based on multiple WRPs that introduces four innovations, namely:

  1. a novel intra-WRP light wave propagation strategy utilizing pre-computed propagation kernels stored in look-up tables that provide support for forward and backward light propagation,
  2. a novel inter-WRP light propagation strategy to utilize pre-computed propagation kernels stored in look-up tables,
  3. the application of a Gaussian interpolation kernel in a multiple WRP setting that blurs the discrete points and hence creates the effect of the propagation of a smooth surface between the WRPs and the holographic plane,
  4. a novel occlusion processing technique exploiting this multiple WRP propagation structure utilising inverse Gaussian filters to mask occluded points.

This paper is structured as follows. In Sec. 2, we briefly explain the wavefront recording plane method. Subsequently, in Sec. 3 the proposed CGH method with occlusion masking is presented and we report on the experiments in Sec. 4. Finally, conclusions of our work are presented in Sec. 5.

2. The wavefront recording plane method

The wavefront recording plane (WRP) method was introduced in [5] and the proof of concept is shown in Fig. 1(a). This method generates the hologram by first positioning one virtual holographic plane - the WRP - in front of the object and calculating this WRP point-by-point in a computationally efficient way. Subsequently, the hologram is calculated by propagating this WRP to the hologram plane using e.g. a Fresnel transform.


Fig. 1 Previous WRP methods: (a) The WRP method as introduced in [5] with FFT, (b) The multiple WRP method [8] with FFT and (c) The WRP method with band-limited double step Fresnel diffraction (BL-DSF) for RGB and Depth data [9].


In the first step of [5], the light field at each pixel of the WRP is calculated by summing the contributions of all object points according to the ray-tracing equation:

$$u_w(x,y)=\sum_{j=1}^{N}\frac{A_j}{R_j}\exp\!\left(i\frac{2\pi}{\lambda}R_j\right)\tag{1}$$
where $R_j=\sqrt{(x-x_j)^2+(y-y_j)^2+(z_w-z_j)^2}$ is the distance between a 3D object point $(x_j, y_j, z_j)$ and a coordinate $(x, y)$ on the WRP; $A_j$ is the intensity of the object point, $\lambda$ is the wavelength of the reference light and $N$ is the total number of 3D object points. The advantage is that due to the small distance between the points and the WRP, each point contributes only to a small area of the WRP, therefore dramatically reducing the hologram computation time. The size of the window support is determined by the diffraction angle $\theta$ for the on-axis reconstruction of the 3D object light from the CGH, with $\sin\theta = \lambda f = \lambda/(2p)$, where $f$ is the spatial frequency and $p$ is the pixel pitch:
$$W_j=|z_j-z_w|\tan\!\left(\sin^{-1}\frac{\lambda}{2p}\right)\approx|z_j-z_w|\tan\!\left(\frac{\lambda}{2p}\right).\tag{2}$$
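As an illustration, the support window of Eq. (2) can be evaluated numerically. The sketch below (function name and parameters are ours, not from the paper) converts $W_j$ to a half-width in pixels:

```python
import math

def support_window_px(z_j, z_w, wavelength, pitch):
    """Half-width W_j of a point's support window on the WRP, in pixels.

    Follows Eq. (2): W_j = |z_j - z_w| * tan(asin(lambda / (2 p))),
    converted to pixels by dividing by the pitch p.
    """
    w_j = abs(z_j - z_w) * math.tan(math.asin(wavelength / (2.0 * pitch)))
    return int(math.ceil(w_j / pitch))

# Example: a point 1 mm from the WRP, 8 um pitch, 633 nm light -> 5 pixels.
print(support_window_px(1e-3, 0.0, 633e-9, 8e-6))
```

Since $W_j$ grows linearly with the point-to-WRP distance, halving that distance roughly halves the window side and quarters the per-point work, which is the core motivation of the multiple-WRP approach below.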

The second step of the WRP method involves the calculation of the complex amplitude uHP(ξ, η) of the WRP propagated to the hologram plane, i.e. to find the diffracted field of the CGH itself. This can be done by the convolution-based Fresnel diffraction:

$$u_{HP}(\xi,\eta)=\frac{\exp\!\left(i\frac{2\pi}{\lambda}z_w\right)}{i\lambda z_w}\iint u_w(x_w,y_w)\exp\!\left(i\frac{\pi}{\lambda z_w}\left((\xi-x_w)^2+(\eta-y_w)^2\right)\right)\mathrm{d}x_w\,\mathrm{d}y_w=\frac{\exp\!\left(i\frac{2\pi}{\lambda}z_w\right)}{i\lambda z_w}\mathcal{F}^{-1}\!\left[\mathcal{F}[u_w(\xi,\eta)]\cdot\mathcal{F}[h(\xi,\eta)]\right]\tag{3}$$
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the Fourier and inverse Fourier operators, $z_w$ is the perpendicular distance between the WRP and the hologram plane, and $h(\xi,\eta)$ is the impulse response, equal to $\exp\!\left(i\frac{\pi}{\lambda z_w}(\xi^2+\eta^2)\right)$.
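A minimal NumPy sketch of this convolution-based Fresnel propagation is given below; the function name, the centred-grid shift bookkeeping and the square sampling are our assumptions:

```python
import numpy as np

def fresnel_propagate(u_w, z_w, wavelength, pitch):
    """Convolution-based Fresnel propagation of a WRP field to the hologram
    plane, Eq. (3), implemented with FFTs. Assumed: square sampling of size
    `pitch` on both planes, coordinates centred on the grid."""
    ny, nx = u_w.shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    # Impulse response h = exp(i*pi/(lambda*z_w) * (x^2 + y^2))
    h = np.exp(1j * np.pi / (wavelength * z_w) * (X**2 + Y**2))
    prefactor = np.exp(2j * np.pi * z_w / wavelength) / (1j * wavelength * z_w)
    # F^-1[ F[u_w] . F[h] ]; ifftshift/fftshift account for the centred grids,
    # pitch^2 discretizes the area element dx dy of the integral
    U = np.fft.fft2(np.fft.ifftshift(u_w))
    H = np.fft.fft2(np.fft.ifftshift(h))
    return prefactor * pitch**2 * np.fft.fftshift(np.fft.ifft2(U * H))
```

Because the FFT implements a circular convolution, a practical implementation would additionally zero-pad the field to avoid wrap-around artefacts; this sketch omits that step.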

Using this two-step method, the calculated complex amplitude is an excellent approximation of the wave field that would be obtained by directly computing the Fresnel propagation of the object points to the hologram plane.

3. The proposed multiple-WRP method with occlusion processing

As stated in the introduction, we introduce four innovations, which are subsequently treated in the following subsections. The concept of the proposed method is illustrated in Fig. 2(a).


Fig. 2 (a) The proposed multiple-WRP method with uniform segmentation of the 3D object/scene using intra- and inter-WRP propagation and (b) The look-up table with depth quantization levels for each WRP, showing the backward and forward propagation of the points.


3.1. Multiple WRPs with optimized intra- and inter-WRP propagation

The WRP method can be extended by considering multiple WRPs. In the method proposed in [8], depicted in Fig. 1(b), the use of two WRPs already improves the computational efficiency; the technique is applied to long-depth objects formed by images segmented at different depths. In that case, the WRPs are placed in front of the object, as in the original method. In the method of [9], illustrated in Fig. 1(c), multiple WRPs have been used for generating objects from RGB and depth data, taking advantage of discretized depth data and making use of forward and backward propagation.

In our proposed method, depicted in Fig. 2(a), we employ multiple WRPs that cross through the object (i.e. the point cloud). This reduces the distance of each point to the closest WRP plane and, hence, the size of the support windows 2Wj, leading to a decrease of the computational load. Additionally, by uniformly quantizing the depth of the object, forward and backward propagation can be applied, allowing for the generation of a reusable look-up table (LUT) for all depth levels [Fig. 2(b)]. The LUT consists of the field contributions of a point at every quantized distance. These field contributions have different sizes, derived from 2Wj plus a corrective factor for each depth level. This corrective factor is determined heuristically and has a value in the range of 2 to 8 pixels. In other approaches, where the WRP (nearly) passes through the object points, one needs to apply depth margins [9], such that points that are very close to or pass through the WRP do not have zero window support. In our case, however, we use volume points (see section 3.2), which actually need a bigger support window so that the smooth decay of their Gaussian distribution is not cut off, thus requiring a corrective factor. Hence, this corrective factor not only improves the rendering result, but also gracefully solves the support area problem of a point close to the WRP. The propagation of the points’ field within a WRP zone is termed intra-WRP propagation and results in a further reduced computational complexity [6].

The number of WRPs used in each CGH can be chosen arbitrarily, and therefore optimized. The optimal number of WRPs with respect to performance is a function of the depth of the object and the density of the point cloud, as can be seen from the performance experiments in section 4.4. Moreover, the WRPs are placed symmetrically through the object, resulting in smaller and symmetrical propagation kernels.

Furthermore, in the literature [5, 8, 9], it is suggested that when all the points belonging to one WRP have been calculated, the WRP is propagated directly to the hologram plane. Previous multiple WRP techniques make use of the fast Fourier transform (FFT) [8] and band-limited double-step Fresnel diffraction (BL-DSF) [9], which was first introduced and applied in digital holography in [12]. In the proposed method, however, progressive propagation is used (see Fig. 2(a)), which has also been applied in [1] for a polygon-based CGH method.

In our approach, the angular spectrum method (ASM) [17] is used for the intra- and inter-WRP propagations. Angular spectrum propagation is a convolution-based diffraction method and is given by the following formula:

$$u_2(x_2,y_2)=\mathcal{F}^{-1}\!\left[\mathcal{F}[u_1(x_1,y_1)]\exp\!\left(2\pi iz\sqrt{1/\lambda^2-\omega_x^2-\omega_y^2}\right)\right]\tag{4}$$
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the Fourier and inverse Fourier operators, $u_1(x_1, y_1)$ and $u_2(x_2, y_2)$ are the source and destination planes with their corresponding coordinates $x_1, y_1$ and $x_2, y_2$, $z$ is the distance between them and $\omega_x, \omega_y$ are the spatial frequencies of $x_1, y_1$ in the Fourier domain. This propagation method provides the advantage of maintaining the sampling rate on both planes, contrary to other methods.
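A possible NumPy realization of Eq. (4) is sketched below; the function name is ours, and the suppression of evanescent components (negative radicand) is our assumption:

```python
import numpy as np

def angular_spectrum(u1, z, wavelength, pitch):
    """Angular spectrum propagation, Eq. (4): u2 = F^-1[F[u1] * H] with
    H = exp(2*pi*i*z*sqrt(1/lambda^2 - wx^2 - wy^2)). Frequencies with a
    negative radicand are evanescent and are set to zero here."""
    ny, nx = u1.shape
    wx = np.fft.fftfreq(nx, d=pitch)  # spatial frequencies of x1 (cycles/m)
    wy = np.fft.fftfreq(ny, d=pitch)
    WX, WY = np.meshgrid(wx, wy)
    radicand = 1.0 / wavelength**2 - WX**2 - WY**2
    H = np.where(radicand > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(radicand, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u1) * H)
```

Since the transfer function is a pure phase factor for all propagating frequencies, the method preserves both the sampling rate and the energy of the field, which is what makes it suitable for the chained inter-WRP propagations described next.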

More specifically, each WRPℓ is propagated by angular spectrum propagation to the next WRPℓ+1, i.e. inter-WRP propagation, and only the last WRPL is propagated to the hologram plane. This approach provides the advantage of using a precalculated kernel for the angular spectrum propagation between the equidistant, successive WRPs, and, because of the sequential propagation, it also enables the occlusion processing method proposed in the next section.

3.2. Volume points

To generate high-quality object reconstructions, we propose the use of volume points, as introduced in [3]. Each point is attributed a Gaussian distribution, so that the objects have a smoother surface. The 2D Gaussian distribution in a particular depth plane – we assume a planar distribution – is given by the formula:

$$g(x,y)=\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right)\tag{5}$$
where $\sigma^2$ is the variance of the distribution ($\sigma$ is the standard deviation). By changing the value of $\sigma$, the smoothness of the surface can be tuned, since the surface of the points changes accordingly. For $\sigma \to 0$, we approximate a Dirac delta function, which corresponds to assuming conventional dimensionless points.

In contrast to the original WRP method [5], where the ray tracing method is used for the calculation of the contribution of each point, we use angular spectrum propagation, which enables also the use of volume points. In practice, a point is modelled as a discrete 2D map by sampling the corresponding 2D Gaussian profile expressed by Eq. (5) and is positioned in parallel to the WRP planes. Thereafter, the 2D map modelling the point is propagated to the WRP using the angular spectrum method of Eq. (4) for all possible depth planes, and the result for each depth plane is stored as a separate entry in the LUT. This way the LUT consists of all the possible optical fields that should be calculated in a WRP zone, as shown in Fig. 2(b).
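The LUT construction just described can be sketched as follows. All names and parameter choices are ours; evanescent components are neglected, which is valid at typical SLM sampling rates:

```python
import numpy as np

def gaussian_point(n, sigma_px):
    """Discrete 2D map of a volume point: the sampled Gaussian of Eq. (5),
    with sigma expressed in pixels."""
    r = np.arange(n) - n // 2
    X, Y = np.meshgrid(r, r)
    return np.exp(-(X**2 + Y**2) / (2.0 * sigma_px**2))

def build_lut(levels, zone_depth, n, sigma_px, wavelength, pitch):
    """Precompute the LUT of Fig. 2(b): the Gaussian point map propagated by
    the angular spectrum method to every quantized depth level of a WRP
    zone. Levels are symmetric about the WRP, so half the entries are
    backward (z < 0) and half forward (z > 0) propagations."""
    g = gaussian_point(n, sigma_px).astype(complex)
    G = np.fft.fft2(g)
    wx = np.fft.fftfreq(n, d=pitch)
    WX, WY = np.meshgrid(wx, wx)
    # Evanescent frequencies are clamped to zero; at 8 um pitch the maximum
    # sampled frequency is far below 1/lambda, so the radicand stays positive.
    radicand = np.maximum(1.0 / wavelength**2 - WX**2 - WY**2, 0.0)
    zs = np.linspace(-zone_depth / 2, zone_depth / 2, levels)
    return [np.fft.ifft2(G * np.exp(2j * np.pi * zj * np.sqrt(radicand)))
            for zj in zs]
```

Because the propagation distances in a zone are fixed once the quantization is chosen, this table is computed once and reused for every WRP zone.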

3.3. Occlusion processing

Occluded object points have remained an unresolved problem in digital holography, and no general solution for point clouds exists. The proposed method includes occlusion processing by calculating the contributions of the points ordered from back to front. This enables masking the field contributions of previously calculated points that lie behind the currently processed point. Before adding the contribution of a point to a specific area of the WRP or the hologram plane, the optical field recorded in this area is suppressed by multiplying the field locally with the complement of a Gaussian distribution. As a result, the light emanating from points situated directly behind this point is occluded and therefore does not contribute to the optical field recorded at the WRP and, by extension, at the hologram plane.
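The per-point masking step can be sketched as below; the helper is hypothetical, and the patch placement and border handling are deliberately simplified:

```python
import numpy as np

def add_point_with_occlusion(wrp, field, occ_mask, cy, cx):
    """Sec. 3.3 occlusion step (sketch): before adding a point's field,
    suppress the light already recorded behind it by multiplying the local
    WRP area with the complement of the Gaussian occlusion mask, then add
    the point's precalculated field. `field` and `occ_mask` are odd-sized
    square patches centred on pixel (cy, cx); the point is assumed to lie
    far enough from the border that both patches fit."""
    for patch, combine in ((1.0 - occ_mask, np.multiply), (field, np.add)):
        k = patch.shape[0] // 2
        view = wrp[cy - k:cy + k + 1, cx - k:cx + k + 1]
        combine(view, patch, out=view)
```

Because both operations act on a NumPy view of the WRP, the suppression and the addition are applied in place, which keeps the per-point cost proportional to the patch area rather than the full WRP.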

The size of this occlusion mask is calculated based on the lateral density of the point cloud, i.e. the mean distance μp between two points when projected onto the hologram plane. The aim is that two points at the same depth level do not overlap and occlude each other. The density of the point cloud is considered uniform and the point density per pixel ρp is given by multiplying the point density on the projected surface by the pixel area:

$$\rho_p=\frac{N}{H\times W}\,p^2\tag{6}$$
where N is the number of points and W and H are the width and the height of the object, respectively. This leads to:
$$\mu_p=\frac{p}{2\sqrt{\rho_p}}\tag{7}$$
Therefore, the occlusion mask size Wocc is equal to the rounded number of pixels corresponding to this distance between two points, Wocc = round(μp/p), and can be tuned further to improve the visual effect.
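A numerical sketch of this sizing rule, following our reconstruction of Eqs. (6) and (7) (the function name and the lower bound of one pixel are our additions):

```python
import math

def occlusion_mask_size(n_points, width, height, pitch):
    """Occlusion mask size in pixels from Eqs. (6)-(7):
    rho_p = N * p^2 / (W * H)        (points per pixel)
    mu_p  = p / (2 * sqrt(rho_p))    (half the mean projected point spacing)
    W_occ = round(mu_p / p), clamped to at least one pixel."""
    rho_p = n_points * pitch**2 / (width * height)
    mu_p = pitch / (2.0 * math.sqrt(rho_p))
    return max(1, round(mu_p / pitch))
```

For a dense cloud (many points per pixel) the mask shrinks to a single pixel, while for a sparse cloud it grows, so that same-depth neighbours never fall inside each other's mask.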

3.4. The proposed algorithm

In the subsequent paragraphs, we describe the algorithm, which is illustrated in Fig. 2(a). Moreover, an example of the sequential calculation of the contributions of three subsequently processed points is given in Fig. 3. In the following explanation, steps 1 to 3 are preparation steps, step 4 handles the intra- and inter-WRP propagation and is executed for every WRP, and step 5 calculates the diffraction field at the hologram plane by propagating WRPL.

Pre-processing steps
  • 1. The optimal number of WRPs L and the size of the occlusion mask Wocc (optional) are calculated based on the depth of the object and the lateral density of the point light sources of the object, respectively.
  • 2. The depth of the point cloud is subdivided uniformly in L WRP zones and each WRP zone is further quantized in depth levels, half of them in front of each WRP and half of them behind it. The points are grouped per WRP zone.
  • 3. The optical field of a point for each depth level in a WRP zone is precalculated by backward or forward propagation, creating a table with the fields for each depth level. Each level has a different support window 2Wj, as shown in Eq. (2). This LUT is the same for all WRP zones and its size depends on the depth resolution and the size of the WRP zone.
Intra- and inter-WRP propagation
  • 4. The WRP zones are processed sequentially starting at WRP1 and ending at WRPL, where for every WRP zone, the processing order of the points is by decreasing z-coordinates to facilitate the occlusion processing.

    Per point we:

    1. determine the depth level;
    2. apply the occlusion mask in the area around the point on the current WRP to suppress the diffraction pattern of the occluded points (except for the first point);
    3. add the optical field contribution from the LUT for the given depth level to the current WRP.

    When all the points within one WRP zone are calculated, the current WRP diffraction pattern is propagated by angular spectrum propagation to the next WRP. As explained in section 3.1, the kernel of the inter-WRP propagation is precalculated, since it is the same for all the inter-WRP propagations. This step is then repeated for the next WRP until WRPL is reached.

CGH Generation
  • 5. When the last WRP zone (L) is calculated, the diffraction pattern is propagated to the hologram plane (HP) by angular spectrum propagation.
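The steps above can be combined into a compact, self-contained sketch of the pipeline (steps 2 to 5). All names and parameter choices are ours; for brevity it uses a fixed patch size instead of Eq. (2), a fixed σ, a nearest-level LUT lookup, it masks even the first point (harmless on an empty WRP), and it ignores the residual half-zone distance to the hologram plane:

```python
import numpy as np

def generate_cgh(points, n, n_wrps, levels, wavelength, pitch):
    """Sketch of the multiple-WRP pipeline. `points` is an (N, 4) array of
    (ix, iy, z, A): integer pixel coordinates, depth z >= 0 measured away
    from the hologram plane, and amplitude A. Points are assumed to lie at
    least one patch half-width away from the borders."""
    z = points[:, 2]
    zone_depth = (z.max() + 1e-12) / n_wrps
    zone = np.minimum((z / zone_depth).astype(int), n_wrps - 1)  # step 2

    # Step 3: one reusable LUT for all zones -- a Gaussian point propagated
    # (angular spectrum) to each quantized depth level inside a zone.
    k = 7                                   # fixed patch half-size (not Eq. (2))
    r = np.arange(2 * k + 1) - k
    X, Y = np.meshgrid(r, r)
    g = np.exp(-(X**2 + Y**2) / (2.0 * 2.0**2)).astype(complex)
    wx = np.fft.fftfreq(2 * k + 1, d=pitch)
    WX, WY = np.meshgrid(wx, wx)
    rad = np.maximum(1.0 / wavelength**2 - WX**2 - WY**2, 0.0)
    dz = np.linspace(-zone_depth / 2, zone_depth / 2, levels)
    lut = [np.fft.ifft2(np.fft.fft2(g) * np.exp(2j * np.pi * d * np.sqrt(rad)))
           for d in dz]
    occ = 1.0 - np.abs(g)                   # complement-of-Gaussian mask

    # Precomputed inter-WRP kernel: zones are equidistant, so one kernel.
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(2j * np.pi * zone_depth *
               np.sqrt(np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)))

    wrp = np.zeros((n, n), complex)
    for zi in range(n_wrps - 1, -1, -1):    # step 4: back zone first
        zc = (zi + 0.5) * zone_depth        # WRP position = zone centre
        idx = np.where(zone == zi)[0]
        idx = idx[np.argsort(-z[idx])]      # back-to-front inside the zone
        for j in idx:
            ix, iy = int(points[j, 0]), int(points[j, 1])
            view = wrp[iy - k:iy + k + 1, ix - k:ix + k + 1]
            view *= occ                     # suppress occluded light
            frac = (z[j] - zc + zone_depth / 2) / zone_depth
            level = min(int(frac * (levels - 1) + 0.5), levels - 1)
            view += points[j, 3] * lut[level]
        # Inter-WRP (and final, step 5) propagation towards the hologram.
        wrp = np.fft.ifft2(np.fft.fft2(wrp) * H)
    return wrp
```

The occlusion cost is independent of the number of WRPs, since every point is masked exactly once; only the number of inter-WRP FFT propagations grows with the number of zones.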

Fig. 3 Example of the propagation of 3 subsequently processed points A, B, C to the WRP. (a) The points and the WRP to which they propagate. (b) The occlusion of A by B, and the partial occlusion of A and B by C. (c) The occlusion mask and the optical field of a particular LUT depth level. (d) – (h) We alternatingly apply occlusion processing and the addition of the LUT field for all points assigned to the WRP, going from back to front.


4. Experiments

To demonstrate the validity of the proposed CGH approach, we have generated a set of digital holograms according to the specifications of currently commercially available spatial light modulators (SLMs), i.e. 1,920 × 1,080 resolution at 8 μm pixel pitch (Table 1). We have used several 3D object point data sets, a.k.a. point clouds (see Table 2). The simulations ran on a computer with a 2.10 GHz Intel Core i7-4600U CPU and Windows 8.1 (x64) as operating system.


Table 1. Simulation parameters for all experiments


Table 2. Results from numerical reconstruction of holograms from six 3D point data sets.

4.1. CGH without occlusion processing

We validate our method by comparing the generated CGH to that of the original ray tracing method of Eq. (1). In both cases, we do not apply any occlusion technique, to demonstrate the equivalence of the methods for non-occluded surface renderings. For this experiment, we used only the front part of the point data set Venus (34,195 points). It has to be noted that when applying the proposed method we used a point size of σ = 0, corresponding to the dimensionless discrete points assumed by the ray tracing method, in order to have a fairer comparison.

The results are depicted in Fig. 4. The visual comparison even suggests the superiority of the proposed method over the ray tracing method: ray tracing appears to give rise to a noisier reconstruction and a minor loss of accuracy in the depth reconstruction. These observations can probably be attributed to the window support incorporated in the proposed method, which confines the wave propagation to the diffraction angle.


Fig. 4 Reconstruction of CGH generated by: (a) the ray tracing method and (b) the proposed method (17 WRPs).


4.2. CGH with occlusion processing

The importance of occlusion processing in CGH can be understood intuitively. Without an occlusion culling technique, the object’s surface appears transparent and it is impossible to distinguish between the front and the back parts of the surface, since all of the points are shown. With occlusion processing, only the visible part of the object is shown, accomplishing a more realistic effect. The methodology used is suitable for both sparse and dense models and can provide texture effects for dense point clouds. Figure 5(a) shows another modeled object, while the reconstructions of the CGH generated after rotating the object are illustrated in Figs. 5(b) and 5(c). In Fig. 5(b), the orientation is such that the front holes match the back holes, since the model is symmetrical and angular spectrum propagation is an orthographic projection, while in Fig. 5(c) the back part of the ball is visible through the front holes, but is out of focus, as expected.


Fig. 5 (a) Object model of ”Perforated ball” (218,640 points) (b) and (c) Numerical reconstructions of CGH of the point cloud ”Perforated ball” with the proposed method at different positions with matching and non matching holes ( Visualization 1).


Additionally, Fig. 6 shows the numerically reconstructed images from the CGHs generated by the proposed method. For all CGHs the number of depth levels per WRP is 63 (LUT levels), while the other parameters are calculated automatically based on the properties of the data set, e.g. the density of the point cloud and the depth of the object. For the occlusion processing, given the specifications used, a value of 12 to 20 μm was selected for σ.


Fig. 6 Digital reconstructions of various 3D point data sets. Table 2 lists the number of points, the computation time, the number of WRPs and the size of the occlusion mask used for each data set.


We used angular spectrum propagation as the reconstruction method, and the reconstruction distance in all cases is the distance zwL of the last WRP to the hologram plane. The dimensions and the viewing angle of all images are 15.36 × 8.64 mm and 4.5º, respectively.

4.3. Full parallax

To demonstrate the occlusion effect and the full parallax properties of the CGHs, we generated a CGH with a resolution of 8,192 × 8,192 pixels and a pixel pitch of 2 μm to achieve a wider viewing angle of 18.2º. The object is positioned at a distance of 15 cm from the hologram plane. To show the reconstruction from different viewing angles, sub-holograms with a window aperture of 2,720 pixels are extracted from the hologram plane. The results are shown in Fig. 7. Additionally, we visualize the parallax by providing a video demonstrating the numerical reconstruction from different viewing angles ( Visualization 2).


Fig. 7 Numerical reconstruction results when viewing from (a) left, (b) right, (c) top and (d) bottom, showing occlusion effect and full parallax. ( Visualization 2).


4.4. Performance tests of the proposed multiple-WRP method

Firstly, we compare the proposed method (without occlusion processing) to other CGH generation methods (Table 3). The proposed method is 2,500 times faster than the ray tracing method and 5 times faster than the original WRP method [5]. Moreover, it is observed that placing the WRP in the middle of the object slightly reduces the computational complexity. This is due to the smaller distances to the WRP, which lead to smaller window supports 2Wj and therefore fewer computations and memory readings. The speed improvement of the OpenCL implementation suggests that with a GPU implementation our proposed method will be even faster and potentially suitable for real-time CGH display systems.


Table 3. Computation time of data set Venus by 5 methods without occlusion processing.

Figure 8 shows the time performance of our method for 1 up to 100 WRPs. The computation times are averaged over 5 runs per number of WRPs for each model. As expected, the time cost depends on the number of 3D object points and the total number of WRPs used, due to the propagation cost from each WRP to the next. It can be concluded that increasing the number of WRPs decreases the computation time by a factor of up to 19 for the investigated data sets, compared to a single WRP. The optimal number of WRPs depends on the number of points the object is composed of; beyond this number of WRPs, the computational gain slightly decreases.


Fig. 8 (a) Computation time in seconds as a function of the number of WRPs. Absolute values in seconds are shown. (b) Computation gain as a function of the number of WRPs. Relative values, in reference to the computation time of 1 WRP only are shown. For example, for ”Bimba#2 Full” using 45 WRPs, makes the computation 19 times faster.


By comparing the same model with smaller depth and fewer points to the full model (e.g. Venus#1 and Venus#2, or Bimba#1 and Bimba#2) without occlusion processing, it is noticed that the number of points matters only for a small number of WRPs and is insignificant for larger numbers of WRPs. The average computation time for the occlusion processing was calculated to be 0.68 seconds for Venus#2 (occlusion mask size of 9×9 pixels) and 4.14 seconds for Bimba#2 (occlusion mask size of 7×7 pixels) and, as expected, it is constant for any number of WRPs.

The time performance behavior can be understood in terms of algorithmic complexity, which can be described as:

$$O(\alpha N/L^2)+O(\beta LD^2\ln D)+O(\gamma D^2\ln D)\tag{8}$$
where α, β and γ are the number of arithmetic operations for the intra-WRP propagation, the inter-WRP propagation and the WRPL-to-hologram-plane propagation, respectively, N is the number of object points, L is the number of WRPs and D is the pixel resolution of the hologram plane. For a small number of WRPs, the first term dominates because of the larger support of the LUT windows, due to the larger propagation distances between the points and the WRPs; for a large number of WRPs this cost becomes insignificant and the main cost comes from the second term, i.e. the number of inter-WRP propagations. The last term corresponds to the fast Fourier transform, which runs in linearithmic time. Thus, a larger number of WRPs enhances the generation speed.

5. Conclusions

We have proposed a novel CGH technique with full parallax that improves the WRP method by using multiple WRPs with intra-WRP light-field propagation based on angular spectrum propagation, reducing the computational complexity by a factor of 5. By exploiting the proposed intra- and inter-WRP propagation technique – utilising look-up tables and Gaussian point density profiles – an efficient occlusion culling solution based on inverse Gaussian filtering of the WRP is proposed that represents a minimal computational overhead compared to classical WRP solutions. Compared to the classical ray tracing solution, a speed-up factor of 2,500 could be achieved. In future work, we plan to implement the proposed method on a GPU to further decrease the computational burden through parallel computing.

Acknowledgments

The models of Bimba, Venus (Head sculpture), Perforated Ball and Cat are courtesy of INRIA via the AIM@SHAPE-VISIONAIR Repository, Direct Dimensions Inc. (www.dirdim.com), Manuel Piñeiro from GrabCad.com, and the Tosca hi-resolution 3D database v1.0, respectively. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC Grant Agreement n. 617779 (INTERFERE).

References and links

1. K. Matsushima, “Exact hidden-surface removal in digitally synthetic full-parallax holograms,” Proc. SPIE 5742, 25 (2005). [CrossRef]  

2. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54–H63 (2009). [CrossRef]   [PubMed]  

3. R. H.-Y. Chen and T. D. Wilkinson, “Computer generated hologram from point cloud using graphics processor,” Appl. Opt. 48, 6841–6850 (2009). [CrossRef]   [PubMed]  

4. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23, 3901–3913 (2015). [CrossRef]   [PubMed]  

5. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34, 3133–3135 (2009). [CrossRef]   [PubMed]  

6. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18, 19504–19509 (2010). [CrossRef]   [PubMed]  

7. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20, 4018–4023 (2012). [CrossRef]   [PubMed]  

8. A.-H. Phan, M. A. Alam, S.-H. Jeon, J.-H. Lee, and N. Kim, “Fast hologram generation of long-depth object using multiple wavefront recording planes,” Proc. SPIE 9006, 900612 (2014). [CrossRef]  

9. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, T. Kakue, and T. Ito, “Fast calculation of a computer-generated hologram for RGB and depth images using a wavefront recording plane method,” Photon. Lett. Poland 6, 90–92 (2014).

10. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Elec. Imag. 2, 28–34 (1993). [CrossRef]  

11. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. A. Tanjung, C. Tan, and T.-C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17, 18543–18555 (2009). [CrossRef]  

12. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21, 9192–9197 (2013). [CrossRef]   [PubMed]  

13. D. Hiyama, T. Shimobaba, T. Kakue, and T. Ito, “Acceleration of color computer-generated hologram from RGB-D images using color space conversion,” Opt. Commun. 340, 121–125 (2015). [CrossRef]  

14. J. Jia, J. Liu, G. Jin, and Y. Wang, “Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling,” Appl. Opt. 53, 6287–6293 (2014). [CrossRef]   [PubMed]  

15. H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, “Full parallax three-dimensional display with occlusion effect using computer generated hologram,” Opt. Eng. 50(7), 074003 (2011). [CrossRef]  

16. K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express 21, 21811–21822 (2013). [CrossRef]   [PubMed]  

17. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2004).

18. A. Bronstein, M. Bronstein, and R. Kimmel, Numerical Geometry of Non-Rigid Shapes (Springer Publishing Company, Incorporated, 2008), 1st ed.

References

1. K. Matsushima, “Exact hidden-surface removal in digitally synthetic full-parallax holograms,” Proc. SPIE 5742, 25 (2005). [CrossRef]
2. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54–H63 (2009). [CrossRef] [PubMed]
3. R. H.-Y. Chen and T. D. Wilkinson, “Computer generated hologram from point cloud using graphics processor,” Appl. Opt. 48, 6841–6850 (2009). [CrossRef] [PubMed]
4. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23, 3901–3913 (2015). [CrossRef] [PubMed]
5. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34, 3133–3135 (2009). [CrossRef] [PubMed]
6. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18, 19504–19509 (2010). [CrossRef] [PubMed]
7. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20, 4018–4023 (2012). [CrossRef] [PubMed]
8. A.-H. Phan, M. A. Alam, S.-H. Jeon, J.-H. Lee, and N. Kim, “Fast hologram generation of long-depth object using multiple wavefront recording planes,” Proc. SPIE 9006, 900612 (2014). [CrossRef]
9. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, T. Kakue, and T. Ito, “Fast calculation of a computer-generated hologram for RGB and depth images using a wavefront recording plane method,” Photon. Lett. Poland 6, 90–92 (2014).
10. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Elec. Imag. 2, 28–34 (1993). [CrossRef]
11. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. A. Tanjung, C. Tan, and T.-C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17, 18543–18555 (2009). [CrossRef]
12. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21, 9192–9197 (2013). [CrossRef] [PubMed]
13. D. Hiyama, T. Shimobaba, T. Kakue, and T. Ito, “Acceleration of color computer-generated hologram from RGB-D images using color space conversion,” Opt. Commun. 340, 121–125 (2015). [CrossRef]
14. J. Jia, J. Liu, G. Jin, and Y. Wang, “Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling,” Appl. Opt. 53, 6287–6293 (2014). [CrossRef] [PubMed]
15. H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, “Full parallax three-dimensional display with occlusion effect using computer generated hologram,” Opt. Eng. 50(7), 074003 (2011). [CrossRef]
16. K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express 21, 21811–21822 (2013). [CrossRef] [PubMed]
17. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2004).
18. A. Bronstein, M. Bronstein, and R. Kimmel, Numerical Geometry of Non-Rigid Shapes (Springer Publishing Company, 2008), 1st ed.

Supplementary Material (2)

» Visualization 1: MP4 (4061 KB) — Visualization of the occlusion culling, demonstrating numerical reconstructions of the hologram for different viewing angles of the object.
» Visualization 2: MP4 (14296 KB) — Visualization of the full parallax, demonstrating numerical reconstructions of the hologram from different viewing angles.



Figures (8)

Fig. 1 Previous WRP methods: (a) the WRP method as introduced in [5] with FFT, (b) the multiple-WRP method [8] with FFT, and (c) the WRP method with band-limited double-step Fresnel diffraction (BL-DSF) for RGB and depth data [9].
Fig. 2 (a) The proposed multiple-WRP method with uniform segmentation of the 3D object/scene using intra- and inter-WRP propagation, and (b) the look-up table with depth quantization levels for each WRP, showing the backward and forward propagation of the points.
Fig. 3 Example of the propagation of three subsequently processed points A, B, C to the WRP. (a) The points and the WRP to which they propagate. (b) The occlusion of A by B, and the partial occlusion of A and B by C. (c) The occlusion mask and the optical field of a particular LUT depth level. (d)–(h) We alternately apply occlusion processing and the addition of the LUT field for all points assigned to the WRP, going from back to front.
Fig. 4 Reconstruction of the CGH generated by (a) the ray-tracing method and (b) the proposed method (17 WRPs).
Fig. 5 (a) Object model of “Perforated ball” (218,640 points); (b) and (c) numerical reconstructions of the CGH of the point cloud “Perforated ball” with the proposed method at different positions, with matching and non-matching holes (Visualization 1).
Fig. 6 Digital reconstructions of various 3D point data sets. Table 2 lists the number of points, the computation time, the number of WRPs and the size of the occlusion mask used for each data set.
Fig. 7 Numerical reconstruction results when viewing from the (a) left, (b) right, (c) top and (d) bottom, showing the occlusion effect and full parallax (Visualization 2).
Fig. 8 (a) Computation time as a function of the number of WRPs; absolute values in seconds are shown. (b) Computation gain as a function of the number of WRPs; relative values with respect to the computation time for a single WRP are shown. For example, for “Bimba#2 Full”, using 45 WRPs makes the computation 19 times faster.
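The back-to-front procedure described in the caption of Fig. 3 — alternately masking the already-accumulated WRP field behind each point and adding the point's precomputed LUT field — can be sketched in a few lines. This is a hypothetical minimal illustration, not the authors' implementation: the LUT layout, the square mask size, and the point ordering are all placeholder assumptions for demonstration.

```python
import numpy as np

def accumulate_wrp(points, lut, wrp_shape, mask_radius=1):
    """Back-to-front WRP accumulation with occlusion masking (illustrative).

    points      : iterable of (row, col, depth_level, amplitude),
                  assumed pre-sorted from back (far) to front (near).
    lut         : dict mapping a depth level to a precomputed complex
                  field patch (hypothetical layout, odd-sized patches).
    mask_radius : half-size of the square occlusion mask around a point.
    """
    wrp = np.zeros(wrp_shape, dtype=complex)
    for r, c, d, a in points:
        patch = lut[d]
        ph, pw = patch.shape
        # 1) Occlusion: zero the accumulated field directly behind
        #    the current (closer) point.
        wrp[r - mask_radius:r + mask_radius + 1,
            c - mask_radius:c + mask_radius + 1] = 0
        # 2) Add the point's precomputed LUT field, centred on the point.
        wrp[r - ph // 2:r - ph // 2 + ph,
            c - pw // 2:c - pw // 2 + pw] += a * patch
    return wrp

# Toy usage: two points at the same transverse position; the closer
# (later-processed) point occludes the farther one inside the mask.
lut = {0: np.ones((3, 3), dtype=complex)}          # placeholder LUT patch
field = accumulate_wrp([(10, 10, 0, 1.0), (10, 10, 0, 0.5)],
                       lut, (32, 32), mask_radius=1)
```

Because the mask is applied before each addition, a point contributes nothing to regions later overwritten by nearer points, which is the low-cost occlusion culling the paper describes.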

Tables (3)

Table 1 Simulation parameters for all experiments.
Table 2 Results from numerical reconstruction of holograms from six 3D point data sets.
Table 3 Computation time for the data set “Venus” with five methods, without occlusion processing.

Equations (8)

\[ u_w(x,y) = \sum_{j=1}^{N} \frac{A_j}{R_j} \exp\!\left( i \frac{2\pi}{\lambda} R_j \right) \tag{1} \]

\[ W_j = |z_j - z_w| \tan\!\left( \sin^{-1} \frac{\lambda}{2p} \right) \approx |z_j - z_w| \tan\!\left( \frac{\lambda}{2p} \right) \tag{2} \]

\[ u_{HP}(\xi,\eta) = \frac{\exp\!\left( i \frac{2\pi}{\lambda} z_w \right)}{i \lambda z_w} \iint u_w(x_w,y_w) \exp\!\left( \frac{i\pi}{\lambda z_w} \left( (\xi - x_w)^2 + (\eta - y_w)^2 \right) \right) \mathrm{d}x_w\, \mathrm{d}y_w = \frac{\exp\!\left( i \frac{2\pi}{\lambda} z_w \right)}{i \lambda z_w} \, \mathcal{F}^{-1}\!\left[ \mathcal{F}\left[ u_w(\xi,\eta) \right] \cdot \mathcal{F}\left[ h(\xi,\eta) \right] \right] \tag{3} \]

\[ u_2(x_2,y_2) = \mathcal{F}^{-1}\!\left[ \mathcal{F}\left[ u_1(x_1,y_1) \right] \exp\!\left( 2\pi i z \sqrt{1/\lambda^2 - \omega_x^2 - \omega_y^2} \right) \right] \tag{4} \]

\[ g(x,y) = \exp\!\left( -\frac{x^2 + y^2}{2\sigma^2} \right) \tag{5} \]

\[ \rho_p = \frac{N}{H \times W p^2} \tag{6} \]

\[ \mu_p = \frac{1}{2p\sqrt{\rho_p}} \tag{7} \]

\[ \mathcal{O}\!\left( \alpha N / L^2 \right) + \mathcal{O}\!\left( \beta L D \right) + \mathcal{O}\!\left( \gamma D^2 \ln D \right) \tag{8} \]
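The angular-spectrum propagation listed above, \( u_2 = \mathcal{F}^{-1}[\mathcal{F}[u_1]\exp(2\pi i z \sqrt{1/\lambda^2 - \omega_x^2 - \omega_y^2})] \), together with the Gaussian point apodization \( g(x,y) \), can be sketched numerically as follows. This is an illustrative sketch only, not the paper's implementation; all parameter values (grid size, pixel pitch, wavelength, Gaussian width) are assumed for demonstration.

```python
import numpy as np

def angular_spectrum_propagate(u1, z, wavelength, pitch):
    """Propagate complex field u1 over distance z (angular spectrum method)."""
    ny, nx = u1.shape
    # Spatial-frequency grids in cycles per metre
    wx = np.fft.fftfreq(nx, d=pitch)
    wy = np.fft.fftfreq(ny, d=pitch)
    WX, WY = np.meshgrid(wx, wy)
    # Propagating waves satisfy 1/lambda^2 - wx^2 - wy^2 >= 0;
    # evanescent components are suppressed (a simple band limit).
    arg = 1.0 / wavelength**2 - WX**2 - WY**2
    mask = arg >= 0
    kernel = np.zeros_like(u1, dtype=complex)
    kernel[mask] = np.exp(2j * np.pi * z * np.sqrt(arg[mask]))
    return np.fft.ifft2(np.fft.fft2(u1) * kernel)

# Toy example: one on-axis point with Gaussian apodization g(x, y).
# Parameters below are assumed demo values, not the paper's settings.
N, pitch, wavelength, sigma = 256, 8e-6, 633e-9, 2.0
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (2 * sigma**2)).astype(complex)

u_fwd = angular_spectrum_propagate(u0, 2e-3, wavelength, pitch)
u_back = angular_spectrum_propagate(u_fwd, -2e-3, wavelength, pitch)
# The kernel has unit modulus, so back-propagation inverts the
# forward step (up to the suppressed evanescent components).
assert np.allclose(u_back, u0, atol=1e-8)
```

Since the transfer function has unit modulus on the propagating band, the total field energy is preserved, which makes such round-trip checks a convenient sanity test for WRP-to-WRP propagation code.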
