Abstract

We propose a new method for occlusion culling in the computation of a hologram, based on the mutual conversion between light rays and the wavefront. Since the occlusion culling is performed with light-ray information, conventional rendering techniques such as ray tracing or image-based rendering can be employed. The wavefront, on the other hand, is used to calculate the light propagation, so the hologram of 3-D objects can be obtained with high accuracy. In numerical experiments, we demonstrate that our approach can reproduce a high-resolution image of a deep 3-D scene with the correct occlusion effect between plural objects.

© 2013 Optical Society of America

1. Introduction

For high-quality 3-D display, holography is a superior medium since it can reproduce all the depth cues of human vision. For the electronic display of holography, the hologram pattern is calculated using computer-generated hologram (CGH) techniques. In most current CGH calculation algorithms, the target objects are constructed from a number of point or polygon light sources, and the physical phenomenon of wavefront propagation from these light sources to the hologram plane is then simulated [1–5]. Thanks to the progress of high-speed computing, it has become possible to define a huge number of light sources at high density on the object surface, allowing smooth surfaces to be rendered. However, it is still not easy to deal with some physical phenomena that are important for reconstructing photorealistic 3-D images, such as shading, occlusion, transparency, texture mapping, and light reflectance properties. In particular, two kinds of occlusion must be considered: self-occlusion, in which some surfaces are hidden by other surfaces of the same object, and mutual occlusion between different objects. Both are crucial issues in CGH computation for presenting correct depth cues to observers. In [4], an exact occlusion culling method was proposed, in which the wavefront coming from the background is obstructed by the surface of a foreground object with high accuracy. A correct occlusion effect can be achieved, but the computational cost of the sequential light-propagation process is extremely high, especially for a complicated object scene or a large-sized hologram. For fast calculation, geometrical projections are used in light-ray based approaches [6–10], which entail a loss of accuracy [11–13]. These conventional approaches are briefly summarized in the next section. An approach that realizes high resolution and high accuracy when simulating the occlusion effect in a deep 3-D scene with a simple algorithm has not yet been established.

In this paper, we propose a new method for mutual-occlusion processing based on the conversion between light-ray information and the wavefront. In the proposed method, the light-ray domain is used for occlusion processing, while the light propagation is calculated in the wavefront domain; this approach realizes the occlusion effect with higher accuracy using a simple algorithm.

The conversion from light rays to a wavefront was employed in our previous work [14], in which a virtual ray-sampling (RS) plane located near the object was used for the CGH computation. Light-ray information emitted from a target object is sampled spatially and angularly at RS points on the RS plane as a set of projection images, and is transformed into a complex amplitude distribution by Fourier transforming the projection images. This transformation can be considered a conversion from light-ray information into the wavefront. The CGH pattern is thus obtained by calculating the wavefront propagation from the RS plane to the hologram. In this approach, the wavefront of a target object can be generated simply from a set of projection images, and the self-occlusion culling of a target object is achieved directly when rendering the projection images by traditional computer graphics (CG) techniques. Therefore, the method proposed in this paper focuses on mutual occlusion, for which it is assumed that plural objects are located at different depths and front objects occlude the background. The numerical simulation and the optical experiment demonstrate that the proposed method correctly processes mutual occlusion, whereas the conventional methods struggle to reconcile the accuracy of the occlusion effect with the calculation cost.

2. Previous works on the occlusion effect in CGH calculation

To simulate the physical phenomena of a 3-D scene with occlusions, ideally it is necessary to consider the light interaction with object surfaces by solving the wave equation. Since this entails a huge calculation cost, some approximation of the physical phenomena is necessary to reduce the computational complexity. The first approximation is to disregard the backscattered light, which is normally applied in wave propagation theory. Another approximation often employed in conventional techniques is the use of a mask generated by a geometrical projection of the front object. In [15], a method for occlusion culling based on this concept was presented for horizontal-parallax-only (HPO) hologram computation. The objects are constructed from a collection of point sources, and the region of the fringe pattern on the hologram contributed by each source is defined by a data structure called RunExtent, taking into account the occlusion culling by foreground object surfaces. Since the mask is applied on the hologram surface instead of the object surface, diffraction artifacts may appear at the edge of the front object when an object is located far from the hologram plane. In addition, a mask must be generated for each point source, so the data structures become complicated, and a severe increase in computational cost cannot be avoided, especially for full-parallax holograms and the large number of point sources required by a high-resolution 3-D scene. The methods presented in [16–18] can be interpreted as full-parallax versions of masking by geometrical projection, as well as more efficient computation techniques. The 3-D scene is constructed from a point cloud, and occlusion culling, i.e. determining the nearest object point, is performed in the ray domain using z-buffering and bundles of rays shot from every hologram sample, implemented on graphics processing units (GPUs). These methods work well for images reconstructed near the hologram plane, but the diffraction artifacts become serious when reconstructing a deep 3-D scene.

Matsushima proposed an exact occlusion culling method [4], in which the target object is composed of polygon light sources and self-occlusion is processed by masking the wavefront propagated from back polygons with the fore polygons. The light propagation and the masking of polygons tilted with respect to the hologram plane are carried out by rotational transformation in the Fourier domain. The self-occlusion effect can be achieved with high accuracy even for a complex object; however, the sequential light propagation from the back polygons to the foreground incurs an unrealistic calculation cost, especially for a 3-D scene containing plural objects. For synthesizing large-sized CGHs with mutual-occlusion effects, Matsushima et al. also proposed a silhouette masking approach in which the wavefront from the background is interrupted by a mask of the foreground object's orthographic silhouette [5]. Since the mutual occlusion is processed at low computational cost, the synthesis of large CGHs of plural objects with over 4 billion pixels was demonstrated. However, mutual-occlusion errors are observed from oblique directions, since the silhouette mask is an orthographic projection.

Light-ray based approaches, or the techniques called holographic stereograms (HS), for CGH computation are also effective for occlusion culling [6–10,16]; geometrical self- and mutual-occlusion can be handled directly by traditional computer graphics (CG) rendering techniques. Smithwick et al. proposed real-time HS rendering in which CGHs are synthesized as a summation of overlapping amplitude-modulated chirp gratings on the hologram [10]. Since there is no need to calculate wave propagation when synthesizing the CGH, they are working toward a real-time 3-D holographic display. Because the occlusion processing can be included in the light-ray rendering process, the computational cost can be decreased compared with wavefront-based approaches. However, these approaches are based on light-ray reproduction, so objects distant from the plane where the rays are sampled are blurred by the light-ray sampling and light diffraction [11–13]. To remedy the degradation due to light diffraction in HS-based displays, optimum-phase approaches called the phase-added stereogram (PAS) and the accurate and compensated phase-added stereogram (ACPAS) have been proposed, in which optimized phases reduce the phase mismatch on the image surface [8,9]. Occlusion culling is achieved easily in these techniques in the same manner as in light-ray based approaches; however, they still suffer from image degradation due to light-ray sampling, and a deep 3-D scene cannot be reconstructed at high resolution.

To summarize this section, the conventional approaches for occlusion culling in computational holograms can be classified into the following four categories: (1) masking the wavefront on the hologram plane using geometrical projection (ray casting), (2) masking the wavefront at every polygon patch, (3) silhouette masking near each object, and (4) light-ray based computation of the hologram (HS-based). For the display of a deep 3-D scene, however, the reconstructed image may be degraded by diffraction and ray-sampling effects in (1) and (4), and occlusion errors may appear when observing from oblique angles in (3). Method (2) requires a huge increase in computation cost and is only applicable to polygon-based rendering. In short, an approach allowing high-resolution image reconstruction of a deep 3-D scene by a simple algorithm had not been established. In this paper, we present a new method for occlusion culling that is suitable for deep 3-D scene reproduction without a significant increase in calculation cost. In the simulation explained in Section 5, we therefore compare the proposed approach with methods (1), (3), and (4) for visual evaluation of the reconstructed image quality.

3. The conversion between light rays and the wavefront

As the proposed method is based on the mutual conversion between rays (R) and the wavefront (W), the forward and backward conversion methods are briefly reviewed before the principle of the proposed occlusion culling method is described. The R2W conversion is identical to the method presented in [14], and the W2R conversion is its inverse.

3.1 R2W conversion

The RS plane should be set near the target object to avoid image blur due to ray sampling and light diffraction [14]. The light rays passing through the (i, j)-th RS point on the RS plane compose a projection image p_ij[m, n], where [m, n] are the indices denoting the direction of the light rays and the center of projection is the given RS point. The whole light-ray information R passing through the RS plane can be expressed as the set of projection images of all RS points:

$$R = \left\{\, p_{00},\ p_{10},\ \ldots,\ p_{(I-1)(J-1)} \,\right\}$$
where I and J are the total numbers of RS points in the horizontal and vertical directions on the RS plane. Each projection image is multiplied by a discrete random phase φ_ij[m, n] and then transformed by a discrete Fourier transform into the complex amplitude distribution P_ij[μ, ν] of a region centered at the same RS point. This process is based on angular spectrum theory [19]. The R2W conversion is achieved by applying this process to all projection images and tiling the obtained complex amplitude distributions on the RS plane [see Fig. 1]. The discretized wavefront W on the RS plane is described as
$$W[k,l] = P_{ij}\!\left[\, k - \left( \frac{x_i}{\Delta p} + \frac{K}{2} \right),\ l - \left( \frac{y_j}{\Delta p} + \frac{L}{2} \right) \right]$$
where Δp denotes the pixel pitch of the wavefront on the RS plane, K and L are the total numbers of pixels of the wavefront, and x_i and y_j are the coordinates of the (i, j)-th RS point. P_ij is thus allotted as the complex amplitude distribution of the region of the wavefront W[k, l] centered at the (i, j)-th RS point.
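A minimal NumPy sketch of this conversion is given below. The array shapes and function names are our own, and the projection images are assumed to be rendered intensities (hence the square root when forming the complex amplitude); this is an illustration of the R2W procedure, not the authors' code.

```python
import numpy as np

def with_random_phase(projections, rng=np.random.default_rng(0)):
    """Attach the discrete random phase phi_ij[m, n] to rendered projection
    images (assumed to be (I, J, M, N) intensity images, hence the sqrt)."""
    return np.sqrt(projections) * np.exp(2j * np.pi * rng.random(projections.shape))

def rays_to_wavefront(rays):
    """R2W conversion: Fourier-transform the complex ray field p_ij[m, n] of
    each RS point into P_ij and tile the results over the RS plane."""
    I, J, M, N = rays.shape
    W = np.zeros((I * M, J * N), dtype=complex)
    for i in range(I):
        for j in range(J):
            # FFT of one projection image -> complex amplitude P_ij
            P_ij = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(rays[i, j])))
            # Tile P_ij into the region centred at RS point (i, j)
            W[i * M:(i + 1) * M, j * N:(j + 1) * N] = P_ij
    return W
```

For a stack p of projection images, W = rays_to_wavefront(with_random_phase(p)) yields the wavefront on the RS plane.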

Fig. 1 The forward and backward conversion between R and W on the RS plane.

3.2 W2R conversion

By following the inverse process of the R2W conversion, i.e., applying an inverse Fourier transform to the wavefront in a small area around each RS point, the wavefront W can be converted into a set of angular spectra at the RS points, which corresponds to the light-ray information R on the RS plane. This process achieves the W2R conversion. By keeping the sampling interval on the RS plane small enough compared with the resolution of human vision, the sampling in the conversion does not affect the reconstructed image.
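Continuing the NumPy sketch above (and under the same assumed array layout), the backward conversion is simply a per-patch inverse FFT:

```python
def wavefront_to_rays(W, I, J):
    """W2R conversion: inverse Fourier transform of the wavefront patch
    around each of the I x J RS points, recovering the angular spectrum
    (light-ray information) at that point. Inverse of rays_to_wavefront."""
    M, N = W.shape[0] // I, W.shape[1] // J
    rays = np.empty((I, J, M, N), dtype=complex)
    for i in range(I):
        for j in range(J):
            patch = W[i * M:(i + 1) * M, j * N:(j + 1) * N]
            rays[i, j] = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(patch)))
    return rays
```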

4. Mutual-occlusion processing between plural objects

The proposed occlusion processing is performed on an RS plane set near the interrupting object, using the conversion between the light-ray information and the wavefront described in the previous section.

In the following, for simplicity, it is assumed that the target scene consists of two objects, a background and an interrupting object, as shown in Fig. 2. If there are more interrupting objects, the following algorithm can be applied to each of them. The CGH calculation flow with the occlusion processing is shown in Fig. 3. In the proposed approach, mutual occlusion is handled by defining an RS plane near the interrupting object.

 

Fig. 2 A scheme of occlusion process using RS plane.


 

Fig. 3 CGH calculation flow with proposed occlusion processing based on the conversion between the light-ray information and the wavefront on the RS plane.


In the first step, the light rays from the interrupting object are derived on RS plane #1 in Fig. 2. A set of projection images p_ij of the interrupting object at each RS point is rendered in advance by ray-based rendering software, yielding the light-ray information R_f of the interrupting object. Next, the wavefront propagation from the background object to RS plane #1 is calculated by the Fresnel transform to derive the wavefront W_b of the background at the RS plane. The wavefront from the background object can be derived by any conventional CGH calculation technique, such as the methods using a set of point sources, polygon light sources, or angular spectra. In the third step, the wavefront W_b from the background object is converted into the light-ray information R_b on RS plane #1 by the W2R conversion explained in Section 3. The fourth step of the algorithm is the substitution of R_b by R_f in the light-ray domain. Let p^b_ij[m, n] be the light-ray information at the (i, j)-th RS point converted from the wavefront of the background. Then p^b_ij[m, n] is replaced by the light ray p^f_ij[m, n] if there exists a light ray from the interrupting object in the direction (m, n) at RS point (i, j). This substitution yields new light-ray information p^o_ij[m, n] in which the occlusion is processed, as shown in the upper right box of Fig. 3. After overwriting the rays at all RS points, the combined light-ray information R_o on RS plane #1 is reconverted into the wavefront W_o by the R2W conversion, and the whole wavefront on RS plane #1 after occlusion culling is obtained. In other words, the occlusion culling at each RS point is achieved by simply overwriting the rays of the interrupting object in the light-ray domain. Finally, the light propagation from the RS plane to the CGH plane is calculated by the Fresnel transform, and the hologram pattern is encoded.
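These steps can be condensed into a short sketch that reuses the conversion helpers sketched in Section 3. The boolean front_mask, marking the ray directions covered by the interrupting object (e.g. rendered as an alpha channel), and all names here are our assumptions, not the authors' implementation.

```python
def occlusion_on_rs_plane(W_b, p_front, front_mask, I, J):
    """Mutual-occlusion culling on RS plane #1.
    W_b        : background wavefront Fresnel-propagated to the RS plane
    p_front    : projection images p^f_ij of the interrupting object (step 1)
    front_mask : boolean (I, J, M, N) array, True where a front-object ray
                 exists in direction (m, n) at RS point (i, j)
    """
    R_b = wavefront_to_rays(W_b, I, J)      # step 3: W2R of the background
    R_f = with_random_phase(p_front)        # front-object rays as complex field
    R_o = np.where(front_mask, R_f, R_b)    # step 4: overwrite rays
    return rays_to_wavefront(R_o)           # step 5: R2W back to wavefront W_o
```

The returned wavefront W_o is then Fresnel-propagated to the CGH plane and encoded, as in the final step above.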

Since the occlusion is processed in the light-ray domain, correct occlusion culling for variable observing angles is realized easily by substituting light rays on the RS plane. In contrast to the silhouette masking approach in [5], the proposed approach does not cause occlusion errors when viewing from oblique directions. Compared with the method presented by Underkoffler et al. [15], the diffraction artifact at the occlusion mask is suppressed, because the masking is carried out near the interrupting object on the RS plane. The method thus realizes mutual occlusion with higher accuracy and a high-resolution image, even for a deep 3-D scene. The calculation cost increases by the conversions between R and W, which use FFTs and inverse FFTs of a set of projection images. Because the RS planes and the CGH plane are parallel, the light propagation can be calculated efficiently using a look-up table, a discrete Fresnel transform algorithm based on the FFT, and/or the shifted Fresnel diffraction [20].

5. Experimental results and discussion

5.1 Comparison with conventional approaches

In the first experiment, we compared the occlusion results of the conventional methods discussed in Section 2 with those of the proposed method, using simple objects. The compared approaches are as follows:

  • (a) Ray-based method
  • (b) Geometrical mask at the hologram plane (Underkoffler's approach [15])
  • (c) Orthographic silhouette mask [5]
  • (d) Proposed method

The target scene consists of two objects: a checkered sphere as the background object and a triangular patch as the interrupting object, set at 300 mm and 150 mm from the CGH plane, respectively [see Fig. 4]. For all methods, the CGH calculations, including FFTs and discretized Fresnel diffraction, were executed on a PC with an Intel Westmere-EP (2.93 GHz) CPU and 24 GB of shared memory. The rendering process to obtain the ray information was executed using the open-source computer graphics software Blender on an NVIDIA Quadro K2000 GPU accelerator. The wavelength was assumed to be 532 nm in the CGH calculation and reconstruction. The final CGH size was 16.3 × 16.3 mm with 8,192 × 8,192 pixels, the sampling pitch was 2.0 μm, and the viewing angle θv was 7.6 degrees, derived from θv = sin−1(λ/(2p)), where λ is the wavelength and p is the sampling pitch.
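The quoted viewing angle follows directly from this formula; a one-line check with the values from the text (variable names are ours):

```python
import numpy as np

wavelength, pitch = 532e-9, 2.0e-6                       # metres
theta_v = np.degrees(np.arcsin(wavelength / (2 * pitch)))
print(f"viewing angle: {theta_v:.1f} deg")               # -> viewing angle: 7.6 deg
```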

 

Fig. 4 3D scene with simple interrupting object.


Method (a) is implemented by R2W conversion on the hologram plane, where all the light rays from both objects (1 and 2) are sampled on the hologram plane. Dense rays were rendered as 256 × 256 projection images with 32 × 32 pixels each, directly including both self- and mutual-occlusion culling, and each image was then converted into the wavefront by FFT in the same manner as the R2W conversion introduced in Subsection 3.1. The elementary hologram cell size was 64 μm on the CGH plane, and the angular sampling pitch of the rays was 7.6 degrees/32 pixels ≈ 0.23 degrees.

The simulation of (b) is implemented using RS planes instead of polygon patches or a z-buffer. In this experiment, we set the RS planes 5 mm in front of both objects, and the ray-sampling parameters were identical to those of the ray-based CGH case. The geometrical mask of the foreground object was multiplied with the projection images of the background object. The R2W conversion was then executed, the propagation to the CGH from both objects was simulated, and finally the two wavefronts were combined and the CGH pattern was encoded. Even though the wavefront is generated in a different way from the previous works, the influence of the occlusion in our simulation is equivalent to theirs, because in both methods the wavefront from the nearest object generates the holographic fringe.

In the simulation of (c), the wavefront of each object was generated using an RS plane; the wavefront coming from the background was then interrupted by the orthographic silhouette mask on the fore object's RS plane. The product was propagated to the CGH plane, and the interference pattern was encoded.

Finally, in the proposed method (d), the RS planes were set in the same manner as in the above two methods, and the occlusion culling was carried out on the front RS plane using the approach introduced in Section 4. The wavefront generated after the mutual-occlusion processing was propagated to the CGH plane, and the hologram was encoded.

The purpose of this experiment is to demonstrate the difference in the mutual-occlusion effect among the listed methods. Thus, to avoid other factors unrelated to the occlusion culling, the wavefront was generated using the RS plane instead of point or polygon sources even in methods (b) and (c).

The numerical reconstruction was simulated by the following procedure. First, the wavefront propagation from the CGH plane to the imaging lens was calculated by discrete Fresnel diffraction. The imaging lens mimicked a human eye and was set 200 mm from the CGH plane; the pupil size of the lens was 7 mm. Then, the wavefront inside the lens pupil was multiplied by the lens phase function, and finally the wavefront propagation from the lens pupil to the image plane was calculated. Reconstructed images were simulated focused at 150 mm with observing angles of 0 and ±1.0 degrees, and focused at 300 mm. To observe the image from an off-axis position at ±1.0 degrees, we multiplied the hologram by a tilted phase along the horizontal axis. The simulation results are shown in Fig. 5.
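A sketch of this eye-model reconstruction is given below. The angular-spectrum propagator is a standard textbook implementation [19], not the authors' code, and the image distance d_img and the thin-lens bookkeeping are our assumptions; the 200 mm lens distance, 7 mm pupil, and focusing depths follow the text.

```python
import numpy as np

def propagate(u, z, wl, pitch):
    """Free-space propagation over distance z by the angular-spectrum method."""
    fx = np.fft.fftfreq(u.shape[0], d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(2j * np.pi * z *
               np.sqrt(np.maximum(0.0, wl**-2 - FX**2 - FY**2)))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def eye_reconstruction(cgh, wl=532e-9, pitch=2.0e-6,
                       d_lens=0.200, pupil=7e-3, d_focus=0.150, d_img=0.025):
    """Sec. 5.1 procedure: CGH -> lens plane, pupil stop and lens phase,
    then propagation to the image plane (intensity returned)."""
    n = cgh.shape[0]
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    u = propagate(cgh, d_lens, wl, pitch)                    # CGH -> lens plane
    u = u * (X**2 + Y**2 <= (pupil / 2)**2)                  # 7 mm pupil stop
    f = 1.0 / (1.0 / (d_lens - d_focus) + 1.0 / d_img)       # focus on d_focus plane
    u = u * np.exp(-1j * np.pi / (wl * f) * (X**2 + Y**2))   # thin-lens phase
    return np.abs(propagate(u, d_img, wl, pitch))**2         # image intensity
```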

 

Fig. 5 The reconstructed images by numerical simulation with variable focusing distances and observation angles. (a): ray-based method, (b): geometrical mask, (c): orthographic silhouette mask, (d): proposed method.


According to the results of (a), the ray-based CGH produces the correct occlusion effect, but objects distant from the CGH plane suffer severe degradation due to the diffraction effect and the ray sampling. It is clear that ray-based displays cannot reconstruct a deep 3-D scene at high resolution, even though the occlusion effect is reflected correctly in the image. In the results of (b), diffraction artifacts from the obstructing mask appear around the edge of the fore object. This is because masking by geometrical projection cannot represent the diffraction of the background wavefront at the foreground object. Although the proposed method also uses a mask generated by geometrical projection, such diffraction can be incorporated because the mask is applied near the interrupting object. The results of (c) and (d), by the orthographic silhouette masking approach and the proposed method, show no visible difference, since the mismatch between the orthographic silhouette and the observer's perspective view was quite small for this very thin interrupting object. Therefore, in the next subsection, we compare these two methods on a 3-D scene with a more complicated interrupting object.

5.2 Comparison of the silhouette masking method and the proposed method for a complicated occlusion scene

We compared the reconstructed images of the proposed method with those of the orthographic silhouette masking approach by a numerical simulation and an optical experiment. The target scene consists of two objects: a checkered panel as the background object and a latticework of dice as the interrupting object, located at 150 mm and 100 mm from the CGH plane, respectively. The CGH calculation was carried out in the same manner as in Subsection 5.1. The parameters of the RS planes and the CGH plane for both methods are shown in Table 1.

Table 1. The parameters of RS planes and CGH plane for the numerical simulation

The calculation times for each method are described in Subsection 5.3, and the numerical reconstruction was simulated in the same manner as in Subsection 5.1.

Figures 6(a) and 6(b) show the results of numerical reconstructions focused at different depths. Figures 6(c) and 6(d) are the reconstructed images when focusing on the latticework and observing from the left, center, and right directions at 2-degree intervals, respectively. According to these results, the proposed method represents the occlusion effect accurately even though the interrupting object has a complex shape. In contrast, the conventional method causes occlusion errors that depend on the focusing distance and observing angle, because it uses the orthographic silhouette as the occlusion mask (some of the occlusion errors are circled in (d)).

 

Fig. 6 The reconstructed images by proposed method and conventional method (Media 1).


In the optical reconstruction, the background and interrupting objects were scaled up 2 times, and holograms double the size of those in the simulation were calculated by both methods. The parameters of the RS planes and the CGH plane are shown in Table 2. The calculated holograms were recorded on holographic material (Geola PFG-03C) by our CGH printing system described in [14] and reconstructed by a plane wave of laser light (λ = 532 nm).

Table 2. The parameters of RS planes and CGH plane for the optical reconstruction

Figures 7(a) and 7(b) show the optically reconstructed images observed from varying angles while focusing on the interrupting object. As in the simulation, it is clear that the proposed method represents the occlusion effect with high accuracy optically, whereas the silhouette masking method develops errors around the edge of the interrupting object.

 

Fig. 7 The reconstructed images by optical reconstruction with varying observing angles (Media 2).


5.3 Comparison of the implementation time

In the first experiment, the processes common to the compared methods took the following calculation times:

  • 1) The rendering of 256 × 256 projection images with 32 × 32 pixels: 2.3 min
  • 2) The R2W/W2R conversion, i.e. 256 × 256 FFTs/inverse FFTs: 0.1 min
  • 3) The discrete Fresnel diffraction with 8,192 × 8,192 pixels, including 3 FFTs: 1.5 min

Method (a) required one rendering step and one set of R2W conversions; in total, 1) + 2) = 2.4 min was spent on the CGH calculation.

Method (b) required two rendering and R2W conversion steps for the wavefront generation of the two objects and two wave propagations from the objects to the CGH plane; additionally, 2.3 min was spent rendering the obstructing masks and multiplying them on object 1's RS plane. The total implementation time was 2 × {1) + 2) + 3)} + 2.3 = 10.1 min.

Method (c) required two repetitions of 1), 2), and 3), like the previous method, and 0.5 min was spent on rendering the silhouette mask and multiplying it with the wavefront; the total time was 2 × {1) + 2) + 3)} + 0.5 = 8.3 min.

Finally, the proposed method required two repetitions of 1), 2), and 3), with an extra 0.6 min for rendering the binary ray mask as an alpha channel in the rendering step 1) of the fore object. The occlusion processing additionally required one extra R2W and one extra W2R conversion, plus 0.1 min for the ray-overwriting process. The total implementation time was 2 × {1) + 2) + 3)} + 0.6 + 2 × 2) + 0.1 = 8.7 min.
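The four totals can be reproduced from the three common step times; a minimal tally (the labels are ours, the timings are those listed above):

```python
render, conv, prop = 2.3, 0.1, 1.5          # steps 1), 2), 3) in minutes
common = render + conv + prop
totals = {
    "(a) ray-based":        render + conv,
    "(b) geometrical mask": 2 * common + 2.3,
    "(c) silhouette mask":  2 * common + 0.5,
    "(d) proposed":         2 * common + 0.6 + 2 * conv + 0.1,
}
for method, t in totals.items():
    print(f"{method}: {t:.1f} min")         # -> 2.4, 10.1, 8.3, 8.7
```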

In the numerical simulation of the second experiment, the CGH resolution was the same as in the first experiment, 8,192 × 8,192 pixels. Because a complicated object was set as the interrupting object, the rendering time for object 1 increased to 11.4 min for the silhouette masking method and 12.6 min (with the binary ray mask) for the proposed method. The other processes took almost the same times as in the first experiment; the total implementation time was 1) + 11.4 + 2 × {2) + 3)} + 0.5 = 17.4 min for the silhouette masking method and 1) + 12.6 + 2 × {2) + 3)} + 2 × 2) + 0.1 = 18.4 min for the proposed method.

For the optical reconstruction, the CGHs and both objects were scaled up 2 times, to 16,384 × 16,384 pixels. The rendering, R2W/W2R conversion, and masking processes therefore took almost 4 times longer, while the wave propagation took 4 × 4 = 16 times longer (since the complex amplitude distributions of the RS planes and the CGH plane were segmented 2 × 2 into 8,192 × 8,192-pixel segments) compared with the above simulation. Off-axis wave propagation between segmented regions was computed by the shifted Fresnel diffraction [20]. The total implementation times were therefore 4 × {1) + 11.4 + 2 × 2) + 0.5} + 16 × {2 × 3)} = 105.6 min for the silhouette masking method and 4 × {1) + 12.6 + 2 × 2) + 2 × 2) + 0.1} + 16 × {2 × 3)} = 109.6 min for the proposed method.
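The same bookkeeping reproduces the second-experiment and optical-reconstruction totals (a continuation of the tally above; the 11.4 min and 12.6 min renders and the 4x and 16x scale factors are those quoted in the text):

```python
render, conv, prop = 2.3, 0.1, 1.5                                # minutes
sil_sim = render + 11.4 + 2 * (conv + prop) + 0.5                 # 17.4 min
pro_sim = render + 12.6 + 2 * (conv + prop) + 2 * conv + 0.1      # 18.4 min
sil_opt = 4 * (render + 11.4 + 2 * conv + 0.5) + 16 * (2 * prop)  # 105.6 min
pro_opt = 4 * (render + 12.6 + 4 * conv + 0.1) + 16 * (2 * prop)  # 109.6 min
```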

From the above comparison of implementation times, we can say that the proposed method achieves a highly accurate mutual-occlusion effect at a limited extra calculation cost, comprising the R2W/W2R conversions by FFTs and the ray-overwriting process. Since these additional occlusion processes are independent for each ray-sampling point, the calculation cost could be decreased further by parallel processing.

Regarding the generation of the object wavefront, if a different method such as a point-source-based or polygon-based approach is used, it is not easy to directly compare the execution times of wavefront generation and ray-based rendering. In the case of the "venus" hologram in [5], the generation of the wavefront (excluding wave propagation) required 7.6 hours for rendering 718 polygons at a hologram resolution of 65,536 × 65,536 pixels. In our experiment, the 3-D model was composed of about 15,000 polygons and the CGH resolution was 16,384 × 16,384 pixels. The ray-based rendering step in the proposed method is equivalent to the techniques of HS-based hologram calculation [6–10], which were introduced to decrease the computational cost. Therefore, we can say that ray-based rendering requires a smaller calculation cost than point-source-based or polygon-based methods.

6. Conclusion

We proposed a new method for occlusion culling in CGH calculation based on the conversion between light rays and the wavefront. Because the occlusion masking is carried out in the light-ray domain, simple processing enables full masking that considers the direction of each light ray. This is similar to the geometrical methods such as [15–18], but the mask is applied near the interrupting object so that the diffraction by the interrupting object can be taken into account; this is the novelty of this paper. On the other hand, since the wave propagation is computed in the wavefront domain, the mutual-occlusion culling is implemented with high accuracy. In the numerical simulation, we showed that our method reproduces the image with correct occlusion culling for a deep 3-D scene, while the conventional method produces occlusion errors. As future work, the algorithm should be optimized to decrease the computational cost.

References and links

1. M. Lucente, "Optimization of Hologram Computation for Real-Time Display," Proc. SPIE 1667, 32–43 (1992).

2. J. P. Waters, "Holographic image synthesis utilizing theoretical methods," Appl. Phys. Lett. 9(11), 405–406 (1966).

3. K. Yamamoto, T. Senoh, R. Oi, and T. Kurita, "8K4K-size computer generated hologram for 3-D visual system using rendering technology," in Proceedings of IEEE Conference on Universal Communication Symposium (Institute of Computing Technology, Beijing, 2010), pp. 193–196.

4. K. Matsushima, "Exact hidden-surface removal in digitally synthetic full-parallax holograms," Proc. SPIE 5742, 25–32 (2005).

5. K. Matsushima and S. Nakahara, "Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method," Appl. Opt. 48(34), H54–H63 (2009).

6. T. Yatagai, "Stereoscopic approach to 3-D display using computer-generated holograms," Appl. Opt. 15(11), 2722–2729 (1976).

7. H. Yoshikawa and H. Kameyama, "Integral Holography," Proc. SPIE 2406, 226 (1995).

8. M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, "Phase-added stereogram: calculation of hologram using computer graphics technique," Proc. SPIE 1914, 25–33 (1993).

9. H. Kang, F. Yaras, and L. Onural, "Quality comparison and acceleration for digital hologram generation method based on segmentation," in Proceedings of the 3DTV-Conference 2009, pp. 1–4 (2009).

10. Q. Y. J. Smithwick, J. Barabas, D. E. Smalley, and V. M. Bove, Jr., "Real-time shader rendering of holographic stereograms," Proc. SPIE 7233, Practical Holography XXIII (2009).

11. J. T. McCrickerd, "Comparison of stereograms: pinhole, fly's eye, and holographic types," J. Opt. Soc. Am. 62(1), 64–70 (1972).

12. L. E. Helseth, "Optical transfer function of three-dimensional display systems," J. Opt. Soc. Am. A 23(4), 816–820 (2006).

13. P. S. Hilaire, "Modulation transfer function and optimum sampling of holographic stereograms," Appl. Opt. 33(5), 768–774 (1994).

14. K. Wakunami and M. Yamaguchi, "Calculation for computer generated hologram using ray-sampling plane," Opt. Express 19(10), 9086–9101 (2011).

15. J. S. Underkoffler, "Occlusion processing and smooth surface shading for fully computed synthetic holography," Proc. SPIE 3011, 19–30 (1997).

16. H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, "Full parallax three-dimensional display with occlusion effect using computer generated hologram," Opt. Eng. 50(7), 074003 (2011).

17. R. H.-Y. Chen and T. D. Wilkinson, "Computer generated hologram with geometric occlusion using GPU-accelerated depth buffer rasterization for three-dimensional display," Appl. Opt. 48(21), 4246–4255 (2009).

18. R. H.-Y. Chen and T. D. Wilkinson, "Computer generated hologram from point cloud using graphics processor," Appl. Opt. 48(36), 6841–6850 (2009).

19. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

20. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, "Shifted Fresnel diffraction for computational holography," Opt. Express 15(9), 5631–5640 (2007).

Supplementary Material (2)

Media 1: MOV (19378 KB)
Media 2: MOV (7450 KB)
