## Abstract

A Computer Generated Hologram (CGH) is calculated numerically on a computer; however, a great deal of computational power is required because the image quality improves in proportion to the number of point light sources composing a 3D object. The Wavefront Recording Plane (WRP) method is an algorithm that reduces the amount of calculation required. However, the WRP method has a drawback: it is not effective for a 3D object with a deep structure. In this study, we propose two improved WRP methods: the “Least Square Tilted WRP method” and the “RANSAC Multi-Tilted WRP method.”

© 2015 Optical Society of America

## 1. Introduction

Holography is a technique for reconstructing natural 3D images, because it faithfully duplicates the wavefront of light from a real 3D object. A Computer Generated Hologram (CGH) is generated by simulating the physical process of holography on a computer. A 3D movie can be displayed on a Liquid Crystal Display (LCD) by electronic holography [1]. Various CGH calculation algorithms have been proposed, which treat 3D objects as an aggregate of point light sources [2–4], as an aggregate of polygons [5, 6], as an RGB-D image [7], or as a multi-view image, e.g., the holographic stereogram or the integral photography technique [8, 9]. Electronic holography has great potential, but it also has problems: the reconstructed 3D image is small, the viewing area is narrow, and the calculation amount is enormous. In this paper, we focus on accelerating CGH calculations with the point light source method.

In representing the 3D object model as an aggregate of object points, each object point is considered a point light source emitting a spherical wave, so CGHs can be generated by adding spherical waves. We define the complex amplitude on the CGH plane as ${U}_{h}(x,y)$, assume the object is composed of $N$ points, and denote the complex amplitude created on the CGH by the $i$-th point at distance ${r}_{i}$ as ${U}_{i}({r}_{i})$. The final complex amplitude ${U}_{h}(x,y)$ is expressed by

$${U}_{h}(x,y)=\sum_{i=1}^{N}{U}_{i}({r}_{i}). \qquad (1)$$
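As an illustrative sketch (not the authors' code), the superposition of Eq. (1) can be evaluated directly on the CGH grid; the wavelength, pixel pitch, resolution, and object points below are assumed example values.

```python
import numpy as np

# Direct evaluation of Eq. (1): U_h(x, y) = sum_i U_i(r_i), where each
# object point emits a spherical wave. All parameters are assumed
# example values, not the paper's experimental conditions.
WAVELENGTH = 532e-9   # lambda [m]
PITCH = 8e-6          # CGH pixel pitch [m]
M = 256               # CGH resolution (M x M pixels)

def point_source_cgh(points, amplitudes):
    """Sum spherical waves from object points onto the CGH plane.

    points     : (N, 3) array of (x, y, z) coordinates [m], z > 0
    amplitudes : (N,) array of real amplitudes A_i
    """
    xs = (np.arange(M) - M / 2) * PITCH
    X, Y = np.meshgrid(xs, xs)
    k = 2 * np.pi / WAVELENGTH
    U_h = np.zeros((M, M), dtype=np.complex128)
    for (x_i, y_i, z_i), A_i in zip(points, amplitudes):
        r = np.sqrt((X - x_i) ** 2 + (Y - y_i) ** 2 + z_i ** 2)  # r_i
        U_h += A_i / r * np.exp(1j * k * r)                      # U_i(r_i)
    return U_h

points = np.array([[0.0, 0.0, 0.05], [1e-4, -1e-4, 0.06]])
U_h = point_source_cgh(points, np.ones(len(points)))
print(U_h.shape)
```

The loop over $N$ points against an $M\times M$ grid is what makes the direct method $O(NM^2)$.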

The WRP method (Fig. 1) sets a WRP, a virtual plane, between the 3D object and the CGH plane, and calculates the CGH in two steps [11]. In the first step, the light wave from the 3D object is recorded on the WRP. In the second step, the light wave is propagated from the WRP to the CGH by a fast diffraction calculation based on the fast Fourier transform (FFT). If the WRP is set near the 3D object, the calculation amount of the first step is significantly reduced, so the CGH calculation is accelerated. Previously, acceleration methods for the first-step calculation [12, 13] and multi-WRP methods for deep 3D objects [15, 16] have been proposed.

In this study, we propose two improved methods to further accelerate the WRP method: “Least Square Tilted WRP method” and “RANSAC Multi-Tilted WRP method.”

## 2. Proposed method

In the first step of the original WRP method, the light wave on the WRP is calculated from the object points by Eq. (1). The radius of the zone plate ${R}_{wi}$ [11] on the WRP from the $i$-th point light source is defined by ${R}_{wi}={d}_{wi}\tan\left\{{\sin}^{-1}\left(\frac{\lambda}{2{p}_{w}}\right)\right\}$, where ${d}_{wi}$ is the distance between point $i$ and the WRP and ${p}_{w}$ is the sampling interval of the WRP. If the WRP is set near the objects, the calculation area becomes smaller. In the second step, the complex amplitude on the CGH is calculated from the WRP by a diffraction calculation.
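A small numeric sketch of the zone-plate radius formula shows how ${R}_{wi}$ shrinks with the point-to-WRP distance ${d}_{wi}$; the wavelength and sampling interval below are assumed example values.

```python
import math

# R_wi = d_wi * tan(asin(lambda / (2 * p_w))); lambda and p_w below are
# assumed example values, not the paper's parameters.
WAVELENGTH = 532e-9   # lambda [m]
P_W = 8e-6            # WRP sampling interval p_w [m]

def zone_plate_radius(d_wi):
    """Zone-plate radius on the WRP for a point at distance d_wi [m]."""
    return d_wi * math.tan(math.asin(WAVELENGTH / (2 * P_W)))

# Halving the distance halves the area that must be updated per point.
for d in (1e-3, 5e-4):
    print(f"d_wi = {d * 1e3:.1f} mm -> R_wi = {zone_plate_radius(d) * 1e6:.1f} um")
```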

Let us consider the calculation amount of the WRP method. The WRP and CGH planes are both defined as ${M}^{2}$ pixels. The calculation amount of the first step is ${O}_{first}=O\left(N{R}_{w}^{2}\right)=O\left(N{d}_{w}^{2}\right)$, where the average radius is ${R}_{w}=\frac{1}{N}\sum_{i=1}^{N}{R}_{wi}$ and the average distance is ${d}_{w}=\frac{1}{N}\sum_{i=1}^{N}{d}_{wi}$. The calculation amount of the second step using the FFT is ${O}_{second}=O\left({M}^{2}\log M\right)$. The calculation amount of the WRP method is the sum of these two; it is smaller than the calculation amount $O(N{M}^{2})$ of Eq. (1) because ${R}_{wi}\ll M{p}_{w}$. The calculation amount of the second step does not depend on the number or the distribution of the object points, whereas that of the first step depends on ${d}_{w}$, in other words, on the distribution of the 3D object. In this study, we propose two improved methods using non-parallel WRPs to mitigate this distribution problem.
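The comparison above can be made concrete with rough operation counts; the figures below are illustrative assumptions, not the paper's benchmark parameters.

```python
import math

# Rough operation-count comparison of the direct method O(N M^2) and the
# WRP method O(N R_w^2 / p_w^2 + M^2 log M). All numbers are assumed.
N = 10_000            # object points
M = 2048              # CGH resolution (M x M)
LAM, P_W = 532e-9, 8e-6
D_W = 5e-3            # average point-to-WRP distance d_w [m]

R_w = D_W * math.tan(math.asin(LAM / (2 * P_W)))  # average zone-plate radius
R_px = R_w / P_W                                  # radius in pixels

direct = N * M ** 2                          # Eq. (1) over the full CGH
wrp = N * R_px ** 2 + M ** 2 * math.log2(M)  # first step + FFT second step
print(f"direct ~{direct:.2e} ops, WRP ~{wrp:.2e} ops, ratio ~{direct / wrp:.0f}x")
```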

#### 2.1. Least square tilted WRP method

A WRP is set parallel to the CGH plane in the original WRP method, as shown in Fig. 1. The calculation amount is determined by the average distance ${d}_{w}$ between the objects and the WRP, as noted earlier, so the calculation amount is smallest when the WRP is placed so that ${d}_{w}$ is minimized. We propose the Least Square Tilted WRP (LST-WRP) method, in which the WRP position minimizing the calculation amount is determined by the least square method. The determined WRP may be non-parallel to the CGH. The conceptual diagram of the LST-WRP method is shown in Fig. 2. The concrete algorithm is described below.

- 1. A plane is determined by the least square method so that the sum of the distances $\sum {d}_{wi}$ between the plane and the object points becomes minimum. We set this plane as the WRP.
- 2. The first step of the WRP method is calculated.
- 3. If the WRP is non-parallel to the CGH, which depends on the object shape, the diffraction between non-parallel planes is calculated.
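Step 1 can be sketched as a total-least-squares plane fit (an assumed implementation, not the authors' code): the optimal plane passes through the centroid of the object points, and its normal is the singular vector with the smallest singular value.

```python
import numpy as np

# Least-squares WRP plane (Step 1, assumed implementation): minimize the
# sum of squared point-to-plane distances. The plane passes through the
# centroid; its normal is the direction of least variance.
def least_square_plane(points):
    """points: (N, 3) array. Returns (centroid, unit normal [a, b, c]^T, c > 0)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]            # singular vector of the smallest singular value
    if normal[2] < 0:          # enforce c > 0 as in the text
        normal = -normal
    return centroid, normal

# Example: points lying on the tilted plane z = 0.1 x + 0.05 (assumed data)
rng = np.random.default_rng(0)
xy = rng.uniform(-1e-3, 1e-3, size=(100, 2))
points = np.column_stack([xy, 0.1 * xy[:, 0] + 0.05])
centroid, n = least_square_plane(points)
print(n)   # close to [-0.1, 0, 1] normalized
```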

The WRP is discretized with the sampling interval ${p}_{w}$, so object points cannot be recorded if ${d}_{wi}$ is too small. Therefore, in practice, we need to move the WRP determined by the least square method by $\delta d$ away from the object along the normal vector of the plane. We decided $\delta d$ empirically in terms of the calculation time and the reconstruction quality. The conceptual diagram of this correction is shown in Fig. 3.

The normal vector of the least square plane, $n={\left[a,b,c\right]}^{T}$ ($c>0$), is defined as shown in Fig. 3. The farthest and nearest distances between the least square plane and the object points are ${d}_{far}$ and ${d}_{near}$, respectively. When ${d}_{far}>{d}_{near}$, we correct the location of the WRP by moving it ${d}_{near}+\delta d$ in the negative direction of the normal vector. Likewise, when ${d}_{near}>{d}_{far}$, we correct the location of the WRP by moving it ${d}_{far}+\delta d$ in the positive direction of the normal vector. In this way, the corrected WRP can record all the object points.
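A simplified sketch of this correction (an assumption on our part, collapsing the two cases into a single shift along the normal): move the least-squares plane until every object point lies at least $\delta d$ in front of it. The value of $\delta d$ below is an assumed example; the paper chooses it empirically.

```python
import numpy as np

# Simplified WRP correction sketch: shift the plane along its unit
# normal n so that every object point ends up at least delta_d in front.
DELTA_D = 0.5e-3   # margin delta d [m] (assumed example value)

def correct_wrp(points, centroid, n, delta=DELTA_D):
    """Return a point on the corrected plane (the normal n is unchanged)."""
    signed = (points - centroid) @ n   # signed point-to-plane distances
    shift = signed.min() - delta       # move plane behind the nearest point
    return centroid + shift * n

pts = np.array([[0.0, 0.0, 0.050], [0.0, 0.0, 0.051], [1e-3, 0.0, 0.0495]])
n = np.array([0.0, 0.0, 1.0])
c2 = correct_wrp(pts, pts.mean(axis=0), n)
dists = (pts - c2) @ n
print(dists.min())   # at least DELTA_D
```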

However, when the angle of the WRP to the CGH is large, the object is distorted because the reference light of Eq. (1) propagates in the same direction as the normal vector of the WRP (Fig. 4). In the case where the CGH and WRP are parallel, as in Fig. 4(a), the reconstruction position is the same as the recording position because the normal directions of the WRP and CGH coincide. In contrast, in the case where the WRP is tilted with respect to the CGH plane, as in Fig. 4(b), the object light propagates in the normal direction of the WRP. Therefore, when illuminating with a planar wave parallel to the CGH, the reconstructed points are shifted from the original recording positions. Namely, the object points must be recorded on the WRP so that the propagation direction of the object light coincides with the normal direction of the CGH, as in Fig. 4(c).

In Fig. 5, we show the images reconstructed by the WRP parallel to the CGH (the case of Fig. 4(a)), by the tilted WRP without the correction (the case of Fig. 4(b)), and by the tilted WRP with the correction (the case of Fig. 4(c)). Without the correction, the reconstructed image is distorted compared with Fig. 5(a), whereas with the correction the reconstructed image is unchanged. The Signal to Noise Ratio (SNR) between Figs. 5(a) and 5(c) is 25 dB.

#### 2.2. RANSAC multi-tilted WRP method

In the previous section, we stated that the WRP method is efficient when the WRP is set along the distribution of the object points. If the object is divided into small sub-areas and multiple WRPs are properly placed at each sub-area, we can expect to obtain a faster method.

In this section, we propose the RANSAC (RANdom SAmple Consensus) multi-tilted WRP method (RMT-WRP). As described in Sec. 2.1, it is difficult to record the light wave of object points on a WRP when they are too close to the WRP. In LST-WRP, the WRP is corrected by a parallel displacement. In RMT-WRP, instead of the parallel displacement, object points that are too close to the WRP are directly set on it, and the other object points are calculated by Eq. (1). In addition, RMT-WRP uses the RANSAC algorithm [17], instead of the least square method, to estimate the optimal locations of the multiple WRPs corresponding to the sub-areas. The RANSAC algorithm, which is widely used in computer vision, is robust to noise; we use it because it can appropriately estimate WRPs that include many object points.

We show how to estimate the optimal location of a WRP corresponding to a sub-area having *N* object points by the RANSAC algorithm.

- 1. Three object points are randomly selected from the *N* object points.
- 2. The coefficients $a,b,c,d$ of the plane equation $ax+by+cz+d=0$ are calculated from the three selected points.
- 3. The distances between the plane calculated in Step 2 and the object points other than the three selected points are calculated.
- 4. The object points whose distances are smaller than the allowable distance $\Delta d$ are selected.
- 5. After repeating Steps 1 to 4 *k* times, the plane that selected the most points is defined as a tentative plane.
- 6. The points whose distances to the tentative plane are smaller than $\Delta d$ are extracted.
- 7. We determine the WRP from the extracted object points by the least square method.
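Steps 1 to 7 can be sketched as follows (an assumed implementation, not the authors' code); the values of $\Delta d$ and *k* follow those stated in the text.

```python
import numpy as np

# RANSAC plane estimation for one sub-area (Steps 1-7, assumed
# implementation). Delta d = 1 mm and k = 30 follow the text.
DELTA_D = 1e-3
K = 30

def ransac_wrp_plane(points, rng=None):
    """points: (N, 3). Returns (unit normal n, offset d) of ax+by+cz+d=0."""
    rng = np.random.default_rng(0) if rng is None else rng
    best = None
    for _ in range(K):
        # Steps 1-2: plane through three randomly selected points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        # Steps 3-4: object points within the allowable distance Delta d
        inliers = np.abs((points - p0) @ n) < DELTA_D
        # Step 5: keep the plane that selected the most points
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # Steps 6-7: least-squares refit on the extracted points
    sel = points[best]
    centroid = sel.mean(axis=0)
    _, _, vt = np.linalg.svd(sel - centroid)
    n = vt[-1] if vt[-1][2] > 0 else -vt[-1]
    return n, -n @ centroid

# Example: a plane z = 0.02 contaminated with a few distant outliers
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1e-3, 1e-3, (200, 2)), np.full(200, 0.02)])
pts[:10, 2] += 0.05            # 10 gross outliers
n, d = ransac_wrp_plane(pts)
print(n)   # close to [0, 0, 1]
```

Because the outliers lie far beyond $\Delta d$, they never join the consensus set, so the refit plane follows the dominant point distribution.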

In this paper, we empirically set $\Delta d$ = 1 mm and *k* = 30 in terms of the calculation time and the reconstruction quality. The entire process of the RMT-WRP method is shown in Fig. 6. The algorithm is described below.

- A) The object is divided into small sub-areas.
- B) The WRP of each sub-area is determined by Steps 1 to 7 above.
- C) The brightness values of the object points are directly set on the WRP when their distances satisfy ${d}_{wi}\le \delta d$.
- D) The complex amplitude on the WRP from the remaining object points (those not handled in Step C) is calculated by Eq. (1).
- E) All the WRPs are first propagated to a parallel WRP, which is parallel to the CGH, by a tilted diffraction calculation [5, 6]. The parallel WRP is set near the 3D object.
- F) The propagation from the parallel WRP to the CGH is calculated by a normal Fresnel diffraction calculation.
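The FFT-based free-space propagation of Step F, which gives the second step its $O(M^2 \log M)$ cost, can be sketched with the angular spectrum method (an assumed implementation; the paper specifies a normal Fresnel diffraction calculation, and the parameters below are example values):

```python
import numpy as np

# Angular-spectrum free-space propagation sketch (assumed implementation).
# Two FFTs and a pointwise transfer function give the O(M^2 log M) cost.
WAVELENGTH = 532e-9   # [m]
PITCH = 8e-6          # sampling interval [m]

def angular_spectrum_propagate(u, z):
    """Propagate an M x M complex field u over distance z [m]."""
    m = u.shape[0]
    fx = np.fft.fftfreq(m, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / WAVELENGTH ** 2 - FX ** 2 - FY ** 2
    # transfer function; evanescent components (arg < 0) are suppressed
    H = np.where(arg >= 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

u0 = np.ones((128, 128), dtype=np.complex128)   # toy parallel-WRP field
u1 = angular_spectrum_propagate(u0, 0.05)
# propagation of non-evanescent components preserves total power
print(np.allclose(np.sum(np.abs(u0) ** 2), np.sum(np.abs(u1) ** 2)))
```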

## 3. Results

First, we compared the calculation times of the previous WRP method [11] and the LST-WRP method. The parameters of the simulation are shown in Table 1. We used 8 CPU threads of an Intel Core i7 2600K CPU for this calculation.

The calculation results of the first step are shown in Table 2. We measured the maximum, minimum, and average calculation times while rotating a 3D object through 360 degrees, because the calculation time of the previous WRP method depends strongly on the distribution of the object points. The difference between the maximum and minimum times of the previous WRP method is about 2 seconds. In contrast, the calculation time of LST-WRP is almost constant.

On average, LST-WRP was about 3.5 times faster than the previous method. Table 3 shows the total calculation times, including the second step of the WRP methods. Note that we used the average times shown in Table 2 for the first step. The total calculation time of LST-WRP is about 1.5 times shorter than that of the previous method. We show the reconstructed images of the previous WRP method and LST-WRP in Fig. 7.

Next, we show the results of the RMT-WRP method. The parameters of the simulation are shown in Table 4. The number of WRPs was set to two in this simulation. We show the calculation times and the reconstruction results in Table 5 and Fig. 8, respectively. The RMT-WRP method was twice as fast as the previous WRP method.

## 4. Conclusion

We proposed two improved WRP methods: LST-WRP and RMT-WRP. With RMT-WRP, we achieved a calculation time twice as fast as that of the previous WRP method.

## Acknowledgments

This work is partially supported by JSPS KAKENHI Grant Numbers 25330125 and 25240015, and the Kayamori Foundation of Information Science Advancement and Yazaki Memorial Foundation for Science and Technology.

## References and links

**1. **T. C. Poon, ed., *Digital Holography and Three-dimensional display: Principles and Applications*, Springer (2006).

**2. **N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. **51**(30), 7303–7307 (2012). [CrossRef] [PubMed]

**3. **S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. **47**(19), D55–D62 (2008). [CrossRef] [PubMed]

**4. **H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” OSA Topical Meeting on Digital Holography and Three-Dimensional Imaging **2009**, DWC4 (2009).

**5. **T. Tommasi and B. Bianco, “Computer-generated holograms of tilted planes by a spatial frequency approach,” J. Opt. Soc. Am. A **10**(2), 299–305 (1993). [CrossRef]

**6. **K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A **20**(9), 1755–1762 (2003). [CrossRef] [PubMed]

**7. **N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express **21**(7), 9192–9197 (2013). [CrossRef] [PubMed]

**8. **M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” Proc. SPIE **1914**, 25–31 (1993). [CrossRef]

**9. **Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express **20**(19), 21645–21655 (2012). [CrossRef] [PubMed]

**10. **T. Shimobaba, S. Hishinuma, and T. Ito, “Special-Purpose Computer for Holography HORN-4 with recurrence algorithm,” Comput. Phys. Commun. **148**(2), 160–170 (2002). [CrossRef]

**11. **T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. **34**(20), 3133–3135 (2009). [CrossRef] [PubMed]

**12. **T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express **18**(19), 19504–19509 (2010). [CrossRef] [PubMed]

**13. **P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express **19**(16), 15205–15211 (2011). [CrossRef] [PubMed]

**14. **J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express **20**(4), 4018–4023 (2012). [CrossRef] [PubMed]

**15. **T. Shimobaba, T. Kakue, N. Masuda, Y. Ichihashi, K. Yamamoto, and T. Ito, “Computer holography using wavefront recording method,” Digital Holography and Three-Dimensional Imaging (DH) 2013, DTu1A.2 (2013).

**16. **A.-H. Phan, M. A. Alam, S.-H. Jeon, J.-H. Lee, and N. Kim, “Fast hologram generation of long-depth object using multiple wavefront recording planes,” Proc. SPIE **9006**, 900612 (2014). [CrossRef]

**17. **D. A. Forsyth and J. Ponce, *Computer Vision: A Modern Approach (2nd Edition)*, Prentice Hall (2011).