
Real-time spatiotemporal division multiplexing electroholography with a single graphics processing unit utilizing movie features

Open Access

Abstract

We propose real-time spatiotemporal division multiplexing electroholography that utilizes the features of movies. The proposed method spatially divides a three-dimensional (3-D) object into several parts and periodically selects one divided part in each frame, thereby reconstructing a 3-D movie of the original object. Computer-generated holograms of the selected part are calculated by a single graphics processing unit and sequentially displayed on a spatial light modulator. Visual continuity yields a reconstructed movie of the original 3-D object. The proposed method realized a real-time reconstructed movie of a 3-D object composed of 11,646 points at over 30 frames per second (fps). We also displayed a reconstructed movie of a 3-D object composed of 44,647 points at about 10 fps.

© 2014 Optical Society of America

1. Introduction

Electroholography based on computer-generated holograms (CGHs) is considered a promising route to the ultimate three-dimensional (3-D) television [1]. However, its practical use is limited by the computational complexity of CGH generation. Therefore, to realize such technology, high-speed CGH computation must be investigated.

Recently, graphics processing units (GPUs) have made high-performance floating-point computation available at low cost. Fast CGH computation using a GPU was first reported in 2006 [2]. Simplified color electroholography using a GPU and a liquid crystal display projector has also been reported [3]. Real-time color electroholography has been achieved using a PC with several GPUs, referred to as a multi-GPU environment PC [4, 5]. Multiple GPUs have also been adopted in a real-time capture and reconstruction system [6]. Multi-GPU clusters, which are groups of PCs each equipped with multiple GPUs, can realize large-pixel-count CGH computation [7, 8]. CGH calculation can also be accelerated by the wavefront-recording plane (WRP) method, which inserts a virtual plane between the 3-D object and the CGH [9–11]. Thus, a GPU is extremely effective for fast CGH calculation.

Time-division color reconstruction methods have also been proposed [12–16]. These methods use a single spatial light modulator (SLM) and CGHs of the three primary colors (red, green, and blue). The color of the reference light, which must match the CGH displayed on the SLM, is switched in synchrony with the switching of the three CGHs at regular intervals. If the three CGHs are calculated sufficiently quickly, time-division color reconstruction methods achieve visual continuity and thereby produce a reconstructed color 3-D movie. Furthermore, time-division multiplexing is useful for speckle reduction [17–19] and for enhancing the viewing angle [20, 21] of electroholography.

In the present paper, we propose spatiotemporal division multiplexing electroholography that utilizes movie features to achieve real-time electroholography. To compute the CGH at high speed in each frame of the reconstructed movie, the original 3-D object is divided into several parts, one of which is selected in each frame. The CGHs of the selected parts are calculated by a single GPU and sequentially displayed on an SLM. Owing to persistence of vision, the proposed method yields a reconstructed movie of the original 3-D object, in the same manner as time-division color reconstruction methods [12–15]. The proposed method successfully generated visually continuous real-time reconstructed movies of 3-D objects. We also investigated the efficiency of the proposed method.

The present paper is structured as follows. Section 2 outlines the proposed method, and Section 3 presents and discusses the experimental results. Conclusions are presented in Section 4.

2. Spatiotemporal division multiplexing electroholography utilizing movie features

Figure 1 shows an outline of the proposed method with the number of divisions set to three. The original 3-D object is divided into three parts, labeled Div 1, Div 2, and Div 3, and one of the three parts is selected in each frame, as shown in Fig. 1. A single GPU calculates the CGH of the selected part only, and the CGH is displayed on the SLM. For example, Frame 1 is the first frame of the reconstructed 3-D movie of the original 3-D object, whereas Frame 1′, the first frame of the 3-D movie reconstructed by the proposed method, uses only the divided part Div 1. Note that the CGH of Div 1 is computed by a single GPU. The other frames are created in the same way, and the same divided part is selected periodically every three frames; that is, Frame 4′ again uses Div 1. In the CGH calculation, the light intensity at each hologram point is given by [2, 22]

I(x_h, y_h, 0) = \sum_{j=1}^{N} A_j \cos\left( \frac{2\pi}{\lambda} \left\{ \frac{1}{2 z_j} \left[ (x_h - x_j)^2 + (y_h - y_j)^2 \right] \right\} \right), \qquad (1)
where I(x_h, y_h, 0) is the light intensity at point (x_h, y_h, 0) on the hologram, (x_j, y_j, z_j) are the coordinates of the j-th object point, A_j is the intensity of that object point, N is the number of object points, and λ is the wavelength of the reference light. Equation (1) is obtained under the Fresnel approximation. The number of hologram points equals the resolution H × W of the SLM, where H and W are the height and width of the display resolution, respectively. Because the computational complexity of Eq. (1) is O(NHW), the CGH calculation becomes prohibitively expensive.
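To make the cost of Eq. (1) concrete, the following CUDA kernel is a minimal sketch of its direct evaluation, with one thread per hologram pixel. The ObjPoint structure, the pixel pitch, and the parameter names are assumptions introduced for illustration; this is not the authors' actual implementation, which may use additional optimizations such as look-up tables [22].

// Minimal CUDA sketch of Eq. (1): one thread computes the intensity of one
// hologram pixel by summing the contributions of all N object points.
// ObjPoint, pitch, and wavelen are illustrative assumptions; object
// coordinates are assumed to share the hologram's coordinate system and units.
#include <cuda_runtime.h>

#define PI_F 3.14159265f

struct ObjPoint { float x, y, z, a; };   // (x_j, y_j, z_j) and intensity A_j

__global__ void cgh_kernel(float *hologram, const ObjPoint *pts, int n_pts,
                           int width, int height, float pitch, float wavelen)
{
    int xh = blockIdx.x * blockDim.x + threadIdx.x;   // hologram pixel indices
    int yh = blockIdx.y * blockDim.y + threadIdx.y;
    if (xh >= width || yh >= height) return;

    float px = xh * pitch;                // hologram-plane coordinates
    float py = yh * pitch;
    float sum = 0.0f;
    for (int j = 0; j < n_pts; ++j) {     // O(N) work per pixel, O(NHW) in total
        float dx = px - pts[j].x;
        float dy = py - pts[j].y;
        float phase = (2.0f * PI_F / wavelen) * (dx * dx + dy * dy) / (2.0f * pts[j].z);
        sum += pts[j].a * cosf(phase);
    }
    hologram[yh * width + xh] = sum;      // I(x_h, y_h, 0) of Eq. (1)
}

For a 1,920 × 1,024-pixel hologram, such a kernel would typically be launched on a two-dimensional grid of, for example, 16 × 16 thread blocks covering every pixel.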

Fig. 1 Outline of the proposed method.

In the present paper, we apply a space-division multiplexing technique based on the serial number of each object point of the original 3-D object. Figure 2 illustrates the dividing technique when the number of divisions is three. The dividing technique proceeds as follows.

  • Step 1. All object points on the original 3-D object are serially numbered as P1, P2, P3, P4, . . . .
  • Step 2. In the original data file shown in Fig. 2, the coordinate data of the object points are listed in ascending order of serial object point number.
  • Step 3. As shown in Fig. 2, the listed coordinate data in the original data file are divided into three sub-files (Files 1–3) in ascending order of serial object point number; the sub-file is determined by the remainder after dividing the serial object point number by 3.
Because all object points of the original 3-D object are numbered serially as P1, P2, P3, P4, . . . in Step 1, the object points stored in each sub-file are distributed evenly over the whole area of the original 3-D object. The coordinate data of the divided part Div 1 shown in Fig. 1 are stored in File 1. Similarly, the coordinate data of the divided parts Div 2 and Div 3 are stored in Files 2 and 3, respectively.
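A host-side sketch of Steps 1–3 is given below; the sub-file names and the point record format are illustrative assumptions rather than the authors' actual file layout.

// Host-side sketch of Steps 1-3: points kept in their original serial order
// are dealt round-robin into n_div sub-files, so every sub-file samples the
// whole object evenly. File names and the record format are assumptions.
#include <cstdio>
#include <vector>

struct ObjPoint { float x, y, z, a; };

void divide_points(const std::vector<ObjPoint> &points, int n_div)
{
    std::vector<std::FILE *> files(n_div);
    for (int d = 0; d < n_div; ++d) {
        char name[32];
        std::snprintf(name, sizeof(name), "file_%d.dat", d + 1);   // File 1 .. File n_div
        files[d] = std::fopen(name, "w");
    }
    for (std::size_t i = 0; i < points.size(); ++i) {
        int d = static_cast<int>(i % n_div);   // remainder of the serial number selects the sub-file
        std::fprintf(files[d], "%f %f %f %f\n",
                     points[i].x, points[i].y, points[i].z, points[i].a);
    }
    for (std::FILE *f : files) std::fclose(f);
}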

Fig. 2 Spatial division of an original 3-D object.

In the proposed method, each frame is constructed from one of these three sub-files (Files 1–3). That is, Files 1, 2, 3, 1, 2, . . . are used to construct Frames 1′, 2′, 3′, 4′, 5′, . . ., respectively. The coordinate data files (File 1 for Frame 1′, File 2 for Frame 2′, File 3 for Frame 3′, File 1 for Frame 4′, File 2 for Frame 5′, and so on) are prepared in advance.
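The per-frame selection then reduces to a round-robin index over the prepared sub-files, as in the following sketch; load_points() and compute_and_display_cgh() are stub placeholders standing in for the file loader and the single-GPU CGH pipeline, not the authors' code.

// Sketch of the round-robin frame scheduling: Frame 1' uses File 1, Frame 2'
// uses File 2, ..., and the cycle repeats every n_div frames.
#include <string>
#include <vector>

struct ObjPoint { float x, y, z, a; };

static std::vector<ObjPoint> load_points(const std::string &path)
{
    return {};                               // stub: parse "path" into object points
}

static void compute_and_display_cgh(const std::vector<ObjPoint> &part)
{
    // stub: compute the CGH of "part" on the GPU and show it on the SLM
}

static void playback(int n_div, int n_frames)
{
    std::vector<std::vector<ObjPoint>> parts(n_div);
    for (int d = 0; d < n_div; ++d)          // pre-load File 1 .. File n_div once
        parts[d] = load_points("file_" + std::to_string(d + 1) + ".dat");

    for (int frame = 0; frame < n_frames; ++frame)
        compute_and_display_cgh(parts[frame % n_div]);   // frame 0 is Frame 1' (File 1), frame 1 is Frame 2' (File 2), ...
}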

We previously attempted two algorithms in which the object points are listed in ascending order of coordinate value at each frame [23, 24]. However, the image quality of the movies reconstructed with those algorithms is slightly worse than with the proposed algorithm, because not every object point is lit periodically.

3. Experimental results and discussion

The PC used in this study was equipped with an Intel Core i7-3770 CPU (3.4 GHz, quad-core), running Linux (CentOS 6.4, x86_64) with the Intel C++ Compiler XE 13.1. We used an NVIDIA GeForce GTX TITAN as the GPU and CUDA 5.0 as the software development kit for GPU programming [25]. The spatial light modulator was a liquid crystal display (LCD) panel (L3C07U series) extracted from a projector (Epson EMP-TW1000). The LCD panel has a pixel pitch of 8.5 μm, a resolution of 1,920 × 1,080, and a size of 16 mm × 9 mm. The viewing-zone angle is approximately 4°. The distance between the CGH and the 3-D object was 2.0 m.

Using our single-GPU system, we calculated a 1,920 × 1,024-pixel CGH; this CGH resolution determines the maximum computational performance obtainable from the GPU. The performance of the proposed method is presented in Tables 1 and 2. The original 3-D objects were “Dinosaur” and “Chess”. Table 1 shows the results for “Dinosaur”, composed of 11,646 points, while Table 2 shows those for “Chess”, composed of 44,647 points.

Table 1. Calculation time of a computer-generated hologram (Dinosaur).

Table 2. Calculation time of a computer-generated hologram (Chess).

As evident in Table 1, the proposed method with three or more divisions achieved over 30 frames per second (fps). This frame rate is sufficient to realize real-time electroholography. For the “Chess” object, the proposed method with three or more divisions achieved around 10 fps (Table 2).

We now investigate the efficiency of the proposed method, using “Dinosaur” as the original 3-D object. To this end, we compare two reconstructed movies, as shown in Fig. 3. Media 1 is the 3-D movie reconstructed by the proposed method with three divisions of “Dinosaur”. Media 2 is identical except that a long time interval (approximately 3 seconds) is inserted between the frames of Media 1. The reference light was a laser with a wavelength of 532 nm. The strong light spots at the bases of the figures are areas of direct light. In Fig. 3, the number of object points in Media 1 appears larger than in Media 2, because persistence of vision merges the divided parts.

Fig. 3 Comparison of two reconstructed movies of the original 3-D object “Dinosaur”. Snapshots of (a) Media 1 and (b) Media 2 with a time interval of three seconds (Media 1).

We then compared the movies reconstructed by the proposed method with that of the original 3-D object, as shown in Fig. 4. Again, the original 3-D object was “Dinosaur”. As shown in Figs. 4(b) and 4(c), the resolution is very similar to that of the reconstructed movie of the original 3-D object (Fig. 4(a)). However, Fig. 4(d) is darker and coarser than Figs. 4(b) and 4(c).

Fig. 4 Movies reconstructed by the proposed method: (a) movie of the original 3-D object “Dinosaur”, (b)–(d) movies when the original 3-D object “Dinosaur” is divided into two, three, and four parts, respectively (Media 2).

Finally, we investigate the 3-D movie reconstructed by the proposed method using “Chess” as the original 3-D object. Comparisons between the movies reconstructed by the proposed method and a reconstructed movie of the original 3-D object are presented in Fig. 5. “Chess” is composed of approximately four times as many object points as “Dinosaur”. As evident in Table 2, the proposed method cannot achieve real-time electroholography of “Chess” because the frame rate of the reconstructed 3-D movie is less than 30 fps. However, the resolution of Figs. 5(b) and 5(c) is very similar to that of the reconstructed movie of the original 3-D object (Fig. 5(a)), whereas Fig. 5(d) presents a coarser image. We consider the proposed method to be effective even for original 3-D objects comprising around 45,000 object points.

Fig. 5 Movies reconstructed by the proposed method: (a) movie of the original 3-D object “Chess”, (b)–(d) movies when the original 3-D object “Chess” is divided into two, three, and four parts, respectively (Media 3).

4. Conclusion

We proposed spatiotemporal division multiplexing electroholography that utilizes the features of movies. The proposed method is very simple and realizes real-time electroholography with a single GPU. The method produced a real-time reconstructed movie of a 3-D object composed of 11,646 points at a frame rate exceeding 30 frames per second (fps). We also demonstrated a reconstructed movie of a 3-D object comprising 44,647 points at about 10 fps.

Acknowledgments

The present research was supported in part by the Japan Society for the Promotion of Science (JSPS) through a Grant-in-Aid for Scientific Research (C), 24500071, and a Grant-in-Aid for Scientific Research (A), 25240015, and by the Kayamori Foundation of Informational Science Advancement.

References and links

1. P. S. Hilaire, S. A. Benton, M. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa, and J. Underkoffler, “Electronic display system for computational holography,” Proc. SPIE 1212, 174–182 (1990).

2. N. Masuda, T. Ito, T. Tanaka, A. Shiraki, and T. Sugie, “Computer generated holography using a graphics processing unit,” Opt. Express 14, 603–608 (2006).

3. A. Shiraki, N. Takada, M. Niwa, Y. Ichihashi, T. Shimobaba, N. Masuda, and T. Ito, “Simplified electroholographic color reconstruction system using graphics processing unit and liquid crystal display projector,” Opt. Express 17, 16038–16045 (2009).

4. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48, H48–H53 (2009).

5. H. Nakayama, N. Takada, Y. Ichihashi, S. Awazu, T. Shimobaba, N. Masuda, and T. Ito, “Real-time color electroholography using multi graphics processing units,” Appl. Opt. 47, 5784–5789 (2009).

6. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express 20, 21645–21655 (2012).

7. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51, 7303–7307 (2012).

8. Y. Pan, X. Xu, S. Solanki, and X. Liang, “Fast distributed large-pixel-count hologram computation using a GPU cluster,” Appl. Opt. 52, 6562–6571 (2013).

9. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34, 3133–3135 (2009).

10. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20, 4018–4023 (2012).

11. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express 19, 15205–15211 (2011).

12. T. Shimobaba and T. Ito, “A color holographic reconstruction system by time division multiplexing method with reference lights of laser,” Opt. Rev. 10, 339–341 (2003).

13. T. Shimobaba, A. Shiraki, N. Masuda, and T. Ito, “An electroholographic colour reconstruction by time division switching of reference lights,” J. Opt. A: Pure Appl. Opt. 9, 757–760 (2007).

14. T. Shimobaba, A. Shiraki, Y. Ichihashi, N. Masuda, and T. Ito, “Interactive color electroholography using the FPGA technology and time division switching method,” IEICE Electron. Express 5, 271–277 (2008).

15. M. Oikawa, T. Shimobaba, T. Yoda, H. Nakayama, A. Shiraki, N. Masuda, and T. Ito, “Time-division color electroholography using one-chip RGB LED and synchronizing controller,” Opt. Express 19, 12008–12013 (2011).

16. H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, and T. Senoh, “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4, 6177 (2014).

17. L. Golan and S. Shoham, “Speckle elimination using shift-averaging in high-rate holographic projection,” Opt. Express 17, 1330–1339 (2009).

18. Y. Takaki and M. Yokouchi, “Speckle-free and grayscale hologram reconstruction using time-multiplexing technique,” Opt. Express 19, 7567–7579 (2011).

19. M. Makowski, “Minimized speckle noise in lens-less holographic projection by pixel separation,” Opt. Express 21, 29205–29216 (2013).

20. M. Tanaka, K. Nitta, and O. Matoba, “Wide-angle wavefront reconstruction near display plane in three-dimensional display system,” J. Disp. Technol. 6, 517–521 (2010).

21. T. Kozacki, G. Finke, P. Garbat, W. Zaperty, and M. Kujawińska, “Wide angle holographic display system with spatiotemporal multiplexing,” Opt. Express 20, 27473–27481 (2012).

22. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2, 28–34 (1993).

23. H. Niwase, H. Araki, N. Takada, H. Nakayama, A. Sugiyama, T. Kakue, T. Shimobaba, and T. Ito, “Time-division electroholography of the three-dimensional object,” Proc. of Three Dimensional Systems and Applications 2013, P4-2, 1–3 (2013).

24. H. Niwase, H. Araki, N. Takada, H. Nakayama, A. Sugiyama, T. Kakue, T. Shimobaba, and T. Ito, “One-colored time-division electroholography using a NVIDIA GeForce GTX TITAN,” Proc. of 20th International Display Workshop, 1098–1101 (2013).

25. NVIDIA, “NVIDIA CUDA C Programming Guide ver. 5.0” (NVIDIA, 2012).

Supplementary Material (3)

Media 1: MP4 (6467 KB)     
Media 2: MP4 (13334 KB)     
Media 3: MP4 (13185 KB)     
