
360-degree color hologram generation for real 3D objects

Open Access

Abstract

Recently, holographic displays and computer-generated holograms calculated from real existing objects have been actively investigated to support holographic video applications. In this paper, we propose a method of efficiently generating 360-degree color holograms of real 3D objects. 360-degree 3D images are generated using an actual 3D image acquisition system, consisting of a depth camera and a turntable, together with intermediate view generation. Then, 360-degree color holograms are calculated using a viewing-window-based computer-generated hologram method. We confirmed that floating 3D objects are faithfully reconstructed over the full 360-degree direction using our 360-degree tabletop color holographic display.

© 2017 Optical Society of America

1. INTRODUCTION

Thanks to recent developments in electronic technologies, electronic holography has been actively investigated. As one of the emerging technologies for electronic holography, a holographic display can provide all four visual mechanisms of depth perception: binocular disparity, motion parallax, accommodation, and convergence. Therefore, 3D objects can be viewed without wearing any special glasses, and no visual fatigue is caused to an observer’s eyes. Moreover, there is a strong aspiration to realize the holography technology depicted in movies such as Star Wars. However, due to current technical limits, the state of the art in holographic displays falls far short of this expectation. For example, a holographic display provides an extremely limited viewing angle of only a few degrees because of the spatial light modulator (SLM), which is its principal component. To deal with these issues, several holographic displays using multiple SLMs [1–3] were proposed that can increase the viewing angle beyond 20 degrees. To come closer to meeting people’s expectations, numerous 360-degree holographic 3D video displays [4–7] have been proposed. Yoshida [4] demonstrated a super multi-view type of 360-degree tabletop 3D display based on spatial multiplexing techniques. It was configured with circularly arranged projectors and a conical anisotropic rear-projection screen. For the circularly arranged projectors, 288 multi-perspective images with a resolution of 400×400 were used to reconstruct 360-degree 3D images. As a result, it can display full-color, 5-cm-tall and 10-cm-wide 3D characters rendered from 3D models. Inoue and Takaki [5] proposed a monochromatic 360-degree holographic 3D video display based on temporal multiplexing techniques. It was composed of a microelectromechanical system SLM, a magnifying imaging system, and a rotating screen. While the screen lens rotates, 800 reduced and localized viewing zones are correspondingly located along a circle. The 3D objects used for the experiments were a 3D symbol and a 3D model composed of two airplanes, and their reconstructed size measured on the screen was 81.0 mm × 60.5 mm. We also proposed 360-degree holographic 3D video displays [6,7]. They exploited the fast operating speed of a digital micromirror device (DMD) to apply temporal multiplexing for horizontal 360-degree viewing with 1024 different perspectives. Owing to the specially designed optical path for temporal multiplexing and the image floating optics, they reconstructed a floating 3D image or 3D video over the top center surface of the displays. However, previous works [4–7] have not yet demonstrated reconstructions of 360-degree 3D images generated from real existing objects.

Regarding hologram generation for the holographic display, a computer-generated hologram (CGH) is widely used. CGH is the method of digitally generating holographic interference patterns using numerical methods that simulate the physical process of optically recording and reconstructing a real hologram. In CGH, the light propagation from the 3D object’s surface to the hologram plane needs to be calculated to generate a hologram. To this end, a majority of previous CGH methods used a 3D computer graphics model, from which the accurate 3D information of a target 3D object is easily extracted. For example, Zhao et al. [8] presented layer-oriented CGH based on an angular spectrum algorithm. In this method, 3D objects are divided into multiple layers according to their depth information. The angular spectrum algorithm was used to calculate wave propagation from each layer to the hologram. By adding all the layers’ wavefront distributions, a hologram pattern is generated. For an accurate depth cue, 3D models with accurate depth information were used, as well as computed tomography images. Zhang et al. [9] proposed a layer-based CGH algorithm with an occlusion effect. 3D objects are sliced into multiple layers with slab-based orthographic projection. A hologram pattern with an occlusion effect is generated by the layer-based angular spectrum algorithm and silhouette mask culling. For the verification, a 3D model was used, as in [8]. Recently, many off-the-shelf and prototype depth cameras, including low-end products such as the Kinect [10], have become available. Therefore, CGH calculated from real existing objects is being investigated more actively to support holographic video applications [11–16]. Most CGH methods based on real existing objects generate holograms from the video plus depth format, consisting of a regular 2D color image and its associated 8-bit depth map image, thanks to technical advances in depth cameras such as the Kinect [11] and Axi-Vision [12]. Besides the video plus depth format, stereoscopic images [13,14] and multi-view images [15,16] are used to generate holograms of real existing objects. Kim et al. [13] presented a CGH method using stereoscopic images of a real 3D object. The stereoscopic images were captured by a 3D camera, and depth information was calculated from them using stereo matching. Using the color and the estimated depth information, the hologram was computationally generated. Ding et al. [14] proposed circular-view CGH using a stereo image pair of a real-world scene. Depth information was extracted by stereo matching and then scaled based on an estimated dimension ratio. A 3D point cloud was divided into several depth layers, and each layer was propagated using the wavefront recording plane method for fast calculation. A two-stage occlusion culling process was adopted to support the occlusion effect. Also, the impact of the accuracy of the depth information on the CGH was analyzed. We [15] proposed a CGH method for a natural 3D scene from multi-view images. Multi-view images were captured using a multi-view camera system consisting of five color cameras and one depth camera. After applying camera calibration, color correction, depth estimation, and 3D point unification, a 3D point source set describing the captured 3D scene was generated. A hologram supporting motion-parallax viewing was synthesized from this unified 3D point source set using the point-based CGH method.
Takaki and Ikeda [16] proposed a method of synthesizing computer-generated holographic stereograms from multi-view images of real objects. To capture the multi-view images, a digital camera mounted on a translation stage was used. The camera moved both horizontally and vertically to acquire 10×5 parallax images. A wavefront was calculated whose phase is the quadric phase distribution of a spherical wave converging to the corresponding viewpoint. An object wave was obtained by summing the wavefronts calculated for all multi-view images, and holograms were generated by adding a reference wave to the object wave. Generating a 360-degree 3D hologram of real existing objects using the conventional approaches [11–16] would require very complex hardware composed of many cameras or a number of 3D camera sets. Such an approach increases cost and hardware complexity. Also, in terms of software processing, it inevitably requires delicate processes such as calibration and stereo matching. To avoid these problems, a single camera or camera set can be used with a translation stage. However, this kind of approach requires multiple measurements that repeatedly perform “capture-move-stop” operations, which is tedious and time-consuming, as pointed out in previous work [16]. Therefore, easy and fast generation of 360-degree 3D information for real 3D objects and fast hologram calculation [8,14,17] are crucial issues. To make holographic video applications feasible, real-time computation has the highest priority over any other factor.

In this paper, we investigated the problem of generating 360-degree color holograms of real 3D objects in an efficient manner. First, a 3D image acquisition system composed of a depth camera and a turntable is used to acquire a small number of view images covering the 360-degree direction around the target 3D objects, at low cost and with a short acquisition time. Then, intermediate view generation (IVG) is applied to synthesize the view images that are not captured by the 3D image acquisition system, enabling fast calculation of the 360-degree 3D information. Finally, the 360-degree color holograms representing the target 3D objects are generated from this 3D information using the viewing-window (VW)-based CGH method [18]. To show the validity of the proposed method, optical reconstructions using our 360-degree tabletop color holographic display are conducted. This paper is organized as follows. In Section 2, the proposed 360-degree 3D information and color hologram generation method is described. Experimental results are given in Section 3, and we give concluding remarks and recommendations for further research in Section 4.

2. 360-DEGREE 3D INFORMATION AND COLOR HOLOGRAM GENERATION

A. 360-Degree 3D Information Generation

1. 3D Information Acquisition

There are various 360-degree 3D reconstruction methods. Recently, 3D scan methods using the Kinect have been actively investigated [19,20]. Cui and Stricker [19] demonstrated that a 3D mesh model of a human body can be reconstructed from 360 images within about 5 min. Mao et al. [20] demonstrated that a 3D avatar model can be reconstructed from 18 view images taken at six different viewpoints within around 10 min. These approaches can provide 3D models of the human body with satisfactory quality. However, they involve time-consuming processes, such as the iterative closest point (ICP) algorithm [21], and therefore require between 5 and 10 min of processing. Furthermore, if these previous methods were used for our 360-degree 3D information generation, additional computation time would be needed: since we use a point-based CGH method [18] for hologram generation, more than a thousand projections of the 3D model would be required to calculate the 3D information of the 1024 view images.

Therefore, we need to develop an efficient method of 360-degree 3D information generation targeted at our 360-degree tabletop color holographic display, as well as general 360-degree holographic 3D displays. To acquire 3D images for efficiently extracting the 3D information of a target object, we use a 3D image acquisition system consisting of one depth camera and a turntable, as depicted in Fig. 1. For the depth camera, a Kinect v2 is used. For the turntable, either a Crayfish 50 or a ComXim MT380WL120H is used, depending on the target object’s characteristics, such as its weight. Using the turntable, we can capture multi-view images without stopping. A distance of more than 60 cm between the depth camera and the target object is recommended to obtain stable depth information. To reflect the characteristics of our 360-degree tabletop color holographic display, which is designed to be looked down at, the depth camera looks down on the target object at about 45 degrees. The maximum volume of the target object is considered to be about 50 cm³. Prior to capturing, the center of the target object needs to be aligned with the rotation axis so that it is correctly reconstructed on the 360-degree tabletop color holographic display.

Fig. 1. 3D image acquisition system.

Besides the resolution, the positions and fields of view of the color sensor and the depth sensor of the Kinect v2 are different. Therefore, the regions covered by their captured images differ, as depicted in Fig. 2(a). Due to this inconsistency, registration between the color image and the depth map image is an essential process. To establish a 1:1 correspondence, the color image needs to be aligned to the depth map image, as in Fig. 2(b). Moreover, owing to the relatively long distance between the depth camera and the target object, a larger area than the intended region of interest (ROI) is captured. To resolve this problem, 3D image capture software was implemented that provides the user with interactive operability through an OpenCV-based widget. The user can adjust the minimum and maximum depth distances and the ROI of the 3D images in the acquisition step, as depicted in Fig. 3. After the depth range and ROI selection are applied, the resolution of the 2D color image and its depth map image representing the region designated by the user is up-scaled to that of the SLM of our 360-degree tabletop color holographic display. During one turn of the table, the depth camera captures hundreds of view images in the RGB color plus depth format, each consisting of a regular 2D color image and its associated 8-bit depth map image. The number of captured view images depends on the rotation speed of the turntable, the performance of the processing terminal, etc.
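As an illustration of the depth-range adjustment and ROI selection described above, the following sketch crops a registered color image and its 8-bit depth map to a user-chosen ROI, suppresses pixels outside the selected depth range, and up-scales the result to the SLM resolution. It is a minimal sketch assuming NumPy arrays and OpenCV for resizing; the function name, the 0–255 depth convention, and the parameter names are illustrative, not the authors’ implementation.

```python
import numpy as np
import cv2  # used here only for resizing to the SLM resolution

def crop_and_rescale(color, depth, min_d, max_d, roi, slm_size=(768, 768)):
    """Apply ROI cropping, depth-range masking, and up-scaling to the SLM resolution.

    color    : H x W x 3 uint8 color image, already registered to the depth map
    depth    : H x W     uint8 depth map (illustrative 8-bit convention)
    min_d    : lower bound of the user-selected depth range (same units as `depth`)
    max_d    : upper bound of the user-selected depth range
    roi      : (x, y, w, h) region of interest selected by the user
    slm_size : target (width, height), here the 768 x 768 used for System 2
    """
    x, y, w, h = roi
    color_roi = color[y:y + h, x:x + w].copy()
    depth_roi = depth[y:y + h, x:x + w].copy()

    # Treat everything outside the selected depth range as background.
    mask = (depth_roi >= min_d) & (depth_roi <= max_d)
    color_roi[~mask] = 0
    depth_roi[~mask] = 0

    # Up-scale both images to the SLM resolution of the display.
    color_out = cv2.resize(color_roi, slm_size, interpolation=cv2.INTER_LINEAR)
    depth_out = cv2.resize(depth_roi, slm_size, interpolation=cv2.INTER_NEAREST)
    return color_out, depth_out
```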

Fig. 2. Captured color and depth map images (a) before calibration and registration and (b) after calibration and registration.

Fig. 3. 3D image capture software (a) before depth range adjustment and ROI selection and (b) after depth range adjustment and ROI selection.

2. Intermediate View Generation

After capture, the view images are processed by IVG to produce 1024 equally spaced view images of the target object, meeting the specifications of our 360-degree tabletop color holographic display [7]. During the IVG process, a suitable number of virtual view images is generated, as shown in Fig. 4. When the total number of captured view images is N_C, the number of virtual view images N_V between each pair of captured neighboring view images is determined by Eq. (1):

$$N_V = \frac{1{,}024}{N_C}. \tag{1}$$

The IVG, composed of 3D warping, backward projection, and hole filling, generates each virtual view image in the RGB color plus depth format using neighboring captured view images as references. 3D warping is the process of calculating the actual world-coordinate information by projecting each pixel of a given view image into 3D space using its corresponding depth map image and the camera’s intrinsic and extrinsic parameters:

$$\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = R^{T} K^{-1} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} - R^{T} t, \tag{2}$$
$$Z(i,j) = \frac{1.0}{\dfrac{D(i,j)}{255.0} \times \left( \dfrac{1.0}{MIN_Z} - \dfrac{1.0}{MAX_Z} \right) + \dfrac{1.0}{MAX_Z}}. \tag{3}$$

Equations (2) and (3) convert 2D image coordinates into 3D world coordinates. The parameters x and y are the 2D image coordinates, and X, Y, and Z are the 3D world coordinates. K is the matrix of the camera’s intrinsic parameters, and R and t are the camera’s extrinsic parameters: R denotes a rotation matrix, and t represents a translation vector. D(i,j) is the value of the depth map image at pixel (i,j), Z(i,j) is the corresponding depth distance, and MIN_Z and MAX_Z represent the minimum and maximum depth distances determined at the time of acquisition, respectively, as depicted in Fig. 3(b).
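For concreteness, a minimal NumPy sketch of Eqs. (2) and (3) follows: the depth map value is first converted to a metric distance by Eq. (3), and that distance is then used to scale the back-projected ray before the extrinsic transform is undone. The explicit depth scaling and all names are illustrative assumptions layered on Eq. (2), not the authors’ code.

```python
import numpy as np

def depth_to_distance(D, min_z, max_z):
    """Eq. (3): convert an 8-bit depth map value D(i, j) into a metric distance Z(i, j)."""
    return 1.0 / (D / 255.0 * (1.0 / min_z - 1.0 / max_z) + 1.0 / max_z)

def pixel_to_world(x, y, Z, K, R, t):
    """Eq. (2): map image coordinates (x, y) with distance Z to 3D world coordinates.

    K is the 3x3 intrinsic matrix, R the 3x3 rotation matrix, t the translation 3-vector.
    """
    ray = np.linalg.inv(K) @ np.array([x, y, 1.0])  # back-project the pixel through the intrinsics
    return R.T @ (ray * Z) - R.T @ t                # scale by the depth and undo the extrinsics

# Illustrative use with an identity camera: the world point lies Z units along the optical axis.
K = np.eye(3); R = np.eye(3); t = np.zeros(3)
Z = depth_to_distance(128.0, 600.0, 1200.0)         # mid-range 8-bit value, assumed depth range in mm
print(pixel_to_world(0.0, 0.0, Z, K, R, t))
```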

Fig. 4. Intermediate view generation.

Backward projection generates the virtual view image by back-projecting the 3D world-coordinate information to the desired virtual camera position according to Eqs. (4)–(8). K_V, R_V, and t_V are the intrinsic and extrinsic parameters of the desired virtual camera. The camera parameters of the virtual view image are determined by linearly interpolating those of the captured neighboring view images, i.e., the #(N)-th and #(N+1)-th view images. In Eq. (8), virtual_view_index represents the index of the virtual view image to be generated, and its value is between 1 and N_V. After applying 3D warping and backward projection, holes remain in the virtual view image because of occlusions: since the information for the occluded regions is not present in the captured view images, these regions appear as holes in the virtual view. There are many algorithms to fill these holes, using an edge-dependent Gaussian filter and interpolation [22], graph-based interpolation [23], depth-adaptive hierarchical hole filling [24], a modified background-oriented rectangle method [25], and so on. We adopted a simple hole-filling algorithm [26] to reduce computational complexity. In our proposed method, holes are filled with the background pixel value by comparing the depth values of their neighboring regions, under the assumption that adjacent view images are taken sufficiently close together:

$$\begin{bmatrix} x_v \\ y_v \\ 1 \end{bmatrix} = K_V \left[ R_V \,|\, t_V \right] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \tag{4}$$
$$K_V = (1 - Ratio) \times K_L + Ratio \times K_R, \tag{5}$$
$$R_V = (1 - Ratio) \times R_L + Ratio \times R_R, \tag{6}$$
$$t_V = (1 - Ratio) \times t_L + Ratio \times t_R, \tag{7}$$
$$Ratio = \frac{virtual\_view\_index}{N_V + 1}. \tag{8}$$
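The backward-projection step of Eqs. (4)–(8) can be sketched as below: the virtual camera is obtained by linear interpolation between the two captured reference cameras, and each warped world point is then projected into the virtual view. This is a per-point sketch with illustrative names; a real IVG implementation would also resolve depth conflicts (z-buffering) and leave holes for the subsequent hole-filling step.

```python
import numpy as np

def interpolate_virtual_camera(K_L, K_R, R_L, R_R, t_L, t_R, virtual_view_index, N_V):
    """Eqs. (5)-(8): linearly interpolate the virtual camera between the left and right views."""
    ratio = virtual_view_index / (N_V + 1.0)
    K_V = (1 - ratio) * K_L + ratio * K_R
    R_V = (1 - ratio) * R_L + ratio * R_R
    t_V = (1 - ratio) * t_L + ratio * t_R
    return K_V, R_V, t_V

def project_to_virtual_view(P_world, K_V, R_V, t_V):
    """Eq. (4): project a 3D world point into the virtual camera's image plane."""
    p = K_V @ (R_V @ P_world + t_V)  # K_V [R_V | t_V] applied to the homogeneous point
    return p[0] / p[2], p[1] / p[2]  # perspective division to pixel coordinates (x_v, y_v)
```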

Computation of hundreds of virtual view images, including RGB color images and their corresponding depth map images, demands significantly high computational complexity. To speed up the IVG compared with our prior work [27], modifications to Eqs. (5) and (6) are applied. Since we use only one depth camera, the intrinsic parameters K of the camera are the same for all the view images (K_L = K_R = K). Therefore, K_V becomes the constant matrix K, as in Eq. (9), and Eq. (5) does not need to be computed for any of the virtual cameras. As the adjacent viewpoint positions become closer to each other, the angle between them becomes smaller, and the rotation matrices R of the left and right view images become more similar. If the angle is negligibly small, R_L and R_R can be approximated by the same value, i.e., either one of them. Therefore, if we assume that adjacent view images are taken sufficiently close together, we can use either Eq. (10) or Eq. (11) instead of Eq. (6). Moreover, if we regard each pair-wise computation independently, we can further approximate the rotation matrix R with the identity matrix I. In summary, fast computation is possible thanks to the constant matrix K and the approximated rotation matrix R_V:

$$K_V = K, \tag{9}$$
$$R_V = R_L, \tag{10}$$
$$R_V = R_R. \tag{11}$$

To improve the computational efficiency further, IVG using a graphics processing unit (GPU) hardware acceleration algorithm based on the compute unified device architecture (CUDA) [28] is implemented. In a CPU computation environment, as in Fig. 5(a), virtual view images are generated sequentially, one by one. However, in a GPU computation environment, as depicted in Fig. 5(b), a number of virtual view images are calculated concurrently using parallel computation. In our proposed method, 20 view images are generated simultaneously using 768×768×20 threads in parallel. The number of simultaneously generated view images, 20, was determined empirically as the value that showed the best performance in our experiments.
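As a rough CPU-side analogue of this batched GPU computation (the actual implementation launches 768×768×20 CUDA threads), the sketch below applies the per-pixel depth conversion of Eq. (3) to a stack of 20 views in a single vectorized NumPy call instead of looping over views one by one. The batch size and image size are the values mentioned in the text; the depth range and the random input are placeholders.

```python
import numpy as np

BATCH, H, W = 20, 768, 768                      # 20 views of 768 x 768, as in the GPU kernel
min_z, max_z = 600.0, 1200.0                    # illustrative depth range (mm), set per object

# Placeholder 8-bit depth maps for one batch of virtual views.
depth_stack = np.random.randint(0, 256, size=(BATCH, H, W)).astype(np.float64)

# Eq. (3) evaluated for all 20 views at once; on the GPU each thread handles one pixel.
z_stack = 1.0 / (depth_stack / 255.0 * (1.0 / min_z - 1.0 / max_z) + 1.0 / max_z)
print(z_stack.shape)  # (20, 768, 768)
```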

Fig. 5. Virtual view computation in a (a) CPU and (b) GPU environment.

B. 360-Degree Color Hologram Generation

To calculate a 360-degree color hologram for our 360-degree tabletop color holographic display, a VW-based CGH method [18] is used to generate the respective red (R), green (G), and blue (B) holograms. The schematic view of the VW-based display [29] can be simplified to an SLM and a lens positioned in front of the SLM, as described in Fig. 6.

Fig. 6. Schematic diagram of a hologram generation method [18].

A hologram is generated in two steps: (i) the complex fields diffracted from multiple object planes are superposed on the VW plane, and (ii) the complex field on the hologram plane is calculated by backward propagating that of the VW plane. The light diffraction of step (i) is computed by the fast Fourier transform (FFT) based on Eq. (12) [18]. (u,v) are coordinates on the VW plane, and (x_{d_n}, y_{d_n}) are coordinates on the object plane at the n-th depth level. d_n is the distance from the hologram plane to the object plane at the n-th depth level. U(x_{d_n}, y_{d_n}) represents the complex object field at the n-th depth level, and U_VW(u,v) denotes the complex field on the VW plane. f is the focal length of the lens forming the VW. λ and k denote the wavelength and wave number of the light source, respectively. The backward propagation of step (ii) is calculated by Eq. (13) [18]. (x,y) are coordinates on the hologram plane, and U_H(x,y) is the complex field on the hologram plane. To separate the real image from the conjugate image, an off-axis hologram is widely used in general. However, in our case, finding off-axis values for the 1024 perspective holograms would be very tedious and time consuming. Therefore, we need to remove the conjugate images of the amplitude hologram by means other than the off-axis configuration. In our proposed method, we remove the conjugate images of the amplitude hologram by making U_VW(u,v), which has a spatial frequency distribution equivalent to that of U_H(x,y), a band-limited signal. We assume that the object has a random phase distribution, so the frequency distribution of U_VW(u,v) spreads over almost the entire bandwidth. Therefore, in our proposed method, half of the bandwidth of U_VW(u,v) is cut off before backward propagation, as in Fig. 6, in order to make U_VW(u,v) band limited [18]:

$$U_{VW}(u,v) = \frac{\exp\left\{ \dfrac{jk}{2(f-d_n)} (u^2+v^2) \right\}}{i\lambda (f-d_n)} \frac{f}{f-d_n} \iint U(x_{d_n}, y_{d_n}) \exp\left\{ -\dfrac{j2\pi}{\lambda (f-d_n)} \left( x_{d_n} u + y_{d_n} v \right) \right\} dx_{d_n}\, dy_{d_n}, \tag{12}$$
$$U_H(x,y) = \frac{j}{\lambda f} \iint U_{VW}(u,v) \exp\left\{ \dfrac{jk}{2f} (u^2+v^2) \right\} \exp\left\{ \dfrac{j2\pi}{\lambda f} (xu+yv) \right\} du\, dv. \tag{13}$$
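A minimal single-FFT sketch of the two propagation steps is given below. It only illustrates the structure of Eqs. (12) and (13): a quadratic phase factor followed by a (inverse) Fourier transform per step, with the half-bandwidth cut applied on the VW plane before backward propagation. Constant prefactors, sampling-pitch scaling, and sign conventions are omitted and would have to be matched to the display geometry; all names and parameter values are illustrative.

```python
import numpy as np

def to_vw_plane(U_obj, d_n, f, wavelength, pitch):
    """Step (i): diffract one object plane (distance d_n from the hologram plane)
    to the VW plane over the effective distance (f - d_n), in the spirit of Eq. (12)."""
    N = U_obj.shape[0]
    k = 2.0 * np.pi / wavelength
    x = (np.arange(N) - N / 2.0) * pitch
    X, Y = np.meshgrid(x, x)
    z = f - d_n
    field = U_obj * np.exp(1j * k / (2.0 * z) * (X**2 + Y**2))   # quadratic phase before the FFT
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

def to_hologram_plane(U_vw, f, wavelength, pitch):
    """Step (ii): backward-propagate the band-limited VW field to the hologram plane,
    in the spirit of Eq. (13)."""
    N = U_vw.shape[0]
    k = 2.0 * np.pi / wavelength
    u = (np.arange(N) - N / 2.0) * pitch
    U, V = np.meshgrid(u, u)
    field = U_vw * np.exp(1j * k / (2.0 * f) * (U**2 + V**2))
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

# Superpose the VW fields of all object planes, cut half of the bandwidth to suppress
# the conjugate image, then propagate back to the hologram plane.
N, pitch, f, wavelength = 768, 8e-6, 0.5, 532e-9                 # illustrative parameters
layers = [(np.random.rand(N, N) * np.exp(2j * np.pi * np.random.rand(N, N)), 0.05),
          (np.random.rand(N, N) * np.exp(2j * np.pi * np.random.rand(N, N)), 0.08)]
U_vw = sum(to_vw_plane(U, d_n, f, wavelength, pitch) for U, d_n in layers)
U_vw[N // 2:, :] = 0.0                                           # half-bandwidth cut (one choice of half)
U_H = to_hologram_plane(U_vw, f, wavelength, pitch)
```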

There are remaining issues to be handled during hologram generation. First, we used a fixed SLM and rotating relaying mirror optics [6,7] to provide horizontal 360-degree viewing of a floating 3D image. This optical design makes the VW, formed along a virtual circle, rotate [30]. Therefore, the viewer sees a rotated 3D image if this issue is not properly handled. To compensate for the VW rotation, we adopt a simple 2D rotation applicable to either the 3D information or the hologram. Applying the 2D rotation to the hologram may cause artifacts due to the interpolation process, so in our proposed method the compensation is performed on the 3D information. The 3D information, composed of 1024 perspectives in the RGB color plus depth format, is rotated by a certain angle in the inverse direction according to each perspective's position along the virtual circle before the VW-based CGH method is applied. To make this simple approach feasible, we use a square region of the SLM that is almost the same as the overlapping region among the rotating SLM positions; the exact overlapping region is the inscribed circle of the rectangles of the rotating SLM [30]. This compensation causes some resolution loss in the hologram, but the loss is not critical because the target object is generally positioned in the central part of the SLM. Second, to provide 360-degree viewing, our 360-degree tabletop color holographic display uses the time multiplexing method by exploiting the high-speed operation of the DMD at its maximum speed of tens of thousands of switches per second. Therefore, a binary hologram needs to be used, and the complex field on the hologram plane U_H(x,y) needs to be binarized. In our proposed method, to binarize the amplitude values of the complex field U_H(x,y), a global threshold method is used for fast computation. The threshold value is determined from the median of the given amplitude values plus a heuristically chosen offset value. Finally, our 360-degree tabletop color holographic display uses three DMDs for the respective color channels, R, G, and B. Almost all of the DMD's high-speed operation budget is used for the 360-degree viewing, so we cannot perform color reconstruction in a time-multiplexed manner. Even though the three DMDs are precisely aligned during the optical setup, there may be mismatch among the R/G/B color hologram planes. Therefore, registration of the R/G/B color hologram planes is necessary in a signal-processing manner after the elaborate optical setup. To this end, we extracted R/G/B spatial shift values based on a pre-defined target distance.
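Two of these post-processing steps are simple enough to sketch directly: the inverse rotation of each perspective's RGB plus depth data to compensate the VW rotation, and the global-threshold binarization of the hologram amplitude. The offset value, rotation API, and function names are illustrative; the actual threshold offset is chosen heuristically as described above.

```python
import numpy as np
import cv2

def compensate_vw_rotation(image, angle_deg):
    """Rotate one RGB or depth image by -angle_deg (inverse of the perspective's
    position on the virtual circle) about the image center."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -angle_deg, 1.0)
    return cv2.warpAffine(image, M, (w, h))

def binarize_hologram(U_H, offset=0.0):
    """Binarize the amplitude of the complex hologram field with a global threshold:
    the median amplitude plus a heuristically chosen offset."""
    amplitude = np.abs(U_H)
    threshold = np.median(amplitude) + offset
    return (amplitude > threshold).astype(np.uint8)
```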

To summarize, the 360-degree color hologram generation consists of the following steps: (1) resize the 3D information in the RGB color plus depth format if needed, (2) compensate for the VW rotation by inversely rotating the given 3D information, (3) apply a random phase distribution to the given 3D information, (4) diffract each object plane to the VW plane at a distance of (f − d_n) and superpose the results, (5) cut off half of the bandwidth of U_VW(u,v), (6) diffract the complex field on the VW plane to the hologram plane at a distance of f, (7) extract the amplitude values of the complex field U_H(x,y) and binarize them based on the median and offset values, and (8) spatially shift each R/G/B color binary hologram.

3. EXPERIMENTAL RESULTS

In order to validate the proposed method for 360-degree 3D image generation, we generated 1024 view images for each of the four test sequences using our 3D image acquisition system and IVG, as shown in Fig. 7. At the time of acquisition, the center of each target object was aligned with the rotation axis, and a suitable ROI and depth range were determined.

Fig. 7. 360-degree 3D information generation results with intermediate view generation of a (a) man, (b) doll, (c) flower, and (d) robot.

After calibration and registration, depth range adjustment, ROI selection, and image resizing (up-sampling), the resolution of the 3D image becomes 768×768, equal to that of the hologram used for one of the prototype systems (System 2) of our 360-degree color holographic 3D video display [7]. During acquisition using our 3D image acquisition system, hundreds of view images are captured during one turn of rotation. As described in Section 2, the number of captured view images differs depending on the circumstances at the time of acquisition. Table 1 summarizes the acquisition results for the four test objects, including the depth range, the number of view images captured during one turn of rotation, and the processing time for capturing.

Table 1. Test Results for Acquisition of the Test Objects

The remaining hundreds of view images are then generated by IVG in order to produce 1024 equally spaced viewpoints from the pre-obtained view images. As described in Section 2, fast computation is possible with the constant matrix K and the approximated rotation matrix R_V when the angle between adjacent viewpoint positions is sufficiently small. In the case of the test object robot, this angle was 1.5 degrees at the time of acquisition, and the number of virtual view images between each pair of captured neighboring view images is 4 by Eq. (1). The IVG results without and with the approximation of rotation matrix R_V are shown in Fig. 8. When comparing Figs. 8(c) and 8(d), there is no noticeable quality degradation after the approximation. To evaluate the errors introduced by the approximation objectively, we calculated the differences between Figs. 8(c) and 8(d), as shown in Fig. 8(e). For reference, the brightness and contrast of the difference images were adjusted for visualization, and we calculated the peak signal-to-noise ratio (PSNR) of Fig. 8(d) with respect to Fig. 8(c). As a result, the virtual view images show PSNRs of 35.2 dB, 36.5 dB, 36.7 dB, and 35.6 dB, respectively. Thus, there is almost no quality degradation after the approximation, as confirmed both subjectively and objectively. To check the improvement in computational efficiency due to the approximation, we compared computation times. For this test, a custom-built PC with an Intel Core i7-7700 3.6 GHz CPU and 16 GB of RAM running a 64-bit Windows 10 operating system was used. As shown in Table 2, the approximation accelerates the computation by a factor of 5.8 on average with CPU computation.
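The PSNR figures above follow the standard definition; a minimal sketch (not necessarily the exact implementation used for Table 2) is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference view (no approximation)
    and a test view (with the rotation-matrix approximation)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```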

Fig. 8. Intermediate view generation results for a robot without or with the approximation of rotation matrix R_V (a) left image, (b) right image, (c) virtual view images without approximation, and (d) virtual view images with approximation; (e) difference images from (c) and (d).

Table 2. Test Results for Intermediate View Generation without or with Rotation Matrix Approximation

As discussed in Section 2, we implemented IVG using the GPU hardware acceleration algorithm based on CUDA in order to improve the computational efficiency further. In the case of CPU-only computation, view images are captured and virtual view images are generated sequentially, one by one, on the CPU. With GPU computation, view images are captured using the CPU, and 20 virtual view images are calculated concurrently on the GPU. To validate the computational efficiency, we compared the computation times and the CPU/GPU utilization between the CPU and GPU computations, as described in Fig. 9. A custom-built PC with an Intel Core i5-4670 3.4 GHz CPU, 16 GB of RAM, and an NVIDIA GTX 770 graphics card running a 64-bit Windows 10 operating system was the hardware used for this test. Regarding the programming environment, C++ with Visual Studio 2013 and CUDA 7.5 was used. As we can see from Fig. 9, a 3.6-times improvement in computational efficiency was achieved using GPU parallel processing compared with the CPU-only computation. The number of generated virtual view images, the processing time for IVG, and the total processing time are described in Table 3. The smaller the total number of captured view images N_C is, the larger the number of virtual view images N_V between each pair of captured neighboring view images is, as we can easily infer from Eq. (1). And the smaller the number of captured view images N_C is, the larger the total number of generated virtual view images is in general, although they are not linearly dependent. Thanks to the parallel processing algorithm, IVG requires around 40 s. The larger the number of captured view images N_C is, the longer the total processing time, including both acquisition and IVG, is, as we can see from Tables 1 and 3. Therefore, the total processing time mostly depends on the processing time for the acquisition. In terms of calculation time, sparse measurements of the 3D object are preferred, so that IVG handles the remaining view images with fast computation. However, quality degradation may appear in the virtual view images generated by IVG, compared with the captured view images. As we can see from Fig. 7(c), the generated virtual view images have some artifacts. At the cost of some increase in computational complexity, further post-processing or more elaborate hole filling could be applied to enhance the quality of the virtual view images. On the contrary, regarding reconstruction quality, dense measurements of the 3D object are preferred. With respect to the geometrical complexity of the target 3D objects, the more complex their geometry is, like the flower and the robot, the more distortion is likely to occur, as shown in Figs. 7(c) and 7(d). The total processing time for 360-degree 3D image generation configured as 1024 viewpoints is less than one and a half minutes. Our proposed method for 360-degree 3D image generation is computationally efficient compared with the previous 3D scan methods using the Kinect [19,20]. Moreover, a further computational efficiency improvement was achieved compared with our previous result [27].

Fig. 9. Computational efficiency comparison results on CPU versus GPU environment (a) computation time, and (b) utilization of CPU and GPU.

Table 3. Test Results for 360-degree 3D Information Generation

Table 4 shows the specifications of our 360-degree tabletop color holographic display used for the experiments. It is the latest prototype system, not addressed in [7]. Its basic configuration based on temporal multiplexing techniques is the same as that of the previous prototype systems, shown in Fig. 10, used in our previous work [27]. It was designed and implemented to provide an enhanced space-bandwidth product and resolution, which are the fundamental factors determining the performance of an optical system. As described in Section 2, a binary color hologram consisting of 1024 viewpoints for the reconstruction is generated using image resizing, VW rotation compensation, the VW-based CGH method, binarization, zero padding, and R/G/B color registration. As noted in Section 2, the resolution of the 3D image is 768×768 rather than 1200×1200, because the 360-degree 3D information generation was designed and implemented according to the specification of one of the prototype systems (System 2) of our 360-degree tabletop color holographic display [7]. Therefore, in the first step, image resizing was performed on the respective view images up to 1200×1200 to match the hologram resolution. Also, zero padding on both the left and right sides is conducted in order to convert the 1200×1200 hologram resolution into 1920×1200 to be compatible with the SLM resolution. The RGB color binary holograms for the 1st viewpoint of the test sequences are shown in Fig. 11. The test objects were located between 5 cm and 10 cm from the SLM.

Table 4. Specification of the 360-degree Holographic 3D Display

Fig. 10. 360-degree tabletop color holographic display [7].

Fig. 11. R/G/B color binary hologram of the (a) man, (b) doll, (c) flower, and (d) robot for the 1st view image.

In order to validate the faithful 3D reconstruction and 360-degree viewing of the proposed method, we reconstructed 360-degree 3D images. Figure 12 shows several optically reconstructed view images taken at different viewing positions on our 360-degree tabletop color holographic display. Compared with our prior results presented in [27], the quality of the reconstructed holographic 3D images is substantially improved, in addition to the improvement in computational efficiency. Moreover, the reconstructed image size is increased by a factor of 1.5.

Fig. 12. Optical reconstructions of the (a) man, (b) doll, (c) flower, and (d) robot obtained at different viewing positions.

In this paper, we demonstrated the whole process of 360-degree holographic 3D image display for real 3D objects, composed of acquisition, generation, and reconstruction. To the authors’ knowledge, there has been no previous attempt to demonstrate this whole process for real 3D objects with satisfactory quality. We addressed the fast calculation of the 360-degree 3D information generation in detail. Fast generation of the 360-degree color hologram will be developed in future work. Our goal is to improve our system to the level of commercialization, for which the 360-degree 3D information and hologram generation will need to run in real time in the near future.

4. CONCLUSIONS

In this paper, we investigated the problem of generating 360-degree color holograms of real 3D objects. To generate the 360-degree 3D images, acquisition using an actual 3D image acquisition system and IVG are performed. Then, 360-degree color holograms are calculated using a VW-based CGH method. Test results using the 360-degree tabletop color holographic display demonstrated that real 3D objects are correctly reconstructed while supporting 360-degree viewing. For future work, we will build an integrated and more sophisticated system for 3D image acquisition and improve the capturing and IVG algorithms. Furthermore, we will intensively investigate approaches for quality enhancement of the reconstructed holographic 3D images.

Funding

The Cross-Ministry Giga KOREA Project, Korea Government (MSIT) (GK17D0100, Development of Telecommunications Terminal with Digital Holographic Table-top Display).

Acknowledgment

Portions of this work were presented at Digital Holography and 3D Imaging (DH) 2017, paper W2A.28 [27]. The authors are grateful to the reviewers whose valuable suggestions and comments greatly improved the quality of this paper.

REFERENCES

1. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16, 12372–12386 (2008). [CrossRef]  

2. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19, 9147–9156 (2011). [CrossRef]  

3. M. Park, B. G. Chae, H.-E. Kim, J. Hahn, H. Kim, C. H. Park, K. Moon, and J. Kim, “Digital holographic display system with large screen based on viewing window movement for 3D video service,” ETRI J. 36, 232–241 (2014). [CrossRef]  

4. S. Yoshida, “fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays,” Opt. Express 24, 13194–13203 (2016). [CrossRef]  

5. T. Inoue and Y. Takaki, “Table screen 360-degree holographic display using circular viewing-zone scanning,” Opt. Express 23, 6533–6542 (2015). [CrossRef]  

6. Y. Lim, K. Hong, H. Kim, H.-E. Kim, E.-Y. Chang, S. Lee, T. Kim, J. Nam, H.-G. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24, 24999–25009 (2016). [CrossRef]  

7. J. Kim, Y. Lim, K. Hong, E.-Y. Chang, and H.-G. Choo, “360-degree tabletop color holographic display,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (Optical Society of America, 2017), paper W3A.1.

8. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23, 25440–25449 (2015). [CrossRef]  

9. H. Zhang, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. 56, F138–F143 (2017). [CrossRef]  

10. https://developer.microsoft.com/en-us/windows/kinect.

11. MIT Media Lab, http://www.media.mit.edu/.

12. K. Nomura, R. Oi, T. Kurita, and T. Hamamoto, “Electronic hologram generation using high quality color and depth information of natural scene,” in Proceedings of the Picture Coding Symposium (PCS) Conference (2010), pp. 46–49.

13. S.-C. Kim, D.-C. Hwang, D.-H. Lee, and E.-S. Kim, “Computer-generated holograms of a real three-dimensional object based on stereoscopic video images,” Appl. Opt. 45, 5669–5676 (2006). [CrossRef]  

14. S. Ding, S. Cao, Y. F. Zheng, and R. L. Ewing, “From image pair to a computer generated hologram for a real-world scene,” Appl. Opt. 55, 7583–7592 (2016). [CrossRef]  

15. E.-Y. Chang, Y.-S. Kang, K. Moon, Y.-S. Ho, and J. Kim, “Computer-generated hologram for 3D scene from multi-view images,” Proc. SPIE 8738, 87380H (2013). [CrossRef]  

16. Y. Takaki and K. Ikeda, “Simplified calculation method for computer-generated holographic stereograms from multi-view images,” Opt. Express 21, 9652–9663 (2013). [CrossRef]  

17. H. Kang, E. Stoykova, and H. Yoshikawa, “Fast phase-added stereo-gram algorithm for generation of photorealistic 3D content,” Appl. Opt. 55, A135–A143 (2016). [CrossRef]  

18. S. Lee, E.-Y. Chang, H.-G. Choo, and J. Kim, “Computer-generated-hologram for holographic display based on viewing window and removal of conjugate images,” in Proceedings of the 3DSA Conference (2015).

19. Y. Cui and D. Stricker, “3D body scanning with one Kinect,” in Proceedings of the 3D Body Scanning Technologies (3DBST) Conference (2015).

20. A. Mao, H. Zhang, Y. Liu, Y. Zheng, G. Li, and G. Han, “Easy and fast reconstruction of a 3D avatar with an RGB-D sensor,” Sensors 17, 1113 (2017). [CrossRef]  

21. Y. He, B. Liang, J. Yang, S. Li, and J. He, “An iterative closest points algorithm for registration of 3D laser scanner point clouds with geometric features,” Sensors 17, 1862 (2017). [CrossRef]  

22. W.-Y. Chen, Y.-L. Chang, S.-F. Lin, L.-F. Ding, and L.-G. Chen, “Efficient depth image based rendering with edge dependent depth filter and interpolation,” in IEEE International Conference on Multimedia and Expo (2005), pp. 1314–1317.

23. Y. Mao, G. Cheung, A. Ortega, and Y. Ji, “Expansion hole filling in depth-image-based rendering using graph-based interpolation,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2013).

24. M. Solh and G. AlRegib, “Hierarchical hole-filling for depth-based view synthesis in FTV and 3D video,” IEEE J. Sel. Top. Signal Process. 6, 495–504 (2012). [CrossRef]  

25. S.-W. Nam, K.-H. Jang, Y.-J. Ban, H.-S. Kim, and S.-I. Chien, “Hole-filling methods using depth and color information for generating multiview images,” ETRI J. 38, 996–1007 (2016). [CrossRef]  

26. K.-J. Oh, S. Yea, and Y.-S. Ho, “Hole filling method using depth based in-painting for view synthesis in free viewpoint television and 3-D video,” in Proceedings of the Picture Coding Symposium (PCS) Conference (2009), pp. 1–4.

27. E.-Y. Chang, J. Choi, S. Lee, S. Kwon, J. Yoo, H.-G. Choo, and J. Kim, “360-degree color hologram generation for real 3D object,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (Optical Society of America, 2017), paper W2A.28.

28. “CUDA toolkit,” https://developer.nvidia.com/cuda-toolkit/.

29. S. Reichelt, R. Haussler, N. Leister, G. Futterer, and A. Schwerdtner, “Large holographic 3D displays for tomorrow’s TV and monitors—solutions, challenges, and prospects,” in LEOS 2008—21st Annual Meeting of the IEEE Lasers and Electro-Optics Society (2008), pp. 194–195.

30. T. Kim, H.-E. Kim, E.-Y. Chang, H.-G. Choo, and J. Kim, “Analysis of viewing window motion and a simple CGH technique for a table–top hologram display,” in Proceedings of the 3DSA Conference (2015).
