
A 360-degree floating 3D display based on light field regeneration

Open Access

Abstract

Using a light field reconstruction technique, we display a floating 3D scene in the air that is viewable from 360 degrees around with the correct occlusion effect. A high-frame-rate color projector and a flat light field scanning screen are used to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are used to evaluate display performance. A prototype has been built and presents vivid real 3D color animated images. The experimental results verify the feasibility of this method.

©2013 Optical Society of America

1. Introduction

Creating a real 3D image in space has long been a dream of human beings. A real 3D image means that observers can see the display in space with the naked eye and look around it with the correct field of view and correct spatial occlusion. Current glasses-free 3D displays, for example autostereoscopic displays [1,2], can provide only two or a few correct views of a 3D scene for observers to perceive the depth of the displayed objects. A hologram can reconstruct both the amplitude and phase of 3D objects, but it is currently difficult to observe from all directions and to scale to large, dynamic 3D displays [3,4]. Volumetric display [5] is a spatial addressing technique: it reconstructs only the voxels' locations, not their angular light distribution, so the displayed 3D scene has no occlusion effect. The Perspecta 3D System developed by Actuality Systems Inc. [6] and the color 3D display using a rotating 2D LED array proposed by Xie et al. [7] are typical volumetric display systems.

An occlusion-correct 3D display can be achieved by reconstructing the light field of the 3D scene. The light field regeneration method simulates the way a real 3D scene emits light rays, creating the light field distribution in the area surrounding the scene; as a result, observers around the display see the correct view from each position. Light field display has attracted much attention in recent years. Based on the Perspecta 3D System, Cossairt et al. [8] limited the screen's light-diffusing angle and realized a 198-view occlusion-capable volumetric 3D display. Jones et al. [9,10] employed a high-speed projector and a rotating holographic screen to realize an interactive monochrome 360-degree light field display. The Seelinder [11] achieved a 360-degree dynamic color 3D display using a slowly rotating cylindrical parallax barrier combined with a rapidly counter-rotating linear LED array. Our previous LED-based high-speed color-sequential projector projected images onto a 45-degree tilted, direction-selective diffusing rotating screen to reconstruct the 3D light field of color scenes [12,13]. Moreover, based on frequency analysis, the display performance of light field displays has been characterized [14].

However, none of the above light field 3D displays is penetrable, i.e., the display region does not float in the air, because the displayed 3D scene and the light field scanning module (for example, a moving screen or barrier) nearly overlap in space. A 3D display floating in the air is the ideal: observers could reach into the reconstructed 3D scene and interact with the virtual world. Several floating 3D displays have recently been proposed [15-18]. Yoshida et al. [15] used 103 microprojectors to project a series of light field images onto a special hollow-cone diffusion screen, forming a 3D scene floating above the cone. Takaki et al. [16] proposed a scanning multi-view method to create a floating 3D display; it is not a true light field reconstruction, so the viewing range is limited to a narrow region in the height direction and is also distance dependent. Wetzstein et al. [17,18] proposed layered displays, in which laminated multilayer LCD panels form a light field 3D display located partly in the air and partly within the LCD panels.

A novel method to implement a 360-degree floating light field 3D display is presented in this paper. By combining a high-frame-rate color projector with a flat light field scanning screen, a color 3D scene is reconstructed floating in the air above the spinning flat screen. The system has a very simple architecture, the flat light field scanning screen creates a stable light field distribution, and the displayed 3D scene is located completely in the air above the screen.

2. System configuration and principle

2.1 System configuration

The system mainly comprises a high-frame-rate projector, a flat light field scanning screen and a revolving mechanism, as shown in Fig. 1(a). The high-frame-rate projector is located at the top of the system and projects images onto the flat light field scanning screen at a high frame rate. The screen rotates about the optical axis of the projector lens. The flat light field scanning screen is a circular reflective directional-diffusing screen: through tilted reflective diffusion by the microstructure on its surface, it deflects normally incident light to a fixed tilt angle and diffuses it in the height direction z, as shown in Fig. 1(b). Taking the chief deflected ray as a reference, the screen confines the horizontally (circularly) reflected light to a very small angular range while diffusing it over a large vertical angle, so that observers at different heights can watch the whole image. Such a screen consists of microstructures that can be fabricated by holography, binary optics or other etching methods. The high-frame-rate projector projects synthesized images for different views onto the rapidly rotating screen. After deflection and diffusion by the screen, the 3D light field is regenerated and the 3D scene is reconstructed floating in the air above the spinning screen. The displayed 3D scene, with correct occlusion, can be observed from all horizontal directions without glasses.

Fig. 1 (a) System configuration. (b) Configuration of flat light field scanning screen.

2.2 Display principle

Generally, a 3D object can be considered an assemblage of spatial points, each of which emits or reflects light rays with a specific directional distribution determined by environmental illumination and occlusion. A perfect light field 3D display must therefore reconstruct all rays emitted from the 3D object with the corresponding directional distribution. In the proposed system, we regenerate the correct horizontal light field and diffuse a single view in the vertical direction. In practice, the high-frame-rate projector projects a synthetic image for different views onto the flat light field scanning screen, which deflects and diffuses the light to reconstruct the horizontal light field of the 3D object. The projected synthetic images are synchronized with the high-speed revolving screen to achieve a 3D display floating in the air above the spinning flat screen.

Jones et al. [9] presented a multiple-center-of-projection rendering technique for a 3D display with a rotating tilted anisotropic screen. Since we use a flat screen instead of the tilted anisotropic screen, we must adapt the rendering approach of reference [9] to accomplish our floating 3D display. The mapping relationship between the projection image and the 3D object is illustrated in Fig. 2. Assume the high-frame-rate projector is located at P; the location of the mirror image P' of the projector can then be calculated. When the screen turns by an angle θp from its initial position, the mirror image projector P' rotates simultaneously, and the coordinates of P' become:

$$x_{P'} = H_p \sin\alpha \cos\theta_p,\qquad y_{P'} = H_p \sin\alpha \sin\theta_p,\qquad z_{P'} = -H_p \cos\alpha \tag{1}$$
where H_p is the height of the projector above the screen and α is the angle between the normally incident central projected ray and its chief reflected ray. In Fig. 2, with the center of the screen as the origin of coordinates, the viewpoints all lie on a circumference of radius R_V at height h above the screen. The mirror image projector moves around a circumference of radius H_p sinα at depth H_p cosα below the screen. To display a spatial point Q(x_0, y_0, z_0) of the real 3D scene, the specific distribution of light rays emitted from Q must be reconstructed over 360 degrees horizontally; that is, rays from the projector are reflected by the screen and then pass through the point Q to the surrounding area. For an arbitrary rotation angle of the screen, the mirror image projector P' is at one particular position, so only the single ray QV emitted from Q can be reconstructed, and its projection onto the screen plane has the same direction as that of P'Q in the same plane.
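As a rough numerical sketch of Eq. (1), using the prototype values of Section 4 (H_p = 3000 mm equivalent distance, α = 45°; the function name is ours), the mirror-image projector position can be computed as:

```python
import math

# Prototype-style values, assumed for illustration (see Section 4):
H_p = 3000.0               # equivalent projector-screen distance, mm
alpha = math.radians(45)   # tilt angle of the chief reflected ray

def mirror_projector(theta_p):
    """Position of the mirror-image projector P' after the screen has
    turned by theta_p (radians), per Eq. (1): P' circles below the
    screen at radius H_p*sin(alpha) and depth H_p*cos(alpha)."""
    r = H_p * math.sin(alpha)
    return (r * math.cos(theta_p), r * math.sin(theta_p),
            -H_p * math.cos(alpha))
```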

Fig. 2 Mapping relation between projection image and reconstructed 3D object.

The line P'Q intersects the viewing cylinder at the point V1, which has the same x-y coordinates as the viewpoint V. The line VQ intersects the screen at point S. For the viewpoint to see point Q, the mirror image projector P' should project the ray P'S to reconstruct the ray QV. The equations of the straight line P'Q and of the viewing cylinder can be expressed as:

$$\frac{x - x_0}{x_{P'} - x_0} = \frac{y - y_0}{y_{P'} - y_0} = \frac{z - z_0}{z_{P'} - z_0} = k \quad \text{and} \quad x^2 + y^2 = R_V^2 \tag{2}$$
So we can get the coefficient k as below.

$$k = \frac{-\left[x_0 (x_{P'} - x_0) + y_0 (y_{P'} - y_0)\right] - \sqrt{\left[(x_{P'} - x_0)^2 + (y_{P'} - y_0)^2\right] R_V^2 - (x_{P'} y_0 - x_0 y_{P'})^2}}{(x_{P'} - x_0)^2 + (y_{P'} - y_0)^2} \tag{3}$$
Therefore, the coordinates of the viewpoint V are (k(x_{P'} − x_0) + x_0, k(y_{P'} − y_0) + y_0, h). The coordinates of the point S in the plane of the screen then follow as:

$$\begin{bmatrix} x_S \\ y_S \end{bmatrix} = \frac{-k z_0}{h - z_0} \begin{bmatrix} x_{P'} \\ y_{P'} \end{bmatrix} + \frac{(k-1) z_0 + h}{h - z_0} \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} \tag{4}$$

The image covers the circumscribed square of the screen with a resolution of M × M pixels, so the mapping relationship between the projection image I and the spatial point Q can be established. To reconstruct the ray QV, the intensity of the ray is assigned to pixel (p_x, p_y) of the projection image I:

$$\begin{bmatrix} p_x \\ p_y \end{bmatrix} = \mathrm{round}\!\left(\frac{M}{2 R_S} \begin{bmatrix} x_S \\ y_S \end{bmatrix} + \frac{M}{2}\right) \qquad (1 \le p_x \le M,\; 1 \le p_y \le M) \tag{5}$$
where the round() function returns the nearest integer to its argument and R_S is the radius of the screen. To reconstruct a 3D object, all spatial points of the object are processed to obtain one projection image I_i for the corresponding mirror image projector position P_i. This image reconstructs the rays emitted by the object in one direction only. To obtain the 3D light field over 360 degrees horizontally, the above calculation is repeated for every position of the mirror image projector P', and the floating light field 3D display is thus generated.
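The full mapping of Eqs. (2)-(5) from a scene point to a projector pixel can be sketched as follows. This is a minimal illustration using the prototype parameters of Section 4; the function name and the choice of the root of Eq. (3) that places V1 on the far side of Q from P' are our reading of the geometry:

```python
import math

# Assumed prototype parameters (Section 4), lengths in mm:
R_V, R_S, h, M = 500.0, 200.0, 500.0, 768

def ray_to_pixel(Q, Pm):
    """Map a scene point Q = (x0, y0, z0) to the pixel (px, py) that
    the mirror-image projector at Pm = (xP, yP, zP) must light so the
    reflected ray passes through Q toward the viewpoint (Eqs. 2-5)."""
    x0, y0, z0 = Q
    xP, yP, _ = Pm
    dx, dy = xP - x0, yP - y0
    A = dx * dx + dy * dy
    # Eq. (3): intersection of line P'Q with the viewing cylinder;
    # the negative root puts V1 on the side of Q away from P'.
    disc = A * R_V ** 2 - (xP * y0 - x0 * yP) ** 2
    k = (-(x0 * dx + y0 * dy) - math.sqrt(disc)) / A
    # Eq. (4): point S where the line VQ meets the screen plane z = 0
    a = -k * z0 / (h - z0)
    b = ((k - 1.0) * z0 + h) / (h - z0)
    xS, yS = a * xP + b * x0, a * yP + b * y0
    # Eq. (5): screen coordinates -> pixel indices of the M x M image
    px = round(M / (2 * R_S) * xS + M / 2)
    py = round(M / (2 * R_S) * yS + M / 2)
    return px, py
```

For a point on the rotation axis, e.g. Q = (0, 0, 50), the result lands near the horizontal center line of the image, offset toward the side opposite the mirror projector.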

3. Image analysis

It should be noted that the 3D image observed from a specific direction is a composite that includes multiple sub-images from successive projection images. Assuming the projector projects N images per revolution of the screen, the mirror image projector's angular pitch θ is 2π/N. The image composition for an arbitrary viewpoint is shown in Fig. 3(a), a top view of the system. If the mirror projector P' is located at P_i, the image observed from the viewpoint V is a narrow strip of the projection image for P_i, with center line S_iT_i. The width of the strip depends on the horizontal diffusing angle of the screen and the pupil size of the projector lens. When the screen rotates by θ, the mirror image projector moves from P_i to P_{i+1} and the projected image changes to the one corresponding to P_{i+1}, so the center line of the observed strip moves from S_iT_i to S_{i+1}T_{i+1}. In this process, the image observed from V is the sub-image in the region S_iT_iT_{i+1}S_{i+1} (the dashed area on the screen in Fig. 3(a)). The image observed from a given viewpoint V is therefore a combination of sub-images taken from different projection images in different regions of the screen. Assuming the image I_i is projected by the mirror projector P' at P_i, there are N_V sub-images in the observed image, where N_V = ceiling((N/π) arcsin(R_S/R_V)) and the ceiling() function returns the smallest integer greater than or equal to its argument. The observed image I_V for the viewpoint V can then be expressed as:

$$I_V = \sum_{j=-N_V/2}^{N_V/2} I_{i+j}\left(S_{i+j} T_{i+j} T_{i+j+1} S_{i+j+1}\right) \tag{6}$$
where I_{i+j}(S_{i+j}T_{i+j}T_{i+j+1}S_{i+j+1}) is the sub-image in the region S_{i+j}T_{i+j}T_{i+j+1}S_{i+j+1} of the image projected by the mirror projector at P_{i+j}. The more images the projector projects per revolution of the screen, the more sub-images compose the observed image. Rays are then reconstructed in more directions, matching the 3D object's emission characteristics more closely; in other words, the reconstructed 3D scene has higher resolution.
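The sub-image count N_V can be computed directly from N and the geometry; a small helper we add for illustration (values match the captions of Fig. 5):

```python
import math

def sub_image_count(N, R_S=200.0, R_V=500.0):
    """Number of sub-images N_V composing one observed view:
    N_V = ceil((N / pi) * arcsin(R_S / R_V))."""
    return math.ceil(N / math.pi * math.asin(R_S / R_V))
```

With the prototype radii this gives N_V = 14, 40, 66 and 92 for N = 100, 300, 500 and 700, matching Fig. 5.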

Fig. 3 The image composition for an arbitrary viewpoint. (a) Image composition for one viewpoint V. (b) Common display zone.

In Fig. 3(b), the cone-shaped region with the green boundary above the flat light field scanning screen is defined as the common display zone. The bottom of the zone coincides with the screen, and the height of the zone H_C is

$$H_C = R_S h / (R_S + R_V) \tag{7}$$

If the displayed 3D scene lies inside this zone, its light field can be reconstructed over all 360 degrees of horizontal directions, and the scene can be observed from any horizontal direction. If the 3D scene is partially outside the common display zone, the outside part can be seen only from certain regions, not from all horizontal directions. The size of the common display zone therefore determines the size of the 3D scene the system can display. From Eq. (7), an observer at a higher position sees a larger 3D scene, because H_C is larger.
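Eq. (7) is a one-liner; with the prototype values of Section 4 it reproduces the 142.9 mm zone height quoted there (the function name is ours):

```python
def common_zone_height(R_S, R_V, h):
    """Height H_C of the cone-shaped common display zone, Eq. (7)."""
    return R_S * h / (R_S + R_V)
```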

4. Experiment and results

A floating light field 3D display system with a 360-degree horizontal view has been developed. A three-chip DMD (Digital Micromirror Device)-based color high-frame-rate projector is used to project R, G, B color images simultaneously, with R, G, B LEDs (PhlatLight PT54, Luminus Devices) as the light sources for the three primary color channels. The spatial light modulator of the projector is the Discovery 4100 kit from Texas Instruments, which can display at most 32552 single-bit frames per second at a resolution of 1024 × 768. Three DMD chips display the R, G, B channel images simultaneously, and an X-cube prism, consisting of four rectangular prisms coated with wavelength-selective thin-film filters, combines the colors. The prototype configuration is shown in Fig. 4. Owing to space limitations, a mirror folds the image from the projector onto the screen perpendicularly. The equivalent optical distance between the high-frame-rate projector and the screen is 3000 mm. The diameter of the flat light field scanning screen is 400 mm, and the image resolution on the screen equals that of the inscribed circular area, 768 × 768. The screen is placed just under the glass surface of a table and is driven by a servo motor. The light field scanning screen is a reflective flat microstructured screen that reflects the normally incident chief ray at a tilt angle of 45 degrees. Taking the chief reflected ray as a reference, it diffuses light by 60 degrees in the vertical direction but only 0.5 degrees in the horizontal direction.
If the observer is at a horizontal distance R_V = 500 mm (the distance between the viewpoints and the axis) and height h = 500 mm (the distance from the viewpoints to the screen), the common display zone is a cone-shaped region whose base is a circle 400 mm in diameter and whose height is 142.9 mm. In this zone, the reconstructed 3D scene can always be watched from all horizontal directions.

Fig. 4 Configuration of the 360-degree floating light field 3D display prototype system.

As mentioned, the number of projection images per revolution, N, is one of the most important display parameters. The interpupillary distance of observers, e, is usually 65 mm. To ensure that an observer's two eyes receive different rays emitted from any spatial point, the minimum image number N for our system must satisfy N ≥ 2π(R_V + R_S)H_p sinα / [e(H_p sinα − R_S)] ≈ 75.
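Plugging in the prototype values (e = 65 mm, H_p sinα = 3000·sin 45° mm, R_S = 200 mm, R_V = 500 mm) gives an illustrative check of this bound:

```python
import math

# Assumed prototype values (Section 4):
e, R_S, R_V = 65.0, 200.0, 500.0
d = 3000.0 * math.sin(math.radians(45))   # H_p * sin(alpha), mm

# Minimum number of projection images per revolution so that the two
# eyes receive different rays from any spatial point:
N_min = 2 * math.pi * (R_V + R_S) * d / (e * (d - R_S))
```

N_min evaluates to about 74.7, so N must be at least 75.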

Taking a simple spatial straight line as the displayed example, the consequences of different image numbers N per revolution are investigated experimentally. Photos taken from different viewpoints with different N are shown in Fig. 5. Figures 5(a)-5(d) show the reconstructed spatial line with N = 100, 300, 500 and 700, respectively. The top-row and bottom-row images are taken from two views separated by 10 degrees at the same height. As N increases, the image observed from a single viewpoint includes more sub-images; the observed line becomes more continuous and image quality improves. When N is small, the part of the reconstructed line near the screen is continuous with small error, but the part far from the screen shows poor continuity (aliasing), with line segmentation and larger error relative to the ideal line. As N increases, the region of good continuity expands away from the screen. When N reaches 700, the reconstructed spatial line shows no evident aliasing error compared with the ideal line.

Fig. 5 Photos of the spatial line with different N. (a) N = 100 (N_V = 14); (b) N = 300 (N_V = 40); (c) N = 500 (N_V = 66); (d) N = 700 (N_V = 92).

In the experiment, the number of projection images N is set to 700 and the rotation speed of the screen to 1800 rpm to obtain better display performance. To reduce flicker, the refresh frequency of the 3D display must exceed 30 Hz, so the display time for each projection image is less than 47.6 µs. As a result, the projector projects at least 21000 color frames per second. The series of color projection images is compressed and transferred to the dynamic memories of the TI Discovery kits at a speed of 2 Gbit/s for dynamic 3D display.
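The timing budget follows directly from these numbers; an arithmetic sketch of the figures quoted above:

```python
N = 700                  # projection images per screen revolution
rpm = 1800               # screen rotation speed

rev_per_s = rpm / 60.0               # 30 volume refreshes per second
frames_per_s = N * rev_per_s         # required projector frame rate
frame_time_us = 1e6 / frames_per_s   # time budget per projection image
```

This gives 21000 frames per second and about 47.6 µs per image, as stated.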

A 3D cube model bearing the emblem of Zhejiang University was designed for display, as shown in Fig. 6(a). Figures 6(b)-6(d) are photos of the reconstructed cube taken from different directions, showing that with proper rendering the reconstructed 3D scene exhibits no evident distortion. To reconstruct dynamic 3D scenes, we also designed a 3D animation model. Figures 7(a)-7(c) are photos of the reconstructed scene taken from different directions; they show a displayed 3D cartoon model together with a real doll and a real box, and reveal the spatial relations among them. The displayed 3D scene clearly floats in the air above the screen and appears vivid next to the real objects nearby. The display overcomes the occlusion-missing and impenetrability drawbacks of conventional volumetric displays. Compared with other 360-degree viewable scanning 3D displays, this display is quite safe: observers can reach out and touch the displayed 3D scene with a hand or fingers, which opens up possibilities for new interaction. We have also displayed an animated 3D scene on this system, showing its potential for future 3D telepresence or 3D TV.

Fig. 6 3D cube model and photos of the reconstructed floating display from different directions. (a) A 3D cube model; photos of the reconstructed 3D cube at (b) 0°; (c) 45°; and (d) 90°.

Fig. 7 Photos taken from different directions of the floating 3D scene (Media 1). (a) 0°; (b) 40°; (c) 80°.

5. Discussion

Zwicker et al. [14] employed ray-space analysis to quantify the display depth of field, mostly for lenticular displays. In lenticular displays, the mapping from depth to rays is simple and the ray distribution is homogeneous; in our light field 3D display the mapping is much more complicated and the ray distribution in the common display zone is inhomogeneous, so the method of reference [14] cannot be applied directly. In our system, the depth of the common display zone depends on the viewers' positions and the screen size; it is 142.9 mm, as given in Section 4. The cone can be regarded as an assemblage of cross-sections at different heights, and the depth resolution of the common display zone depends on the number of cross-sections. The spatial point distribution, and the horizontal angular distribution of rays emitted from each point, in cross-sections at different heights are analyzed as shown in Fig. 8(a).

Fig. 8 The resolution properties in the common display zone. (a) Spatial point distribution of an arbitrary cross-section in the 3D display zone. (b) Distribution of spatial points in the 3D display zone of our prototype system (R_V = 500 mm, h = 500 mm, R_S = 200 mm, M = 768).

The cross-section at height z_0 inside the common display zone satisfies x_0² + y_0² ≤ [R_S − (R_S + R_V)z_0/h]². The 2D resolution of the image on the circumscribed square of the screen is M × M, so the number of effective pixels on the circular screen is η × M × M, where η is the effective-resolution coefficient (η ≈ π/4).

If z_0 = 0, the reconstructed cross-section coincides with the screen surface and has a resolution of η × M × M. As the screen completes one revolution, each pixel in this cross-section scans one circle to reconstruct N rays in all horizontal directions, so the horizontal angular resolution of each point in this cross-section is N.

If 0 < z_0 ≤ R_S h/(R_S + R_V), the reconstructed cross-section above the screen is a circular region C_0 of radius R_S − (R_S + R_V)z_0/h centered at O_0 = (0, 0, z_0). For an arbitrary viewpoint V, the observed image is always a circular region C_S, the projection of the circle C_0 through the viewpoint along the VO_0 direction onto the screen. The center O_S of C_S is (−x_V z_0/(h − z_0), −y_V z_0/(h − z_0), 0), and its radius is R_S − R_V z_0/(h − z_0). The pixel number of the reconstructed cross-section C_0 therefore equals that of the region C_S. Denoting the ratio of the pixel number in C_0 to that of the whole screen as η_0, the pixel number of C_0 is η_0 η × M × M, where η_0 = [1 − (R_V/R_S) z_0/(h − z_0)]². As the height increases, the pixel number of the reconstructed cross-section in the common display zone decreases.

For the cross-section C_m at height z_m, the center of the corresponding projected circle C_S is offset from the screen center by m pixels, and the radius of C_S shrinks by m pixels accordingly. Taking one pixel of offset as the reference step, the common display zone can be divided into M/2 spatial planes. The height z_m of the cross-section C_m is:

$$z_m = \frac{m p h}{R_V + m p} = \frac{2 h R_S m}{M R_V + 2 R_S m} \qquad (0 \le m \le M/2) \tag{8}$$
where p = 2R_S/M is the pixel size of the image on the screen.
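A quick evaluation of Eq. (8) with the prototype parameters shows the nonuniform spacing of the cross-sections (the helper name is ours); the step shrinks from about 0.52 mm at the screen to about 0.27 mm at the top, and z at m = M/2 equals the zone height H_C of Eq. (7):

```python
# Assumed prototype values (Section 4), lengths in mm:
R_V, R_S, h, M = 500.0, 200.0, 500.0, 768

def cross_section_height(m):
    """Height z_m of cross-section C_m, Eq. (8)."""
    return 2.0 * h * R_S * m / (M * R_V + 2.0 * R_S * m)
```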

The distribution of spatial resolution in the common display zone of our prototype is shown in Fig. 8(b), with the cross-section index m as the horizontal coordinate. The red line shows that the height z_m is not linear in the cross-section index: the spacing between adjacent cross-sections is not uniform in height. The blue line shows the variation of the pixel count across cross-sections. As the distance from the screen increases, the pixel count of the cross-section decreases, but because the cross-section area shrinks, the pixel density increases. The cross-section at height z_0 = 0 coincides with the screen surface and contains η × M × M pixels. When the cross-section is elevated to height z_m, its pixel count drops by the factor η_m = [1 − (R_V/R_S) z_m/(h − z_m)]² and the pixel density increases by the factor h/(h − z_m). The total number of spatial points in the common display zone, N_sp, can be expressed as:

$$N_{sp} = \sum_{m=0}^{M/2} \eta_m \, \eta \times M \times M \tag{9}$$
Therefore, our prototype system can reconstruct nearly 60 million spatial points (voxels), each emitting 700 different horizontal rays. In this 3D display, one ray in a given direction is multiplexed among all collinear points emitting in that direction. The total number of reconstructed rays in the common display zone is usually taken as the product of the number of spatial points and the number of rays per point; owing to ray multiplexing, this number is much larger than the product of the pixel count on the screen and the number of projection images per revolution.
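Summing Eq. (9) numerically with the prototype parameters confirms the figure of nearly 60 million voxels. (The closed form η_m = (1 − 2m/M)², noted in a comment below, is our own simplification obtained by substituting z_m from Eq. (8).)

```python
import math

# Assumed prototype values (Section 4):
R_V, R_S, h, M = 500.0, 200.0, 500.0, 768
eta = math.pi / 4.0      # effective-resolution coefficient

def eta_m(m):
    """Pixel-count ratio of cross-section C_m, using z_m from Eq. (8).
    Substituting z_m reduces this to the closed form (1 - 2m/M)**2."""
    z = 2.0 * h * R_S * m / (M * R_V + 2.0 * R_S * m)
    return (1.0 - (R_V / R_S) * z / (h - z)) ** 2

# Eq. (9): total spatial points in the common display zone
N_sp = sum(eta_m(m) * eta * M * M for m in range(M // 2 + 1))
```

N_sp evaluates to roughly 5.95 × 10^7, i.e. nearly 60 million spatial points.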

6. Conclusion

In this paper, we present a floating 360-degree light field 3D display based on a high-frame-rate projector and a flat light field scanning screen. It serves many observers simultaneously without glasses and overcomes the drawbacks of missing occlusion and impenetrability. Moreover, it is a true-color 3D display system that presents vivid 3D scenes floating in the air. The experimental results verify the feasibility of the method: light field reconstruction can realize a real 3D display without holography, achieving a wider viewing angle, better color performance and dynamic 3D scenes compared with current holographic 3D displays. Meanwhile, owing to the limited frame rate of the projector, our system currently shows only halftone gray-scale 3D images. For an ideal display, full gray-scale color is necessary; it can be realized either by increasing the rotation speed and projector frame rate or by using a multi-projector system. High-speed transmission and processing of massive data are also essential for real-time 3D video display. Improving the 3D display performance in these respects is our future work.

Acknowledgments

The authors thank Luting Chen for carefully reading the manuscript. This work is supported by the National Basic Research Program of China (973 Program) (No. 2013CB328802), the National Natural Science Foundation of China (Grant No. 61177015) and the Research Funds for the Central Universities of China (No. 2012XZZX013).

References and links

1. N. S. Holliman, N. A. Dodgson, G. E. Favalora, and L. Pockett, “Three-dimensional displays: A review and applications analysis,” IEEE Trans. Broadcast 57(2), 362–371 (2011). [CrossRef]  

2. N. A. Dodgson, “Autostereoscopic 3D Displays,” Computer 38(8), 31–36 (2005). [CrossRef]  

3. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]  

4. S. Tay, P. A. Blanche, R. Voorakaranam, A. V. Tunç, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, G. Li, P. St Hilaire, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “An updatable holographic three-dimensional display,” Nature 451(7179), 694–698 (2008). [CrossRef]   [PubMed]  

5. G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38(8), 37–44 (2005). [CrossRef]  

6. G. E. Favalora, “100-million-voxel volumetric display,” Proc. SPIE 4712, 300 (2002).

7. X. Xie, X. Liu, and Y. Lin, “The investigation of data voxelization for a three-dimensional volumetric display system,” J. Opt. A, Pure Appl. Opt. 11(4), 045707 (2009). [CrossRef]  

8. O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46(8), 1244–1250 (2007). [CrossRef]   [PubMed]  

9. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” ACM Trans. Graph. 26(3), 40 (2007). [CrossRef]  

10. A. Jones, M. Lang, G. Fyffe, X. Yu, J. Busch, I. McDowall, M. Bolas, and P. Debevec, “Achieving eye contact in a one-to-many 3D video teleconferencing system,” ACM Trans. Graph. 28(3), 64 (2009). [CrossRef]  

11. T. Yendo, “The Seelinder: Cylindrical 3D display viewable from 360 degrees,” J. Vis. Commun. Image Represent. 21, 586–594 (2010).

12. C. Yan, X. Liu, H. Li, X. Xia, H. Lu, and W. Zheng, “Color three-dimensional display with omnidirectional view based on a light-emitting diode projector,” Appl. Opt. 48(22), 4490–4495 (2009). [CrossRef]   [PubMed]  

13. X. Xia, Z. Zheng, X. Liu, H. Li, and C. Yan, “Omnidirectional-view three-dimensional display system based on cylindrical selective-diffusing screen,” Appl. Opt. 49(26), 4915–4920 (2010). [CrossRef]   [PubMed]  

14. M. Zwicker, “Antialiasing for automultiscopic 3D displays,” in Proceedings of the 17th Eurographics Workshop on Rendering (2006), pp. 73–82.

15. S. Yoshida, “fVisiOn: glasses-free tabletop 3-D display,” in Proceedings of Digital Holography and 3-D Imaging (Tokyo, 2011), paper DTuA1.

16. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20(8), 8848–8861 (2012). [CrossRef]   [PubMed]  

17. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. 30(4), 95 (2011). [CrossRef]  

18. D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, and R. Raskar, “Polarization fields: dynamic light field display using multi-layer LCDs,” ACM Trans. Graph. 30(6), 186 (2011). [CrossRef]  

Supplementary Material (1)

Media 1: MOV (11596 KB)     
