Abstract

Portable display devices, such as smartphones and tablet PCs, have become part of daily life. Their mainstream display interfaces are based on two-dimensional (2D) images. Although some three-dimensional (3D) technologies have been proposed for portable devices, comfortable visual effects have remained out of reach. A super multi-view (SMV) system with comfortable 3D effects, constructed from a group of OLED microdisplay/projecting lens pairs, is proposed in this paper. By gating different segments of each projecting lens sequentially and refreshing the virtual image of the corresponding microdisplay synchronously, the proposed SMV system greatly decreases the required number of microdisplays while retaining a thin optical structure, giving it great potential for portable devices.

© 2016 Optical Society of America

1. Introduction

Nowadays, portable display devices, such as smartphones and tablet PCs, have become part of daily life. It would be highly desirable for them to present comfortable three-dimensional (3D) images to consumers. A thin optical structure is always necessary for portability. Among conventional 3D display technologies, only stereoscopic display and integral imaging can provide a thin structure, and many researchers have tried to implement them on portable display devices [1–10]. But the inherent accommodation-convergence conflict of the former and the poor display resolution of the latter prevent them from providing comfortable 3D effects. The super multi-view (SMV) technology is a method for comfortable 3D display. By projecting numerous 2D perspective views to viewpoints (VPs) with intervals smaller than the pupil diameter of the eye, an SMV display delivers at least two different views to a single eye pupil [11], so the eyes can focus on the displayed spatial spots naturally. A thin-type SMV system was demonstrated in [12], but its ultra-high resolution demand made that technology suitable only for a large (22.2 inch) screen, not for portable devices. Other existing SMV systems cannot be applied to portable devices due to their thick optical structures. In short, portable display devices offering comfortable 3D effects are still unavailable at present.

In this paper, we demonstrate, for the first time, a novel SMV system with a thin optical structure that exhibits great potential for portable devices. Only a moderate number of display panels are used in the proposed system.

The rest of this paper is organized as follows. In Section 2, the theory of the proposed SMV system is explained. Experiments and results are presented in Section 3. Section 4 discusses some limitations of the prototype system. Section 5 provides conclusions.

2. The novel SMV system with a thin optical structure

2.1 Elementary projecting unit providing multiple VPs by one microdisplay

An elementary projecting unit of the proposed system is drawn in Fig. 1; it can provide multiple VPs with only one OLED microdisplay through time multiplexing. The unit is constructed from an OLED microdisplay k, a rectangular projecting lens k with a horizontal size of ΔD, two vertical baffles, and a group of M gating apertures attached to the projecting lens. Through the lens k, the virtual image of the microdisplay k is projected onto the display zone (i.e. the EF zone of the projection plane) with a magnification factor β = v/u, where u is the object distance and −v the image distance (the image is virtual). The focal length of the projecting lens is denoted as f. The horizontal and vertical sizes of the effective display area of the microdisplay are dx and dy, respectively. Two baffles enclose the lens/microdisplay pair along the horizontal x-direction to block light rays from the OLED microdisplay that would bypass the projecting lens. The clear apertures of the M gating apertures (e.g. Ak1, Ak2, Ak3, and Ak4 in Fig. 1 when M = 4) are aligned horizontally and seamlessly, and can be gated sequentially at an equal time interval Δt/M.

 

Fig. 1 An elementary projecting unit of the proposed SMV system.


Taking a time point t + Δt/M = t + Δt/4 as an example, the gating aperture Ak2 is gated at this time point, as shown in Fig. 1. In this case, according to ray optics, M = 4 equal-sized segments of the microdisplay's virtual image (the EP1, P1P2, P2P3, and P3F segments) become visible to M = 4 VPs (i.e. VP2, VP3, VP4, and VP5) on the observation plane, respectively. Synchronously, the 4 segments are refreshed by the microdisplay k with partial perspective views converging to the corresponding VPs. For example, the P2P3 segment is refreshed with the partial perspective view converging to VP3. Then, at the 4 VPs, the corresponding partial perspective views are visible separately. These partial perspective views of the target object are named sub-views in this paper. According to the geometrical relationship, the distance between the observation plane and the projecting lens is deduced as:

$$\frac{L}{L+v}=\frac{\Delta D/M}{\beta d_x/M}\quad\Rightarrow\quad L=\frac{\Delta D\,v}{\beta d_x-\Delta D}\tag{1}$$
and the horizontal interval between adjacent VPs is:
$$\Delta d=\frac{\beta d_x}{M}\cdot\frac{L}{v}=\frac{\beta d_x\,\Delta D}{M\,(\beta d_x-\Delta D)}\tag{2}$$
Repeating this process for all M = 4 gating apertures sequentially and cyclically, a total of 7 VPs appears. When the time cycle Δt is small enough for the persistence of vision, tiled sub-views are observed at each VP without flicker. However, except for VP4, where the observed image is a complete perspective image tiled by M = 4 complementary sub-views, the other VPs receive incomplete perspective images, tiled by three, two, or even only one sub-view.
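The similar-triangle relations of Eqs. (1) and (2) can be checked numerically. A minimal sketch (the parameter values below are illustrative, not the prototype's):

```python
# Sketch of Eqs. (1) and (2): observation distance L and VP interval Δd,
# derived from similar triangles between the gated aperture (width ΔD/M)
# and the corresponding image segment (width β·dx/M).

def observation_distance(delta_D, v, beta, dx):
    # Eq. (1): L = ΔD·v / (β·dx − ΔD)
    return delta_D * v / (beta * dx - delta_D)

def vp_interval(delta_D, v, beta, dx, M):
    # Eq. (2): Δd = β·dx·ΔD / (M·(β·dx − ΔD))
    return beta * dx * delta_D / (M * (beta * dx - delta_D))

# Illustrative parameters (assumed values, in mm)
delta_D, v, beta, dx, M = 10.0, 400.0, 8.0, 8.0, 4

L = observation_distance(delta_D, v, beta, dx)
dd = vp_interval(delta_D, v, beta, dx, M)

# Consistency check: Δd must equal (β·dx/M)·(L/v), the first form of Eq. (2)
assert abs(dd - (beta * dx / M) * (L / v)) < 1e-9
```

The check confirms that the two forms of Eq. (2) agree once L from Eq. (1) is substituted.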

2.2 The SMV system constructed by planarly aligned elementary projecting units

To obtain more VPs where complete perspective views are viewable, N projecting units are aligned seamlessly along the horizontal x-direction, as shown in Fig. 2. Here only N = 2 elementary projecting units are drawn for simplicity. The horizontal interval between adjacent projecting units equals ΔD. For each projecting unit, a specific horizontal offset of the projecting lens's optical axis with respect to the microdisplay is set (e.g. δk for projecting unit k) to guarantee precise overlapping of all OLED microdisplays' virtual images. In Fig. 2, Ok and Ok + 1 denote the geometrical centers of the two microdisplays; Ck and Ck + 1 are the optical centers of the two projecting lenses, respectively. At each time point, a group of N ΔD-spaced gating apertures is gated simultaneously. For each gated aperture, the corresponding M sub-views are refreshed synchronously, just as in the discussion of Fig. 1 above. Gating M groups of ΔD-spaced gating apertures sequentially and cyclically, (N + 1) × M − 1 VPs appear. Among them, the adjacent (N − 1) × M + 1 VPs can provide complete perspective views, which are tiled by M sub-views projected from one or two adjacent microdisplays in a time-sequential manner. For example, the M = 4 sub-views for the above-mentioned VP4 are all projected from the microdisplay k. In contrast, for VP5 in Fig. 2, three sub-views are projected from three segments of the microdisplay k, i.e. the EP1, P1P2, and P2P3 segments, respectively, but the fourth sub-view is projected from the P3F segment of the microdisplay k + 1. When the time cycle Δt is small enough for the persistence of vision, complete flicker-free perspective views are observed at these (N − 1) × M + 1 VPs. When the interval Δd between adjacent VPs is smaller than the pupil diameter of a viewer, a super multi-view 3D display is implemented. The above (N − 1) × M + 1 VPs constitute the viewing zone (VZ) along the horizontal direction.
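The VP bookkeeping above can be sketched directly from the two counting formulas given in the text:

```python
# VP counting for N seamlessly aligned projecting units, each with M
# gating apertures, per the formulas stated in Section 2.2:
#   total VPs:         (N + 1)·M − 1
#   complete-view VPs: (N − 1)·M + 1

def total_vps(N, M):
    return (N + 1) * M - 1

def complete_vps(N, M):
    return (N - 1) * M + 1

# Single unit (Fig. 1): 7 VPs in total, only VP4 sees a complete view
assert total_vps(1, 4) == 7
assert complete_vps(1, 4) == 1

# Two units (Fig. 2): 11 VPs in total, 5 of them complete
assert total_vps(2, 4) == 11
assert complete_vps(2, 4) == 5
```

The N = 1 case reproduces the 7 VPs of Fig. 1, of which only one (VP4) receives a complete perspective view.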

 

Fig. 2 Optical arrangements of the proposed SMV system. Two elementary projecting units are drawn here to demonstrate the proposed ideas.


The light from an OLED pixel inherently has a large divergence angle. Through a projecting lens, the light intensity distribution over the whole exit pupil of the projecting lens is approximately homogeneous when the exit pupil is not very large (e.g. the situation in the proposed system). So, one segment of a microdisplay provides sub-views of approximately equal intensity to the related VPs. As a result, the perceived tiled perspective views show no obvious light intensity fluctuation as a pupil scans across the VZ horizontally.

According to Eq. (2), the size of the horizontal VZ constructed by the (N − 1) × M + 1 VPs is:

$$W=(N-1)\,M\,\Delta d=\frac{(N-1)\,\beta d_x\,\Delta D}{\beta d_x-\Delta D}\tag{3}$$
Obviously, more projecting units enlarge the horizontal VZ effectively. In practice, the number of projecting units the proposed system can accommodate depends on the N.A. of the projecting lens, due to the existence of δk for each projecting unit k. In the case of N = 2n + 1 projecting units, the offset value δ0 for the middle projecting unit is set to 0. Then, the horizontal offset value δn of the boundary units must be designed to guarantee precise overlapping of the boundary and middle OLED microdisplays' images, based on the following equation:
$$\beta\,\delta_n=\delta_n+n\,\Delta D\quad\Rightarrow\quad \delta_n=\frac{n\,\Delta D}{\beta-1}\tag{4}$$
This equation also applies to the other projecting units, with n replaced by the corresponding index. The projecting lenses for different projecting units are cut out from a group of mother lenses with identical characteristics. A virtual rectangular zone is marked centro-symmetrically on the mother lens to cover all the projecting lenses. Geometrically, the minimum horizontal size of the virtual rectangular zone should be
$$A_x=2\left(\delta_n+\frac{\Delta D}{2}\right)\tag{5}$$
which increases with n. Along the vertical direction, the projecting lens should have a size of Ay to form a vertical VZ, as will be discussed in the following experiment. So, the projecting lens needs a larger N.A., i.e. (Ax² + Ay²)^{1/2}/f, when more projecting units are used in the proposed system. That is, the number of accommodated projecting units is limited by the N.A. of the projecting lens used.
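Equations (3)-(5) and the resulting N.A. requirement can be sketched together. The numerical values in the assertions use the prototype parameters quoted later in Section 3 (ΔD = 8.6 mm, β = 9, n = 5):

```python
import math

# Sketch of Eqs. (3)-(5) and the N.A. bound of Section 2.2.

def vz_width(N, beta, dx, delta_D):
    # Eq. (3): W = (N−1)·β·dx·ΔD / (β·dx − ΔD)
    return (N - 1) * beta * dx * delta_D / (beta * dx - delta_D)

def lens_offset(n, delta_D, beta):
    # Eq. (4): δn = n·ΔD / (β − 1)
    return n * delta_D / (beta - 1)

def min_horizontal_size(n, delta_D, beta):
    # Eq. (5): Ax = 2·(δn + ΔD/2)
    return 2 * (lens_offset(n, delta_D, beta) + delta_D / 2)

def required_na(Ax, Ay, f):
    # The paper's N.A. figure of merit: (Ax² + Ay²)^(1/2) / f
    return math.hypot(Ax, Ay) / f

# Prototype values from Section 3: ΔD = 8.6 mm, β = 9, boundary unit n = 5
assert abs(lens_offset(5, 8.6, 9) - 5.375) < 1e-9           # δ5 = 5.375 mm
assert abs(min_horizontal_size(5, 8.6, 9) - 19.35) < 1e-9   # Ax = 19.35 mm
```

With Ay = 46 mm and f = 60 mm (also from Section 3), `required_na` evaluates to roughly 0.83, illustrating how quickly the lens requirement grows with n.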

3. Experiments and results

To implement the idea described above, a prototype is set up, as shown in Fig. 3. In total, 2n + 1 = 11 color OLED microdisplays from OLEiD of China are used. The microdisplays have a frame rate of 85 Hz, a display area of dx × dy = 7.56 × 10.08 mm², and a resolution of 600 × 800. Achromatic lenses (f = 60 mm) with an effective aperture of 50 mm are taken as the mother lenses. The parameter u takes 160/3 mm, resulting in β = 9, v = 480 mm, and an available size of 68.04 × 90.72 mm² for the tiled perspective views. For symmetry, the displayed 3D object is set as 68 × 68 × 68 mm³ in the prototype system. Since each OLED microdisplay is packaged with an individual driving board, its mechanical size is 17 × 22 mm². If the microdisplays were aligned side by side in one row, ΔD would be 17 mm, considerably larger than dx. As discussed above, a smaller ΔD allows the system to accommodate more projecting units and generate more VPs. So, in the prototype system, the 11 microdisplays are arranged alternately into two parallel rows for a smaller ΔD, as shown in Fig. 3. To show the inner structure more clearly, 4 of the 12 baffles are removed for the photograph. The vertical interval between the two rows is set as 22.2 mm, slightly larger than the mechanical size of the microdisplay, which ensures no spatial conflict between adjacent microdisplays (one in the upper row and one in the lower row) along the horizontal direction. Thus, the horizontal interval ΔD between adjacent microdisplays decreases to 8.6 mm, close to the value of dx. According to Eqs. (1) and (4), L = 69.5 mm and δmax = δ5 = 5.375 mm are obtained. During the experiment, anti-distortion through a correction table [13] is performed to alleviate the wavefront aberration of the projecting units with larger offset values.

 

Fig. 3 Photograph of the experimental display system.


Similar to the situation along the x-direction shown in Fig. 2, in order to precisely overlap the virtual images of the microdisplays belonging to different rows, the optical axes of the projecting lenses in the two rows are arranged with a vertical offset δy = ±[11.1/(β − 1)] mm = ±1.3875 mm with respect to the corresponding microdisplays. This arrangement leads to offsets between the vertical VZs of the projecting units in different rows. The vertical VZ of one projecting unit is confined by two lines: one connecting the upper marginal point of the projected perspective view and the upper marginal point of the corresponding projecting lens, and the other connecting the two lower marginal points. The overlapping zone between the vertical VZs of the projecting units in different rows defines the vertical VZ of the system. According to Eq. (5), Ax = 19.35 mm. Since the mother lens used has an effective aperture of 50 mm, the allowable vertical size of the virtual rectangular aperture (i.e. the maximum vertical size of the marginal projecting lenses) reaches 46 mm, which is taken as the common vertical size of all projecting lenses. Under this condition, the vertical VZ of the system reaches 14 mm on the observation plane geometrically.

The sequential gating of apertures is implemented by an 85 Hz transmission-mode liquid crystal panel, which is attached closely to the projecting lens array. The number of gating apertures per projecting unit is M = 6, resulting in a display frequency of about 14 Hz. Based on Eq. (2), Δd = 1.64 mm. For a pupil of 5 mm (i.e. the average diameter), at least three adjacent VPs can be captured simultaneously, so the SMV effect remains active throughout the horizontal VZ. Putting the value of Δd into Eq. (3), a horizontal VZ of 98.4 mm is obtained, about 1.5 times the average interocular distance (64 mm) of a viewer. The corresponding viewing angle is about 10.2°. With all the optical elements included, the thickness of the optical structure is as small as 65 mm.
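The prototype numbers quoted above follow directly from Eqs. (1)-(3); a short check (all parameter values are those stated in this section):

```python
import math

# Prototype parameters from Section 3 (lengths in mm)
beta, dx, delta_D, v, M, N = 9, 7.56, 8.6, 480.0, 6, 11

L = delta_D * v / (beta * dx - delta_D)                     # Eq. (1)
dd = beta * dx * delta_D / (M * (beta * dx - delta_D))      # Eq. (2)
W = (N - 1) * beta * dx * delta_D / (beta * dx - delta_D)   # Eq. (3)
angle = 2 * math.degrees(math.atan((W / 2) / (L + v)))      # viewing angle

assert abs(L - 69.5) < 0.1      # observation distance ≈ 69.5 mm
assert abs(dd - 1.64) < 0.01    # VP interval ≈ 1.64 mm
assert abs(W - 98.4) < 0.1      # horizontal VZ ≈ 98.4 mm
assert abs(angle - 10.2) < 0.1  # viewing angle ≈ 10.2°
```

The viewing-angle formula assumes the VZ subtends the angle symmetrically from the projecting lens plane, which matches the 10.2° quoted in the text.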

Figure 4 shows the captured images of an apple displayed by the prototype system when the CCD is located at different equally spaced horizontal positions. The apple leaves present obvious changes with the moving viewpoint. To show the changes more clearly, the leaves of the captured images at different positions are cut out and magnified, as shown in Fig. 5. A slight flickering due to the low display frequency of 14 Hz is observed, but comfortable 3D effects are demonstrated. Eight subjects observed the displayed 3D image in the lab environment, and none perceived discontinuous motion parallax or headache.

 

Fig. 4 Captured images of an apple with a horizontal interval of 30mm on the observation plane when the proposed display system works.


 

Fig. 5 The magnified leaves of the captured images in Fig. 4 for demonstrating changes more clearly.


4. Discussions on some limitations of the prototype system

The VP interval Δd is an important parameter, which is expected to be as small as possible in order to allow more VPs to be covered by the viewer's pupil for a better 3D effect. According to Eq. (2), more gating apertures per projecting unit help decrease the value of Δd. But the allowable number of gating apertures per projecting unit is restricted by the frame rate of the microdisplay used, because the display frequency of the displayed target object must be high enough to avoid obvious flickering. In addition, a smaller ΔD and a larger βdx also optimize Δd. According to their spatial relationship, the minimum limit of ΔD is dx. Under this condition, Δd is proportional to βdx/(β − 1); when β is not very small, β itself has no obvious influence on Δd. Therefore, the parameters of the microdisplay used, such as the frame rate and the size of the effective display area, play key roles in the 3D effect of the proposed system.

For an SMV system, a displayed light spot is formed by superimposing incoherent cone-shaped light beams coming from different 2D sub-views. In our horizontal-parallax-only system, the horizontal size of the displayed light spots on a plane parallel to the projection plane is determined by the spatial extent of the cone-shaped light beam on that plane. So, the beam divergence leads to varied horizontal sizes of the displayed spots on different parallel planes. The worst spots locate on the planes farthest away from the projection plane, i.e. the P1 and P2 planes in Fig. 6. Geometrically, the cone-shaped beam can be approximated by a straight-edge beam between the polygonal lines M1T1Q1S1 and M2T2Q2S2 in the horizontal plane [14]. Here, M1 and M2 are the horizontal marginal points of a gating aperture; Q1 and Q2 denote the marginal points of an identifiable spot on the projection plane; T1, T2 and S1, S2 are the intersection points of the planes P1 and P2, respectively, with the beam edges. Taking the triangle function tri(·) as the optical transfer function of the incoherent imaging, the horizontal size of the projected spots on the projection plane is εd = (Mλv/ΔD) mm. According to the geometrical relationship shown in Fig. 6, the worst horizontal size of the displayed spots on the P1 and P2 planes can be estimated as:

$$\begin{cases}\varepsilon_1=\dfrac{\Delta z\,(\Delta D/M+\varepsilon_d)+2\,\varepsilon_d\,v}{2v}\\[2mm]\varepsilon_2=\dfrac{\Delta z\,(\Delta D/M-\varepsilon_d)+2\,\varepsilon_d\,v}{2v}\end{cases}\tag{6}$$
where Δz/2 is the spacing between P1 (or P2) and the projection plane. Comparing these two values, ε1 > ε2. For the other parallel planes, the horizontal sizes of the displayed spot lie between εd and ε1.
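A numerical sketch of Eq. (6) with the prototype parameters follows. Note that the paper does not state the wavelength used; λ = 550 nm (a mid-visible value) is an assumption here, so the resulting blur estimate is indicative only:

```python
# Sketch of Eq. (6): worst-case spot sizes on the planes P1/P2.
# ASSUMPTION: λ = 550 nm (mid-visible); the paper does not give λ.
lam = 550e-6                     # wavelength in mm (assumed)
M, v, delta_D, dz = 6, 480.0, 8.6, 68.0   # prototype values, mm

eps_d = M * lam * v / delta_D    # spot size on the projection plane
eps_1 = (dz * (delta_D / M + eps_d) + 2 * eps_d * v) / (2 * v)
eps_2 = (dz * (delta_D / M - eps_d) + 2 * eps_d * v) / (2 * v)

assert eps_1 > eps_2             # the '+' branch is always the worse one
```

Under this assumed λ, ε1 − εd comes out near 0.1 mm, consistent with the calculated blur the text quotes for the farthest plane.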

 

Fig. 6 Geometrical diagram showing the displayed spot sizes of the 2D display planes in the 3D display space.


A line-drawing pyramid is displayed to demonstrate the changing horizontal sizes of the displayed spots along the depth direction. The line thickness of the pyramid is set as 1.0 mm. A CCD (SenSys 1602E) with an objective aperture of 5 mm is used to capture the presented image. Figure 7 shows a captured image and two locally enlarged views at different field depths, i.e. around the projection plane and around P1, respectively. The horizontal light intensity distribution curves around the two field depths are measured. A line segment of 1.0 mm just covered by the light intensity distribution curve of the image at the projection plane is obtained; its extension intersects the other distribution curve over a length of 1.16 mm. That is, a blur of about 0.16 mm occurs at the farthest plane P1. Putting the system parameters into Eq. (6), the calculated blur is about 0.1 mm. The larger experimental blur may come from three sources: 1) the diffraction effect; 2) inaccurate overlapping of the cone-shaped beams due to the discrete arrangement of the OLED pixels; 3) assembly errors of the optical structure. The 0.16 mm is the maximum difference in blur between planes when the depth of the displayed three-dimensional object is 68 mm. As the depth (i.e. the size of the displayed object) increases, the blur becomes worse. That is, the blur due to the overlapping of divergent incoherent light beams sets a limit on the maximum display depth of the prototype system.

 

Fig. 7 Captured image with two locally enlarged images at different depths.


Another parameter that needs discussion is L, the distance between the observation plane and the prototype system. With the system parameters used above, the prototype produces an L of 69.5 mm, which means the displayed image must be viewed at a distance of about 69.5 mm. For practical applications, it is hard to imagine users holding a display device so close to their eyes. According to Eq. (1), increasing ΔD brings a larger L. For example, if ΔD = 17 mm is taken in the prototype with the other parameters unchanged, L reaches 160 mm. In order to guarantee a small VP interval Δd for super multi-view display, Eq. (2) requires that M increase with ΔD. However, the frame rate of the microdisplay is limited, so ΔD cannot be too large. To further increase L to a normal value, e.g. about 300 mm, decreasing the relative horizontal size of each sub-view works. That is, with a projecting lens divided into M gating apertures horizontally, the display zone can be divided into a larger number M′ of segments to achieve a larger L. As shown in Fig. 8, L increases when M′/M = 5/4 is taken, compared with the situation M′/M = 4/4. In this case, Eq. (1) changes to Eq. (7):

$$\frac{L}{L+v}=\frac{\Delta D/M}{\beta d_x/M'}\quad\Rightarrow\quad L=\frac{(M'/M)\,\Delta D\,v}{\beta d_x-(M'/M)\,\Delta D}\tag{7}$$
Combining these two methods, a moderate L can be achieved by the thin-type prototype system. For example, taking f = 60 mm, β = 9, display size = 68 mm, ΔD = 17 mm, M = 16, and M′ = 24, L reaches 288 mm according to Eq. (7), close to 300 mm. The required frame rate of the microdisplay would be 16 × 14 = 224 Hz for a 14 Hz display frequency. With these parameters, the interval between adjacent VPs is still about 1.65 mm, and the thin structure remains 65 mm.
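The worked example for Eq. (7) can be verified with the parameter values given above:

```python
# Sketch of Eq. (7): the display zone is split into M' > M segments,
# which lengthens the observation distance L.

def observation_distance_eq7(delta_D, v, beta_dx, M, M_prime):
    # Eq. (7): L = (M'/M)·ΔD·v / (β·dx − (M'/M)·ΔD)
    r = M_prime / M
    return r * delta_D * v / (beta_dx - r * delta_D)

# Worked example from the text: β·dx = 68 mm, ΔD = 17 mm, v = 480 mm
beta_dx, delta_D, v, M, M_prime = 68.0, 17.0, 480.0, 16, 24

L = observation_distance_eq7(delta_D, v, beta_dx, M, M_prime)
assert abs(L - 288.0) < 0.5   # ≈ 288 mm, close to a normal 300 mm distance

# Sanity check: M' = M reduces Eq. (7) back to Eq. (1)
L1 = observation_distance_eq7(delta_D, v, beta_dx, M, M)
assert abs(L1 - delta_D * v / (beta_dx - delta_D)) < 1e-9
```

The second assertion confirms that Eq. (7) degenerates to Eq. (1) when the display zone and the lens are divided into the same number of segments.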

 

Fig. 8 The changed distance between the observation plane and the prototype system along with the varied value of M′/M.


With a display zone of 68 mm and an observing distance of L + v = (69.5 + 480) mm = 549.5 mm, the prototype produces an image that subtends a rather small field angle of 7 degrees. When the display zone and ΔD are divided into the same M segments, each VP sees images from M sub-views, which cover a distance of ΔD. If a bigger ΔD is used, according to Eq. (2), a larger M is needed to guarantee a small Δd for super multi-view display. So, the display zone increases along with ΔD, resulting in a greater observing distance and no obvious change of the field angle. That is, a larger ΔD, accompanied by a higher frame rate, will not improve the field angle provided by the prototype system. As discussed above, compared with the number M of gating apertures for one projecting unit, the display zone can be divided into more segments M′. In this case, each VP sees images from M′ sub-views, which cover a distance of (M′/M)ΔD. Figure 8 shows an example of M′/M = 5/4. Under this condition, the angle subtended by the displayed image depends on the display size βdx and the observing distance v + L. In the prototype system, the maximum display size is βΔD, because the display area dx of the microdisplay cannot be larger than ΔD. Employing the same system parameters (f = 60 mm, β = 9, ΔD = 17 mm) as those used for L = 288 mm, the maximum display size reaches ΔD × β = 153 mm when dx equals ΔD. Then, the resulting field angle is about 11 degrees at the observing distance of (480 + 288) mm = 768 mm. Taking a popular mobile phone, the iPhone 6s, as an example, which is equipped with a horizontal display screen of about 69 mm, its field angle is about 2 arctan(34.5/300) ≈ 13 degrees at the normal viewing distance of 300 mm, only a little larger than 11 degrees. So, the improved display system with a field angle of 11 degrees is acceptable for portable devices.
With these parameters, the vertical size of the projecting lens needs to be 57 mm in order to guarantee that the whole image on the projection plane is perceived. That is, projecting lenses with a higher N.A. are needed, and the available space for the horizontal offset δn of the boundary projecting units becomes more limited. A further remedy is to align the projecting units on a circular arc, which is our future research work. In summary, the field angle subtended by the prototype system can be improved somewhat if more suitable microdisplays are available. Although a very large value may not be reached, the improved field angle is acceptable for a portable device.
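The field-angle comparisons above can be checked numerically (all values are those stated in the text):

```python
import math

# Field angle subtended by a display of a given size at a given distance
def field_angle_deg(display_size, distance):
    return 2 * math.degrees(math.atan(display_size / 2 / distance))

# Prototype: 68 mm display zone at L + v = 549.5 mm → about 7°
assert abs(field_angle_deg(68, 549.5) - 7) < 0.2

# Improved system: 153 mm display at (480 + 288) mm = 768 mm → about 11°
assert abs(field_angle_deg(153, 768) - 11) < 0.5

# iPhone 6s reference: 69 mm screen at a 300 mm viewing distance → about 13°
assert abs(field_angle_deg(69, 300) - 13) < 0.2
```

The three values reproduce the 7°, 11°, and 13° figures quoted in the discussion.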

Actually, the OLED itself has a very fast response time, at the microsecond level, so the OLED microdisplay has the potential to become an ultra-high-speed display technique. Once OLED microdisplays with ultra-high frame rates are available, the proposed portable three-dimensional display system is expected to provide a practical field angle and an appropriate observing distance for the viewer.

5. Conclusions

In conclusion, a novel SMV display system based on spatio-temporal multiplexing is realized using a moderate number of OLED microdisplays. Most notably, the proposed system needs only a thin optical structure, offering great potential for portable devices, provided that the related driving and control systems can be integrated into a monolithic chip.

Limited by the numerical apertures of the lenses used and the properties of the OLED microdisplays used, only 60 1.64 mm-spaced VPs are presented in our prototype system, and slight flickering is observed. If Fresnel lenses with a larger aperture and OLED microdisplays with a higher frame rate and a smaller pixel interval become available, more comfortable 3D effects can be obtained with the proposed technology, and the system will become even thinner. This is the focus of our future work.

Acknowledgments

The authors gratefully acknowledge support from the National Science Foundation (U1201254, 11204834); the 863 Program (2015AA03A101); the Guangzhou Technical Plan (201510010280); the Science and Technology Major Project of Guangdong Province (2014B010119003, 2014B010122005); and the Natural Science Foundation of Guangdong (2015A030310142).

References and links

1. A. Markman, J. Wang, and B. Javidi, “Three-dimensional integral imaging displays using a quick-response encoded elemental image array,” Optica 1(5), 332–335 (2014). [CrossRef]  

2. D. Kade, K. Akşit, K. Ürey, and O. Özcan, “Head-mounted mixed reality projection display for games production and entertainment,” Pers. Ubiquitous Comput. 19(3-4), 509–521 (2015). [CrossRef]  

3. Y. Kim, J. Kim, Y. Kim, H. Choi, J. H. Jung, and B. Lee, “Thin-type integral imaging method with an organic light emitting diode panel,” Appl. Opt. 47(27), 4927–4934 (2008). [CrossRef]   [PubMed]  

4. C. C. Ji, C. G. Luo, H. Deng, D. H. Li, and Q. H. Wang, “Tilted elemental image array generation method for moiré-reduced computer generated integral imaging display,” Opt. Express 21(17), 19816–19824 (2013). [CrossRef]   [PubMed]  

5. Y. S. Hwang, F. K. Bruder, T. Fäcke, S. C. Kim, G. Walze, R. Hagen, and E. S. Kim, “Time-sequential autostereoscopic 3-D display with a novel directional backlight system based on volume-holographic optical elements,” Opt. Express 22(8), 9820–9838 (2014). [CrossRef]   [PubMed]  

6. G. Wang, Y. Huang, T. Chang, and T. Chen, “Bare finger 3D air-touch system using an embedded optical sensor array for mobile displays,” J. Display Technol. 10(1), 13–18 (2014). [CrossRef]  

7. C. G. Luo, X. Xiao, M. Martínez-Corral, C. W. Chen, B. Javidi, and Q. H. Wang, “Analysis of the depth of field of integral imaging displays based on wave optics,” Opt. Express 21(25), 31263–31273 (2013). [CrossRef]   [PubMed]  

8. W. Mphepö, Y. Huang, and H. D. Shieh, “Enhancing the brightness of parallax barrier based 3D flat panel mobile displays without compromising power consumption,” J. Display Technol. 6(2), 60–64 (2010). [CrossRef]  

9. D. Teng, Y. Xiong, L. Liu, and B. Wang, “Multiview three-dimensional display with continuous motion parallax through planar aligned OLED microdisplays,” Opt. Express 23(5), 6007–6019 (2015). [CrossRef]   [PubMed]  

10. Z. L. Xiong, Q. H. Wang, S. L. Li, H. Deng, and C. C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014). [CrossRef]   [PubMed]  

11. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]   [PubMed]  

12. Y. Takaki, “Thin-type natural three-dimensional display with 72 directional images,” Proc. SPIE 5664, 56–63 (2005). [CrossRef]  

13. Y. Takaki and H. Nakanuma, “Improvement of multiple imaging system used for natural 3D display which generates high-density directional images,” Proc. SPIE 5243, 42–49 (2003). [CrossRef]  

14. D. Teng, Z. Pang, Y. Zhang, D. Wu, J. Wang, L. Liu, and B. Wang, “Improved spatiotemporal-multiplexing super-multiview display based on planar aligned OLED microdisplays,” Opt. Express 23(17), 21549–21564 (2015). [CrossRef]   [PubMed]  

References

  • View by:
  • |
  • |
  • |

  1. A. Markman, J. Wang, and B. Javidi, “Three-dimensional integral imaging displays using a quick-response encoded elemental image arrray,” Optica 1(5), 332–335 (2014).
    [Crossref]
  2. D. Kade, K. Akşit, K. Ürey, and O. Özcan, “Head-mounted mixed reality projection display for games production and entertainment,” Pers. Ubiquitous Comput. 19(3-4), 509–521 (2015).
    [Crossref]
  3. Y. Kim, J. Kim, Y. Kim, H. Choi, J. H. Jung, and B. Lee, “Thin-type integral imaging method with an organic light emitting diode panel,” Appl. Opt. 47(27), 4927–4934 (2008).
    [Crossref] [PubMed]
  4. C. C. Ji, C. G. Luo, H. Deng, D. H. Li, and Q. H. Wang, “Tilted elemental image array generation method for moiré-reduced computer generated integral imaging display,” Opt. Express 21(17), 19816–19824 (2013).
    [Crossref] [PubMed]
  5. Y. S. Hwang, F. K. Bruder, T. Fäcke, S. C. Kim, G. Walze, R. Hagen, and E. S. Kim, “Time-sequential autostereoscopic 3-D display with a novel directional backlight system based on volume-holographic optical elements,” Opt. Express 22(8), 9820–9838 (2014).
    [Crossref] [PubMed]
  6. G. Wang, Y. Huang, T. Chang, and T. Chen, “Bare fringer 3D air-touch system using an embedded optical sensor array for mobile displays,” J. Display Technol. 10(1), 13–18 (2014).
    [Crossref]
  7. C. G. Luo, X. Xiao, M. Martínez-Corral, C. W. Chen, B. Javidi, and Q. H. Wang, “Analysis of the depth of field of integral imaging displays based on wave optics,” Opt. Express 21(25), 31263–31273 (2013).
    [Crossref] [PubMed]
  8. W. Mphepö, Y. Huang, and H. D. Shieh, “Enhancing the brightness of parallax barrier based 3D flat panel mobile displays without compromising power consumption,” J. Display. Technol. 6(2), 60–64 (2010).
    [Crossref]
  9. D. Teng, Y. Xiong, L. Liu, and B. Wang, “Multiview three-dimensional display with continuous motion parallax through planar aligned OLED microdisplays,” Opt. Express 23(5), 6007–6019 (2015).
    [Crossref] [PubMed]
  10. Z. L. Xiong, Q. H. Wang, S. L. Li, H. Deng, and C. C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014).
    [Crossref] [PubMed]
  11. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).
    [Crossref] [PubMed]
  12. Y. Takaki, “Thin-type natural three-dimensional display with 72 directional images,” Proc. SPIE 5664, 56–63 (2005).
    [Crossref]
  13. Y. Takaki and H. Nakanuma, “Improvement of multiple imaging system used for natural 3D display which generates high-density directional images,” Proc. SPIE 5243, 42–49 (2003).
    [Crossref]
  14. D. Teng, Z. Pang, Y. Zhang, D. Wu, J. Wang, L. Liu, and B. Wang, “Improved spatiotemporal-multiplexing super-multiview display based on planar aligned OLED microdisplays,” Opt. Express 23(17), 21549–21564 (2015).
    [Crossref] [PubMed]




Figures (8)

Fig. 1 An elementary projecting unit of the proposed SMV system.
Fig. 2 Optical arrangements of the proposed SMV system. Two elementary projecting units are drawn here to demonstrate the proposed ideas.
Fig. 3 Photograph of the experimental display system.
Fig. 4 Captured images of an apple with a horizontal interval of 30 mm on the observation plane when the proposed display system works.
Fig. 5 Magnified leaves of the captured images in Fig. 4, shown to demonstrate the changes more clearly.
Fig. 6 Geometrical diagram showing the displayed spot sizes of the 2D display planes in the 3D display space.
Fig. 7 Captured image with two locally enlarged images at different depths.
Fig. 8 The distance between the observation plane and the prototype system as a function of the ratio M′/M.

Equations (7)


(1) \frac{L}{L+v} = \frac{\Delta D / M}{\beta d_x / M} \;\Longrightarrow\; L = \frac{\Delta D\, v}{\beta d_x - \Delta D}

(2) \Delta d = \frac{\beta d_x}{M} \cdot \frac{L}{v} = \frac{\beta d_x\, \Delta D}{M (\beta d_x - \Delta D)}

(3) W = (N-1)\, M\, \Delta d = \frac{(N-1)\, \beta d_x\, \Delta D}{\beta d_x - \Delta D}

(4) \beta \delta_n = \delta_n + n \Delta D \;\Longrightarrow\; \delta_n = \frac{n \Delta D}{\beta - 1}

(5) A_x = 2 \left( \delta_n + \Delta D / 2 \right)

(6) \begin{cases} \varepsilon_1 = \dfrac{\Delta z \left( \Delta D / M + \varepsilon_d \right) + 2 \varepsilon_d v}{2v} \\[4pt] \varepsilon_2 = \dfrac{\Delta z \left( \Delta D / M - \varepsilon_d \right) + 2 \varepsilon_d v}{2v} \end{cases}

(7) \frac{L}{L+v} = \frac{\Delta D / M}{\beta d_x / M'} \;\Longrightarrow\; L = \frac{(M'/M)\, \Delta D\, v}{\beta d_x - (M'/M)\, \Delta D}
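As a quick numerical sanity check, the first three relations above (for the distance L, the viewpoint interval Δd, and the viewing width W) can be evaluated and cross-checked in a few lines. The parameter values below are illustrative placeholders only, not the prototype's actual parameters.

```python
# Sanity check of the first three viewpoint-geometry relations above.
# All numeric values are assumed for illustration only.
beta = 10.0     # magnification beta (assumed value)
d_x = 5.0       # d_x from the relations, in mm (assumed value)
M = 4           # M from the relations (assumed value)
delta_D = 2.0   # Delta D, in mm (assumed value)
v = 50.0        # v, in mm (assumed value)
N = 8           # N, number of viewpoints (assumed value)

# First relation: L = Delta_D * v / (beta * d_x - Delta_D)
L = delta_D * v / (beta * d_x - delta_D)

# Second relation: Delta_d = beta * d_x * Delta_D / (M * (beta * d_x - Delta_D))
delta_d = beta * d_x * delta_D / (M * (beta * d_x - delta_D))

# Third relation: W = (N - 1) * M * Delta_d
W = (N - 1) * M * delta_d

# Cross-checks: the closed forms must agree with the defining ratios.
# L must satisfy L / (L + v) == Delta_D / (beta * d_x).
assert abs(L / (L + v) - delta_D / (beta * d_x)) < 1e-12
# Delta_d must reproduce (beta * d_x / M) * (L / v).
assert abs(delta_d - (beta * d_x / M) * (L / v)) < 1e-12
```

With these placeholder numbers the assertions confirm that the closed-form expressions are algebraically consistent with the defining ratios; substituting the prototype's real parameters would yield its actual viewpoint interval and viewing width.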
