The interocular affine similarity of three-dimensional scenes is investigated, and a novel accelerated reconfiguration algorithm for intermediate-view polygon computer-generated holograms based on this similarity is proposed. Numerical simulations of full-color polygon computer-generated holograms demonstrate that the proposed intermediate-view reconfiguration algorithm is particularly useful for computing wide-viewing-angle polygon computer-generated holograms.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Three-dimensional (3D) imaging and display technologies have been in active development over the past two decades. The basic principle of 3D display technologies is the utilization of binocular 3D cues for the human visual perception system, with interocular disparity being the most effective of these cues. In the classical sense, interocular disparity presumes that the parallax views of a 3D scene are treated as completely independent.
In general, holographic 3D displays are considered the ultimate form of 3D display because they deliver the most natural 3D images with accommodation-vergence match. This match is ascribed to the interocular disparity encoded in the computer-generated hologram (CGH) pattern. CGHs for holographic 3D displays contain all of the information on the continuous parallax views of a 3D scene, recorded as a single two-dimensional continuous complex fringe pattern, and thus produce the motion parallax effect as well as accommodation-vergence match.
Representation theory and rapid calculation algorithms have been two of the main CGH research issues. Various CGH representation theories have been developed, such as point-cloud [3–8], ray-sampling [9–11], depth-map [12–14], and polygon [15–21] based CGH models. The polygon CGH is well known for its computational efficiency, rigorous modeling, and flexibility. Polygon CGHs can be efficiently calculated using the fast Fourier transform (FFT), and the analytic theory of polygon CGHs has also continued to develop [22,23]. The development of fast algorithms has focused on parallel implementation on parallel computing hardware and related algorithm development [23–28]. The reduction of the complexity of CGH algorithms through mathematical analysis [29–31] from an information-theoretic perspective is fundamentally important, but such work remains relatively rare in comparison with research on parallel computing.
From an information theory perspective on CGH, we need to introduce a concept that contrasts with interocular disparity: interocular similarity, whereby the different directional views of a 3D scene share a strong resemblance. Interocular similarity is worth analyzing in depth because it offers new insight into the information content of a CGH, and exploiting it enables the acceleration of CGH computation. If the interocular similarity of a 3D scene with a finite viewing angle can be exploited to synthesize intermediate-view CGHs with a reduced total calculation amount, we obtain a mathematical complexity reduction for accelerating CGH calculation. In this context, interocular similarity leads to the expectation that continuous parallax views share informational similarity, so the actual informational capacity of a CGH can be smaller than that implied by the conventional space-bandwidth product [32, 33]. With this in mind, the space-bandwidth product can be understood as an upper bound on the amount of information containable in a finite-viewing-angle 3D image, because the conventional space-bandwidth product assumes that there is no relationship between adjacent views.
This fundamental information theoretic perspective on CGH is the motivation of this paper with the primary questions being how we can efficiently use the interocular similarity of 3D objects to develop an accelerated algorithm for CGH synthesis and how interocular similarity can be represented efficiently. This paper presents a theoretical analysis of the interocular similarity among adjacent holographic images with angular separation. The interocular similarity between adjacent views can be represented by the affine transform of corresponding points and this property is extensively investigated and extended to efficiently synthesize wide-view polygon CGHs. An application of the proposed method to 360-degree multi-view CGH content generation [34–38] is presented.
This paper is structured as follows. In Section 2, a geometric model of 3D scene perception is described. In Section 3, the affine transform analysis of the interocular similarity of a 3D scene is presented. In Section 4, an accelerated CGH algorithm based on the interocular similarity is proposed based on the wave optic interpretation with affine transformation for CGH calculation. Numerical experiments and the subsequent evaluation of the proposed accelerated CGH algorithm are presented with an example of 360-degree multi-view CGH content generation. Finally, concluding remarks are provided in Section 5.
2. Geometric model of three-dimensional scene perception
In this section, we present a geometric model of 3D scene perception and analyze the interocular similarity of a 3D scene. The focus of the analysis is the non-linear relationship between two different parallax views in retina space derived from the 3D scene perception model. This non-linear relationship can be linearly approximated by an affine transformation, even for a fairly large angular separation between the two views, a process referred to as the interocular affine similarity transform. The tolerance range of the interocular affine similarity is numerically analyzed using this transform. The interocular affine similarity established here is then applied to the accelerated CGH synthesis algorithm in Section 4.
A basic property of the visual perception system is that the monocular imaging system of the eye allows the viewer to see 3D objects by automatically adjusting its accommodation to a convergence point. In Fig. 1, two monocular imaging systems that share a convergence point are illustrated, together with the global reference coordinate system and the local coordinate systems of the left and right eyes. When both eyes gaze at the convergence point, the foci of the two eyes are automatically adjusted to it. The perceived image in the eye varies with changes in eye position. Here, we develop a geometric model of this monocular imaging for an arbitrary eye location and rotation.
Let us set the convergence point and the projection center of the eye in the global coordinate system, where the projection center is the center of the eye lens. Under normal conditions, one unit vector of the eye's local frame lies on the viewing plane, which is the plane spanned by the u and w vectors, while another is normal to the viewing plane. The optic axis vector of the eye in the global coordinate system points from the projection center toward the convergence point. The coordinates of the projection center are then solved by Eq. (2). In addition, in order to describe the wave-optic imaging and CGH synthesis theory consistently within the same framework, we need to define an adaptive global coordinate system for the eye, as seen in Fig. 1(b). In the adaptive global coordinate system of the eye, the 3D scene is rotated so that the optic axis of the eye is aligned with the global z-axis; the adaptive z-axis is matched to the optic axis, and the corresponding transverse plane remains parallel to its original counterpart. The adaptive global coordinate systems of the two eyes are thus obtained, as illustrated in Fig. 2.
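This alignment step can be sketched numerically. The following snippet is an illustrative construction, not the paper's own equations: it builds a rotation that maps an eye's optic axis onto the global z-axis, with the choice of `up` vector being an assumption.

```python
import numpy as np

def adaptive_rotation(optic_axis, up=np.array([0.0, 1.0, 0.0])):
    """Build a rotation matrix that maps the eye's optic axis onto the
    global +z axis, so the 3D scene can be expressed in the adaptive
    global coordinate system of that eye."""
    w = optic_axis / np.linalg.norm(optic_axis)   # new z-axis (optic axis)
    u = np.cross(up, w)                           # new x-axis
    u = u / np.linalg.norm(u)
    v = np.cross(w, u)                            # new y-axis
    # Rows are the adaptive basis vectors, so R @ optic_axis points along +z
    return np.stack([u, v, w])

# Example: an eye looking along a direction tilted 45 degrees from +z
axis = np.array([1.0, 0.0, 1.0])
R = adaptive_rotation(axis)
aligned = R @ (axis / np.linalg.norm(axis))   # maps the optic axis to +z
```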
For simplicity, from this point forward, a simplified notation will be used to represent the adaptive global coordinate system. Consider the imaging of a 3D object through a single eye, illustrated in Fig. 2, where a triangular facet in object space is imaged in the retina space of the viewer's eye. The eye focus is set to the center of mass of the triangular facet. A triangular facet in object space is delivered to retina space through the geometric imaging transformation, and the focal length of the eye lens is set by the corresponding focusing condition.
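As a rough numerical sketch of this focusing condition, the standard Gaussian (thin-lens) equation relates the accommodation distance to the focal length; the lens-to-retina distance used below is an illustrative assumption, not a value taken from the paper.

```python
def eye_focal_length(d_obj, d_retina=0.017):
    """Thin-lens focal length from the Gaussian lens equation,
    1/f = 1/d_obj + 1/d_retina, for an eye whose lens-to-retina
    distance is fixed while it accommodates to an object at distance
    d_obj.  All distances in metres; d_retina is an assumed value."""
    return 1.0 / (1.0 / d_obj + 1.0 / d_retina)

# Accommodating to the facet's centre of mass at 50 cm shortens the
# focal length below the relaxed (far-focus) value of d_retina
f = eye_focal_length(0.5)
```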
The geometric imaging transformation transports the triangular facet with its three apex points in object space to the corresponding triangular facet with three apex points in retina space. It is assumed here that all the points on the triangular facet in object space are mapped onto the flat triangular facet in retina space. Here, a geometric imaging transform between the textures on the two facets is developed. The two triangles specify two planes in object and retina space, respectively. Substituting Eq. (14) into Eq. (12) and Eq. (16) into Eq. (13), as illustrated in Fig. 2(b), the local coordinates of a point in object space are solved for the global coordinates by Eq. (15), and those in retina space by Eq. (17). Combining Eqs. (20), (21), (22), and (23), we have the set of mapping functions relating the local coordinates of object space to those of retina space, and, inversely, those of retina space to object space. As a result, a non-linear mapping is established between the local coordinate systems of object space and retina space.
Figure 3(a) depicts the simulation setup used to verify the visual perception of a 3D object consisting of a single rectangular background plane and a triangular facet positioned slightly apart from it. The left and right eyes observe this scene simultaneously.
By using the nonlinear mapping functions, we can draw the non-linear grid on the retina plane that is mapped from the uniform grid of the object surface. Figure 3 presents the mappings of a uniform grid drawn on the rectangular facet into the retina planes of the two separated eyes, along with the observed images with different parallax for the two distinct eye positions. This perception process can be visualized as the mapping of a uniform grid in object local space to a non-uniform grid in retina local space, in which the uniform grid image is stretched asymmetrically on the retina imaging plane and its shape changes with eye position. The simulation in Fig. 3 illustrates two processes: (1) how the triangle looks on the image plane of each camera and (2) how a uniform grid on the local coordinates of the triangular facet floating in global object space is mapped to the local coordinates of the imaged triangular facets for both eyes. The first row of the chart in Fig. 3(b) shows that the perspectives of the two eyes differ for the same scene. The second row of the chart shows that the uniform grid on the local coordinates of object space is non-linearly mapped to that of each retina space. The two non-linear grids also exhibit different patterns because the locations and view directions of the two eyes differ.
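A minimal sketch of this grid mapping, under the simplifying assumption that the eye acts as an ideal pinhole perspective projector (the paper's full model contains additional terms):

```python
import numpy as np

def project_to_retina(points, eye_pos, R, f):
    """Pinhole-camera sketch of the geometric imaging transform: world
    points are expressed in the eye's adaptive coordinates (rotation R,
    projection centre eye_pos) and perspectively divided onto an image
    plane at distance f.  A simplification of the paper's model."""
    local = (np.asarray(points) - eye_pos) @ R.T   # world -> eye coordinates
    return f * local[:, :2] / local[:, 2:3]        # perspective division

# A uniform 3x3 grid on a facet tilted in depth (z depends on x)
g = np.stack(np.meshgrid(np.linspace(-1, 1, 3),
                         np.linspace(-1, 1, 3)), -1).reshape(-1, 2)
pts = np.column_stack([g, 1.0 + 0.3 * g[:, 0]])
img = project_to_retina(pts, np.array([0.0, 0.0, -1.0]), np.eye(3), 0.02)
# Because the facet is tilted relative to the eye, the projected grid
# spacing is non-uniform, as in the second row of Fig. 3(b)
```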
It is important to consider the coordinate transform of a point in the local coordinate system of a facet to the local coordinate system of the adaptive global coordinate system. This relationship is described by Eq. (24), with the definitions of the associated terms given in the Appendix. From Eq. (24), the redefined grid on the adaptive local coordinates is solved for the uniform grid of the original local coordinates.
3. Interocular affine similarity of a three-dimensional scene
If a triangular facet has a plain texture, the observer will notice variations in its shape and shading in response to spatial changes in the observer's location and view direction. For a textured triangular facet, however, the observer perceives not only changes in shape and shading but also deformation of the texture pattern. As depicted in Fig. 4, an observer located at position A, close to the normal vector of the triangular facet, sees a mostly undistorted texture pattern, whereas another observer located at position C perceives a highly distorted texture pattern because location C is far from the normal axis of the triangular facet.
How an arbitrary texture pattern on a triangular facet floating in object space is distorted in the imaged triangular facet of retina space is fully explained by the geometric mapping model developed in Section 2. According to the visual perception model, the shape and texture pattern of the 3D object vary with changes in position; however, the observer can be thought to perceive similar scenes when the observation location or view direction does not change dramatically, meaning that interocular similarity exists between weakly separated observation points.
In the context of a holographic 3D display, the observed images at both observation points share the same holographic information through this similarity. From this point of view, we suppose that the holographic image observed at the original point has an approximately linear conversion relationship with the image observed at a neighboring point. Using this supposed linear relationship between the two images, it is possible to approximate the holographic information at the neighboring point by reconfiguring the information of the original point. We employ the affine transformation to represent this approximate linear relationship among adjacent observation points; the strategy is illustrated in Fig. 4(b). From a practical point of view, it is expected to reduce the computational complexity of the CGH algorithm so that the computation speed of polygon CGHs can be dramatically increased.
Here, let us develop the mathematical formulation of this strategy based on the geometric mapping transformations of the previous section. Firstly, we set up the referential retina space and its adjacent retina space, as shown in Fig. 4(b). Although the two triangles on the local coordinates of the referential and adjacent retina spaces appear to have different shapes, their relationship is described by the affine similarity transformation. After determining the three apexes of the triangular facet in the referential retina space and the corresponding apexes in the adjacent retina space for a target triangular facet in object space, their relationship is written as Eqs. (25)-(27), which are combined into a single matrix equation. As shown in Fig. 3(b), the mapped grids on the local coordinates of the left and right eyes have different aspects because their positions and view directions are clearly dissimilar. However, if the two eyes are located near each other or their view directions are not significantly different, we are able to define the conversion relationship between their local coordinates using Eq. (29). It should be noted that this assumption introduces some error, which is estimated below.
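Determining the affine transform from three apex correspondences amounts to solving a small linear system: three non-collinear point pairs fix the six unknowns exactly. A sketch with hypothetical apex coordinates (the variable names and values below are illustrative, not from the paper):

```python
import numpy as np

def fit_affine(src, dst):
    """Solve the 2-D affine transform (A, b) mapping the three apexes of
    a triangle in the referential retina space (src) onto the
    corresponding apexes in an adjacent retina space (dst), so that
    dst_i = A @ src_i + b for each apex i."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    M = np.hstack([src, np.ones((3, 1))])   # 3x3 system, one per output axis
    params = np.linalg.solve(M, dst)        # rows: A columns, then b
    A, b = params[:2].T, params[2]
    return A, b

src = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # referential apexes (hypothetical)
dst = [[1.0, 1.0], [2.0, 1.2], [0.9, 2.0]]   # adjacent-view apexes (hypothetical)
A, b = fit_affine(src, dst)
# A and b now reproduce every apex: A @ src_i + b == dst_i
```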
The validity of the affine interocular similarity is analyzed with a numerical simulation, in which it is assumed that the observer watches the center of a triangular facet while moving within a designated observation section, as shown in Fig. 5(a). Four observation sections are set up, designated in terms of longitudinal angle (0, 15, 30, and 45 degrees) and azimuthal angle (−15 to 15 degrees). It is also assumed that the reference point of each observation section is located in the middle of that section.
Under these circumstances, let us use an example to clarify the purpose of this simulation. When the observer is located at a given longitudinal and azimuthal angle, we can determine how a uniform grid on the local coordinates of the triangular facet in object space is mapped to the non-uniform grid of retina space. There are two ways to represent the non-uniform grid in retina space. The first is the exact method using the nonlinear mapping functions. The other is the approximate method using the affine transformation of Eq. (29).
The accuracy and tolerance of the approximate method is examined by comparing it with the exact method. Figure 5(b) presents two overlapping grids calculated by the exact and approximate methods, colored red and blue, respectively. It can be observed that the overall shapes of the two grids are similar. However, there is a small difference between them around the outer edge, indicated by the shaded area A in Fig. 5(b). The effective portion in the total grid is eventually restricted to a finite interior area of the triangle in the local coordinates of retina space. In the shaded area B of Fig. 5(b), the two grids closely match around the center where the triangle is located.
The validity of the approximate method is evaluated using the simulation analysis shown in Fig. 6. After deriving the two grids using the exact and approximate methods for a particular viewpoint, we calculated the root mean square error (RMSE). RMSE graphs were then constructed for the following two cases: (1) the RMSE over all parts of the two grids and (2) the RMSE over the interior region of the triangle, as shown in Figs. 6(a) and 6(b), respectively. The RMSE of case (1) is expected to be larger than that of case (2). In the RMSE calculation, all values of cases (1) and (2) are normalized by the maximum value of case (1).
The RMSE tends to increase exponentially as an observation point moves further from the reference point in the azimuthal direction. With the longitudinal angle fixed, the RMSE also increases proportionally with the azimuthal separation. Thus, we need to consider the applicable scope around a reference point and its adjacent points before applying the proposed method to calculate multi-view CGHs. However, the RMSE for case (2) is definitely smaller than that for case (1). This means that the approximation method is sufficiently reliable if the triangle is small enough to be covered by the affine transform within a reasonable tolerance. In practice, the unit triangular facets that make up a 3D object are small enough to represent it accurately with triangle meshes. If a unit triangle is too large for the proposed method, the triangular facet should be divided into smaller triangles.
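The two RMSE cases can be sketched as a masked error computation over the grid coordinates; the grids and interior mask below are toy values for illustration only:

```python
import numpy as np

def masked_rmse(exact, approx, mask=None):
    """RMSE between the exact nonlinear grid and the affine-reconfigured
    grid, optionally restricted to an interior region (case 2 in the
    text).  exact/approx: (N, 2) arrays of retina-space grid points."""
    d = np.asarray(exact, float) - np.asarray(approx, float)
    if mask is not None:
        d = d[np.asarray(mask)]
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

exact  = np.array([[0, 0], [1, 0], [2, 0.3], [3, 0.8]])  # toy grid points
approx = np.array([[0, 0], [1, 0], [2, 0.1], [3, 0.2]])
full     = masked_rmse(exact, approx)                      # case (1): all points
interior = masked_rmse(exact, approx, [True, True, False, False])  # case (2)
# The error concentrates near the outer edge of the grid, so the
# interior RMSE is smaller than the full-grid RMSE
```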
4. Affine-similarity transformation of holographic three-dimensional image light fields
In the polygon CGH synthesis theory [15, 16] that we developed in previous papers, CGH patterns are obtained by propagating the observable holographic image in the retina plane to the CGH plane through the inverse cascaded generalized Fresnel transform. Therefore, when we calculate CGH patterns, the majority of the computation time is spent obtaining the holographic image in the retina plane. A complex process is required to calculate the observed image in the retina plane because the light field distributions emitted by all of the unit triangles that make up the 3D object have to be synthesized in the retina plane. In particular, for multi-view CGH calculations, the computational complexity can be exceptionally high because it is proportional to the number of views to be recorded in the CGH pattern. However, the similarity captured by the affine transformation can be exploited to significantly improve multi-view CGH calculation.
In this section, an affine-reconfigured polygon CGH is formulated, and the validity of the affine approximation and its effect on efficiency are tested against an exact re-computed CGH model. The approximate light field distribution of the adjacent retina space is derived by referring to that of the referential retina space. First, the angular spectrum representations of the triangular facets are given in retina space. Substituting Eq. (32) into Eq. (31) leads to Eq. (36), which certifies that the angular spectrum of the adjacent local field is calculated from the geometric transformation of the referential field. The angular spectrum representation of the adjacent local field is given by Eq. (37), with the illuminating plane wave defined in Eq. (38). By multiplying this illuminating plane wave with Eq. (37), the light field distribution on the unit triangular facet in the adjacent local coordinates is obtained as in Eq. (36); the corresponding term is the angular spectrum of the adjacent local field, and, in the same way, we use @G to designate the terms of the global coordinate system. The light field distribution for the entire space of the adjacent local coordinates is obtained by Eq. (40). From Eq. (38), the components of the Fourier spatial frequency vector in the adjacent local coordinates also have a conversion relationship with the Fourier spatial frequency vector of the adjacent global coordinate system. Substituting Eqs. (41), (42), and (44) into Eq. (40), we can derive the diffraction field in the adjacent global coordinate system, Eq. (45). For Eq. (45), a propagation condition has to be satisfied; angular spectrum values at any frequency that do not satisfy this condition must be zero. Accordingly, a unit step function is contained in Eq. (45). The angular spectrum of the adjacent global field is then represented with the help of Eq. (36) and manipulated through Eqs. (47), (48), and (49); substituting these into Eq. (46), the angular spectrum of the adjacent global field is solved. Finally, the inverse cascaded generalized Fresnel transforms [13, 15] convert the light field distribution in the retina plane to the CGH pattern.
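The primitive data at the heart of this reconfiguration is an angular spectrum obtained by FFT. A generic sketch of that decomposition follows, including the propagating-wave cutoff that motivates the unit step function; the sampling pitch and wavelength values are illustrative assumptions, and the code is a simplification rather than the paper's full derivation.

```python
import numpy as np

def angular_spectrum(field, pitch, wavelength):
    """Angular-spectrum decomposition of a sampled complex field: an FFT
    plus its spatial-frequency grid.  This pair plays the role of the
    'primitive data' that the reconfiguration step reuses for adjacent
    views.  Frequencies with fx^2 + fy^2 > 1/wavelength^2 correspond to
    evanescent waves and are zeroed, which is the role of the unit step
    function in the text."""
    n = field.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))   # cycles per metre
    FX, FY = np.meshgrid(fx, fx)
    propagating = (FX**2 + FY**2) <= (1.0 / wavelength) ** 2
    return spectrum * propagating, fx

# A uniform plane wave: all energy lands in the DC component
field = np.ones((64, 64), dtype=complex)
S, fx = angular_spectrum(field, pitch=8e-6, wavelength=633e-9)
```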
The intermediate view CGH is not generated by re-computing the entire process, but by reconfiguring the primitive data of the reference observation point. This process is expected to significantly reduce the computational complexity of wide-viewing angle polygon CGHs.
To assess the efficiency of the proposed method, we compared the computation times for a full-color CGH using the exact and approximate methods. In the calculation of the full-color CGH, the red (633 nm), green (532 nm), and blue (473 nm) components were independently calculated without color dispersion. As in Fig. 4, we assumed that a textured triangular facet was floating in object space and that an observer looks at it from a specific location. The computation was performed in MATLAB on a workstation with a 2.27 GHz Intel Xeon E5520 CPU and 48 GB of memory, with the single-view CGH computed at a fixed size. Figure 7 displays the simulation results. Using both methods under the same computational conditions, we simulated the observer looking at specific objects while moving around them. As shown in Fig. 9, the textured cube floats 5 mm above the checkerboard, and the observer views this scene along a diagonal direction toward the floating object.
We assume the observer’s rotational range is 0 to 360 degrees in the azimuthal direction with an interval of 1 degree. Thus, 360 light field distributions must be calculated, one for each viewpoint. To accomplish this simulation, 360 full re-computations are required with the exact method. As indicated in Fig. 8(a), the exact method has two steps: (1) obtaining a properly distorted texture pattern on the local coordinates in the observer’s retina space and (2) numerically calculating the angular spectrum using a fast Fourier transform (FFT) algorithm and interpolation. The entire process takes 11.8513 seconds per view. On the other hand, the approximate method has three steps: (1) obtaining the properly distorted texture pattern on the local coordinates in the referential retina space, (2) calculating its angular spectrum with the FFT (this result is regarded as the primitive data), and (3) obtaining the angular spectrum of an adjacent view by reconfiguring the primitive data, as indicated in Fig. 8(b).
Although the approximate method has one more step than the exact method, the computation time of its entire process is much shorter, at 6.397 seconds. The efficiency of the new method is even more dramatic for multi-view CGH calculations, because only step (3) of the approximate method needs to be executed to calculate the angular spectrum of an adjacent observation point once the primitive data has been pre-calculated, whereas the exact method must repeat the entire process each time. With pre-calculated primitive data, the computation times for the exact and approximate methods are 11.8513 and 1.9773 seconds per view, respectively. As described above, we can therefore calculate the CGH efficiently with the approximation method. In this case, we assume an applicable range of 20 degrees for the proposed algorithm, so each observation section covers 20 degrees centered on its reference point. Therefore, 18 observation sections are required, and the light field distribution of each observation point is approximately calculated by reconfiguring the angular spectrum of its referential local coordinates.
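A back-of-the-envelope estimate of the overall gain follows from the timings reported above, under the assumption (ours, for illustration) that each of the 18 sections requires one full computation to produce its primitive data while every other view needs only the reconfiguration step:

```python
# Speedup estimate for the 360-view simulation, using the per-view
# timings from the text: 11.8513 s for a full (exact) computation and
# 1.9773 s for an affine reconfiguration, with 18 reference sections.
views, sections = 360, 18
t_exact_view, t_reconfig = 11.8513, 1.9773

t_exact = views * t_exact_view                 # recompute every view exactly
# Approximate: one full computation per section, reconfiguration elsewhere
t_approx = sections * t_exact_view + (views - sections) * t_reconfig

speedup = t_exact / t_approx
print(f"{t_exact:.0f} s vs {t_approx:.0f} s -> {speedup:.1f}x")
# prints "4266 s vs 890 s -> 4.8x"
```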
Figure 9 displays the simulation results verifying the accommodation effect of the CGH computed by the affine approximate method. When the eye lens focuses on a particular object, that object is clearly resolved while other objects are blurred. The accommodation effect observed in Fig. 9 shows that the approximate method is accurate, with no observable error.
5. Concluding remarks
In conclusion, we have presented a concrete theory of interocular similarity and proposed an inter-view reconfiguration algorithm for textured polygon CGHs under view-direction change using an approximate affine transform. The effectiveness and efficiency of the approximate affine transform were demonstrated with a numerical simulation, in which the reconfiguration algorithm based on the affine transform was applied to accelerate the computation of intermediate-view CGHs for multi-view polygon CGHs. This work falls under the umbrella of holographic information theory, an emerging field of optical information processing that is a crucial component of next-generation holographic 3D display technology.
In the Appendix, we prove the relationship between the local coordinates of a triangular facet in the original global coordinate system and in the adaptive global system, shown in Eq. (24). The local coordinates of a triangular facet in the adaptive global coordinates are solved for its global coordinates as in Eq. (51). Equation (51) can be modified to Eq. (52); substituting Eq. (7) into Eq. (52), we obtain Eq. (53). Equation (53) can be expanded using Eq. (54) to give Eq. (55), from which we finally obtain the relationship between the local coordinates of a triangular facet in the original and adaptive global systems.
Samsung Future Technology Fund of Samsung Electronics Inc. (SRFC-IT1301-52).
References and links
1. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues,” Appl. Opt. 50(34), H87–H115 (2011). [CrossRef] [PubMed]
2. J.-H. Park, “Recent progress in computer-generated holography for three-dimensional scenes,” J. Inform. Displ. 18(1), 1–12 (2017). [CrossRef]
3. S.-C. Kim, J.-M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). [CrossRef] [PubMed]
5. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef] [PubMed]
7. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017). [CrossRef] [PubMed]
8. A. Symeonidou, D. Blinder, and P. Schelkens, “Colour computer-generated holography for point clouds utilizing the Phong illumination model,” Opt. Express 26(8), 10282–10298 (2018). [CrossRef] [PubMed]
9. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. 52(1), A201–A209 (2013). [CrossRef] [PubMed]
12. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef] [PubMed]
13. J. Roh, K. Kim, E. Moon, S. Kim, B. Yang, J. Hahn, and H. Kim, “Full-color holographic projection display system featuring an achromatic Fourier filter,” Opt. Express 25(13), 14774–14782 (2017). [CrossRef] [PubMed]
14. T. Senoh, Y. Ichihashi, R. Oi, H. Sasaki, and K. Yamamoto, “Study on a holographic TV system based on multi-view images and depth maps,” Proc. SPIE 8644, 86440A (2013).
17. S.-B. Ko and J.-H. Park, “Speckle reduction using angular spectrum interleaving for triangular mesh based computer generated hologram,” Opt. Express 25(24), 29788–29797 (2017). [CrossRef] [PubMed]
22. J.-H. Park, S.-B. Kim, H.-J. Yeom, H.-J. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and S.-B. Ko, “Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram,” Opt. Express 23(26), 33893–33901 (2015). [CrossRef] [PubMed]
25. G. Li, K. Hong, J. Yeom, N. Chen, J.-H. Park, N. Kim, and B. Lee, “Acceleration method for computer-generated spherical hologram calculation of real objects using graphics processing unit,” Chin. Opt. Lett. 12(6), 060016 (2014). [CrossRef]
26. T. Shimobaba, T. Ito, N. Masuda, Y. Ichihashi, and N. Takada, “Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL,” Opt. Express 18(10), 9955–9960 (2010). [CrossRef] [PubMed]
27. Y.-H. Seo, H.-J. Choi, J.-S. Yoo, and D.-W. Kim, “Cell-based hardware architecture for full-parallel generation algorithm of digital holograms,” Opt. Express 19(9), 8750–8761 (2011). [CrossRef] [PubMed]
28. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51(30), 7303–7307 (2012). [CrossRef] [PubMed]
29. J. Cho, J. Hahn, and H. Kim, “Fast reconfiguration algorithm of computer generated holograms for adaptive view direction change in holographic three-dimensional display,” Opt. Express 20(27), 28282–28291 (2012). [CrossRef] [PubMed]
31. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013). [CrossRef] [PubMed]
32. A. W. Lohman, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470–473 (1996). [CrossRef]
33. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008). [CrossRef] [PubMed]
35. Y. Lim, K. Hong, H. Kim, H.-E. Kim, E.-Y. Chang, S. Lee, T. Kim, J. Nam, H.-G. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24(22), 24999–25009 (2016). [CrossRef] [PubMed]
38. T. Kakue, T. Nishitsuji, T. Kawashima, K. Suzuki, T. Shimobaba, and T. Ito, “Aerial projection of three-dimensional motion pictures by electro-holography and parabolic mirrors,” Sci. Rep. 5(1), 11750 (2015). [CrossRef] [PubMed]