
Field-of-view enhanced integral imaging with dual prism arrays based on perspective-dependent pixel mapping

Open Access

Abstract

A field-of-view (FOV)-enhanced integral imaging system is proposed based on the combined use of a micro-lens array (MLA) and a dual-prism array (DPA). The MLA coupled with the DPA virtually functions as a new type of MLA whose FOV is much wider than that of the original MLA, which enables the capture of perspective-expanded elemental image arrays (EIAs) of input 3-D scenes and their FOV-enhanced reconstruction. For practical operation, a two-step digital process called perspective-dependent pixel mapping (PDPM) is also presented. With this PDPM method, EIAs picked up with the MLA-DPA pair are remapped into new forms of EIAs that can be properly reconstructed in the conventional integral imaging system. Operational performances of the proposed system are ray-optically analyzed. In addition, the feasibility of the proposed system is confirmed by computational and optical experiments with test 3-D objects on the implemented prototype. Experimental results show a two-fold increase in the FOV range of the proposed system compared with that of the conventional system.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Integral imaging has been known as a white light-based multi-perspective three-dimensional (3-D) imaging and display technique [1,2]. Integral imaging is largely composed of two processes: pickup and reconstruction [3]. In the pickup process, ray information emanating from an input 3-D scene is captured on a CCD camera through the pickup lens array and recorded as a two-dimensional (2-D) array of elemental images, called an elemental image array (EIA), where each elemental image (EI) represents a different perspective of the input 3-D scene. From this picked-up EIA, a full-color 3-D scene image is reconstructed in the reconstruction process by the combined use of an LCD (liquid-crystal display) panel and a display lens array.

This lens array-based integral imaging system, however, has several drawbacks, such as a narrow viewing angle, narrow field-of-view (FOV), small depth-of-focus (DOF), low image resolution and depth inversion. Among them, the FOV is considered one of the critical issues in many applications, including machine vision, automatic target tracking & recognition, surveillance and visual inspection [3–6].

Here, the FOV is defined as the maximum range of perspectives to be observed [7]. This FOV strongly depends on the f-number of each elemental lens of the pickup lens array, where the f-number is defined as the ratio of the focal length to the pitch of the elemental lens [7]. Thus, the smaller the f-number of the elemental lens becomes, the larger the FOV turns out to be. However, the f-number of a practical elemental lens is bounded below, which means that the maximum FOV of the conventional integral imaging system is limited [8].
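The f-number/FOV relation above can be sketched numerically. The following Python snippet is an illustrative sketch only (the focal-length and pitch values in the example are hypothetical), taking the full FOV of an elemental lens as 2·arctan(p/2f):

```python
import math

def f_number(focal_length_mm: float, lens_pitch_mm: float) -> float:
    """f-number defined as the ratio of focal length to elemental-lens pitch."""
    return focal_length_mm / lens_pitch_mm

def mla_fov_deg(focal_length_mm: float, lens_pitch_mm: float) -> float:
    """Full FOV angle (degrees) of one elemental lens, 2*arctan(p / (2f)),
    assuming the FOV is bounded by rays through the edges of the lens aperture."""
    return 2.0 * math.degrees(math.atan(lens_pitch_mm / (2.0 * focal_length_mm)))

# A smaller f-number (shorter focal length at the same pitch) gives a larger FOV.
print(f_number(3.3, 1.0), mla_fov_deg(3.3, 1.0))
print(f_number(2.0, 1.0), mla_fov_deg(2.0, 1.0))
```

As expected, reducing the f-number from 3.3 to 2.0 widens the computed FOV.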

Thus far, several approaches have been suggested for alleviating this problem [9–20]. One of them is the curved lens array-based integral imaging system, in which the curved lens array allows outgoing rays from the LCD panel to be spread over much wider viewing zones than those of the conventional system [17]. A curving-effective projection integral imaging system was also proposed, where a large-aperture convex lens is employed to increase the viewing zone of the reconstructed 3-D image [18]. Those methods, however, cannot increase the perspectives of the input 3-D scenes even though their viewing angles can be increased. In addition, a projection-type integral imaging system was also proposed for enhancing the viewing zone of the input 3-D scene, but its system size is much increased while its image resolution is lowered [16].

There was also another attempt to use a convex mirror array instead of the lens array to increase the FOV of the integral imaging display. That is, a projection-type integral imaging system with a convex mirror array was proposed, where the FOV can be much increased because the f-number of a convex mirror can be made much smaller than that of the corresponding convex lens [17]. However, the image resolution of the convex mirror array-based integral imaging display depends on the number of elemental convex mirrors, so the resolution of the reconstructed image is much lowered since it equals the number of elemental convex mirrors.

Meanwhile, several types of prisms have been employed in the field of 3-D imaging and display [21–25]. With a bi-prism, pairs of stereoscopic images of a 3-D object were picked up since the bi-prism can refract incident rays into two different directions and generate two kinds of virtual object images [23]. Prism arrays were also used in integral imaging to improve the image resolution of the 3-D object and its viewing angle [24,25]. However, in those studies, prism arrays were employed in the display process, so the FOVs of the 3-D objects were fixed by the employed pickup lens arrays, which means the FOVs could not be enhanced in the display process.

Accordingly, in this paper, a new type of FOV-enhanced integral imaging display is proposed based on the combined use of a micro lens array (MLA) and a dual prism array (DPA). The proposed system consists of three processes: optical capture of EIAs of the input 3-D object with a pair of the MLA and DPA, digital manipulation of the picked-up EIAs, and optical reconstruction of those manipulated EIAs into 3-D object images much enhanced in FOV. In the pickup process, a DPA is employed in front of the MLA, so that two groups of rays coming from the 3-D object are captured on the CCD camera. That is, with the DPA, two views of the 3-D object, which are differently shifted and rotated depending on the employed DPA, are simultaneously captured on the CCD camera through the MLA as an overlapped form of two different EIAs, which is designated as an overlapped EIA (O-EIA). This O-EIA is a mixture of two kinds of EIAs, so it cannot be directly reconstructed in the conventional integral imaging display system.

To solve this problem, a digital manipulation process called perspective-dependent pixel mapping (PDPM) is proposed. The PDPM method is composed of a two-step digital process, with which the O-EIA is separated into two different EIAs and synthesized into a new type of EIA, called a synthesized EIA (S-EIA), acting like an EIA virtually picked up with an MLA whose f-number is much smaller than that of the employed MLA. This S-EIA contains much more perspective information on the input 3-D object than the EIA of the conventional system. Thus, from this S-EIA, a 3-D object image whose FOV is much increased can be reconstructed.

Operational performances of the proposed system are ray-optically modeled and analyzed. In addition, for the feasibility test of the proposed system, computational & optical experiments with test 3-D objects are carried out on the implemented prototype and the results are comparatively analyzed with those of the conventional method.

2. Proposed system

2.1 Functional block-diagram of the proposed system

Figure 1 shows the functional block-diagrams of the conventional and proposed integral imaging systems. In the conventional integral imaging system shown in Fig. 1(a), rays coming from an input 3-D object are captured on the CCD camera just with a micro lens array (MLA) and recorded as an array of 2-D images with different perspectives of the 3-D object, which is called an elemental image array (EIA) composed of elemental images (EIs). This picked-up EIA can provide a specific field-of-view (FOV) of the input 3-D object determined by the f-number of the employed MLA, which means the FOV of the reconstructed 3-D image from this EIA is limited.

Fig. 1. Optical configurations of the (a) Conventional and (b) Proposed integral imaging systems

On the other hand, in the proposed system shown in Fig. 1(b), another type of optical pickup process is employed, where a combined pair of the MLA and DPA is used to capture a much wider angular range of the rays coming from the 3-D object. The DPA refracts many of the rays radiating from the 3-D object that lie outside the numerical aperture (NA) of the MLA into the inside of its NA, allowing the MLA to capture them. Those rays are recorded in the form of an overlapped EIA (O-EIA). It is noted here that this O-EIA can be regarded as a mixture of two kinds of EIAs picked up from two virtual 3-D objects, which are symmetrically displaced from the centered 3-D object and inwardly tilted due to the employed DPA. In other words, as shown in Fig. 1(b), the single camera virtually sees two virtual 3-D objects which are displaced and rotated from the center of the real 3-D object, which is explained in detail in the following section.

Thus, this picked-up O-EIA cannot be directly reconstructed on the conventional integral imaging system, which means this O-EIA needs to be manipulated into another form of EIA to be properly reconstructed in the conventional integral imaging system. For this operation, a new digital process called the perspective-dependent pixel mapping (PDPM) process is proposed. With this process, the O-EIA can be separated into two different EIAs and remapped into a single EIA, designated as a synthesized EIA (S-EIA), looking just like an EIA picked up with another virtual MLA whose f-number is much smaller than that of the physical MLA originally employed in the pickup process.

This S-EIA contains a much wider perspective of the input 3-D object compared with that of the conventional system, since it is virtually picked up from the two laterally-displaced and inwardly-rotated versions of the real 3-D object generated by the DPA operation, with which not only the front view of the real 3-D object but also two side views with enlarged perspectives can be captured, as seen in Fig. 1. From this S-EIA, a 3-D object image much increased in FOV can then be reconstructed. Detailed operational performances of the proposed system are analyzed below.

2.2 Capturing of the O-EIA

In the proposed system, a DPA is employed for capturing rays containing wider perspective information of the 3-D object. Figure 2 illustrates the operational function of the prism, showing the optical geometry of the ray path through the right-angle prism, where θ1 and θ2 represent the incident angle of the input ray with respect to the normal of the left surface of the right-angle prism and the refracted angle of the output ray with respect to the normal of the right surface, respectively.

Fig. 2. (a) Optical ray path of the right-angle prism, (b) Shifting and rotating properties of the DPA

As seen in Fig. 2(a), the optical prism is shaped as a right-angled triangle with an apex angle of α, where δ denotes the deviation angle representing the angular difference between the input and output rays of θ1 and θ2. The input ray is incident at point A of the left vertical surface of the prism and refracted into the prism according to the difference in refractive index between free space and the prism material, and then propagates to point B of the right surface of the prism. At point B, the incoming ray is refracted again into free space at an angle of θ2. Here, the deviation angle δ is given by Eq. (1), where n represents the refractive index of the prism material [23]:

$$\delta = {\theta _1} + {\sin ^{ - 1}}\left[ {({\sin \alpha } )\sqrt {{n^2} - {{\sin }^2}({{\theta_1}} )} - ({\sin {\theta_1}} )\cos (\alpha )} \right] - \alpha = {\theta _1} + {\theta _2} - \alpha ,$$
For fixed values of n and α, the deviation angle δ, representing the angular difference between the input and output rays of θ1 and θ2, becomes a function of the incident angle θ1 only.
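Equation (1) can be checked numerically. The Python function below is an illustrative sketch (the refractive index and apex angle in the example are hypothetical values, not those of the prototype):

```python
import math

def deviation_angle(theta1_deg: float, n: float, alpha_deg: float) -> float:
    """Deviation angle delta (degrees) of a right-angle prism, Eq. (1):
    delta = theta1 + asin(sin(alpha)*sqrt(n^2 - sin^2(theta1))
                          - sin(theta1)*cos(alpha)) - alpha."""
    t1 = math.radians(theta1_deg)
    a = math.radians(alpha_deg)
    inner = math.sin(a) * math.sqrt(n**2 - math.sin(t1)**2) \
            - math.sin(t1) * math.cos(a)
    return theta1_deg + math.degrees(math.asin(inner)) - alpha_deg

# At normal incidence (theta1 = 0) Eq. (1) reduces to asin(n*sin(alpha)) - alpha.
print(deviation_angle(0.0, 1.5, 10.0))
```

For n = 1.5 and α = 10°, normal incidence gives a deviation of roughly 5.1°, consistent with the closed-form special case noted in the comment.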

Figure 2(b) shows the shifting and rotating properties of the DPA, where X1 and X2 represent two point sources being located on the same central optic-axis of the DPA, but on the different depth planes. As shown in Fig. 2(b), rays coming from the point source X1 pass through the DPA and refract to the right and left directions with the same deviation angle of δ. Thus, the point source X1 in the real space is transformed into the two virtual image points of X1l and X1r, which are horizontally shifted into the left and right directions, respectively.

Here the shifting distance of B0 from the point source of X1 to the virtual image points of X1l or X1r can be given by Eq. (2).

$${B_0} = {L_p} \times \tan \delta ,$$
In Eq. (2), Lp denotes the vertical distance between the DPA and X1. If δ value is fixed, the shifting distance B0 linearly increases according to the value of Lp. Thus, as shown in Fig. 2(b), two point sources of X1 and X2 are transformed into two sets of two virtual image points of X1l, X1r and X2l, X2r, respectively, along the horizontal direction depending on their depths, which is called a shifting property of the DPA. Moreover, all those shifted image points are located on the tilted optic-axis with an angle of δ, thus they look like rotated versions of the point images with the deviation angle of δ, which is called a rotating property of the DPA. Based on these shifting and rotating properties of the DPA, the single camera system with a DPA of Fig. 3(a) can be equivalently modeled as the two-virtual camera system with a single object of Fig. 3(b), as well as the single camera system with two virtual objects of Fig. 3(c) [23].
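The shifting property of Eq. (2) can be sketched as follows; this is a minimal illustration (the depth and deviation-angle values in the example are hypothetical), mapping one on-axis point source to its two virtual image points:

```python
import math

def virtual_image_points(x: float, depth_lp: float, delta_deg: float):
    """Shifting property of the DPA, Eq. (2): a point source at lateral
    position x and distance Lp in front of the DPA appears as two virtual
    points shifted left and right by B0 = Lp * tan(delta)."""
    b0 = depth_lp * math.tan(math.radians(delta_deg))
    return (x - b0, x + b0)  # (X_l, X_r)

# The shift grows linearly with depth Lp, so deeper points shift farther.
print(virtual_image_points(0.0, 100.0, 5.0))
print(virtual_image_points(0.0, 200.0, 5.0))
```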

Fig. 3. Optical configurations of the (a) Single camera system with a DPA, its two equivalent versions of (b) Two-virtual camera system with a single object, (c) Single camera system with two virtual objects and its (d) Operational diagram

In Fig. 3(a), CAM, O and L represent the camera, the 3-D object to be captured and the distance between the camera and the DPA, respectively, and the optic axis of the camera coincides with the center of the DPA. In Fig. 3(b), CAMl and CAMr denote the virtual left and right cameras, which are displaced from the optic axis of the real camera and rotated toward the object, while Ol and Or denote the virtual left and right objects seen through the DPA, which are rotated toward the real camera, respectively.

As seen in Fig. 3(b), rays incoming to the left and right halves of the DPA from the object are refracted back into their opposite directions to be captured on the camera according to the deviation angle of the DPA. Two tilted views of the 3-D object can thus be captured on the camera system, which means that those two tilted views look like the views captured on the virtual left and right cameras CAMl and CAMr. In other words, the DPA allows the single real camera to operate just like two virtual cameras which are displaced from the optic axis of the real camera and rotated toward the object [23]. Since the optic axes of these two virtual cameras intersect at the center of the DPA, they are positioned at the specific distance Bc from the real camera, which is given by Eq. (3).

$${B_c} = L \times \tan \delta ,$$
As shown in Fig. 3(b), CAMl and CAMr are displaced from the real camera by L×tan δ in the left and right directions, respectively, and tilted by an angle of δ in opposite directions. Thus, each of those two virtual cameras sees the left or right-hand side view of the 3-D object, which means that the FOV of the single camera system can be increased to the range of −δ ≤ FOV ≤ δ simply by employing the DPA in the pickup process [23]. Based on the shifting and rotating properties of the DPA, another equivalent version of Fig. 3(a) can be derived. That is, due to the DPA, the 3-D object can be transformed into two virtual objects Ol and Or, which are laterally shifted and inwardly rotated by the angle of δ. Thus, the real camera sees two virtual 3-D objects Ol and Or, as shown in Fig. 3(c).

Figure 3(d) shows the operational diagram of the proposed pickup system, where Xp denotes an object point on the 3-D object O, while XPl and XPr represent its corresponding object points on the two virtual 3-D objects Ol and Or generated with the DPA, respectively. As shown in Fig. 3(d), the DPA refracts part of the rays located outside the NA of the MLA into its inside, which enables the MLA to pick up rays containing much wider perspectives of the object. Moreover, since the DPA is composed of two prism arrays attached together center-symmetrically, two kinds of EIAs can be captured in the form of an overlapped EIA (O-EIA). In other words, rays coming from the two virtual objects Ol and Or are directly recorded with the single camera by the combined use of the MLA and DPA.

Figure 4 shows the proposed pickup system composed of a pair of DPA and MLA, and its equivalent system where a pair of DPA and MLA is replaced with another MLA whose f-number is much smaller than that of the original MLA, which is called an equivalent MLA (E-MLA) here.

Fig. 4. Optical configurations of the (a) Proposed and its (b) Equivalent pickup systems

As seen in Fig. 4(a), 2γ represents the FOV angle of the MLA, and the incident angle of a ray entering the MLA is denoted by θinc. Then, the range of the incident angle θinc in the proposed pickup system is given by Eq. (4).

$$- \delta - \gamma \le {\theta _{inc}} \le \delta + \gamma ,$$
When the directions of the rays incident to the MLA are bent by the DPA by as much as δ, those rays can enter the MLA with additionally increased angles of ±γ. Thus, the FOV angle of the MLA can be increased up to the range of −δ − γ to δ + γ, which is directly related to the perspective increase of the 3-D object. Since the DPA is made by attaching two prism arrays center-symmetrically, a total 2δ increase of the FOV can be achieved, with each of the two prism arrays contributing δ. That is, the left and right FOVs for Ol and Or range from (−δ − γ) to (−δ + γ) and from (δ − γ) to (δ + γ), respectively.
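The enhanced incidence range of Eq. (4) and its split into the two per-prism sub-ranges can be sketched as below; this is an illustrative helper (the angle values in the example are hypothetical), not part of the authors' implementation:

```python
def enhanced_fov_deg(gamma_deg: float, delta_deg: float):
    """FOV ranges (degrees) of the MLA+DPA pickup, Eq. (4):
    total range [-(delta+gamma), delta+gamma], with the left and right
    prism arrays covering [-delta-gamma, -delta+gamma] and
    [delta-gamma, delta+gamma], respectively."""
    total = (-(delta_deg + gamma_deg), delta_deg + gamma_deg)
    left = (-delta_deg - gamma_deg, -delta_deg + gamma_deg)
    right = (delta_deg - gamma_deg, delta_deg + gamma_deg)
    return total, left, right

# When delta = gamma, the two sub-ranges meet at 0 deg with no gap or overlap,
# which corresponds to the ideal matching condition discussed in Section 2.3.2.
print(enhanced_fov_deg(10.0, 10.0))
```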

Here it must be noted that, under either the ideal or the non-ideal matching condition, the proposed pickup system virtually operates just like the conventional pickup system with the E-MLA of Fig. 4(b). The FOV range β of the E-MLA is given by Eq. (5).

$$- \delta - \gamma \le \beta \le \delta + \gamma ,$$
According to Eq. (5), as the angle δ increases, the corresponding FOV β of the E-MLA also becomes larger. From the simulated results with Eq. (4), the FOV characteristics of the proposed method can be explained just with its equivalent pickup system of Fig. 4(b).

2.3 Generation of the S-EIA from the O-EIA

In fact, the O-EIA cannot be directly reconstructed on the conventional display system because two kinds of 3-D images would be simultaneously reconstructed from it in an overlapped form. Thus, the O-EIA must be manipulated into another form to be properly reconstructed on the conventional display system. For this operation, a perspective-dependent pixel mapping (PDPM) method is proposed.

Figure 5 shows the proposed PDPM process, which is largely composed of two digital steps: separation and rearrangement of the O-EIA. For this processing, five operations are performed in sequence as shown in Fig. 5: (1) transformation of the O-EIA into its corresponding O-SIA, (2) perspective-dependent rearrangement of the O-SIA, (3) extraction of 3-D object images from the rearranged O-SIA, (4) calibration and repositioning of those 3-D object images, where the array of repositioned images is called a synthesized SIA (S-SIA), and (5) transformation of this S-SIA back into its corresponding S-EIA. That is, with the proposed PDPM, the O-EIA can be converted into the S-EIA acting just like an EIA picked up with the E-MLA whose f-number is much smaller than that of the MLA employed in the pickup process.

Fig. 5. Five-step operations of the proposed PDPM method: overlapped EIA (O-EIA), EIA-to-SIA transformation (EST), overlapped SIA (O-SIA), synthesized SIA (S-SIA), synthesized EIA (S-EIA)

2.3.1 O-EIA–to–O-SIA transformation

Figure 6(a) illustrates a conceptual diagram of the EIA-to-SIA transformation (EST) process, where SIA denotes the sub-image array, and each sub-image (SI) of the SIA represents collections of all pixels located at the same positions in each elemental image (EI) of the EIA [26].

Fig. 6. (a) Conceptual diagram of the EST process, (b) Transformed O-SIA from the O-EIA

As shown in Fig. 6(a), the 3-D object images in each SI are rotated by their own angles, which are called perspective angles, regardless of the position of the object [27]. That is, each SI represents the object image recorded with the same parallel rays coming from the 3-D object. Since the incident angles of the recorded parallel rays are different from each other, each SI has its own perspective angle of the 3-D object. In other words, the 3-D object image in the ith SI is rotated by its perspective angle given by Eq. (6).

$${\theta _{sub,i}} = {\tan ^{ - 1}}\left( {\frac{{{y_i}}}{f}} \right),$$
Equation (6) shows that the perspective angle θsub,i of the 3-D object can be determined by the focal length of the elemental lens (f) and the relative position of the ith pixel from the 0th pixel (yi). Based on Eq. (6), the perspective angle of each SI can be uniquely calculated, where the 3-D object image captured in the 0th SI is designated as the front object image because its perspective angle is zero. Here, when the maximum perspective angle in the SIA is defined as θsub,max, the perspective angle of the 3-D object ranges from −θsub,max to θsub,max, which is also equal to the FOV of the elemental lens in the pickup system [26].

Basically, the O-EIA contains the mixed perspective and intensity information of the two shifted and rotated virtual objects generated by the DPA. Thus, the corresponding O-SIA transformed from this O-EIA shows two kinds of features, as shown in Fig. 6(b). The first feature is that each SI of the O-SIA has two kinds of 3-D object images originating from the virtual left and right objects, designated as Ol and Or, respectively. All those pairs of Ol and Or object images in each SI are horizontally arranged in the order {L-10 R-10}, {L-9 R-9}, …, {L9 R9}, {L10 R10}, as shown in Fig. 6(b) for the case of a 21 × 21 O-SIA. All those object images of Ol and Or are positioned at the left and right sides of the SIs, respectively, and horizontally shifted in opposite directions from the central axis. The second feature is that the two images of Ol and Or in each SI have different perspective angles and are tilted by the angles of −δ and δ, respectively. In addition, a point source on the central axis is recorded at different pixel positions in each SI because the incident ray angles passing through the lens array differ in consecutive SIs, which is called the central-axis shift property of the SI, as shown in Fig. 7.

Fig. 7. Shifted pixel positions of the rays coming from the (a) On-axis object point depending on the elemental lens and (b) Horizontally-shifted object point

In Fig. 7, Xp and D represent the point source and the distance between the MLA and Xp, where Xp is located on the central axis, while θsub,-2 and P denote the perspective angle of the -2nd SI and the pitch of the MLA, respectively. In addition, the white pixels in Fig. 7 mark the recorded pixel positions of the rays coming from Xp. As seen in Fig. 7(a), the recorded pixel of Xp in the 0th SI is the 0th pixel while the recorded pixel of Xp in the -2nd SI is the -1st pixel.

That is, the pixel position of a point source on the central axis changes depending on the order of the SI, where the recorded pixel position in each SI, pc can be given by Eq. (7).

$${p_c} = \tan ({{\theta_{sub,i}}} )\times ({D + f} )\times \frac{1}{P},$$
Equation (7) shows that the recorded pixel position is determined by four parameters: P, D, f and θsub,i. Since P and f are fixed once the MLA is chosen, pc mainly depends on the values of D and θsub,i. Because the perspective angle differs in each SI, a point source on the central axis is recorded at a different pixel of each SI, which can be used for finding the recorded pixel of the central axis in each SI.

Figure 7(b) shows the pixel shift property in each SI for a point source shifted from the central axis, where Xs and d0 denote the point source shifted from Xp and the horizontal distance between Xs and Xp, respectively. As shown in Fig. 7(b), when the point source Xp is shifted to Xs, the corresponding ray passes through a different elemental lens. Since each ray is recorded through a different elemental lens, the position of the recorded pixel in each SI changes, which is called the pixel shift property. For example, Xp is recorded at the 0th pixel position of the 0th SI whereas Xs is recorded at the -2nd pixel position of the 0th SI. The shift in pixels is given by ps, which is related to the number of elemental lenses of the MLA.

When the point source Xp is shifted by an amount of d0, the recorded pixel in the same SI is also shifted correspondingly. The pixel shift ps within the same SI is given by Eq. (8).

$${p_s} = \frac{{{B_0}}}{P},$$
As seen in Eq. (8), ps depends on the ratio between the distance B0 between Xs and Xp and the pitch of the elemental lens. Thus, a point source shifted by B0 from the central axis is recorded at the psth pixel of the same SI. With Eqs. (7) and (8), two important positions can be derived. As mentioned above, the recorded position of a point source on the central axis changes according to the order of the SI. Using this property, we can first find the recorded pixel position of the central axis in each SI. In addition, the recorded pixel position of the shifted point source in the SI is calculated with Eq. (8) when the point source on the central axis is moved along the horizontal direction. Using these two properties, we can calibrate the two 3-D objects in each SI.
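The two pixel positions of Eqs. (7) and (8) can be sketched as simple helpers; the numeric values in the example are hypothetical and serve only to illustrate the calibration inputs:

```python
import math

def central_axis_pixel(theta_sub_i_deg: float, d: float, f: float,
                       pitch: float) -> float:
    """Eq. (7): recorded pixel position p_c of an on-axis point source in
    the i-th sub-image, tan(theta_sub_i) * (D + f) / P."""
    return math.tan(math.radians(theta_sub_i_deg)) * (d + f) / pitch

def shifted_pixel(b0: float, pitch: float) -> float:
    """Eq. (8): pixel shift p_s within the same sub-image when the point
    source moves laterally by B0."""
    return b0 / pitch

# An on-axis point in the 0th SI (zero perspective angle) stays at pixel 0;
# a lateral shift of 5 pitch units moves the recorded pixel by 5.
print(central_axis_pixel(0.0, 30.0, 3.3, 1.0))
print(shifted_pixel(5.0, 1.0))
```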

From Eq. (6), the perspective angle of the ith SI is given by arctan(yi/f); thus the perspective angles of the Ol and Or images in the ith SI can be expressed by Eq. (9), where θL,i and θR,i represent the perspective angles of the Ol and Or images, respectively.

$$\begin{array}{l} {\theta _{L,i}} ={-} \delta + {\theta _{sub,i}} ={-} \delta + {\tan ^{ - 1}}\left( {\frac{{{y_i}}}{f}} \right)\\ {\theta _{R,i}} = \textrm{ }\delta + {\theta _{sub,i}} = \textrm{ }\delta + {\tan ^{ - 1}}\left( {\frac{{{y_i}}}{f}} \right). \end{array}$$
As seen in Eq. (9), the perspective angles θL,i and θR,i are functions of the deviation angle, the focal length and the pixel position. Here the deviation angle and focal length are treated as fixed values, so those perspective angles vary with the relative position yi. In addition, the difference in perspective angle between the Ol and Or images in each SI is twice the deviation angle, i.e., 2δ. As seen in Fig. 6(a), the relative position yi increases from the 0th pixel in the positive direction; the perspective angle of the Ol image then decreases in magnitude because the positive arctan(yi/f) values are added to −δ, whereas the perspective angle of the Or image increases because the positive deviation angle and the arctan(yi/f) values are summed.
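Equation (9) can be sketched directly; this illustrative helper (example values are hypothetical) returns the paired angles and makes the fixed 2δ separation explicit:

```python
import math

def pair_perspective_angles(y_i: float, f: float, delta_deg: float):
    """Eq. (9): perspective angles (theta_L_i, theta_R_i) of the O_l and
    O_r images in the i-th SI, offset by -delta and +delta from the
    sub-image perspective angle atan(y_i / f)."""
    theta_sub = math.degrees(math.atan2(y_i, f))
    return -delta_deg + theta_sub, delta_deg + theta_sub

# The left and right angles always differ by exactly 2*delta.
print(pair_perspective_angles(0.0, 3.3, 5.0))
print(pair_perspective_angles(0.5, 3.3, 5.0))
```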

2.3.2 Perspective-dependent rearrangement of the O-SIA

As seen in Fig. 6(b), pairs of Ol and Or images are located in each SI and arranged in the order (L-10, R-10), (L-9, R-9), …, (L9, R9), (L10, R10). The perspective angles of those paired images are given by {(θsub,-10-δ), (θsub,-10+δ)}, {(θsub,-9-δ), (θsub,-9+δ)}, …, {(θsub,9-δ), (θsub,9+δ)}, {(θsub,10-δ), (θsub,10+δ)}. Those pairs of 3-D object images in each SI would be observed as mixed object images in the display process. Thus, for the reconstruction of a single 3-D object image, a new type of digital manipulation process is required, in which the two kinds of 3-D object images in each SI are properly transformed into single ones.

For the pre-processing of this digital process, the positions of all the Ol and Or images of the O-SIA are rearranged according to the order of their perspective angles. In other words, the positions of all the Ol and Or images of the O-SIA, i.e., (L-10 R-10), (L-9 R-9), …, (L9 R9), (L10 R10), are rearranged into the forms (L-10, L-9, …, L9, L10) and (R-10, R-9, …, R9, R10). Figure 8 shows this rearranging process of the O-SIA. As shown in Fig. 8, the O-SIA consists of 21 SIs, where each SI has a pair of Ol and Or images. The Ol image in the -10th SI has the largest perspective angle whereas the Ol image in the 10th SI has the smallest. In contrast, the Or image in the -10th SI has the smallest perspective angle while the Or image in the 10th SI has the largest. Thus, all the Ol and Or images in the O-SIA can be rearranged in the order {(L-10, L-9, …, L9, L10), (R-10, R-9, …, R9, R10)}, as shown in Fig. 8. After the O-SIA is rearranged, the left and right halves of the O-SIA are occupied by the Ol and Or images, respectively. The perspective angles of those Ol and Or images are also changed in the rearranged O-SIA.

Fig. 8. Rearrangement process of the O-SIA
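The rearrangement of Fig. 8 amounts to regrouping the per-SI (Ol, Or) pairs into two perspective-ordered sequences. A minimal sketch, with hypothetical string labels standing in for the actual sub-image data:

```python
def rearrange_osia(pairs):
    """Perspective-dependent rearrangement of the O-SIA: each SI holds a
    pair (L_i, R_i); regroup into all L images followed by all R images,
    each kept in ascending SI (i.e., perspective-angle) order."""
    lefts = [l for l, _ in pairs]
    rights = [r for _, r in pairs]
    return lefts + rights

# Hypothetical labels for a small 5-SI O-SIA (the paper uses 21 SIs).
pairs = [("L-2", "R-2"), ("L-1", "R-1"), ("L0", "R0"),
         ("L1", "R1"), ("L2", "R2")]
print(rearrange_osia(pairs))
```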

Figure 9 shows simulated distributions of the perspective angles of the Ol and Or images for the ideal and non-ideal matching conditions, where the x and y-axes represent the ordering number of the SI and the perspective angles of Ol and Or, respectively, and the red and blue bars denote the perspective angles of Ol and Or. As mentioned above, the two colored bars represent the perspective angles of the two object images in each SI. Since the front side of the object can be seen at the center of the viewing zone, the perspective angles of those Ol and Or images are set to 0° there. Figures 9(a), (b) and (c) show the original distributions of the perspective angles of the Ol and Or images in the horizontal direction, where the perspective-angle difference between the two objects Ol and Or in each SI is fixed at 2δ. In addition, their perspective angles increase linearly because only the changes in the SI number affect the perspective angles while the other parameters of the MLA and DPA are kept fixed.

Fig. 9. Original and rearranged distributions of perspective angles of those Ol and Or images in each SI for those cases of (a)-(a′) Overlapping, (b)-(b′) Crosstalk, and (c)-(c′) Ideal matching, respectively

Figures 9(a)-(c) and 9(a′)-(c′), respectively, show the perspective-angle distributions of the original and rearranged Ol and Or images, which change linearly. These linear perspective-angle distributions of the rearranged O-SIAs look similar to those of the SIAs transformed from conventional EIAs. Figures 9(a′) and 9(b′) show the perspective-angle distributions of the rearranged Ol and Or images for the case of δ ≠ θsub,max, the so-called non-ideal matching condition. Under this condition, two kinds of problems appear. One of them is the occurrence of an overlapping zone of the perspective angles when δ becomes smaller than θsub,max. When the sign of the perspective angle changes from plus (+) to minus (−), the perspective direction is reversed accordingly, and vice versa. Whenever overlapping zones occur, two objects with the same perspective angles are recorded in the O-SIA; thus, it is very important to minimize the overlapping zone of the perspectives to decrease the loss of perspective information.

The other problem is crosstalk. When δ becomes larger than θsub,max, a front image with a perspective angle of 0° cannot be generated because the value of δ-θsub,max always remains larger than 0°. This phenomenon can present the viewer with two perspective images whose perspective angles differ greatly, and may break the angular resolution of the O-SI. Figures 9(c) and 9(c′) show the original and rearranged distributions of the perspective angles of the Ol and Or images under the ideal matching condition of δ=θsub,max. Under this condition, the loss of perspective information is minimized, and front images of the 3-D object can be generated because the perspective angles range from -(δ+θsub,max) to (δ+θsub,max). In addition, center images with a perspective angle of 0° can be generated.
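The classification above reduces to a comparison between δ and θsub,max. A minimal Python sketch (not the authors' code; the function names are illustrative) that labels the three regimes and reports the total perspective range ±(δ+θsub,max):

```python
import math

# Sketch (not the authors' code): classify the DPA/MLA matching condition by
# comparing the DPA deviation angle delta with the maximum sub-image
# perspective angle theta_sub_max of the MLA.
def matching_condition(delta_deg, theta_sub_max_deg):
    """Return 'overlapping', 'crosstalk', or 'ideal' (angles in degrees)."""
    if math.isclose(delta_deg, theta_sub_max_deg):
        return "ideal"        # no perspective loss; 0-degree front images exist
    if delta_deg < theta_sub_max_deg:
        return "overlapping"  # duplicate perspective angles recorded in the O-SIA
    return "crosstalk"        # gap around 0 degrees; no front image is generated

# The obtainable perspective range of the O-SIA is +/-(delta + theta_sub_max).
def total_range_deg(delta_deg, theta_sub_max_deg):
    half = delta_deg + theta_sub_max_deg
    return (-half, half)
```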

2.3.3 Generation of the S-SIA and its corresponding S-EIA

As mentioned above, each SI of the O-SIA has two views of the 3-D object, Ol and Or. Thus, a digital process is required to properly synthesize a new type of O-SIA for the single 3-D object, called the synthesized SIA (S-SIA). This is done with two operations: extraction, from the rearranged O-SIA, of the pairs of Ol and Or whose perspective angles are larger than the others in each O-SI, and horizontal calibration of their centers in each O-SI. Figure 10 shows an operational block diagram of the two-step process for generating the S-SIA from the rearranged O-SIA.

Fig. 10. Two-step operations for generating the S-SIA from the rearranged O-SIA: (a) Rearranged O-SIA, (b) Extraction of two sets of Or and Ol images from the rearranged O-SIA, (c) Center-calibration of those extracted images (S-SIA), (d) Conventional SIA

Here, the two sets of virtual object images Ol and Or are assumed to be rearranged from L0 to L4 and from R0 to R4, respectively, where the total number of SIs of the S-EIA is set to 5. The perspective angles of each virtual object image in the rearranged O-SIs are expressed as {(θsub,-2-δ)-(θsub,-1+δ)}, {(θsub,0-δ)-(θsub,1+δ)}, {(θsub,2-δ)-(θsub,-2+δ)}, {(θsub,-1-δ)-(θsub,0+δ)} and {(θsub,1-δ)-(θsub,2+δ)}. As seen in Fig. 10(b), the absolute perspective angle of L4 is much larger than that of L0, so that L4 appears more tilted than L0. The differences in perspective angle between two neighboring Ol and Or images can be calculated from Eq. (9) for i = 1-5. Under the ideal matching condition, L0 and R0 can be front images; otherwise, front images cannot be generated.

As seen in Fig. 10(b), only the pairs of Ol and Or whose perspective angles are larger than the others are extracted from each SI of the rearranged O-SIA, namely L0 and the two pairs (L2, R2) and (L4, R4). The centers of these extracted images are then calibrated with the factors C0, C2 and C4 calculated from Eqs. (7) and (8), as seen in Fig. 10(c). That is, the center of L0 is horizontally shifted to the right by C0 so that its calibrated position matches the central axis of the 0th SI. Likewise, the central axes of the two other pairs (L2, R2) and (L4, R4) are calibrated with the factors C2 and C4 to be symmetrically matched with the central axes of the 1st and 2nd SIs, respectively.
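The calibration factors can be sketched as follows, following Eqs. (7) and (8); note that combining the two pixel shifts into a single horizontal factor Ci is a hypothetical choice here, not the paper's stated formula:

```python
import math

# Sketch of the center-calibration factors of Fig. 10(c), following Eqs. (7)-(8).
# p_c: pixel shift induced by the perspective angle theta_sub_i over the gap D + f.
# p_s: pixel shift induced by the DPA object displacement B_o.
# P is the pixel pitch. Summing them into one factor C_i is a hypothetical
# combination for illustration only.
def center_shift_pixels(theta_sub_deg, D_mm, f_mm, Bo_mm, P_mm):
    p_c = math.tan(math.radians(theta_sub_deg)) * (D_mm + f_mm) / P_mm  # Eq. (7)
    p_s = Bo_mm / P_mm                                                  # Eq. (8)
    return p_c + p_s
```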

Here, the rearranged and calibrated version of the O-SIA of Fig. 10(c) is designated as the S-SIA. For comparison, the conventional form of the SIA picked up only with the MLA is also shown in Fig. 10(d). The number of object images in the S-SIA is the same as that in the conventional SIA, but the perspective angles of all the object images in the S-SIA are much larger than those of the conventional SIA, as seen in Figs. 10(c) and (d). Thus, with the S-EIA inversely transformed from this S-SIA, a FOV-enhanced 3-D object image can be reconstructed on the conventional integral imaging display system.

2.4 Optimization of the perspective angle

Due to a mismatch between the maximum perspective angle of the MLA and the deviation angle of the DPA, perspectives can be lost and front images with a perspective angle of 0° cannot be generated. To avoid these issues, an optimized matching process between the MLA and DPA is required. Figure 11 shows a simulation result for the optimized matching case.

Fig. 11. Simulation results for the optimized matching

In Fig. 11, the black lines represent the dependence of the maximum perspective angle (±θsub,max) on the focal length (f) of the MLA, whose pitch is set to 7.47 mm, while the red lines represent the deviation angle (±δ) of the DPA, which is fixed at ±10°. In addition, 2θsub,max represents the full range of the perspective angle obtainable in the SIA captured with the MLA, where θsub,max decreases with the focal length of the MLA according to Eq. (6). Figure 11 reveals that optimized points occur at the intersections of the red and black lines, where θsub,max equals δ; there, 3-D images with zero perspective angle can be generated and the loss of perspectives is minimized. As seen in Fig. 11, the left-hand side of the optimized point represents the non-center region, because SIs with a 0° perspective angle cannot be generated under the condition δ > θsub,max, whereas for δ < θsub,max perspective-overlapped zones are generated.
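Under the assumption that the maximum perspective angle of an elemental lens follows the edge-ray model θsub,max = tan⁻¹(p/2f) (consistent in spirit with Eq. (6), but an assumption here), the optimized point in Fig. 11 can be solved in closed form:

```python
import math

# Sketch of the optimized matching point in Fig. 11, assuming the edge-ray model
# theta_sub_max = atan(p / (2f)) for an elemental lens of pitch p and focal
# length f (an assumption, not the paper's exact relation).
def theta_sub_max_deg(pitch_mm, focal_mm):
    return math.degrees(math.atan(pitch_mm / (2.0 * focal_mm)))

# The optimized point satisfies theta_sub_max(f) = delta, which solves to
# f_opt = p / (2 tan(delta)).
def optimal_focal_mm(pitch_mm, delta_deg):
    return pitch_mm / (2.0 * math.tan(math.radians(delta_deg)))

f_opt = optimal_focal_mm(7.47, 10.0)  # about 21.2 mm for the 7.47 mm pitch MLA
```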

3. Experiments

3.1 Computational experiments

In the experiments, a 3-D scene composed of two objects, 'Red Car (RC)' and 'Yellow Plate (YP)', located at distances of 300 mm and 700 mm from the MLA, respectively, is used as the test object, as shown in Fig. 12(a). Here, most of the rear object 'YP', on which the English word 'STOP' is written, is blocked by the front object 'RC'.

Fig. 12. (a) Two kinds of 3-D objects of ‘RC’ and ‘YP’(front and side-views) (b) Proposed and (c) Conventional pickup systems (top views)

As shown in Fig. 12(b), the proposed pickup system is composed of the virtual MLA, DPA and camera under the ideal matching condition between the perspective angle of the MLA and the deviation angle of the DPA, with which O-EIAs of the test object of Fig. 12(a) are captured in 3ds Max. For comparative analysis, EIAs are also captured with the conventional pickup system of Fig. 12(c), in which the DPA is excluded.

Here, the FOV of each elemental lens of the MLA is set to 20°. For the ideal matching condition, the DPA with a deviation angle of ±10° is assumed to be located at the same position as the MLA. From Eq. (2), the two sets of the 3-D objects 'RC' and 'YP' are horizontally shifted to the left and right by ±7.05 mm and ±12.34 mm, respectively, while the shifted objects are also inwardly rotated by ±10°. That is, the perspective angles of the left- and right-shifted 'RC' and 'YP' range from -20° to 0° and from 0° to 20°, respectively.
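These ranges follow directly from Eq. (9); a short sketch checking them under ideal matching (δ = θsub,max = 10°):

```python
# Sketch checking the quoted perspective ranges via Eq. (9):
# theta_L,i = -delta + theta_sub,i and theta_R,i = +delta + theta_sub,i,
# with theta_sub,i spanning +/- theta_sub_max.
def perspective_ranges(delta_deg, theta_max_deg):
    left = (-delta_deg - theta_max_deg, -delta_deg + theta_max_deg)
    right = (delta_deg - theta_max_deg, delta_deg + theta_max_deg)
    return left, right

# Ideal matching in the computational experiment: delta = theta_sub_max = 10 deg.
left, right = perspective_ranges(10.0, 10.0)  # (-20.0, 0.0) and (0.0, 20.0)
```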

An SI of the object with a perspective angle of 0° can be generated when the sum of the deviation angle and the perspective angle of the MLA becomes 0°, which means that the MLA and DPA are in the ideal matching condition. When O-EIAs are picked up under this condition, 3-D object images with perspective angles of 0° can be generated, and these images are located at the 0th SIs. Table 1 shows the specifications of the virtual pickup devices employed in the computational experiments.


Table 1. Specifications of the computational pickup devices

Figure 13 shows the three kinds of picked-up EIAs, whose resolutions are 6,171×3,519 pixels. Figure 13(a) shows the EIA generated with the conventional pickup system of Fig. 12(c), in which the perspective angle of the test 3-D object ranges from -10.0° to 10.0°, since the FOV of the pickup MLA is 20°. Figures 13(b) and (c) show the O-EIA generated with the proposed pickup system of Fig. 12(b) and its rearranged and calibrated version, the S-EIA obtained with the PDPM method, respectively.

Fig. 13. (a) EIA and (b) O-EIA generated from the conventional and proposed pickup systems, respectively, and (c) S-EIA rearranged and calibrated version of (b) based on the PDPM

Figures 14(a) and (b) show 3-D object images reconstructed from the EIA of Fig. 13(a) and the S-EIA of Fig. 13(c), respectively, based on the view-based computational integral-imaging reconstruction (CIIR) algorithm [28], observed from five viewing angles of -10°, -5°, 0°, +5° and +10°. As shown in Fig. 14, the perspectives of the reconstructed 3-D object images change with the viewing angle. In Fig. 14(a), which shows 3-D object images reconstructed from the conventional EIA of Fig. 13(a), the perspective changes with viewing angle look small due to the limited FOV of the pickup MLA. In Fig. 14(b), however, where the images are reconstructed from the proposed S-EIA of Fig. 13(c), the changes in perspective are much larger than in the conventional case of Fig. 14(a), because the proposed S-EIA contains a much wider FOV of the 3-D object than the conventional EIA.

Fig. 14. 3-D object images reconstructed from the (a) EIA of Fig. 13(a) and (b) S-EIA of Fig. 13(c), respectively, which are observed from five viewing angles of -10°, -5°, 0°, +5° and +10°

In fact, the FOV of the 3-D object in the conventional pickup system ranges from -10° to 10°, whereas in the proposed pickup system the FOV of the reconstructed 3-D object images is increased to the range of -20° to +20°. This means that the proposed system provides a larger FOV of 3-D objects than the conventional system. In addition, since the total FOV of the proposed system ranges from -20° to 20°, centered 3-D object images with a perspective angle of 0° can also be generated.

In short, these experimental results confirm that the FOV range of the proposed system is increased to 40°, a two-fold increase over the conventional system. In addition, 3-D object images with a perspective angle of 0° can be reconstructed under the ideal matching condition.

This FOV increase of the proposed system can be explained equivalently as follows. When the original MLA with a 20° FOV is combined with the DPA in the pickup process, the resultant FOV is increased to 40°, which is equivalent to using a new type of MLA with a 40° FOV in the conventional pickup system. In other words, the proposed pickup system using a pair of MLA and DPA virtually operates just like the conventional pickup system with a new MLA whose FOV is much larger than that of the employed MLA, here called the equivalent MLA (E-MLA).
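This equivalence is simple angle bookkeeping; as a sketch:

```python
# The E-MLA equivalence is simple angle bookkeeping: an MLA of field-of-view
# FOV combined with a DPA of deviation +/-delta behaves like a single MLA of
# field-of-view FOV + 2*delta.
def equivalent_fov_deg(mla_fov_deg, delta_deg):
    return mla_fov_deg + 2.0 * delta_deg

fov_e = equivalent_fov_deg(20.0, 10.0)  # 40.0 deg, the case reported above
```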

Figures 15(a) and (b) show the 3-D object images reconstructed from the S-EIA generated in the proposed system using a pair of MLA and DPA, and from the EIA picked up in its equivalent system employing only the E-MLA, respectively. As shown in Fig. 15, the perspectives of the 3-D object images at each viewing angle are identical in the proposed and equivalent systems. Thus, these experimental results confirm that the proposed pickup system with a pair of MLA and DPA virtually acts just like the conventional pickup system employing an E-MLA with a much larger FOV.

Fig. 15. Reconstructed 3-D object images from the (a) Proposed and (b) Conventional systems employing the E-MLA, which are viewed from five viewing angles of -10°, -5°, 0°, +5° and +10°

3.2 Optical experiments with real 3-D objects

In the optical experiments, the proposed pickup system is implemented with the combined use of an MLA manufactured by our research center and a DPA (Model: Array of prism #450, Fresnel Technologies), under the non-ideal matching condition, since the deviation angle (δ) of the employed DPA happens to be larger than the perspective angle of the elemental lenses of the MLA.

Here, the pitch and focal length of the MLA are 7.47 mm and 29.88 mm, respectively, and its perspective angle is calculated to range from -7.1° to 7.1°. The employed MLA has 62×36 lenses, and the resolution of the picked-up O-EIA is set to 1,920×1,080 pixels. In addition, the prism size and deviation angle of the employed DPA are 0.78 mm and 21.0°, respectively. For the comparative performance analysis, the conventional pickup system employing only the MLA is also implemented.

For the display, an LCD panel (Model: S22B300H, Samsung) with a resolution of 1,920×1,080 pixels and the same MLA used in the pickup system are employed. Here, the gap between the LCD panel and the MLA is set to 35 mm; thus the viewing angle of the display system is calculated to be 12.2°. In the proposed pickup process, the perspective angles of the left and right virtual 3-D objects are calculated with Eq. (9) to range from -28.1° to -13.9° and from 13.9° to 28.1°, respectively, which means that SIs with a perspective angle of 0° cannot be generated, since the optical pickup is not performed under the ideal matching condition. Table 2 shows the specifications of the pickup and display devices employed in the optical experiments.
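The quoted angles can be reproduced with the half-angle model θ = tan⁻¹(p/2g) for a lens of pitch p over gap g (an assumption consistent with the numbers above, not the paper's stated derivation):

```python
import math

# Sketch reproducing the quoted optical-experiment angles, assuming the
# half-angle model theta = atan(p / (2g)) for a lens of pitch p over gap g.
p = 7.47       # MLA pitch (mm)
f = 29.88      # MLA focal length (mm)
g = 35.0       # LCD-to-MLA gap in the display (mm)
delta = 21.0   # DPA deviation angle (deg)

theta_sub_max = math.degrees(math.atan(p / (2 * f)))      # ~7.1 deg
viewing_angle = 2 * math.degrees(math.atan(p / (2 * g)))  # ~12.2 deg

# Eq. (9) then gives the left/right virtual-object perspective ranges:
left_range = (-delta - theta_sub_max, -delta + theta_sub_max)  # ~(-28.1, -13.9)
right_range = (delta - theta_sub_max, delta + theta_sub_max)   # ~(13.9, 28.1)
```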


Table 2. Specifications of optical pickup and display devices for optical experiments

Figure 16 shows the test object, composed of two 3-D objects, 'Doll' and 'Truck', as well as the conventional and proposed optical pickup systems. As shown in Fig. 16(a), the 'Truck' object, with the two Arabic numerals '1' and '2' on its left-hand side, is blocked by the 'Doll' object; thus most of the two numerals cannot be seen within a limited viewing angle. In the optical pickup systems, the 'Doll' object is located at a distance of 85 mm from the MLA, while the DPA is set at a distance of 45 mm from the MLA, with the gap between the DPA and the MLA kept at 40 mm, as shown in Figs. 16(b) and (c).

Fig. 16. (a) A test object composed of two 3-D objects of ‘Doll’ and ‘Truck’ (front view), (b) Conventional and (c) Proposed pickup systems (Top views)

Figures 17(a) and (b) show the EIA and O-EIA captured from the test object with the conventional and proposed pickup systems of Figs. 16(b) and (c), respectively. As shown in Figs. 17(a) and (b), the FOV of the EIA captured with the conventional pickup system ranges from -7.1° to 7.1°, whereas the corresponding FOVs of the O-EIA captured with the proposed system range from -28.1° to -13.9° for the Ol image and from 13.9° to 28.1° for the Or image. The FOV looks much expanded compared with that of the conventional EIA, since the picked-up O-EIA is composed of two EIAs captured from two virtual 3-D objects, which are shifted and tilted versions of the centered original 3-D object. Figure 17(c) shows the S-EIA rearranged and calibrated from the O-EIA of Fig. 17(b) based on the PDPM method.

Fig. 17. (a) EIA picked up from the conventional system, (b) O-EIA picked up from the proposed system, (c) S-EIA which is the rearranged and calibrated version of the O-EIA

Since the optical pickup is carried out under the non-ideal matching condition, owing to the physical difference between the perspective angle (7.1°) of the MLA and the deviation angle (21°) of the DPA, front object images with a perspective angle of 0° cannot be generated in the experiments, as mentioned above.

Figure 18 shows the 3-D object images reconstructed with the conventional and proposed systems as a function of the viewing angle. The set of 3-D object images reconstructed from the EIA of Fig. 17(a) is shown in Fig. 18(a), observed from five viewing angles of -6°, -3°, 0°, +3° and +6°. Moving from left to right, the two Arabic numerals '1' and '2' on the side of the 'Truck' cannot be seen at any viewing angle, since the FOV of the conventional pickup system is very limited due to the relatively high f-number of the employed MLA.

Fig. 18. Optically-reconstructed 3-D object images from the (a) Conventional and (b) Proposed systems which are observed from 5 viewing angles of -6°, -3°, 0°, +3° and +6°

On the other hand, in Fig. 18(b), which shows the 3-D object images reconstructed from the S-EIA of Fig. 17(c) with the proposed system, the numeral '1' can be seen in the images viewed from the angles of -3° and -6°, while the numeral '2' can be seen in the images viewed from the angles of +3° and +6°, since the FOV of the proposed system has been much increased with the DPA. That is, the right- and left-hand side perspective angles of the reconstructed 3-D object images have been increased from +7.1° to (7.1°+21.0°) and from -7.1° to -(7.1°+21.0°), respectively, which allows the rays coming from the two numerals '1' and '2' of the 'Truck' to be captured on the MLA through the DPA. Thus, the perspective angles of the reconstructed 3-D object images can range maximally up to ±28.1°.

Actually, in the proposed system, the perspective angles of the reconstructed 3-D object images in the left and right viewing zones range from -28.1° to -13.9° and from 13.9° to 28.1°, respectively, unlike the conventional system, where they range from -6° to 0° and from 0° to 6°, respectively. These experimental results also confirm that the FOV of the proposed system, which uses a pair of MLA and DPA, is enhanced well beyond that of the conventional method, which allows the proposed system to provide a much wider perspective angle of the reconstructed 3-D object images and the viewer to see hidden parts of 3-D objects that cannot be seen with the conventional system.

4. Conclusions

In this paper, a new type of field-of-view (FOV)-enhanced integral imaging system with a pair of micro-lens array (MLA) and dual-prism array (DPA) has been proposed based on the perspective-dependent pixel-mapping (PDPM) method. The DPA enables the employed MLA to virtually function as another MLA whose FOV is enhanced by 2δ. The PDPM scheme also allows the picked-up EIAs to be remapped into new forms of EIAs that can be properly reconstructed in the conventional integral imaging system. Computational and optical experiments show a two-fold increase in the FOV range of the proposed system compared with the conventional system. Thus, the ray-optical analysis of the proposed system, together with the computational and optical experiments, confirms its feasibility.

Funding

National Research Foundation of Korea (2018R1A6A1A03025242); Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (IITP-2017-01629).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D.-H. Shin, B. Lee, and E.-S. Kim, “Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens,” Appl. Opt. 45, 7375–7381 (2006). [CrossRef]  

2. A. Stern and B. Javidi, “Three dimensional sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]  

3. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]  

4. L. B. Wolff, “Polarization vision: a new sensory approach to image understanding,” Image Vis. computing 15, 81–93 (1997). [CrossRef]  

5. V. Javier Traver, P. Latorre-Carmona, E. Salvador-Balaguer, F. Pla, and B. Javidi, “Human gesture recognition using three-dimensional integral imaging,” J. Opt. Soc. Am. A 31(10), 2312–2320 (2014). [CrossRef]  

6. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

7. J.-S. Jang and B. Javidi, “Three-dimensional projection integral imaging using micro-convex-mirror arrays,” Opt. Express 12(6), 1077–1086 (2004). [CrossRef]  

8. A. Stern and B. Javidi, “3-D computational synthetic aperture integral imaging (COMPSAII),” Opt. Express 11(19), 2446–2451 (2003). [CrossRef]  

9. W. Xie, Y. Wang, H. Deng, and Q. Wang, “Viewing angle-enhanced integral imaging system using three lens arrays,” Chin. Opt. Lett. 12(1), 011101 (2014). [CrossRef]  

10. J. Hyun, D.-C. Hwang, D.-H. Shin, B.-G. Lee, and E.-S. Kim, “Curved projection integral imaging using an additional large-aperture convex lens for viewing angle improvement,” ETRI J. 31(2), 105–110 (2009). [CrossRef]  

11. J.-Y. Jang, H.-S. Lee, S. Cha, and S.-H. Shin, “Viewing angle enhanced integral imaging display by using a high refractive index medium,” Appl. Opt. 50(7), B71–76 (2011). [CrossRef]  

12. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007). [CrossRef]  

13. G. Baasantseren, J.-H. Park, K.-C. Kwon, and N. Kim, “Viewing angle enhanced integral imaging display using two elemental image masks,” Opt. Express 17(16), 14405–14417 (2009). [CrossRef]  

14. J.-S. Jang and B. Javidi, “Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor,” Appl. Opt. 42(11), 1996–2002 (2003). [CrossRef]  

15. H. Geng, Q. H. Wang, L. Li, and D. H. Li, “An integral-imaging three-dimensional display with wide viewing angle,” J. Soc. Inf. Disp. 19(10), 679–684 (2011). [CrossRef]  

16. H. Watanabe, N. Okaichi, H. Sasaki, and M. Kawakita, “Pixel-density and viewing-angle enhanced integral 3D display with parallel projection of multiple UHD elemental images,” Opt. Express 28(17), 24731–24746 (2020). [CrossRef]  

17. H.-M. Choi, J.-G. Choi, and E.-S. Kim, “Dual-View Three-Dimensional Display Based on Direct-Projection Integral Imaging with Convex Mirror Arrays,” Appl. Sci. 9(8), 1577–1595 (2019). [CrossRef]  

18. Y. Kim, J.-H. Park, H. Choi, S. Jung, S.-W Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12(3), 421–429 (2004). [CrossRef]  

19. D.-H. Shin, Y. Kim, B. Lee, and E.-S. Kim, “Curved integral imaging scheme using an additional large-aperture lens,” Proc. SPIE 6490, 64901K (2007). [CrossRef]  

20. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef]  

21. C.-Y. Chen, T.-T. Yang, and W.-S. Sun, “Optics system design applying a micro-prism array of a single lens stereo image pair,” Opt. Express 16(20), 15495–15505 (2008). [CrossRef]  

22. Q.-L. Deng, C.-Y. Chen, S.-W. Cheng, W.-S. Sun, and B.-S. Lin, “Micro-prism type single-lens 3d aircraft telescope system,” Opt. Commun. 285(24), 5001–5007 (2012). [CrossRef]  

23. D.-H. Lee and I.-S. Kweon, “A novel stereo camera system by a biprism,” IEEE Trans. Robot. Automat. 16(5), 528–541 (2000). [CrossRef]  

24. H. Liao, T. Dohi, and M. Iwahara, “Improved viewing resolution of integral videography by use of rotated prism arrays,” Opt. Express 15(8), 4814–4822 (2007). [CrossRef]  

25. C.-W. Chen, M. Cho, Y.-P. Huang, and B. Javidi, “Improved viewing zones for projection type integral imaging 3D display using adaptive liquid crystal prism array,” J. Display Technol. 10(3), 198–203 (2014). [CrossRef]  

26. J.-H. Park, J. Kim, and B. Lee, “Three-dimension optical correlator using a sub-image array,” Opt. Express 13(13), 5116–5126 (2005). [CrossRef]  

27. H.-H. Kang, J.-H. Lee, and E.-S. Kim, “Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging,” Opt. Express 20(5), 5440–5459 (2012). [CrossRef]  

28. D.-H. Shin, B.-G. Lee, and J.-J. Lee, “Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging,” Opt. Express 16(21), 16294–16304 (2008). [CrossRef]  





Equations (9)


$$\delta = \theta_1 + \sin^{-1}\!\left[\sin\alpha\sqrt{n^2-\sin^2\theta_1}-\sin\theta_1\cos\alpha\right]-\alpha = \theta_1+\theta_2-\alpha, \tag{1}$$
$$B_o = L_p \tan\delta, \tag{2}$$
$$B_c = L \tan\delta, \tag{3}$$
$$\delta-\gamma \le \theta_{inc} \le \delta+\gamma, \tag{4}$$
$$\delta-\gamma \le \beta \le \delta+\gamma, \tag{5}$$
$$\theta_{sub,i} = \tan^{-1}\!\left(\frac{y_i}{f}\right), \tag{6}$$
$$p_c = \tan(\theta_{sub,i})\times(D+f)\times\frac{1}{P}, \tag{7}$$
$$p_s = \frac{B_o}{P}, \tag{8}$$
$$\theta_{L,i} = -\delta + \theta_{sub,i} = -\delta + \tan^{-1}\!\left(\frac{y_i}{f}\right), \quad \theta_{R,i} = \delta + \theta_{sub,i} = \delta + \tan^{-1}\!\left(\frac{y_i}{f}\right). \tag{9}$$