
Computational phase modulation in light field imaging

Open Access

Abstract

We propose a scheme for modulating phase computationally in light field imaging systems. In a camera system based on the scheme, light field (LF) data is obtained by array-based optics, and the data is computationally projected into a single image with arbitrary phase modulation. In a projector system based on the scheme, LF data with arbitrary phase modulation is computationally generated before optical projection, and the phase-modulated image is projected by array-based optics. We describe the system design and required conditions based on the sampling theorem. We experimentally verified the proposed scheme based on camera and projector systems. In the experiment, we demonstrated a super-resolution camera and projector with extended depth-of-field without estimating the object’s shape.

© 2013 Optical Society of America

1. Introduction

1.1. Light field imaging

Light field imaging systems using array-based optics and postprocessing have been proposed for capturing the spatio-angular information of light rays in object space; this light ray information is called the light field (LF) [1–3]. The LF is expressed by the four-dimensional function L(s, t, u, v) determined by two parallel planes indicating the spatial and angular coordinates of a ray, as shown in Fig. 1 [3]. Figure 2 shows a schematic diagram of an LF imaging system, in which array-based optics capture the four-dimensional LF in the object space, and the acquired LF data is computationally projected to a single two-dimensional virtual image on the virtual image plane.

Fig. 1 Definition of light field (LF).

Fig. 2 Schematic diagram of LF imaging system.

The computational process in conventional LF imaging corresponds to the imaging process of a virtual camera in a virtual space, as shown in Fig. 2. The virtual imaging process is emulated by ray tracing. As a result, arbitrary camera conditions, including the focusing distance, F-number, camera position, etc., can be realized even after image capture [4, 5].

An interesting application of LF imaging is so-called all in-focus imaging, or extended depth-of-field (EDOF) imaging, where objects located at different distances can be brought into focus. In the case of Fig. 2, the focused object has to be located on the virtual image plane, and the depth-of-field (DOF) of the virtual image is limited. All in-focus imaging is realized by arbitrarily changing the shape of the virtual image plane for space-variant focusing. The shape of the virtual image plane has to be equivalent to the object’s shape. In general, however, it is difficult to obtain an accurate depth map of the object space, and estimating the depth map involves a large computational cost [6, 7].

Besides depth estimation, another important issue in LF imaging is super-resolution processing, because spatial resolution is typically sacrificed in order to capture the angular information of the light rays. The pixel pitch on the virtual image plane can also be chosen arbitrarily. A super-resolved image, with a higher resolution than that captured by the elemental optics (the individual sub-optics in the array-based optics), can be reconstructed by a computational process based on ray tracing with sub-pixel precision [6–12]. As an alternative to such image-based super-resolution in LF imaging, super-resolution can also be realized by adding optical elements [13–15].

1.2. Phase-modulation imaging

Imaging systems based on optical phase modulation and postprocessing have been studied with the aim of enhancing imaging performance and realizing highly functional imaging. Figure 3 schematically illustrates phase-modulation imaging using a phase plate. Examples of phase-modulation imaging are given below.

Fig. 3 Schematic diagram of phase-modulation imaging.

1.2.1. Extended depth-of-field (EDOF)

Optical design employing phase modulation can realize highly depth-invariant point spread functions (PSFs). In this design, the focusing range is optically extended in single-shot imaging, as illustrated in Fig. 3, resulting in blurred but depth-invariant PSFs. The images captured by such systems can be deconvolved into a single EDOF image with a single PSF [16]. For example, a cubic phase mask, spherically aberrated optics, and a radially symmetric kinoform diffuser have been used to realize EDOF imaging [16–19].

1.2.2. Depth measurement

Phase modulation can also realize depth-variant PSFs. The depth map of an object is retrieved by estimating the defocused PSFs in a captured image [20]. PSFs with enhanced depth variance have been implemented by using a phase plate or a diffuser in the imaging optics [21–23].

1.2.3. Super-resolution

Phase modulation is also used for enhancing sub-pixel structures in PSFs to realize super-resolution imaging. A super-resolved image, whose resolution exceeds the Nyquist limit set by the pixel pitch of the image sensor, can be obtained via the inverse of the imaging process [24]. A random phase mask has been proposed for realizing such PSFs [24, 25].

1.3. Phase-modulated LF imaging

In this paper, we present a framework for realizing arbitrary phase modulation in LF imaging systems to achieve the promising imaging functions mentioned above. Although amplitude modulation in LF imaging has been demonstrated by other researchers [4, 5], there has been little work on phase modulation. The computational projection used in the amplitude modulation schemes assumes a virtual aperture stop with an arbitrary shape to realize variable perspective view and variable DOF. In contrast, in the proposed method, virtual optics used in the computational process implements phase modulation virtually. The proposed method does not require specially designed physical phase modulating elements because these elements are computationally emulated. This means that the proposed method is advantageous in terms of its higher flexibility and lower implementation costs compared with conventional phase-modulation imaging systems.

We describe an application of the proposed method to a phase-modulated camera and projector, and we analyze the conditions for the system design to satisfy the sampling theorem. In the following section, we explain the concept with an LF camera. An LF projector is merely an inversion of an LF camera, and thus, the concept of the LF camera can be directly applied to an LF projector. Finally, as examples of phase-modulated LF imaging, we demonstrate EDOF and super-resolution imaging for a camera and projector.

2. Proposed scheme

In the proposed scheme, the phase modulation is computationally realized by the modulated virtual optics used in the computational process of the LF camera, as shown in Fig. 4. From the perspective of geometric optics, phase modulation that changes the normals of the equiphase wavefront corresponds to changing the directions of the rays. In this paper, therefore, the phase modulation is implemented by tilting the optical axes of the virtual elemental optics in the virtual space. In this section, we formulate an expression for the required angles of the optical axes for achieving the desired phase modulation.

Fig. 4 Schematic diagram of phase-modulated LF camera.

Figure 5 shows definitions of the system parameters used for the analysis. In this paper, the global coordinates of the lens plane are (s, t), and the local coordinates of the sensor plane in each elemental optics are (u, v). The distance between each lens and sensor is the focal length f of the lens, and the axial coordinate from the lens plane is defined as z. For simplicity, the v-axis and t-axis are omitted in our analysis. The index k of the elemental optics is defined as shown in Fig. 5, and the order of k is along the global coordinate s. The total number of elemental optics is N.

Fig. 5 Definitions of the system parameters.

A phase plate for phase modulation is modeled as a glass plate with a refractive index n and a shape function z = g(s). The thickness of the phase plate is neglected in this paper. Owing to refraction by the phase plate, the angle of an emerging ray is changed as shown in Fig. 6(a). The modulation angle m(s) of the optical axes of the virtual elemental optics for achieving the phase modulation is calculated as

$$m(s) = \phi_{\mathrm{em}} - \phi_{\mathrm{in}}, \tag{1}$$

where, based on Snell’s law,

$$\phi_{\mathrm{in}} = \arctan\!\left(\frac{\partial g(s)}{\partial s}\right), \tag{2}$$

$$\phi_{\mathrm{em}} = \arcsin\left(n \sin\left(\phi_{\mathrm{in}}\right)\right). \tag{3}$$

Here, $\phi_{\mathrm{in}}$ and $\phi_{\mathrm{em}}$ are the angles of incidence and emergence measured from the surface normal.
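For concreteness, the mapping from a plate shape to the tilt angles of Eqs. (1)–(3) can be sketched numerically as follows (a minimal Python/NumPy sketch; the refractive index n = 1.5 and the sample coordinate are illustrative assumptions, not values taken from the paper):

import numpy as np

def tilt_angle(g_prime, s, n=1.5):
    # Modulation angle m(s) of a virtual elemental optic emulating a thin
    # phase plate z = g(s) with refractive index n, following Eqs. (1)-(3).
    phi_in = np.arctan(g_prime(s))           # incidence angle, Eq. (2)
    phi_em = np.arcsin(n * np.sin(phi_in))   # emergence angle by Snell's law, Eq. (3)
    return phi_em - phi_in                   # m(s) = phi_em - phi_in, Eq. (1)

# Example: a cubic phase mask g(s) = alpha * s**3 has slope dg/ds = 3 * alpha * s**2.
alpha = 40.0
m = tilt_angle(lambda s: 3.0 * alpha * s**2, s=0.01, n=1.5)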

Fig. 6 Designs for implementing phase modulation in virtual optics. Phase modulation by (a) using a phase plate, and (b) tilting the optical axes of virtual elemental optics for achieving modulation equivalent to that of a phase plate.

In the proposed scheme, the modulation m(s) is emulated by the tilt angle of the optical axes of the virtual elemental optics in the LF camera, as shown in Fig. 6(b). By changing the design of g(s), an arbitrary phase plate can be realized computationally.

3. Sampling in phase-modulated LF imaging

In LF camera systems, the LF in the object space is sampled discretely by an array of imaging optics and an array of detector pixels. To avoid aliasing in the virtual images due to under-sampling, the pitches of the elemental optics and detector pixels should be designed based on the sampling theorem [26]. In this section, we introduce the sampling conditions for phase-modulated LF imaging. With the introduced conditions, the sampling properties of LF signals, PSFs, and image acquisition are simulated by emulating EDOF imaging as an example of phase-modulation imaging.

3.1. Formulation of sampling

In this subsection, we formulate the Nyquist pixel pitch and the Nyquist pitch of elemental optics, which are used for the proposed system design.

3.1.1. Nyquist pixel pitch

Here, we formulate the required pixel pitch Δu of the elemental optics for sampling the LF signal without aliasing. The smallest structure in an image of an object on a sensor is defined as Δo. In this paper, we assume Lambertian objects whose surfaces exhibit light reflection with no angular dependency.

The resolution of an imaging system is limited by diffraction and aberrations in the optics, as well as by the pixel pitch of the image sensor. The resolvable size due to diffraction and aberrations, namely, the diameter of the Airy disc, is defined as Δa [27], and the resolvable size due to the image sensor, namely, the pixel pitch, is defined as Δu. The latter resolution limit Δu can be reduced by applying super-resolution processing to give δu as follows [10]:

$$\delta u = \frac{\Delta u}{n_{\mathrm{sr}}}, \tag{4}$$

where $n_{\mathrm{sr}}$ is a natural number that can be considered a resolution improvement factor resulting from the super-resolution processing. Theoretically, the maximum $n_{\mathrm{sr}}$ is the total number N of elemental optics. In practice, however, $n_{\mathrm{sr}}$ is lower than N because the resolution of the imaging system is determined by the larger of the wave-optics-based resolution Δa and the geometrical-optics-based resolution δu after super-resolution [10]. To sample the LF signal with an over-sampling rate for the smallest structure Δo in the object image, the required condition is

$$\max(\Delta a, \delta u) \leq \frac{\Delta o}{2}. \tag{5}$$

Assuming that general imaging conditions satisfy δu > Δa, the required condition for the pixel pitch Δu can be formulated as

$$\Delta u = n_{\mathrm{sr}}\,\delta u \leq \frac{n_{\mathrm{sr}}\,\Delta o}{2}. \tag{6}$$
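As a simple illustration, the pixel-pitch conditions of Eqs. (4)–(6) can be checked as below (a sketch; all parameter values are placeholders, not the values of the experiments):

def pixel_pitch_ok(delta_u, n_sr, delta_o, delta_a):
    # Check Eqs. (4)-(5): the effective pitch after super-resolution must
    # over-sample the smallest object structure delta_o by a factor of two.
    du = delta_u / n_sr                       # effective pixel pitch, Eq. (4)
    return max(delta_a, du) <= delta_o / 2.0  # Eq. (5); Eq. (6) follows when du > delta_a

# Example with placeholder values in micrometers.
print(pixel_pitch_ok(delta_u=4.65, n_sr=3, delta_o=4.0, delta_a=1.0))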

3.1.2. Nyquist pitch of elemental optics

Now we determine the pitch of the elemental optics in the phase-modulated LF camera system. The pitch has to be set so that the disparity between neighboring elemental optics is smaller than the pixel pitch of the image sensor [26]. In formulating this condition, we consider the disparity of the center pixel of the image sensor; the disparities of the other pixels are approximated by that of the center pixel based on the paraxial approximation.

In the computational process, the LF data are projected onto the virtual image plane. In our scheme, the projection is performed by calculating the geometrical relation between the pixels on the sensor and the pixels on the virtual image plane, as shown in Fig. 7. In the projection from the k-th elemental optics, located at s(k), the center pixel in the virtual image plane detects a ray from the sensor pixel at $u_s$, as shown in the figure. The coordinate $u_s$ of the sampled pixel in the LF data can be formulated as follows:

$$u_s(s(k), z_v, m(s)) = f \tan\left(\theta_{\mathrm{ray}} + \theta_{\mathrm{mod}}\right), \tag{7}$$

where, as shown in Fig. 7,

$$\theta_{\mathrm{ray}} = \arctan\!\left(\frac{s(k)}{z_v}\right), \tag{8}$$

$$\theta_{\mathrm{mod}} = m(s(k)). \tag{9}$$

Here, $z_v$ is the distance between the virtual image plane and the virtual elemental optics, $\theta_{\mathrm{ray}}$ is the angle of the ray passing through the lens coordinate s(k) measured from the normal of the virtual image plane, and $\theta_{\mathrm{mod}}$ is the tilt angle of the virtual lens for phase modulation. The geometrical relation between the pixels on the sensor and the object image plane can also be calculated by using Eq. (7), replacing the distance $z_v$ with the distance $z_o$ between the object image plane and the real elemental optics and setting m(s) = 0, i.e., no phase modulation.
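In code form, the sampled coordinate of Eqs. (7)–(9) reduces to a one-line mapping (a sketch assuming NumPy; setting m = 0 and z = z_o gives the object-space relation mentioned above):

import numpy as np

def u_sample(s_k, z, f, m=0.0):
    # Local sensor coordinate detected by the center virtual pixel, Eq. (7):
    # u_s = f * tan(theta_ray + theta_mod).
    theta_ray = np.arctan(s_k / z)  # angle of the ray through s(k), Eq. (8)
    theta_mod = m                   # tilt angle of the virtual lens, Eq. (9)
    return f * np.tan(theta_ray + theta_mod)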

 figure: Fig. 7

Fig. 7 Geometrical relation between pixels on a sensor and pixels on a virtual image plane in virtual space.

Download Full Size | PDF

The disparity d is the difference between the local coordinates $u_s$, obtained from Eq. (7), of neighboring elemental optics in the object and virtual spaces, as shown in Fig. 8. It also changes depending on the phase modulation m(s), that is, the tilt of the virtual elemental optics, as shown in the figure. The disparity d between the k-th and (k − 1)-th elemental optics can be calculated as follows:

$$d(s(k), \Delta s, z_o, z_v, m(s)) = |\omega_o - \omega_v|, \tag{10}$$

where, as illustrated in Fig. 8,

$$\omega_o = u_s(s(k), z_o, 0) - u_s(s(k-1), z_o, 0), \tag{11}$$

$$\omega_v = u_s(s(k), z_v, m(s)) - u_s(s(k-1), z_v, m(s)), \tag{12}$$

$$\Delta s = s(k) - s(k-1). \tag{13}$$

Here, $\omega_o$ is the difference of the local coordinates $u_s$ of the two elemental optics in the object space, $\omega_v$ is that in the virtual space, and Δs is the pitch of the elemental optics (element pitch). The disparity d of the k-th elemental optics is defined as the absolute difference between $\omega_o$ and $\omega_v$.
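Using the u_sample helper sketched above, the disparity of Eqs. (10)–(13) can be written directly (a sketch; s is an array of lens positions s(k), and m is a callable returning the tilt m(s)):

def disparity(k, s, z_o, z_v, m, f):
    # Disparity between the k-th and (k-1)-th elemental optics, Eq. (10).
    w_o = u_sample(s[k], z_o, f) - u_sample(s[k - 1], z_o, f)       # Eq. (11)
    w_v = (u_sample(s[k], z_v, f, m(s[k]))
           - u_sample(s[k - 1], z_v, f, m(s[k - 1])))               # Eq. (12)
    return abs(w_o - w_v)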

Fig. 8 Geometrical representation of the disparity.

To introduce the Nyquist element pitch, the mean disparity of the system is defined as follows:

$$\bar{d}(\Delta s, z_o, z_v, m(s)) = \frac{\sum_{k=2}^{N} d(s(k), \Delta s, z_o, z_v, m(s))}{N - 1}. \tag{14}$$

The condition on the pitch of the elemental optics for making the mean disparity less than the pixel pitch on the image sensor is as follows:

$$\bar{d}(\Delta s, z_o, z_v, m(s)) \leq \frac{\Delta u}{n_{\mathrm{sr}}} = \delta u. \tag{15}$$

The mean disparity monotonically increases with the pitch of the elemental optics Δs; therefore, the maximum element pitch satisfying the sampling condition in Eq. (15) can be found uniquely. To avoid aliasing in reconstructed images, the pitches of the image sensor pixels and the elemental optics in the phase-modulated LF imaging system should be designed to satisfy the conditions of Eqs. (6) and (15) for all virtual distances $z_v$ of the image plane within an assumed range.
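Because the mean disparity grows monotonically with Δs, the maximum admissible element pitch can be found by a simple sweep, sketched below using the disparity function above (the pitch grid and the centering of the array are illustrative choices):

import numpy as np

def max_element_pitch(N, z_o, z_v, m, f, delta_u, n_sr,
                      pitches=np.linspace(1e-5, 5e-3, 500)):
    # Largest element pitch satisfying Eq. (15), using the mean disparity
    # of Eq. (14) over the N - 1 neighboring pairs.
    du = delta_u / n_sr
    best = None
    for pitch in pitches:
        s = (np.arange(N) - (N - 1) / 2.0) * pitch  # centered lens positions s(k)
        d_bar = np.mean([disparity(k, s, z_o, z_v, m, f) for k in range(1, N)])
        if d_bar <= du:
            best = pitch   # still over-sampled
        else:
            break          # monotonic increase: no larger pitch can satisfy Eq. (15)
    return best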

3.2. Simulations based on EDOF imaging

In this subsection, we analyze the sampling properties of the LF data in the computational projection based on Eq. (7), and we show the results of simulations performed to confirm the effect of the sampling condition of Eq. (15). In the simulations, EDOF imaging described in Section 1.2.1 is used as an example of phase-modulation imaging.

3.2.1. Sampling properties

To analyze the EDOF properties, a normalized defocus parameter Ψ is introduced [16, 27]:

$$\Psi = \frac{\pi A^2}{4\lambda}\left(\frac{1}{f_{\mathrm{LF}}} - \frac{1}{z_o} - \frac{1}{z_v^{\Psi}}\right). \tag{16}$$

Here, $z_v^{\Psi}$ is the distance of a virtual image plane generating the defocus Ψ, λ is the wavelength, A is the diameter of the whole array-based optics, and $f_{\mathrm{LF}}$ is the focal length of the system. A and $f_{\mathrm{LF}}$ are given by

$$A = (N - 1)\,\Delta s, \tag{17}$$

$$f_{\mathrm{LF}} = \frac{1}{\dfrac{1}{z_o} + \dfrac{1}{z_v^{\Psi=0}}}. \tag{18}$$

In this paper, the in-focus distance $z_v^{\Psi=0}$ of the virtual image plane is defined identically to the distance $z_o$ of the assumed object plane as follows:

$$z_v^{\Psi=0} = z_o. \tag{19}$$
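For the simulations below, Eq. (16) is convenient to invert: given a target defocus Ψ, the corresponding virtual image plane distance follows directly (a sketch under the convention of Eq. (19); all inputs are placeholders):

import numpy as np

def z_v_for_defocus(psi, z_o, delta_s, N, lam):
    # Distance z_v^Psi of the virtual image plane producing defocus Psi,
    # obtained by solving Eq. (16) for z_v with Eqs. (17)-(19).
    A = (N - 1) * delta_s          # aperture of the whole array, Eq. (17)
    f_LF = 1.0 / (2.0 / z_o)       # Eq. (18) with z_v^{Psi=0} = z_o, Eq. (19)
    inv_z_v = 1.0 / f_LF - 1.0 / z_o - 4.0 * lam * psi / (np.pi * A**2)
    return 1.0 / inv_z_v           # equals z_o when psi = 0, as expected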

Assuming a system with the parameters shown in Table 1, the sampling pattern of the LF data by the center virtual pixel based on Eq. (7) is shown in Fig. 9. In the simulations, the distance $z_v$ of the virtual image plane was varied to evaluate the EDOF performance, and the element pitch Δs was also varied to satisfy the condition in Eq. (15) at each $z_v$. Figure 9(a) shows the sampling pattern of a conventional LF imaging system with no phase modulation. Figures 9(b)–9(d) show the sampling patterns of LF imaging systems with a cubic phase mask, spherical optics, and a radially symmetric kinoform diffuser, respectively. For emulating such phase plates, the shape functions g(s) in Eqs. (2) and (3) are given by [16, 17, 19]:

$$g(s) = \alpha s^3, \tag{20}$$

$$g(s) = \beta \sqrt{1 - \left(\frac{s}{\gamma}\right)^2}, \tag{21}$$

$$\frac{d}{ds} g(s) \sim P, \tag{22}$$

where α, β, and γ are arbitrary constants for adjusting the imaging conditions [28], and P is an arbitrary probabilistic function. Crosses in Fig. 9 denote the LF data captured by the array-based optics and the image sensor, and the LF data on the colored lines and within the colored regions are the data sampled based on Eq. (7). Note that the LF data in Fig. 9 are drawn sparser than the actually sampled data in the simulations in order to emphasize the effect of discrete sampling. Red, green, and blue show the sampling patterns at $z_v^{\Psi=0}$, $z_v^{\Psi=15}$, and $z_v^{\Psi=30}$, respectively. The computational projection in Fig. 4 corresponds to the integration of the LF data on the colored lines or within the colored regions. In the case of the diffuser shown in Fig. 9(d), the LF data in the colored regions are sampled probabilistically based on the designed probabilistic function [19].
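The three modulations can be emulated by supplying the corresponding surface slopes dg/ds to the tilt-angle computation of Eqs. (1)–(3); a sketch is given below (the constants and the Gaussian form of the probabilistic function P are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)

def slope_cpm(s, alpha=40.0):
    # Cubic phase mask, Eq. (20): g(s) = alpha * s**3.
    return 3.0 * alpha * s**2

def slope_spherical(s, beta=1.0, gamma=1.0):
    # Spherical surface, Eq. (21): g(s) = beta * sqrt(1 - (s / gamma)**2),
    # valid for |s| < gamma.
    return -beta * s / (gamma**2 * np.sqrt(1.0 - (s / gamma)**2))

def slope_diffuser(s, scale=0.05):
    # Kinoform diffuser, Eq. (22): the slope is drawn from a probabilistic
    # function P; a zero-mean Gaussian is assumed here for illustration.
    return rng.normal(0.0, scale, size=np.shape(s))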

Table 1. System parameters used in simulations.

Fig. 9 Sampling patterns of LF data and PSFs in the systems (a) without and (b)–(d) with phase modulation. The modulations were designed by emulating (b) a cubic phase mask, (c) spherical optics, and (d) a radially symmetric kinoform diffuser.

In conventional LF imaging, the integral lines are straight, as in Fig. 9(a). The gradients of the lines correspond to the distances $z_v$ of the virtual image planes. In phase-modulated LF imaging, on the other hand, the lines are curved or replaced with regions. The line profiles on the right-hand side of each subfigure in Fig. 9 show the PSFs resulting from the integrations. In a phase-modulated LF imaging system with EDOF, the imaging is more depth-invariant than in a conventional imaging system.

Figure 9(a) illustrates the pixel pitch Δu, the element pitch Δs, and the disparity d in the computational projection defined in Eq. (10). The disparity d corresponds to the difference between the sampled u-coordinates of neighboring elemental optics, as indicated in the figure, and the mean disparity is the mean of the disparities over the whole set of elemental optics. The sampling condition for the pitch of the elemental optics in Eq. (15) thus amounts to a comparison between the mean disparity and the pixel pitch δu after super-resolution processing.

3.2.2. PSF analysis

To verify the sampling conditions in the proposed scheme, we analyzed the PSFs obtained by changing the pitch of the elemental optics. As an example of EDOF imaging, the cubic phase mask (CPM) expressed in Eq. (20) was emulated in the phase-modulated LF camera with α = 40 in one direction, along the u- and s-axes.

The PSF in CPM-based wavefront coding can be theoretically derived as follows:

$$h(u, W_{20}) = \frac{1}{2}\left|\int_{-1}^{+1} \exp\left(j \alpha s_p^3 + j k W_{20} s_p^2 - j 2\pi u s_p\right) ds_p\right|^2, \tag{23}$$

$$W_{20} = \frac{\lambda \Psi}{2\pi}, \tag{24}$$

$$s_p = \frac{s}{A}, \tag{25}$$

where $s_p$ is the normalized pupil coordinate [16, 29]. The line profile of the theoretical PSF at Ψ = 0, which was derived from Eq. (23) with geometrical optics, is shown in Fig. 10(a). Note that, since the computational projection in the proposed scheme is performed based on geometrical optics, the PSF in Fig. 10(a) was calculated by neglecting the diffraction effects due to the finite pupil, for comparison with the proposed scheme.
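The theoretical PSF of Eq. (23) can be evaluated by direct numerical integration over the normalized pupil, for example as follows (a sketch; the sampling density is an arbitrary choice, and $kW_{20} = \Psi$ by Eq. (24) with $k = 2\pi/\lambda$):

import numpy as np

def cpm_psf(u, psi, alpha=40.0, n_samples=4096):
    # Wavefront-coding PSF h(u, W20), Eq. (23), by trapezoidal integration
    # over the normalized pupil coordinate s_p in [-1, +1], Eq. (25).
    s_p = np.linspace(-1.0, 1.0, n_samples)
    integrand = np.exp(1j * (alpha * s_p**3 + psi * s_p**2 - 2.0 * np.pi * u * s_p))
    return 0.5 * np.abs(np.trapz(integrand, s_p))**2

# Line profile of the in-focus PSF (Psi = 0), as in Fig. 10(a).
u = np.linspace(-5.0, 5.0, 256)
psf = np.array([cpm_psf(ui, psi=0.0) for ui in u])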

Fig. 10 The PSFs in CPM-based wavefront coding obtained by (a) analytical derivation and (b)–(e) numerical simulations. In the simulations, the pitch of the elemental optics was determined to make the mean disparity (b) half of the pixel pitch, (c) equal to the pixel pitch, (d) double the pixel pitch, and (e) five times larger than the pixel pitch at $z_v^{\Psi=0}$.

Figures 10(b)–10(e) show the PSFs of the proposed scheme with four different element pitches, in which the mean disparities in Eq. (14) are one-half of the pixel pitch, equal to the pixel pitch, double the pixel pitch, and five times larger than the pixel pitch, respectively. With the over-sampling condition of the element pitch in Eq. (15), the PSFs were approximately the same as the theoretical one in Fig. 10(a). In contrast, in the PSFs with the under-sampling condition, some fluctuations appeared. The results also indicate that the change in the PSFs according to the element pitch is only slight around the bound of the sampling condition (d̄/δu = 1.0); however, the bound can be used as a good criterion for obtaining approximately the same PSF as the theoretically derived PSF.

3.2.3. Analysis with two-dimensional image

The effect of the sampling condition in Eq. (15) was confirmed with a two-dimensional object using the system described by the parameters in Table 1. In the simulations, the finest structure Δo of the object image on the image sensor was set to double the pixel pitch δu to satisfy Eq. (6), where $n_{\mathrm{sr}}$ = 1. As in the PSF analysis described above, CPM-based wavefront coding in one direction was chosen as an example of the phase modulation. In the proposed system emulating the CPM, the imaging process was simulated with virtual image planes at $z_v^{\Psi=0}$ and $z_v^{\Psi=30}$ while changing the element pitch Δs. The relation between the simulated element pitch Δs and the achieved mean disparity at the two virtual image planes is shown in Table 2.

Table 2. The pitch of optics and the achieved sampling pitch.

The images computationally projected onto the virtual image planes based on Eq. (7) are shown in Fig. 11(a). With conventional LF imaging, a sharp image appeared at $z_v^{\Psi=0}$, the in-focus distance, and a defocused image appeared at $z_v^{\Psi=30}$. With the phase-modulated LF camera, on the other hand, images blurred by the depth-invariant PSF were obtained at both distances. The computationally projected images violating the sampling condition in Eq. (15) (under-sampling) showed some aliasing artifacts.

Fig. 11 Simulations with a two-dimensional image. (a) Computationally projected images and (b) final images obtained with the conventional and proposed LF cameras with the CPM while changing the element pitch and the object distance. Red rectangles and blue rectangles indicate over-sampling and under-sampling conditions, respectively.

Figure 11(b) shows the final images of the conventional and phase-modulated LF cameras. The final images obtained with the conventional method are the same as the images in Fig. 11(a), and those obtained with the proposed method are deconvolved versions of those in Fig. 11(a). For all conditions, the deconvolution used a single Wiener filter with the PSF at d̄/δu = 0.5 and $z_v^{\Psi=0}$, which approximates the theoretically derived in-focus PSF as indicated in Fig. 10(b), in order to isolate the effect of each sampling condition [27]. Compared with conventional LF imaging, the DOF of the proposed LF camera was successfully extended when the sampling condition in Eq. (15) was satisfied. The deconvolved images under the under-sampling condition have some artifacts even at the in-focus distance $z_v^{\Psi=0}$. The artifacts in the deconvolved image at $z_v^{\Psi=30}$ under the over-sampling condition are due to the small residual depth variance of the PSFs produced by the CPM for EDOF imaging, not due to the LF imaging [30].
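The single-filter deconvolution used for Fig. 11(b) corresponds to a standard Wiener inverse; a minimal sketch is shown below (the PSF is assumed to be zero-padded to the image size and centered, and the noise-to-signal constant is a hand-tuned placeholder):

import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    # Wiener-filter deconvolution with a single PSF. `psf` must have the
    # same shape as `image` and be centered; `nsr` is an assumed
    # noise-to-signal power ratio.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H)**2 + nsr)   # Wiener filter in the frequency domain
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))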

When the element pitch was designed under the under-sampling condition, artifacts appeared due to aliasing in the computational projection. The differences between the deconvolved images with d̄/δu = 0.5 and the others at each distance are shown in Fig. 12. The difference images and their corresponding peak signal-to-noise ratios (PSNRs) indicate that the artifacts increased with the element pitch at both distances.

Fig. 12 Differences between deconvolved images with d̄/δu = 0.5 and others at $z_v^{\Psi=0}$ and $z_v^{\Psi=30}$, and their corresponding PSNRs.

In conclusion, the system should be designed to satisfy the sampling conditions of Eq. (6) and Eq. (15) at all virtual image plane distances to avoid aliasing.

4. Experimental demonstrations

We experimentally demonstrated the proposed scheme in camera and projection systems. In the experiments, we demonstrated super-resolution and EDOF based on the CPM as examples of phase-modulation imaging. As in CPM-based EDOF imaging, the demonstrated camera and projector did not require estimation of a depth map of the three-dimensional object or screen. This is the main advantage of the proposed method over conventional all in-focus LF imaging from the perspective of computational cost [6, 7]. System flows of the experiments are shown in Fig. 13. In the camera system, the LF in the object space was captured by a camera array, and the captured LF data were computationally projected into a single image with phase modulation. The projected image was deconvolved into an EDOF image. In the projection system, on the other hand, the deconvolution and computational projection were performed before the optical projection, as in [31–34]. The generated LF data were projected by a projector array. In the proposed scheme, the phase modulation was achieved by tilting the virtual elemental optics in the computational projection process, as shown in Fig. 4.

Fig. 13 Schematic diagram of (a) EDOF camera and (b) EDOF projector based on phase-modulated LF imaging.

4.1. Camera systems

4.1.1. EDOF camera

EDOF imaging based on a camera array was demonstrated with the setup shown in Fig. 14. In this experiment, a monochrome CCD camera (PL-B953U, manufactured by PIXELINK, with 1200 × 768 pixels and a pixel pitch of 4.65 × 4.65 μm) was scanned mechanically to emulate a camera array. As in the simulations described in Section 3, scanning along the t-axis was omitted. A tilted sheet of paper was used as a three-dimensional object.

Fig. 14 Setup used for experimental verification of the EDOF camera.

In the experiment, the object was captured while scanning the camera along the s-axis at a fixed interval. The interval was set to 0.2 mm to satisfy the sampling condition of Eq. (15) within the range of object distances used. Since the aim of this experiment was to verify the EDOF performance, super-resolution was not demonstrated, and thus the improvement factor $n_{\mathrm{sr}}$ in Eq. (15) was 1. The pixel-pitch sampling condition in Eq. (6), with $n_{\mathrm{sr}}$ = 1, was also satisfied. One of the captured images is shown in Fig. 15(a). The pixel values of the image were normalized according to the reconstructed images, and white Gaussian noise with an SNR of 10 dB was added to the captured images to demonstrate the noise robustness of the EDOF imaging.

Fig. 15 Results obtained with the EDOF camera based on phase-modulated LF imaging. (a) A single captured image, the computationally projected images (b) without and (c) with computational phase modulation, and (d) the deconvolved image of (c).

The captured images (LF data) were computationally projected. A projected image without the computational phase modulation emulating the CPM is shown in Fig. 15(b). The noise in the projected image was suppressed compared with that in the captured image by the integration performed in the computational projection; however, the in-focus region in the image was restricted by the DOF of conventional LF imaging. On the other hand, the computationally projected image with the computational phase modulation, shown in Fig. 15(c), was both noise-suppressed and defocused by the depth-invariant PSF. The deconvolved image obtained by Wiener filtering of the phase-modulated image is shown in Fig. 15(d). The DOF was successfully extended compared with the computationally projected image without phase modulation in Fig. 15(b). The noise in the computationally projected image in Fig. 15(c) was slightly enhanced by the deconvolution, as in general EDOF imaging based on depth-invariant PSFs; however, the amount of noise in the deconvolved image shown in Fig. 15(d) was obviously lower than that in the image captured by the single elemental optics shown in Fig. 15(a).

4.1.2. Super-resolved EDOF camera

We also confirmed the effect of super-resolution in the EDOF imaging. The horizontal pixel count of the captured images was reduced to 1/$n_{\mathrm{sr}}$, where $n_{\mathrm{sr}}$ is the resolution improvement factor in Eq. (4); in this experiment, $n_{\mathrm{sr}}$ = 3. The down-sampled pixel pitch in the low-resolution captured images was $n_{\mathrm{sr}}\Delta u$. Noise was also added to these images, with an SNR of 15 dB. One of the low-resolution captured images with $n_{\mathrm{sr}}$ = 3 is shown in Fig. 16(a). The text is not recognizable due to the down-sampling and noise. Computationally projected images without and with phase modulation, obtained by ray tracing with sub-pixel precision for super-resolution, are shown in Figs. 16(b) and 16(c), respectively. In both cases, the resolution was improved and the noise was removed. The DOF in Fig. 16(b) is limited, whereas the PSF in Fig. 16(c) is depth-invariant. The deconvolution result of Fig. 16(c) is shown in Fig. 16(d), in which the resolution was improved and the noise was suppressed compared with Fig. 16(a). These results show that a super-resolved EDOF camera was successfully demonstrated.

Fig. 16 Results obtained with the super-resolved EDOF camera based on phase-modulated LF imaging. (a) A single low-resolution captured image, the computationally projected images (b) without and (c) with computational phase modulation, and (d) the deconvolved image of (c).

4.2. Projector systems

A projector based on the proposed scheme was also demonstrated. The experimental setup is illustrated in Fig. 17. Instead of an array projector, in this experiment, a three-panel LCD projector (EH-TW400, manufactured by EPSON, with 1280 × 800 pixels and a pixel pitch of 5.93 × 7.12 μm) was scanned along the s-axis. The projected images were captured by a CCD camera (PL-B953U, manufactured by PIXELINK), and the captured images were superposed by signal processing in place of the integration performed by the human visual system. A tilted screen was placed in front of the projector.

Fig. 17 Setup used for experimental verification of the EDOF projector.

4.2.1. EDOF projector

First, input Lena images like that shown in Fig. 18(a) were computationally projected to generate the LF data without and with the phase modulation emulating the CPM. To verify the EDOF performance, the deconvolved Lena image in Fig. 18(b) was also projected into the LF data with phase modulation, as shown in Fig. 13(b). Considering the constraint of the maximum grayscale value of pixels in incoherent projectors, the deconvolution was performed by solving the following inverse problem instead of the frequency filtering process as in [31]:

$$\hat{i} = \underset{i}{\arg\max}\; \mathcal{L}\left[\, i \mid \bar{o} = t \,\right] \quad \mathrm{subject\ to} \quad 0 \leq \hat{i}(p) \leq c, \ \forall p, \tag{26}$$

where $\hat{i}$ is the deconvolution result used as the input image i, $\mathcal{L}[\cdot]$ is the likelihood function, $\bar{o}$ is a projected image of the EDOF projector, t is a target image, $\hat{i}(p)$ is the pixel value of the p-th pixel of $\hat{i}$, and c is the maximum pixel value of the incoherent projector. In this experiment, the target image t was the Lena image shown in Fig. 18(a), and the inverse problem of Eq. (26) was solved by using the Richardson–Lucy algorithm [35, 36].
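A minimal sketch of such a constrained Richardson–Lucy iteration is given below (the multiplicative update keeps the estimate non-negative; clipping at c is one simple way to impose the upper bound, not necessarily the exact procedure used in the experiment):

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(target, psf, c=1.0, n_iter=50):
    # Estimate the input image i-hat of Eq. (26) from the target image t,
    # assuming the projected image is the convolution of the input with
    # the depth-invariant PSF.
    est = np.full_like(target, 0.5 * c)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same')
        ratio = target / np.maximum(blurred, 1e-12)
        est = est * fftconvolve(ratio, psf_flip, mode='same')
        est = np.clip(est, 0.0, c)   # enforce 0 <= i-hat(p) <= c
    return est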

Fig. 18 (a) An input Lena image and (b) its deconvolved image.

The generated LF data were projected onto the tilted screen by scanning the projector at a fixed spatial interval. The interval was set to 1.3 mm to satisfy the sampling condition of Eq. (15). Since super-resolution was not demonstrated in this experiment, the improvement factor $n_{\mathrm{sr}}$ was 1. Noise with an SNR of 1 dB was added to the captured images to demonstrate the noise robustness of the EDOF imaging.

An image optically projected by a conventional single projector is shown in Fig. 19(a). The pixel values were normalized according to the images obtained with the LF projector. Images optically projected by the LF projector using the original Lena image in Fig. 18(a) without and with phase modulation are shown in Figs. 19(b) and 19(c), respectively. Similarly to the camera experiments described above, a noise-suppressed image with limited DOF and a noise-suppressed image defocused by the depth-invariant PSF were obtained. The image obtained with the LF projector using the deconvolved Lena image in Fig. 18(b) and the phase modulation is shown in Fig. 19(d). The noise-suppressed EDOF image was optically projected onto the three-dimensional screen.

Fig. 19 Results obtained with the EDOF projector based on phase-modulated LF imaging. The optically projected images of the Lena image in Fig. 18(a) based on (a) the conventional single projector and the LF projector (b) without and (c) with computational phase modulation. (d) The optically projected image of the deconvolved Lena image in Fig. 18(b) based on the LF projector with computational phase modulation.

4.2.2. Super-resolved EDOF projector

We also demonstrated a super-resolved EDOF projector based on the proposed scheme. As in the camera experiments described above, the elemental input images in the LF data were down-sampled horizontally, with an improvement factor $n_{\mathrm{sr}}$ = 3 in Eq. (4). Noise was also added to the captured images, with an SNR of 5 dB. The optically projected low-resolution image generated by the conventional single projector is shown in Fig. 20(a). Using ray tracing with sub-pixel precision to achieve super-resolution, the optically projected images obtained with the LF projector using the original Lena image in Fig. 18(a), generated without and with computational phase modulation for the CPM, are shown in Figs. 20(b) and 20(c), respectively. In both cases, the resolution was improved and the noise was removed. The DOF in Fig. 20(b) is limited, whereas the PSF in Fig. 20(c) is depth-invariant. The optically projected image obtained with the super-resolved LF projector using the deconvolved Lena image in Fig. 18(b) with computational phase modulation is shown in Fig. 20(d). The noise-suppressed and super-resolved EDOF image was successfully projected onto the three-dimensional screen.

Fig. 20 Results obtained with the super-resolved EDOF projector based on phase-modulated LF imaging. Optically projected images of the Lena image in Fig. 18(a) based on (a) the conventional single projector and the LF projector (b) without and (c) with computational phase modulation. (d) An optically projected image of the deconvolved Lena image in Fig. 18(b) based on the LF projector with computational phase modulation.

5. Conclusion

In the research described in this paper, we have proposed a scheme for realizing arbitrary phase modulation in light field (LF) imaging. In a camera system based on our scheme, array-based optics capture the LF of an object, and the captured LF data is computationally projected into a single image with phase modulation. In a projector system based on the scheme, an input image is computationally projected into the LF data with phase modulation, and the generated LF data is optically projected by array-based optics. Phase modulation was realized by tilting the optical axes of the virtual elemental optics in the computational projection process in LF imaging.

The required conditions in the system design were also described. Expressions for the pitches of the image sensor pixels and the array of elemental optics in the proposed scheme were formulated, and the conditions were verified by simulation. As examples of phase-modulation imaging, a super-resolved camera and projector with EDOF based on the proposed LF imaging scheme were numerically and experimentally demonstrated. In these systems, as with general EDOF techniques based on depth-invariant PSFs, it was not necessary to estimate the shapes of the three-dimensional object and screen. Although the proposed method was demonstrated by scanning a camera or a projector in the experiments, the method is applicable to any type of LF imaging system so long as the sampling conditions derived in Section 3.1 are satisfied.

Our proposed scheme realizes arbitrary phase modulation in a single imaging system. Such an imaging system is a promising platform for computational imaging with phase coding.

Acknowledgments

This research was supported by Grant-in-Aid for JSPS Fellows from the Japan Society for the Promotion of Science.

References and links

1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).

2. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).

3. M. Levoy and P. Hanrahan, “Light field rendering,” in Proc. ACM SIGGRAPH (1996), pp. 31–42.

4. A. Isaksen, L. McMillan, and S. J. Gortler, “Dynamically reparameterized light fields,” in Proc. ACM SIGGRAPH (2000), pp. 297–306.

5. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Report CTSR 2005-02 (2005).

6. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14, 347–350 (2007).

7. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 972–986 (2012).

8. Y. Kitamura, R. Shogenji, K. Yamada, S. Miyatake, M. Miyamoto, T. Morimoto, Y. Masaki, N. Kondou, D. Miyazaki, J. Tanida, and Y. Ichioka, “Reconstruction of a high-resolution image on a compound-eye image-capturing system,” Appl. Opt. 43, 1719–1727 (2004).

9. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in Proc. IEEE International Conference on Computational Photography (ICCP) (2009), pp. 1–9.

10. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20, 21–36 (2003).

11. S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Appl. Opt. 52, D22–D31 (2013).

12. R. Horisaki, K. Kagawa, Y. Nakao, T. Toyoda, Y. Masaki, and J. Tanida, “Irregular lens arrangement design to improve imaging performance of compound-eye imaging systems,” Appl. Phys. Express 3, 022501 (2010).

13. R. Horisaki and J. Tanida, “Full-resolution light-field single-shot acquisition with spatial encoding,” in Imaging and Applied Optics, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CTuB5.

14. Z. Xu, J. Ke, and E. Y. Lam, “High-resolution lightfield photography using two masks,” Opt. Express 20, 10971–10983 (2012).

15. K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1–11 (2013).

16. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).

17. P. Mouroulis, “Depth of field extension with spherical optics,” Opt. Express 16, 12995–13004 (2008).

18. T. Nakamura, R. Horisaki, and J. Tanida, “Computational superposition compound eye imaging for extended depth-of-field and field-of-view,” Opt. Express 20, 27482–27495 (2012).

19. O. Cossairt, C. Zhou, and S. K. Nayar, “Diffusion coded photography for extended depth of field,” ACM Trans. Graph. 29, 1–10 (2010).

20. A. P. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 523–531 (1987).

21. G. E. Johnson, E. R. Dowski, and W. T. Cathey, “Passive ranging through wave-front coding: information and application,” Appl. Opt. 39, 1700–1710 (2000).

22. A. Greengard, Y. Schechner, and R. Piestun, “Depth from diffracted rotation,” Opt. Lett. 31, 181–183 (2006).

23. C. Zhou, O. Cossairt, and S. K. Nayar, “Depth from diffusion,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) (2010), pp. 1–8.

24. A. Ashok and M. A. Neifeld, “Pseudorandom phase masks for superresolution imaging from subpixel shifting,” Appl. Opt. 46, 2256–2268 (2007).

25. A. Ashok and M. A. Neifeld, “Information-based analysis of simple incoherent imaging systems,” Opt. Express 11, 2153–2162 (2003).

26. J. Chai, X. Tong, S. Chan, and H. Shum, “Plenoptic sampling,” in Proc. ACM SIGGRAPH (2000), pp. 307–318.

27. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

28. S. S. Sherif, W. T. Cathey, and E. R. Dowski, “Phase plate to extend the depth of field of incoherent hybrid imaging systems,” Appl. Opt. 43, 2709–2721 (2004).

29. W. Zhang, Z. Ye, T. Zhao, Y. Chen, and F. Yu, “Point spread function characteristics analysis of the wavefront coding system,” Opt. Express 15, 1543–1552 (2007).

30. Y. Takahashi and S. Komatsu, “Optimized free-form phase mask for extension of depth of field in wavefront-coded imaging,” Opt. Lett. 33, 1515–1517 (2008).

31. T. Nakamura, R. Horisaki, and J. Tanida, “Computational superposition projector for extended depth of field and field of view,” Opt. Lett. 38, 1560–1562 (2013).

32. M. Sieler, P. Schreiber, P. Dannberg, A. Bräuer, and A. Tünnermann, “Ultraslim fixed pattern projectors with inherent homogenization of illumination,” Appl. Opt. 51, 64–74 (2012).

33. M. Grosse, G. Wetzstein, A. Grundhöfer, and O. Bimber, “Coded aperture projection,” ACM Trans. Graph. 29, 1–12 (2010).

34. R. Horisaki and J. Tanida, “Compact compound-eye projector using superresolved projection,” Opt. Lett. 36, 121–123 (2011).

35. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).

36. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).
