Optica Publishing Group

Electrically addressed focal stack plenoptic camera based on a liquid-crystal microlens array for all-in-focus imaging

Open Access

Abstract

Focal stack cameras capture a stack of images focused at different spatial distances, which can be integrated to present a depth-of-field (DoF) effect beyond the range restriction of a conventional camera's optics. To date, the proposed focal stack cameras are essentially 2D imaging architectures that shape 2D focal stacks with several selected focal lengths corresponding to a limited object distance range. In this paper, a new type of electrically addressed focal stack plenoptic camera (EAFSPC) based on a functional liquid-crystal microlens array is proposed for all-in-focus imaging. As a 3D focal stack camera, it rapidly manipulates a sequence of raw light-field images by shaping a 3D focal stack. The electrically addressed focal stack strategy relies on electric tuning of the focal length of the liquid-crystal microlens array by efficiently selecting, adjusting, or jumping the signal voltage applied over the microlenses. An algorithm based on the Laplacian operator composites the electrically addressed focal stack into a raw light-field image with an extended DoF and then all-in-focus refocused images. The proposed strategy requires no macroscopic movement of the optical apparatus, thereby completely avoiding registration between image sequences. Experiments demonstrate that the DoF of the refocused images can be significantly extended to the entire tomographic depth of the EAFSPC, a significant step toward all-in-focus imaging based on an electrically controlled 3D focal stack. Moreover, the proposed approach establishes a strong correlation between the voltage signal and the depth of the in-focus plane, constructing a technical basis for a new type of intelligent 3D light-field imaging.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Nowadays, cameras are widely used in fields such as handheld photography, security, industrial inspection, and autonomous driving. Conventional cameras have a limited depth of field (DoF), which results in obvious defocus blur and a loss of image detail. A limited DoF is useful in some specific applications, such as emphasizing the face in portrait photography by blurring the background. In most applications, however, a limited DoF is undesirable for both human vision and machine vision, especially in target detection and recognition, where the loss of image detail poses a considerable challenge. Many methods have been proposed to overcome this problem [1–10]. One of the most powerful is focal stack imaging [11–17], an emerging technology for computational photography as well as macroscopic and microscopic imaging. Focal stack cameras can capture rich volumetric information of a scene, enhancing the capability for scene understanding, and will therefore play a pivotal role in applications including the life sciences, 3D medical imagery, and focus-tunable displays [16]. Generally, a focal stack is a sequence of images focused at different spatial distances. With current image fusion algorithms, these images can be combined to present a DoF effect beyond the range restriction normally allowed by the camera's optics; such a DoF extension means that more image detail is available for both human vision and machine vision. Currently, a focal stack can be acquired in several ways. One way is to mechanically move the image sensor, camera lens, or imaging target along the optical axis [15,18,19]. The mechanical movement can be driven by actuators such as voice coil motors, piezoelectric motors, ultrasonic transducers, and DC motors.
However, mechanical movement carries a relatively large inertia, so the focal stack cannot be acquired in a jump-type manner. Moreover, the actuated displacement cannot strictly follow the optical configuration, such as the main optical axis, which introduces a lateral drift into the captured focal stack. Image registration is then required, consuming considerable computing resources. Another way is to use an adaptive lens, such as a liquid lens or a liquid-crystal (LC) lens, whose focal length can be adjusted by an external field based on the electro-wetting, electro-optic, or fluid-pumping effect [17,20–26]. Among them, the liquid lens based on the electro-wetting effect and the LC lens based on the electro-optic effect operate without any mechanically moving part, effectively avoiding image registration.

Conventional 2D cameras cannot capture the key information about the light radiation distribution away from the targets, because of the dimension loss from real 3D scenes to 2D images. To date, most of the proposed focal stack cameras are essentially 2D architectures, which capture a stack of 2D images focused on different objects. With the rapid development of modern optical imaging technology, cameras acquiring higher-dimensional information have been proposed. Among them, the plenoptic camera, consisting of a main lens and a microlens array with a fixed surface profile, obtains both spatial and angular information simultaneously [27–31], i.e., 3D images of a scene. Plenoptic cameras have been used in several typical applications, such as multi-view imaging, depth estimation, and 3D target reconstruction.

In this paper, a new type of electrically addressed focal stack plenoptic camera (EAFSPC) based on a functional liquid-crystal microlens array is proposed for all-in-focus imaging. As the first focal stack camera of this kind, it rapidly manipulates a sequence of raw light-field images by shaping the needed 3D focal stack. The electrically addressed focal stack strategy relies on electric tuning of the focal length of the liquid-crystal microlens array by efficiently selecting, adjusting, or jumping the signal voltage applied over the configured microlenses. An algorithm based on the Laplacian operator is further utilized to composite the electrically addressed focal stack into a raw light-field image with an extended DoF and then the all-in-focus refocused images. The suggested strategy requires no macroscopic movement of the optical apparatus, thereby completely avoiding registration between image sequences. Experiments demonstrate that the DoF of the refocused images can be significantly extended to the entire tomographic depth of the EAFSPC, a significant step toward all-in-focus imaging based on an electrically controlled 3D focal stack.

2. Structures and principles

2.1 Electrically addressable property of liquid-crystal microlens array

As shown in Fig. 1, the key functional structures of the liquid-crystal microlens array are two ∼500 µm glass substrates with different conductive films pre-coated on their inner surfaces. The inner surface of the bottom substrate is deposited with a planar ITO electrode of 185 nm thickness, and that of the top substrate with an aluminum (Al) electrode of 100 nm thickness. After conventional ultraviolet photolithography and a wet-etching process, a patterned Al electrode with a hexagonal array of micro-holes is formed. The diameter of each micro-hole is 100 µm and the center-to-center distance is 125 µm. A polyimide layer is then spin-coated on both the patterned Al and planar ITO electrodes, prebaked for 10 min at 80 °C, and cured for 30 min at 230 °C on a hot plate. The cured polyimide layers act as initial alignment layers for the LC molecules and are rubbed anti-parallel to each other so as to produce a homogeneous alignment of the LC molecules filled later. Glass microsphere spacers of 20 µm diameter are mixed with the adhesive and deposited on the surface of one glass substrate so as to separate the two substrates and shape a micro-cavity. Finally, a layer of nematic LC material (E44, purchased from Merck) is fully filled into the shaped micro-cavity to construct the liquid-crystal microlens array. The electro-optical parameters are: ne = 1.7904 and no = 1.5277 (Δn = 0.2627 at 589.3 nm, +20 °C), and $\varepsilon_{\perp}=5.2$, $\varepsilon_{\parallel}=22.0$, where $\varepsilon_{\perp}$ and $\varepsilon_{\parallel}$ are the dielectric constants of the LC molecules perpendicular and parallel to the director, respectively.

Fig. 1. The basic structure and the focal length electrically addressable property of the liquid-crystal microlens array.

When an appropriate signal voltage is applied, a stable gradient refractive index distribution for the extraordinary beams is formed in the liquid-crystal microlens array, which results in an effective beam-converging behavior. The black dashed line in Fig. 1 indicates a typical equivalent refractive index distribution profile. The formed gradient refractive index distribution applies only to the extraordinary beams, i.e., the incident beams whose polarization direction is consistent with the homogeneous alignment direction, which indicates the polarization sensitivity of the liquid-crystal microlens array. Generally, the focal length can be easily controlled via the root mean square (RMS) value of the signal voltage applied over the liquid-crystal microlenses: quickly selecting a specific RMS value on demand, gradually adjusting the RMS value within a certain range, or quickly switching from one RMS value to another, which constitute the selecting, adjusting, and jumping manipulations of the applied signal voltage. This beam-converging behavior establishes the electrically addressable relationship between the focal length of the liquid-crystal microlens array and the applied voltage signal.
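The selecting/adjusting/jumping manipulations described above can be sketched as a lookup over a voltage-to-focal-length calibration table. The table values below are purely illustrative placeholders consistent with the ∼0.6–1.9 mm tuning range reported later, not measured data from this device:

```python
import numpy as np

# Hypothetical calibration: RMS signal voltage (Vrms) versus the measured
# focal length (mm) of the liquid-crystal microlens array.  These numbers
# are illustrative placeholders only, not actual measurements.
CAL_VRMS = np.array([1.5, 2.0, 3.0, 4.0, 5.0, 6.0])
CAL_FOCAL_MM = np.array([1.9, 1.6, 1.2, 0.95, 0.75, 0.6])

def address_voltage(target_focal_mm: float) -> float:
    """'Selecting': return the RMS voltage that addresses the requested
    focal length by linear interpolation over the calibration table."""
    order = np.argsort(CAL_FOCAL_MM)  # np.interp needs ascending x
    return float(np.interp(target_focal_mm,
                           CAL_FOCAL_MM[order], CAL_VRMS[order]))

def jump_sequence(focal_steps_mm):
    """'Jumping': the voltage sequence for a stack of target focal lengths."""
    return [address_voltage(f) for f in focal_steps_mm]
```

In practice the table would come from a PSF measurement sweep such as the one in Fig. 2; the interpolation then realizes the "adjusting" manipulation between calibrated points.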

To evaluate the electrically addressable focal length of the liquid-crystal microlens array, two basic parameters, the point spread function (PSF) and the focal length, are measured with a common measurement system, as shown in Fig. 2. A collimated white light source employing a high-brightness tungsten lamp generates collimated light in the visible spectral region. The collimated beam first passes through the linear polarizer and the liquid-crystal microlens array. The transmitted light fields are then magnified by a microscope objective (×40, 0.65 numerical aperture) and finally captured by a laser beam profiler (WinCamD, DataRay). To finely locate the shaped focal planes, the distance between the liquid-crystal microlens array and the microscope objective is adjusted precisely during the experiments. In this configuration, the focal length is the sum of the thickness of the glass substrate and the distance between the beam-exiting end of the device and the incident surface of the microscope objective, as shown in Fig. 2, where the light blue rectangles represent the glass substrates. During the experiments, the liquid-crystal microlens array is driven by square-wave voltage signals of 2 kHz frequency generated by an AC signal generator.

Fig. 2. The common testing configuration for measuring the basic parameters of the PSF and the focal length of the liquid-crystal microlens array. The light blue rectangles represent the glass substrates.

The electrically addressable relationship between the focal length of the liquid-crystal microlens array and the applied voltage signal is demonstrated in Fig. 3. Typical PSFs of the fabricated liquid-crystal microlens array are shown in the insets, where the false color represents the normalized light intensity; the colorbar is shown in the inset dashed box. Since the formed phase profile cannot be a perfect parabola and the birefringence of nematic LCs is wavelength-dependent, aberrations such as tilt, spherical, and chromatic aberrations exist in the liquid-crystal microlens array [32–34]. According to the measurements, the full width at half maximum (FWHM) of the focal spots is approximately 3 µm, which is relatively sharp. It can be further improved by optimizing the structural design, the fabrication technology, and the properties of the adopted LC material, for example by adopting high-dielectric or high-resistance layers for a better phase profile [35], floating electrodes enabling near-diffraction-limited performance [33], or LC materials with a less pronounced wavelength dependence.

Fig. 3. Relationship between the focal length of the liquid-crystal microlens array and the RMS value of the signal voltage.

According to the experiments, the realized focal length of the liquid-crystal microlens array ranges from ∼0.6 mm to ∼1.9 mm and can be electrically addressed via the applied voltage signals, presenting an excellent electrically addressable focal length property at a low signal voltage level. For a liquid-crystal microlens array with a nematic LC layer of ∼20 µm, the response time is usually tens to hundreds of milliseconds, which can be further improved by reducing the thickness of the LC layer or selecting other LC materials [36].

2.2 Focal stack strategy of the proposed EAFSPC

As shown in Fig. 4, the proposed EAFSPC mainly consists of a CMOS sensor (Sony IMX342), a liquid-crystal microlens array, and a main lens with a focal length of 25 mm (LF2528M-F). The resolution of the CMOS sensor is 6464 × 4852 and its pixel pitch is 3.45 µm. The maximum frame rate of the EAFSPC is 3.9 fps. The distance between the liquid-crystal microlens array and the CMOS sensor is set to ∼1.1 mm. Supporting parts were fabricated by 3D printing and computer numerical control machining, with which the functional components were integrated into a hand-held camera. A linear polarizer (OptoSigma USP-50C0.4-38) is placed in front of the liquid-crystal microlens array to satisfy its polarization sensitivity, thereby eliminating the crosstalk of ordinary beams.

Fig. 4. The structure configuration of the proposed EAFSPC.

The principle of the focal stack strategy of the proposed EAFSPC is shown in Fig. 5. Inside the EAFSPC, the main lens and the liquid-crystal microlens array constitute a typical Galilean-mode plenoptic imaging system [37]. As demonstrated above, with the electrically addressable focal length property, the focal length of the liquid-crystal microlens array can be effectively tuned within a range by adjusting the applied signal voltage, so as to capture a sequence of electrically addressable raw light-field images with distinct foci.

Fig. 5. The focal stack strategy of the proposed EAFSPC.

Since the captured raw light-field images contain both spatial and angular information, this sequence is essentially a 3D focal stack, in which each raw light-field image has one major focused plane in the scene. The major focused plane is determined by the real-time focal length of the liquid-crystal microlens array; the object space can therefore be chromatographed according to the electrical addressing of the focal length. During the capture of the 3D focal stack, the plenoptic imaging system does not experience any macroscopic movement, so image registration is effectively avoided. To obtain a raw light-field image with the desired DoF extension, the DoFs of all captured raw light-field images should ideally cover the entire desired depth range, with the end of one image's DoF coinciding with the start of the next image's DoF [38]. For the proposed EAFSPC, the DoF can thus be extended to the entire tomographic depth of the camera via the proposed focal stack strategy.

2.3 DoF of the EAFSPC

As described in previous research, defocus blur is a consequence of geometric optics and limits the DoF range of the overall imaging system. Due to defocus, a blur spot of finite radius is formed on the image plane, known as the circle of confusion. Figure 6 shows the schematic of the DoF of the proposed EAFSPC. L denotes the distance from the main lens to the liquid-crystal microlens array, B the distance from the liquid-crystal microlens array to the CMOS sensor, and fm the focal length of the main lens. As shown, light from points X, Y, and Z is converged by the main lens and tends to form points X', Y', and Z'. However, due to the presence of the liquid-crystal microlens array, it is eventually imaged at points X'', Y'', and Z'', as shown in Fig. 6(b). Cx, Cy, and Cz are the distances from points X'', Y'', and Z'' to the LC microlens, and D is the aperture of the LC microlens.

Fig. 6. The schematic of the DoF of the proposed EAFSPC.

Point Y” is just on the major focused plane (i.e., the sensor plane), thus, Cy is equal to B. X” and Z” are two points before and behind the major focused plane, respectively, and the blur spot sizes are exactly equal to the resolution limit s. In this case, they are considered to be acceptably sharp. According to previous studies, the mathematical relationships are as follows [39]:

$$\left\{ \begin{array}{l} {A_x} = {(\frac{1}{{{f_m}}} - \frac{1}{{{B_x} + L}})^{ - 1}},{A_z} = {(\frac{1}{{{f_m}}} - \frac{1}{{{B_z} + L}})^{ - 1}}\\ {B_x} = {(\frac{1}{f} - \frac{1}{{{C_x}}})^{ - 1}},{B_z} = {(\frac{1}{f} - \frac{1}{{{C_z}}})^{ - 1}}\\ {C_x} = \frac{{BD}}{{D + s}},{C_z} = \frac{{BD}}{{D - s}} \end{array} \right.,$$
where Bx and Bz are the distances from the LC microlens to the points X’ and Z’. Ax and Az are the distances from the main lens to the points X and Z, and fm represents the focal length of the main lens. Therefore, the DoF range can be considered as between Ax and Az in theory.
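As a numerical sanity check, Eq. (1) can be evaluated directly. In the sketch below, f_m = 25 mm and B = 1.1 mm match the prototype of Section 2.2, while L = 30 mm, f = 1.0 mm, D = 0.1 mm, and s = 3.45 µm (one pixel) are illustrative assumptions, since the paper does not list these calibration values:

```python
def dof_range(f_m, f, L, B, D, s):
    """Evaluate Eq. (1).  All lengths in mm: f_m is the main-lens focal
    length, f the microlens focal length, L the main-lens-to-array
    distance, B the array-to-sensor distance, D the microlens aperture,
    and s the resolution limit.  Returns (Ax, Az), the near and far
    object distances that remain acceptably sharp."""
    Cx = B * D / (D + s)
    Cz = B * D / (D - s)
    Bx = 1.0 / (1.0 / f - 1.0 / Cx)
    Bz = 1.0 / (1.0 / f - 1.0 / Cz)
    Ax = 1.0 / (1.0 / f_m - 1.0 / (Bx + L))
    Az = 1.0 / (1.0 / f_m - 1.0 / (Bz + L))
    return Ax, Az

# Illustrative parameters (assumed, see lead-in above).
Ax, Az = dof_range(f_m=25.0, f=1.0, L=30.0, B=1.1, D=0.1, s=0.00345)
```

Re-evaluating with a different microlens focal length f shifts the interval [Ax, Az], which is exactly the mechanism the focal stack strategy exploits.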

For plenoptic cameras with conventional microlens arrays, since the focal length of each microlens is fixed, the captured raw light-field images suffer from a fixed or limited DoF. Approaches using a multi-focus microlens array have therefore been proposed to extend the DoF, at the cost of a further reduced spatial resolution [31,40,41]. As demonstrated above, the focal length of the liquid-crystal microlens array (i.e., the parameter f) can be effectively tuned within a range by adjusting the applied signal voltage. Compared with conventional microlens arrays, this feature can be used to further extend the DoF, so the proposed EAFSPC theoretically holds an advantage over conventional plenoptic cameras in terms of DoF. According to Eq. (1), Ax and Az change with f, which means the DoF range of each captured raw light-field image is different. The captured focal stack can thus be used to obtain a single raw light-field image whose extended DoF covers the entire tomographic depth of the EAFSPC. The obtained DoF extension is attributed to the change of f, making f an important factor in the depth limitation of the EAFSPC. Hence, by expanding the focal length tuning range of the liquid-crystal microlens array, the DoF extension can be further improved; specifically, this can be achieved by adopting an appropriate structure and optimizing the aperture of the LC microlens, the thickness of the LC layer, and the LC material used [42–44]. Likewise, as discussed above, reducing aberrations yields better imaging quality and further improves the depth limitation of the EAFSPC.

2.4 Algorithm for producing a single raw light field image

To obtain a single raw light-field image with an extended DoF, a proper image fusion algorithm is required. Since the 3D focal stack is captured without any macroscopic movement, image registration is avoided and the single raw light-field image can be produced in one step, i.e., image fusion. For focal stack imaging, the fusion algorithm mainly relies on an image sharpness criterion. Current sharpness criteria for focal stack imaging fall into two categories: those based on local variance [11,14,15] and those based on the image gradient [45]. In this paper, an image fusion algorithm based on the Laplacian operator is proposed for the EAFSPC. Generally, focused images present sharper edges than blurred ones. The Laplacian operator is a simple edge detector that performs a gray-scale second-derivative measurement. Its realization can be completed simply by calculating the response of the Sobel operator [46], which performs a 2D gray-gradient measurement via a pair of 3 × 3 convolution masks, as shown in Eq. (2).

$${G_x}(i,j) = \left[ {\begin{array}{ccc} { - 1}&0&1\\ { - 2}&0&2\\ { - 1}&0&1 \end{array}} \right]\ast I(i,j),{G_y}(i,j) = \left[ {\begin{array}{ccc} { - 1}&{ - 2}&{ - 1}\\ 0&0&0\\ 1&2&1 \end{array}} \right]\ast I(i,j),$$
where Gx(i,j) and Gy(i,j) are used to express the gray gradient in x-direction and y-direction of image point I(i,j). The approximate gradient can be expressed as:
$$G(i,j) = |{{G_x}(i,j)} |+ |{{G_y}(i,j)} |.$$
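Eqs. (2) and (3) amount to two 3 × 3 filterings followed by an absolute-value sum. A minimal NumPy sketch, with zero padding at the borders as an assumed choice the paper does not specify:

```python
import numpy as np

# The pair of 3x3 Sobel masks of Eq. (2).
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def conv2_same(img, k):
    """'Same'-size 2D filtering with a 3x3 mask and zero padding.  (True
    convolution would flip the mask; the flip only changes the sign of
    Gx/Gy, which the absolute values in Eq. (3) remove.)"""
    p = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def gradient_response(img):
    """Approximate gradient of Eq. (3): G(i,j) = |Gx(i,j)| + |Gy(i,j)|."""
    return np.abs(conv2_same(img, KX)) + np.abs(conv2_same(img, KY))
```

A vertical step edge, for instance, yields a strong horizontal response Gx along the edge and zero response in flat regions, which is what makes G a usable sharpness measure.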
With this operator, the response at each pixel in the focal stack can be calculated and used as the criterion for image sharpness evaluation. The proposed algorithm flow for producing a single raw light-field image with an extended DoF is shown in Fig. 7. In the focal stack, the captured raw light-field images are sequentially named Ik, k = 1,…,M, where M is the number of raw light-field images and Idst denotes the destination image.

Fig. 7. The pipeline of the raw light field fusion algorithm for obtaining a single raw light field image with extended DoF.

Firstly, the captured 3D focal stack is converted to gray-scale images, and Gaussian blurring is performed on all the gray-scale images to suppress noise. Secondly, the response of the Laplacian operator at each pixel in the kth raw light-field image, Lk(i,j), is calculated. Subsequently, the maximum response Ln(i,j) is obtained by comparing all Lk(i,j) in the focal stack, and the corresponding n is recorded as the index. Finally, the corresponding pixel value In(i,j) in the focal stack is assigned to the destination image Idst, which is exactly the desired single raw light-field image. With this strategy, each pixel of Idst is extracted from the pixel with the largest response at the corresponding position. Theoretically, the obtained single light-field image has an extended DoF covering the entire tomographic depth of the EAFSPC, while its resolution remains consistent with that of the CMOS sensor. Subsequently, an all-in-focus refocused image can be obtained via the depth-based rendering method. Furthermore, according to the recorded index of voltage signals, the depth information in the field of view can be solved [47].
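The pipeline above can be sketched in NumPy as follows. Assumptions: the stack is already gray-scale as an (M, H, W) array, a 3 × 3 binomial kernel stands in for the Gaussian blur, and the Laplacian is applied as a direct 3 × 3 mask rather than via the Sobel responses; kernel sizes and border handling are illustrative choices, not the paper's exact settings.

```python
import numpy as np

GAUSS = np.outer([1, 2, 1], [1, 2, 1]) / 16.0               # 3x3 blur for noise suppression
LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)   # Laplacian mask

def _filter3(img, k):
    """'Same'-size 3x3 filtering with edge replication at the borders."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fuse_focal_stack(stack):
    """Fuse M gray-scale raw light-field frames (M, H, W) into one image:
    blur each frame, take |Laplacian| as the sharpness response L_k(i,j),
    record the index n(i,j) of the maximum response, and copy each pixel
    of I_dst from frame n(i,j).  Returns (I_dst, index map)."""
    stack = np.asarray(stack, dtype=float)
    resp = np.stack([np.abs(_filter3(_filter3(f, GAUSS), LAP)) for f in stack])
    index = resp.argmax(axis=0)                              # n(i, j)
    h, w = index.shape
    fused = stack[index, np.arange(h)[:, None], np.arange(w)[None, :]]
    return fused, index
```

Since each frame corresponds to one addressed voltage, the returned index map also records which voltage step won at each pixel, which is the link to depth solving mentioned above.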

3. Experiments and discussion

A typical scene including three objects is constructed to demonstrate the proposed EAFSPC, as shown in Fig. 8. The three objects are placed at distances of 90 mm, 155 mm, and 230 mm, respectively. A 3D focal stack of the scene is captured via the strategy proposed in Section 2.2, with the applied voltage increased in steps of 0.5 Vrms from 1.5 Vrms to 6.0 Vrms. The captured 10 frames of the scene constitute the 3D focal stack. Considering the maximum frame rate of the camera and the response time of the liquid-crystal microlens array, the theoretically fastest acquisition time is about 2.5 s; however, since the adjustment of the applied voltage and the image acquisition were performed manually in the experiment, the process took about 10 s.

Fig. 8. The typical imaging scene for demonstrating the proposed EAFSPC, the three objects are placed at distances of 100mm, 170mm and 240mm respectively.

Within the captured 3D focal stack, the best-focused raw light-field images corresponding to different depths are shown in Fig. 9, with applied voltage signals of 2.0 Vrms, 3.5 Vrms, and 4.0 Vrms, respectively. The insets are partial enlargements illustrating the focused depths. In these typical raw light-field images, the objects at specific depths appear relatively clear, while the other objects are relatively blurred. Generally, the raw light-field image needs to be processed by a rendering method to obtain images with a better visual effect; with the depth-based rendering approach, the clear and blurred states in the refocused images become more intuitive.

Fig. 9. Three focused states corresponding to different depths, the applied voltage signals are 2.0Vrms, 3.5Vrms and 4.0Vrms, respectively.

Based on the image fusion algorithm proposed in Section 2.4, a single raw light-field image is produced from the captured 3D focal stack, as shown in Fig. 10. As presented, the objects at all depths are in focus, which means a significantly extended DoF is achieved, and the resolution of the obtained single raw light-field image is consistent with that of the CMOS sensor. Figure 11 shows several refocused images. With the depth-based rendering approach, the typical raw light-field images corresponding to voltage signals of 2.0 Vrms, 3.5 Vrms, and 4.0 Vrms are rendered into intuitive images, as shown in Figs. 11(a) to 11(c). The all-in-focus refocused image of the produced single raw light-field image is shown in Fig. 11(d), with several patches partially enlarged for illustration. The spatial resolution of the refocused images is 1616 × 1213, while that of the raw light-field images is 6464 × 4852, indicating a loss of spatial resolution due to the rendering process. As presented in Figs. 11(a) to 11(c), the objects at specific depths appear relatively clear while the others are blurred; the clear and blurred states are determined by the DoF. From Figs. 11(a) to 11(c), the clear positions are sequentially the distant, middle, and near objects. As demonstrated in Section 2.4, each pixel of the produced single raw light-field image is extracted from the pixel with the largest Laplacian response at the corresponding position; theoretically, this image has an extended DoF covering the entire tomographic depth of the EAFSPC, and therefore its refocused image also has an extended DoF.

Fig. 10. The produced single raw light field image via the proposed image fusion algorithm.

Fig. 11. The refocused images of typical raw light field images corresponding to voltage signals of (a) 2.0Vrms, (b) 3.5Vrms and (c) 4.0Vrms, as well as (d) the all-in-focus refocused image of the produced single raw light field image.

As shown in Fig. 11(d), with the proposed image fusion algorithm, the objects at different depths are all in focus, which means the DoF of the refocused image is significantly extended to the entire tomographic depth of the EAFSPC. Comparing these patches with the best-focused patches in Figs. 11(a) to 11(c), the image quality is nearly the same. To quantitatively demonstrate the DoF extension via the electrically addressed focal stack strategy, the Sobel operator is selected to detect the edge information of the refocused images in Figs. 11(a) to 11(d). Gaussian blur is used to preprocess the refocused images to suppress noise, and then the two 3 × 3 masks of Eq. (2) are convolved with the refocused images to obtain the approximate brightness differences in the horizontal and vertical directions. Generally, a sharper image has a faster gray variation at image edges, i.e., a larger image gradient.

The edge images obtained with the Sobel operator are shown in Fig. 12. Figures 12(a) to 12(c) are respectively the edge images of the refocused images at 2.0 Vrms, 3.5 Vrms, and 4.0 Vrms, and Fig. 12(d) is that of the refocused image of the produced raw light-field image. Figure 12(d) exhibits the most obvious edge features at all depths. Meanwhile, the Sobel mean gradients of Figs. 12(a) to 12(d) are 7.600, 6.740, 6.524, and 8.026, respectively; the largest Sobel mean gradient represents the most obvious edge features, which means the DoF of the refocused image is significantly extended via the proposed EAFSPC. Specifically, the DoF is extended to the entire tomographic depth of the EAFSPC via the proposed focal stack strategy. Another typical outdoor experiment was carried out, as shown in Fig. 13. A board marked with “HUST” and a fire box, placed 1800 mm and 2700 mm away from the EAFSPC, respectively, were used as targets.
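The Sobel mean gradient used above reduces to averaging |Gx| + |Gy| over the frame. A small self-contained sketch (edge-replication border handling is an assumption, not the paper's stated setting):

```python
import numpy as np

def sobel_mean_gradient(img):
    """Scalar sharpness score: the mean of |Gx| + |Gy| over the image.
    A larger score indicates more pronounced edge features, so the fused
    all-in-focus image should score higher than any single refocused one."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # the vertical Sobel mask of Eq. (2)
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    gx = np.zeros(np.shape(img), dtype=float)
    gy = np.zeros(np.shape(img), dtype=float)
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + gx.shape[0], dx:dx + gx.shape[1]]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return float(np.mean(np.abs(gx) + np.abs(gy)))
```

Applied to the four refocused images, this single number reproduces the comparison reported in the text (e.g., 8.026 for the fused image versus 6.5–7.6 for the single-voltage ones).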

Fig. 12. The edge images of the refocused images of (a) 2.0Vrms, (b) 3.5Vrms and (c) 4.0Vrms, as well as (d) the edge image of the refocused image of the produced single raw light field image.

Fig. 13. The typical outdoor experiment scene.

A 3D focal stack of the scene is captured via the strategy proposed in Section 2.2, with the applied voltage increased in steps of 0.25 Vrms from 1.5 Vrms to 4.0 Vrms. The captured 11 frames of raw light-field images constitute the 3D focal stack. Within it, the best-focused raw light-field images corresponding to different depths are shown in Fig. 14, with applied voltage signals of 2.25 Vrms and 3.50 Vrms, respectively. The insets are partial enlargements illustrating the focused depths. At 2.25 Vrms, the board marked with “HUST” appears relatively clear while the fire box is relatively blurred; at 3.50 Vrms, the opposite holds. Based on the image fusion algorithm proposed in Section 2.4, a single raw light-field image is produced from the captured 3D focal stack, as shown in Fig. 15. As presented, the objects at all depths are in focus, which means a significantly extended DoF is achieved, and the resolution of the obtained single raw light-field image is consistent with that of the CMOS sensor.

Fig. 14. Two focused states corresponding to different depths, the applied voltage signals are 2.25Vrms, and 3.50Vrms, respectively.

Fig. 15. The produced single raw light field image via the proposed image fusion algorithm.

Several refocused images are shown in Fig. 16, with enlarged regions demonstrated on the right. Figures 16(a) to 16(c) are respectively the refocused images at 2.25 Vrms and 3.50 Vrms and the all-in-focus refocused image. Similarly, the spatial resolutions of the refocused images and the raw light-field images are 1616 × 1213 and 6464 × 4852, respectively, which also indicates a loss of spatial resolution due to the rendering process. As shown in Fig. 16(c), with the proposed image fusion algorithm, the objects at different depths are all in focus, which means the DoF of the refocused image is significantly extended to the entire tomographic depth of the EAFSPC. Comparing these patches with the best-focused patches in Figs. 16(a) and 16(b), the image quality is nearly the same.


Fig. 16. The refocused images of typical raw light field images corresponding to voltage signals of (a) 2.25Vrms, (b) 3.50Vrms and (c) the all-in-focus refocused image.


Similarly, the Sobel operator is applied to detect the edge information of the refocused images in Figs. 16(a) to 16(c). A Gaussian blur is first used to preprocess the refocused images to suppress noise, and then the two 3 × 3 matrices in Eq. (3) are used to perform the plane convolution operation. The edge images are shown in Fig. 17. Figures 17(a) to 17(c) are respectively the edge images of the refocused images for 2.25 Vrms and 3.50 Vrms, and that of the refocused image of the produced raw light field image. As shown, Fig. 17(c) exhibits the most obvious edge features at all depths. The Sobel mean gradients of Figs. 17(a) to 17(c) are 8.145, 7.540 and 10.941, respectively; the largest Sobel mean gradient represents the most obvious edge features, which means that the DoF of the refocused image is significantly extended by the proposed EAFSPC.


Fig. 17. The edge images of the refocused images of (a) 2.25Vrms, (b) 3.50Vrms and (c) the edge image of the refocused image of the produced single raw light field image.


Experiments demonstrate that the proposed EAFSPC can capture 3D focal stacks and produce a 3D fusion image with an extended DoF. The DoF of the refocused images can be significantly extended to the entire tomography depth of the EAFSPC, a significant step toward all-in-focus imaging based on an electrically controlled 3D focal stack. Furthermore, the proposed imaging architecture can perform conventional 2D imaging when no voltage signal is applied [48], which has the advantage of high spatial resolution, i.e., the proposed architecture can realize a 2D/3D switchable imaging function.

4. Conclusion

In this paper, a new type of electrically addressed focal stack plenoptic camera (EAFSPC) based on a functional liquid-crystal microlens array for all-in-focus imaging is proposed. Unlike 2D cameras, the proposed plenoptic camera captures both the spatial and angular information over an entire spatial range layered by the signal voltage applied over the liquid-crystal microlenses, according to a 3D optical imaging architecture. As a 3D focal stack camera, a sequence of raw light-field images can be rapidly manipulated by rapidly shaping a 3D focal stack. The electrically addressed focal stack strategy relies on the electric tuning of the focal length of the liquid-crystal microlens array by efficiently selecting, adjusting, or jumping the signal voltage applied over the liquid-crystal microlenses. An algorithm based on the Laplacian operator is utilized to composite the electrically addressed focal stack, leading to raw light-field images with an extended DoF and then all-in-focus refocused images. The proposed strategy does not require any macroscopic movement of the optical apparatus, thoroughly avoiding the registration of different image sequences. Experiments demonstrate that the DoF of the refocused images can be significantly extended to the entire tomography depth of the EAFSPC, which means a significant step toward all-in-focus imaging based on an electrically controlled 3D focal stack. Furthermore, the proposed imaging architecture can perform conventional 2D imaging with high spatial resolution when no voltage signal is applied, which means that the proposed architecture can realize an attractive 2D/3D switchable imaging function.

Funding

National Natural Science Foundation of China (61176052).

Acknowledgment

This work was partially carried out at the USTC Center for Micro- and Nano-scale Research and Fabrication, and the authors thank Yizhao He and Yu Wei for their help on micro-nano-fabrication.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Castro and J. Ojeda-Castañeda, “Asymmetric phase masks for extended depth of field,” Appl. Opt. 43(17), 3474–3479 (2004). [CrossRef]  

2. O. Cossairt and S. K. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in IEEE International Conference on Computational Photography (ICCP) (2010).

3. E. R. Dowski, Jr. and W. T. Cathey, “Extended depth of field through wavefront coding,” Appl. Opt. 34(11), 1859–1866 (1995). [CrossRef]  

4. N. George and W. Chi, “Extended depth of field using a logarithmic asphere,” J. Opt. A: Pure Appl. Opt. 5(5), S157–S163 (2003). [CrossRef]  

5. F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” in IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics (2009), pp. 72500N-72500N-12.

6. G. Indebetouw and H. Bai, “Imaging with Fresnel zone pupil masks: extended depth of field,” Appl. Opt. 23(23), 4299–4302 (1984). [CrossRef]  

7. P. Mouroulis, “Depth of field extension with spherical optics,” Opt. Express 16(17), 12995–13004 (2008). [CrossRef]  

8. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, “Flexible depth of field photography,” IEEE Trans. Pattern Anal. Mach. Intell. 33(1), 58–71 (2011). [CrossRef]  

9. S. Bagheri and B. Javidi, “Extension of depth of field using amplitude and phase modulation of the pupil function,” Opt. Lett. 33(7), 757–759 (2008). [CrossRef]  

10. Z. Zalevsky and S. Ben-Yaish, “Extended depth of focus imaging with birefringent plate,” Opt. Express 15(12), 7202–7210 (2007). [CrossRef]  

11. C. Zhou, D. Miau, and S. K. Nayar, “Focal sweep camera for space-time refocusing,” http://www1.cs.columbia.edu/CAVE/projects/focal_sweep_camera/.

12. D. E. Jacobs, J. Baek, and M. Levoy, “Focal stack compositing for depth of field control,” https://graphics.stanford.edu/papers/focalstack/.

13. A. Koppelhuber, C. Birklbauer, S. Izadi, and O. Bimber, “A transparent thin-film sensor for multi-focal image reconstruction and depth estimation,” Opt. Express 22(8), 8928–8942 (2014). [CrossRef]  

14. A. Späth, S. Schöll, C. Riess, D. Schmidtel, G. Paradossi, J. R. Raabe, J. Hornegger, and R. H. Fink, “STXM goes 3D: digital reconstruction of focal stacks as novel approach towards confocal soft x-ray microscopy,” Ultramicroscopy 144, 19–25 (2014). [CrossRef]  

15. L. Ma, X. Zhang, Z. Xu, A. Spath, Z. Xing, T. Sun, and R. Tai, “Three-dimensional focal stack imaging in scanning transmission X-ray microscopy with an improved reconstruction algorithm,” Opt. Express 27(5), 7787–7802 (2019). [CrossRef]  

16. K. Wu, Y. Yang, M. Yu, and Q. Liu, “Block-wise focal stack image representation for end-to-end applications,” Opt. Express 28(26), 40024–40043 (2020). [CrossRef]  

17. Y.-J. Wang, X. Shen, Y.-H. Lin, and B. Javidi, “Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens,” Opt. Lett. 40(15), 3564–3567 (2015). [CrossRef]  

18. Y. Liu, J. Wang, Y. Hong, Z. Wang, K. Zhang, P. A. Williams, P. Zhu, J. C. Andrews, P. Pianetta, and Z. Wu, “Extended depth of focus for transmission x-ray microscope,” Opt. Lett. 37(17), 3708–3710 (2012). [CrossRef]  

19. M. Solh, “Real-time focal stack compositing for handheld mobile cameras,” Proc. SPIE 9020, 90200Z (2014). [CrossRef]  

20. H. Ren and S.-T. Wu, “Variable-focus liquid lens,” Opt. Express 15(10), 5931–5936 (2007). [CrossRef]  

21. C. Liu, D. Wang, Q. H. Wang, and J. C. Fang, “Electrowetting-actuated multifunctional optofluidic lens to improve the quality of computer-generated holography,” Opt. Express 27(9), 12963–12975 (2019). [CrossRef]  

22. C. Liu, D. Wang, Q. H. Wang, and Y. Xing, “Multifunctional optofluidic lens with beam steering,” Opt. Express 28(5), 7734–7745 (2020). [CrossRef]  

23. S. Kuiper and B. H. W. Hendriks, “Variable-focus liquid lens for miniature cameras,” Appl. Phys. Lett. 85(7), 1128–1130 (2004). [CrossRef]  

24. N. Chronis, G. L. Liu, K. H. Jeong, and L. P. Lee, “Tunable liquid-filled microlens array integrated with microfluidic network,” Opt. Express 11(19), 2370–2378 (2003). [CrossRef]  

25. K. H. Jeong, G. L. Liu, N. Chronis, and L. P. Lee, “Tunable microdoublet lens array,” Opt. Express 12(11), 2494–2500 (2004). [CrossRef]  

26. J. F. Algorri, N. Bennis, V. Urruchi, P. Morawiak, J. M. Sánchez-Pena, and L. R. Jaroszewicz, “Tunable liquid crystal multifocal microlens array,” Sci. Rep. 7(1), 17318 (2017). [CrossRef]  

27. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7(1), 821–825 (1908). [CrossRef]  

28. R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanf. Univ. Tech. Rep. CSTR 2005-02 (Stanford University, 2005).

29. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 021106 (2010). [CrossRef]  

30. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays,” Appl. Opt. 43(31), 5806–5813 (2004). [CrossRef]  

31. T. Georgiev and A. Lumsdaine, “Multimode plenoptic imaging,” Proc. SPIE 9404, 940402 (2015). [CrossRef]  

32. N. Fraval and J. L. de la Tocnaye, “Low aberrations symmetrical adaptive modal liquid crystal lens with short focal lengths,” Appl. Opt. 49(15), 2778–2783 (2010). [CrossRef]  

33. L. Li, D. Bryant, T. Van Heugten, and P. J. Bos, “Near-diffraction-limited and low-haze electro-optical tunable liquid crystal lens with floating electrodes,” Opt. Express 21(7), 8371–8381 (2013). [CrossRef]  

34. S. T. Wu, “Birefringence dispersions of liquid crystals,” Phys. Rev. A 33(2), 1270–1274 (1986). [CrossRef]  

35. A. Hassanfiroozi, Y. P. Huang, B. Javidi, and H.-P. D. Shieh, “Dual layer electrode liquid crystal lens for 2D/3D tunable endoscopy imaging system,” Opt. Express 24(8), 8527–8538 (2016). [CrossRef]  

36. S. Xu, Y. Li, Y. Liu, J. Sun, H. Ren, and S.-T. Wu, “Fast-response liquid crystal microlens,” Micromachines 5(2), 300–324 (2014). [CrossRef]  

37. T. Georgiev and A. Lumsdaine, “Depth of Field in Plenoptic Cameras,” Eurographics 2009, 5–8 (2009). [CrossRef]  

38. K. Kutulakos and S. Hasinoff, “Focal stack photography: High-performance photography with a conventional camera,” in Proc. 11th IAPR Conference on Mach. Vision Appl. (2009), 332–337.

39. Y. Lei, Q. Tong, X. Zhang, H. Sang, A. Ji, and C. Xie, “An electrically tunable plenoptic camera using a liquid crystal microlens array,” Rev. Sci. Instrum. 86(5), 053101 (2015). [CrossRef]  

40. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” in Human Vision and Electronic Imaging XVII, 8291 (International Society for Optics and Photonics, 2012), p. 829108. [CrossRef]  

41. S. Bae, K. Kim, S. Yang, K. Jang, and K. Jeong, “Multifocal microlens arrays using multilayer photolithography,” Opt. Express 28(7), 9082–9088 (2020). [CrossRef]  

42. Y.-H. Lin, Y.-J. Wang, and V. Reshetnyak, “Liquid crystal lenses with tunable focal length,” Liq. Cryst. Rev. 5(2), 111–143 (2017). [CrossRef]  

43. M. Chen, W. He, D. Wei, C. Hu, J. Shi, X. Zhang, H. Wang, and C. Xie, “Depth-of-field-extended plenoptic camera based on tunable multi-focus liquid-crystal microlens array,” Sensors 20(15), 4142 (2020). [CrossRef]  

44. W. Wang, S. Li, P. Liu, Y. Zhang, Q. Yan, T. Guo, X. Zhou, and C. Wu, “Improved depth of field of the composite micro-lens arrays by electrically tunable focal lengths in the light field imaging system,” Opt. Laser Technol. 148, 107748 (2022). [CrossRef]  

45. Y. Xiao, G. Wang, X. Hu, C. Shi, L. Meng, and H. Yang, “Guided, Fusion-Based, Large Depth-of-field 3D Imaging Using a Focal Stack,” Sensors 19(22), 4845 (2019). [CrossRef]  

46. A. Kaehler and G. Bradski, Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library (O’Reilly Media, 2016), Chap. 10.

47. Q. Tong, M. Chen, Z. Xin, D. Wei, X. Zhang, J. Liao, H. Wang, and C. Xie, “Depth of field extension and objective space depth measurement based on wavefront imaging,” Opt. Express 26(14), 18368–18385 (2018). [CrossRef]  

48. J. F. Algorri, V. Urruchi, N. Bennis, P. Morawiak, J. M. Sánchez-Pena, and J. M. Otón, “Integral imaging capture system with tunable field of view based on liquid crystal microlenses,” IEEE Photonics Technol. Lett. 28(17), 1854–1857 (2016). [CrossRef]  




Equations (3)

$$\begin{cases} A_x = \left(\dfrac{1}{f_m} - \dfrac{1}{B_x + L}\right)^{-1}, & A_z = \left(\dfrac{1}{f_m} - \dfrac{1}{B_z + L}\right)^{-1} \\[6pt] B_x = \left(\dfrac{1}{f} - \dfrac{1}{C_x}\right)^{-1}, & B_z = \left(\dfrac{1}{f} - \dfrac{1}{C_z}\right)^{-1} \\[6pt] C_x = \dfrac{BD}{D + s}, & C_z = \dfrac{BD}{D - s} \end{cases} \tag{1}$$

$$G_x(i,j) = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \ast I(i,j), \qquad G_y(i,j) = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \ast I(i,j), \tag{2}$$

$$G(i,j) = |G_x(i,j)| + |G_y(i,j)|. \tag{3}$$