
Correction model for microlens array assembly error in light field camera

Open Access

Abstract

In a light field camera, a microlens array (MLA) assembly error can degrade the quality of the image. In this study, to ensure accurate imaging with a light field camera, we evaluate and eliminate the assembly error. We used an error image and a standard image to identify the MLA assembly error, and we developed an assembly error correction model, combined with image quality evaluation indices, to correct the error. The proposed error correction model can be employed for various assembly errors and different error ranges, and quantitative analyses are performed for these different scenarios. The proposed model can be applied to accurate imaging with a light field camera, four-dimensional optical radiation field information reconstruction, MLA manufacturing and assembly processes, etc.

© 2016 Optical Society of America

1. Introduction

In 2005, Ng et al. [1] proposed the technology and concept of light field imaging based on a surface radiation mechanism; a light field camera is used for this type of imaging. Unlike ordinary cameras, a light field camera contains a microlens array (MLA) between the main lens and the photoreceptor. Light from the main lens first passes through the MLA and is then transferred to the photoreceptor; that is, the light (intensity) incident in a certain direction from the MLA is received by the receptor. In this manner, the light field information can be captured [2]. Because the light field camera can simultaneously capture multiangle light field information, it can be used to reconstruct a 3D model of the captured object [3]. Light field distortion features [4] have been utilized for reconstructing transparent objects [5]. The light field camera can also be used for target identification. Apelt et al. [6] utilized the focused image and the depth-of-field image supplied by a light field camera to observe plant forms and their correlations during growth. Kim et al. [7] explored face liveness detection using the light field camera.

When the light field camera is assembled [8], the MLA can deviate arbitrarily from its standard location, which affects detection in the corresponding area. In general, a registration error is used to measure the deviation of the MLA. Registration errors [9,10] between the MLA and the detector mainly comprise coupling distance, translation, rotation, and tilt errors. An assembly error results in varying degrees of blur and deformation, which distorts the subaperture images through aliasing and blur.

In this study, an assembly error correction model is proposed to overcome the assembly error, which significantly affects light field camera imaging. Moreover, the model can be used to judge the image quality effectively. An image quality evaluation system is developed accordingly, with fused image quality evaluation used as the reference for the error image quality evaluation model; in particular, the quality evaluation index of [11] is taken as a reference. A quality evaluation index for the light field camera is proposed, and quantitative analysis is performed to evaluate the image quality under various assembly error conditions. In the study of fused image quality evaluation, Xydeas et al. [11] pointed out that edge information is particularly important to the human visual system in the recognition process, and therefore used the Sobel operator to detect image edges. The three-dimensional radiant energy field of a flame has been measured using a single light field camera [12]. Wang et al. [13] developed a structural similarity model based on image structural modification as well as contrast and luminance distortion. Piella et al. [14] proposed the image fusion quality evaluation indices Q, QW, and QE, combining structural similarity theory with fused image quality evaluation. Chen et al. [15] presented a contrast sensitivity function (CSF) to appraise fused image quality. Han et al. [16] proposed visual information fidelity as an image fusion performance metric.

We developed a program to simulate images with MLA assembly errors, determined quality evaluation indices for light field camera error images by drawing on fused image quality evaluation, and analyzed how the image quality degrades with the MLA assembly error, providing a fundamental basis for assembly error correction.

2. Image quality evaluation

The light field camera model based on the Monte Carlo method, established in our preliminary research, was used to numerically simulate the MLA assembly errors. Image quality evaluation indices were then selected to evaluate the light field camera image quality.

2.1 Physical model

A light field camera differs from a traditional camera: although it is based on one, it has an MLA in the image plane of the main lens [17,18] and uses a photoreceptor to detect the light arriving at the focal plane of each microlens. A light field camera model has been constructed previously [19].

A reflective light field camera is used in this study; Fig. 1 shows its structural arrangement. A point light source at the origin o of the coordinates radiates light towards the negative x-axis direction, with a beam spread direction cosine of 0.999. The target object is 30 m away from the origin of the coordinates. The reflective mirror has a focal length f = 2.0 m, its vertex lies at x = 4 m, its aperture is D = 0.8 m, and its opening faces the negative x-axis direction. The reflected light passes through the 60 × 60 MLA and arrives at the charge-coupled device (CCD) photoreceptor, which has a pixel size s = 0.05 mm and 600 × 600 pixels.

Fig. 1 Structure of the reflective light field camera.

Figure 2 shows the target plane, a square plane with 40 cm sides composed of alternating black and white squares with 5 cm sides. The object surfaces are diffusely reflecting; the reflectivity of the black areas is 0.0 and that of the white areas is 1.0.

Fig. 2 Target plane.

The MLA is an arrangement of microlenses with diameter Dmic = 0.48 mm in a 60 × 60 array; the distance between adjacent lens centers is dmic = 0.5 mm. The focal length of each microlens is fmic = 1.1669 mm. The two faces of each microlens are spherical surfaces with radius Rmic = 1.15 mm, and the refractive index of the microlens medium is 1.333.

2.2 Assembly error

If there are errors in the MLA installation during the light field camera assembly process, the refocused image becomes blurred and other anamorphic conditions arise. The errors between the MLA and the photoreceptor mainly include coupling distance, translation, rotation, and tilt errors [20].

When the MLA plane and the photoreceptor plane are parallel, an offset between the standard plane and the actual MLA plane along the optical axis is defined as the coupling distance error [20], whereas an offset between the standard and actual microlens positions within the MLA plane is defined as the translation error. If the MLA rotates within its own plane about the optical axis, a rotation error occurs; when the MLA plane and the photoreceptor plane are not parallel, a tilt error occurs.

2.3 Image quality evaluation system

Multisensor image fusion is the process of combining relevant information from two or more images into a single image, which enables effective information to be extracted. Researchers have proposed many image fusion models; thus, it is crucial to evaluate fused image quality [11–16,20].

Methods to evaluate fused image quality include objective and subjective methods. The subjective method is the evaluation of image quality by experts; however, it is considerably affected by environmental factors, and quantitative analysis cannot be applied, so it is not used in this study. The objective method simulates image perception by the human visual system, and the fused image quality is evaluated by measuring relevant indices. Objective evaluation indices are divided into those based on the visual system, on statistical properties, and on information content. The statistical property indices cannot evaluate the dependency between the fused image and the source image, the structural information is not fully utilized, and their evaluation results show an apparent disparity with subjective assessment results. The information content indices mainly evaluate the information abundance of the fused image based on its gray-level distribution. The visual system indices evaluate image quality by simulating the perceptual process of the human visual system. Therefore, an objective evaluation index based on edge information and a structural similarity model are selected for the quantitative analysis.

a) Objective evaluation index based on edge information

The importance of edge information is supported by human visual system studies. The objective evaluation index based on edge information is obtained by evaluating the amount of edge information transferred from the input image to the fused image; here, it is applied to the standard and error images.

For example, consider a standard input image S (N × M) and an error image E. A Sobel operator is applied to obtain the edge strength g(i, j) and orientation α(i, j) for each pixel p(i, j), 1 < i < N, 1 < j < M. Thus, for S and E,

$$g_S(i,j)=\sqrt{s_{Sx}(i,j)^2+s_{Sy}(i,j)^2}\tag{1}$$
$$\alpha_S(i,j)=\tan^{-1}\!\left(\frac{s_{Sy}(i,j)}{s_{Sx}(i,j)}\right)\tag{2}$$
$$g_E(i,j)=\sqrt{s_{Ex}(i,j)^2+s_{Ey}(i,j)^2}\tag{3}$$
$$\alpha_E(i,j)=\tan^{-1}\!\left(\frac{s_{Ey}(i,j)}{s_{Ex}(i,j)}\right)\tag{4}$$
where $s_x(i,j)$ and $s_y(i,j)$ are, respectively, the outputs of the horizontal and vertical Sobel templates centered on pixel (i, j) and convolved with the corresponding pixels of images S and E. Thus, for image S with respect to image E, the relative edge strength $G^{SE}$ and relative orientation $A^{SE}$ are formulated as
$$G^{SE}(i,j)=\begin{cases}\dfrac{g_S(i,j)}{g_E(i,j)}, & g_S(i,j)<g_E(i,j)\\[1ex]\dfrac{g_E(i,j)}{g_S(i,j)}, & g_S(i,j)\ge g_E(i,j)\end{cases}\tag{5}$$
$$A^{SE}(i,j)=\frac{\left||\alpha_E(i,j)-\alpha_S(i,j)|-\dfrac{\pi}{2}\right|}{\pi/2}\tag{6}$$
From Eqs. (5) and (6), the edge strength and orientation preservation values $Q_g^{SE}(i,j)$ and $Q_\alpha^{SE}(i,j)$ are derived as
$$Q_g^{SE}(i,j)=\frac{\Gamma_g}{1+e^{\kappa_g\left(G^{SE}(i,j)-\sigma_g\right)}}\tag{7}$$
$$Q_\alpha^{SE}(i,j)=\frac{\Gamma_\alpha}{1+e^{\kappa_\alpha\left(A^{SE}(i,j)-\sigma_\alpha\right)}}\tag{8}$$
The constants $\Gamma_g$, $\kappa_g$, $\sigma_g$, $\Gamma_\alpha$, $\kappa_\alpha$, and $\sigma_\alpha$ [11] determine the exact shape of the sigmoid functions used to obtain the edge strength and orientation preservation values. The edge information preservation value is then defined as
$$Q^{SE}(i,j)=Q_g^{SE}(i,j)\,Q_\alpha^{SE}(i,j)\tag{9}$$
The evaluation was performed using
$$Q^{SE}=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}Q^{SE}(i,j)\,\omega_S(i,j)}{\sum_{i=1}^{M}\sum_{j=1}^{N}\omega_S(i,j)}\tag{10}$$
where $\omega_S(i,j)$ represents the weight of $Q^{SE}(i,j)$.
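As an illustration, the following is a minimal Python sketch of the edge-information metric of Eqs. (1)–(10). Its assumptions are not fixed by the text: scipy's Sobel templates stand in for the operator, the weight ωS(i, j) is taken as the edge strength of the standard image, and the sigmoid constants are values commonly quoted for this metric [11].

```python
# Minimal sketch of the edge-information metric, Eqs. (1)-(10).
import numpy as np
from scipy.ndimage import sobel

def edge_maps(img):
    """Edge strength g and orientation alpha, Eqs. (1)-(4)."""
    sx = sobel(img.astype(float), axis=1)   # horizontal template s_x
    sy = sobel(img.astype(float), axis=0)   # vertical template s_y
    g = np.hypot(sx, sy)                    # sqrt(s_x^2 + s_y^2)
    alpha = np.arctan(sy / (sx + 1e-12))    # tan^-1(s_y / s_x)
    return g, alpha

def q_se(S, E, Gg=0.9994, kg=-15.0, sg=0.5, Ga=0.9879, ka=-22.0, sa=0.8):
    gS, aS = edge_maps(S)
    gE, aE = edge_maps(E)
    eps = 1e-12                              # guards flat (zero-edge) regions
    G = np.where(gS < gE, gS / (gE + eps), gE / (gS + eps))   # Eq. (5)
    A = np.abs(np.abs(aE - aS) - np.pi / 2) / (np.pi / 2)     # Eq. (6)
    Qg = Gg / (1.0 + np.exp(kg * (G - sg)))                   # Eq. (7)
    Qa = Ga / (1.0 + np.exp(ka * (A - sa)))                   # Eq. (8)
    Q = Qg * Qa                                               # Eq. (9)
    w = gS                                   # assumed weight w_S(i,j)
    return (Q * w).sum() / (w.sum() + eps)                    # Eq. (10)
```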

b) Structural similarity

The structural similarity model is defined by comparing the structural similarities of the error image and the standard image. To evaluate the image quality, the structural similarity evaluation index is defined as follows:

$$\mathrm{SSIM}(S,E)=[l(S,E)]^{\alpha}[c(S,E)]^{\beta}[s(S,E)]^{\gamma}\tag{11}$$
where l(S,E), c(S,E), and s(S,E) are, respectively, the luminance, contrast, and structural comparison measures between images S and E, and α, β, and γ are parameters that define the relative importance of the three components. In particular, we set α = β = γ = 1. The above-mentioned measures are expressed as
$$l(S,E)=\frac{2\mu_S\mu_E+C_1}{\mu_S^2+\mu_E^2+C_1}\tag{12}$$
$$c(S,E)=\frac{2\sigma_S\sigma_E+C_2}{\sigma_S^2+\sigma_E^2+C_2}\tag{13}$$
$$s(S,E)=\frac{2\sigma_{SE}+C_3}{\sigma_S\sigma_E+C_3}\tag{14}$$
where μS, μE, σS, and σE are estimates of the gray mean and gray standard deviation of S and E, respectively, and σSE is the gray covariance of images S and E. C1, C2, and C3 are set to 0. l(S,E) and c(S,E) lie in the range [0,1], and s(S,E) lies in the range [0,2]; hence, SSIM(S,E) lies in the range [0,2]. An indication of structural similarity can thus be attained via the above calculation: the larger the result, the better the luminance, contrast, and structure similarity, and thus the better the image quality.
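A corresponding minimal sketch of Eqs. (11)–(14) with the settings stated above (whole-image statistics, α = β = γ = 1, C1 = C2 = C3 = 0), under which two identical images score SSIM = 2:

```python
# Minimal sketch of the structural similarity index, Eqs. (11)-(14),
# using global image statistics and C1 = C2 = C3 = 0 as in the text.
import numpy as np

def ssim_global(S, E):
    S, E = S.astype(float), E.astype(float)
    muS, muE = S.mean(), E.mean()
    sdS, sdE = S.std(), E.std()             # assumes nonzero variance
    cov = ((S - muS) * (E - muE)).mean()    # gray covariance sigma_SE
    l = 2 * muS * muE / (muS**2 + muE**2)   # luminance term, Eq. (12)
    c = 2 * sdS * sdE / (sdS**2 + sdE**2)   # contrast term, Eq. (13)
    s = 2 * cov / (sdS * sdE)               # structure term, Eq. (14)
    return l * c * s                        # Eq. (11) with unit exponents

# Example: an image compared with itself gives the maximum score 2.
img = np.random.default_rng(1).random((600, 600))
print(ssim_global(img, img))   # -> 2.0
```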

2.4 Monte Carlo method

This study uses the Monte Carlo method (MCM) [21–25] to trace the light. The MCM divides the light transfer process into emission, reflection, scattering, transmission, absorption, and other mutually independent subprocedures [26]. The MCM can also be used to construct probability models for these subprocedures. Then, the MCM is used to trace and count the rays that are incident from every unit and judge whether each ray has been absorbed, transmitted, or escaped. Finally, we can determine the radiation energy of each unit [27, 28].

When using the MCM to simulate a photograph of a scene formed from the rays [29], we must ensure that a sufficient number of rays participate in the photographic process. However, in normal MCM-based simulations, the surfaces are either diffuse radiation surfaces or diffuse reflection surfaces, the emission radiation of the medium is isotropic [30,31], and light can therefore be scattered in every direction. The absorption of both the surface and the medium must also be considered [32]. Owing to these conditions, the number of light rays that are incident on the actual scene is greatly reduced [33]; moreover, the number of incident rays falls off approximately with the square of the distance between the medium or surface and the scene. To obtain a sufficient number of rays, the total number of simulated rays must be increased dramatically, making the computation time unacceptably long.

To simulate the light image in an acceptable time frame, light splitting technology can be applied. Light splitting is applied in the surface diffuse reflection and isotropic scattering processes of rays [34]. The split rays, which move in different directions, are obtained by random sampling at the positions where reflection or scattering occurs; the incident light energy is thus divided into several parts [35–37].
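To make the splitting step concrete, the sketch below divides the energy of one incident ray among several subrays sampled from the cosine-weighted (Lambertian) hemisphere at a diffuse reflection. The splitting count n_split and the sampling scheme are illustrative assumptions, not parameters given in the text.

```python
# Sketch of ray splitting at a diffuse reflection: one incident ray is
# replaced by n_split subrays, each carrying an equal share of the energy.
import numpy as np

rng = np.random.default_rng(0)

def split_diffuse(energy, normal, n_split=8):
    """Return n_split (direction, energy) pairs for one diffuse reflection."""
    n = normal / np.linalg.norm(normal)
    # Orthonormal basis (t1, t2, n) around the surface normal.
    a = np.array([1.0, 0, 0]) if abs(n[0]) < 0.9 else np.array([0, 1.0, 0])
    t1 = np.cross(n, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    rays = []
    for _ in range(n_split):
        u, v = rng.random(), rng.random()
        sin_t, cos_t = np.sqrt(u), np.sqrt(1.0 - u)  # cosine-weighted theta
        phi = 2.0 * np.pi * v
        d = sin_t * np.cos(phi) * t1 + sin_t * np.sin(phi) * t2 + cos_t * n
        rays.append((d, energy / n_split))           # energy split equally
    return rays
```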

3. Image quality evaluation function

For this study, we developed a program for MLA error analog computation. To guarantee an adequate number of rays, we took advantage of parallel computing with 7 threads for the simulation. Each thread simulates 1 × 10^9 rays, each ray carries 20 nW of energy, and the total number of rays is 7 × 10^9; the entire process requires 10 h. If a single thread were used, the computation time would increase correspondingly; parallel computing thus effectively improves the efficiency and precision of the simulation. Rays are generated by a random number subroutine; when one thread is completed and the next is processed, the program automatically skips ahead 1 × 10^9 random numbers, and likewise for each subsequent thread, which keeps the ray streams independent. This section focuses on the effect of the assembly error on the image. In addition, a threshold of 3.5 nW is set on the optical radiation signal energy, below which signals are not detected by the CCD; if the energy received by a pixel exceeds 3500 nW, the pixel saturates.
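The stream-skipping scheme can be illustrated with a counter-based generator that supports O(1) skip-ahead. NumPy's Philox generator is assumed here purely for illustration (the paper's own random number subroutine is not specified), as is one random draw per ray.

```python
# Sketch of per-thread stream skipping: thread k starts k * 10**9 draws
# into a common sequence, so the seven streams never overlap.
import numpy as np

N_PER_THREAD = 10**9   # rays (random draws) per thread

def thread_rng(thread_id, seed=2016):
    bg = np.random.Philox(seed)
    bg.advance(thread_id * N_PER_THREAD)   # O(1) skip-ahead
    return np.random.Generator(bg)

rngs = [thread_rng(k) for k in range(7)]   # one independent stream per thread
```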

3.1 Coupling distance error

The coupling distance error structure is shown in Fig. 3. Dmic is the microlens diameter, x marks the CCD plane, s is the nominal MLA plane, s′ and s″ are two error MLA planes offset by the error distance Δd, and d is the distance between the MLA plane and the CCD. Considering the practical installation of the MLA of the light field camera, we select the margin of the coupling distance error as EΔd ≤ 1 mm. Figure 4 compares the image under the standard condition with images for coupling distance errors of 0.1 mm, 0.3 mm, 0.5 mm, 0.7 mm, 0.9 mm, and 1 mm. Comparing the images in Fig. 4, it is apparent that the larger the error, the more blurred the image becomes.

Fig. 3 Coupling distance error structure.

Fig. 4 Image comparison considering different coupling distance errors.

The objective evaluation index based on edge information and the structural similarity evaluation index are used to compare the standard image [Fig. 4(a)] and the error images [Figs. 4(b)–4(g)]. Table 1 lists the calculated indices. For the standard image, the objective evaluation index based on edge information QΔd and the structural similarity evaluation index SSIMΔd are 0.9760590 and 2.0, respectively.

Table 1. Calculation result of coupling distance error image quality evaluation

By fitting the quality evaluation results in Table 1 against the corresponding coupling distance errors in MATLAB, the quality of the coupling distance error image as a function of the coupling distance error is obtained as follows:

$$Q_{\Delta d}(E_{\Delta d})=\frac{0.1635E_{\Delta d}^{2}-0.2099E_{\Delta d}+0.083}{E_{\Delta d}^{3}-1.1E_{\Delta d}^{2}+0.3365E_{\Delta d}+0.1072}\tag{15}$$
$$\mathrm{SSIM}_{\Delta d}(E_{\Delta d})=0.9116E_{\Delta d}^{2}-1.976E_{\Delta d}+2.155\tag{16}$$
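The fitting step can be reproduced with a least-squares fit to the rational and quadratic forms of Eqs. (15) and (16). In the sketch below, the sample points are generated from the fitted curves themselves as placeholders (the Table 1 data are not reproduced here), and scipy.optimize.curve_fit is an assumed substitute for the MATLAB fitting routine.

```python
# Sketch of fitting the error image quality functions, Eqs. (15)-(16).
import numpy as np
from scipy.optimize import curve_fit

def q_model(e, a, b, c, p, q, r):
    # Rational form of Eq. (15): quadratic numerator over cubic denominator.
    return (a * e**2 + b * e + c) / (e**3 + p * e**2 + q * e + r)

def ssim_model(e, a, b, c):
    # Quadratic form of Eq. (16).
    return a * e**2 + b * e + c

e = np.linspace(0.1, 1.0, 10)   # coupling distance errors, mm (placeholders)
q_data = q_model(e, 0.1635, -0.2099, 0.083, -1.1, 0.3365, 0.1072)
s_data = ssim_model(e, 0.9116, -1.976, 2.155)

q_fit, _ = curve_fit(q_model, e, q_data,
                     p0=[0.2, -0.2, 0.1, -1.0, 0.3, 0.1])
s_fit, _ = curve_fit(ssim_model, e, s_data)
print(np.round(q_fit, 4))   # ~ [0.1635 -0.2099 0.083 -1.1 0.3365 0.1072]
print(np.round(s_fit, 4))   # ~ [0.9116 -1.976  2.155]
```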

3.2 Translation error

The MLA translation error is illustrated in Fig. 5; the parameters are the same as in Fig. 3, and Δe is the translation error, with magnitude Er and orientation angle Eβ. Figure 6 shows the standard image and the images for in-plane MLA translation errors with Er = 0.8 mm and Eβ = 10°, 30°, and 50°. In this section, translation errors with Er = 0.8 mm and 0° ≤ Eβ ≤ 360° are simulated.

Fig. 5 Schematic of translation error.

Fig. 6 Image comparison considering different translation errors.

Figures 7 and 8 plot the objective evaluation index based on edge information and the structural similarity evaluation index of the translation error image as functions of the MLA translation error. If the MLA has a translation error, the error image quality varies periodically with the error orientation angle from 0° to 360°, which makes the translation error difficult to assess from these indices. As the blue horizontal lines in Figs. 7 and 8 show, a single value of QΔd or SSIMΔd corresponds to multiple translation errors; therefore, the translation error cannot be determined from the image quality evaluation results, and the MLA cannot be rectified on that basis. Consequently, the image quality evaluation indices are not suitable for correcting the MLA translation error.

Fig. 7 Graph of translation error image considering the objective evaluation index based on edge information calculation result.

Fig. 8 Graph of translation error image considering similarity evaluation index calculation result.

Nonetheless, another method can be utilized to estimate the translation error. A translation error in the plane can be decomposed into translations in the z and y directions, as shown in Fig. 9. Because the error exists only in the MLA plane, when the error component in the z or y direction is an integral multiple of the lens pitch dmic, a microlens at the new coordinate x′i,j, formerly located at xi,j, coincides with xi,j+1 or xi+1,j. Under these circumstances, the MLA error makes no difference to light field camera imaging, so the analysis can be restricted to errors from 0 to dmic. Figures 10 and 11 show the standard subaperture image and the MLA subaperture images for translation errors of 0.1 mm, 0.2 mm, 0.3 mm, 0.4 mm, and 0.5 mm in the z and y directions, respectively. Taking the subaperture image in the red square of the picture as an example, the subaperture image undergoes a translation in the horizontal and vertical directions with a period of dmic.

Fig. 9 MLA translation error schematic diagram.

Fig. 10 Subaperture image with translation error in z direction.

Fig. 11 Subaperture image with translation error in y direction.

Figure 12 illustrates the subaperture images generated by the MLA for z-direction errors of 0.05 mm and 0.1 mm, together with the standard situation, taking the translation error in the z direction as an example. Again considering the subaperture image in the red square, we compared Figs. 12(a)–12(c) with each other. If the MLA translation error is an integral multiple of 0.1 mm, the subaperture image moves horizontally to the right by two subaperture image distances, i.e., by 120 pixels; if it is an integral multiple of 0.05 mm, the subaperture image moves horizontally to the right by one subaperture image distance, i.e., by 60 pixels. For MLA translation errors of 0.03 mm, 0.02 mm, and 0.01 mm, the structural similarity evaluation index gives 1.904, 1.945, and 1.977, respectively; for the 0.02 mm and 0.01 mm errors, the difference from the standard image is small, and the translation error can be treated as 0. When the MLA translation error is 0.003 mm, the structural similarity evaluation index (taking the subaperture image with a translation error of 0.05 mm as the standard image) is 1.947, which is close to the image quality for a translation error of 0.05 mm. Therefore, when the translation error component in the z or y direction lies in [n, n + 0.003] mm (where n is an integral multiple of 0.05 mm), the translation error is recognized as n mm; when it lies in [n + 0.003, n + 0.05] mm, the translation error is recognized as (n + 0.05) mm. If a translation error exists in the MLA, its z and y components can be obtained from the horizontal and vertical subaperture image excursions of the light field camera, and the corresponding adjustments can be made, as shown in the sketch after Fig. 12.

Fig. 12 Subaperture image with translation error in z direction.
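The reading rule above can be summarized in a short sketch. The 0.05 mm step (one 60-pixel subaperture shift) and the 0.003 mm threshold come from the text; the function itself is an illustrative summary.

```python
# Sketch of rounding a translation-error component to the grid that the
# subaperture image shift can resolve.
STEP = 0.05      # mm per one-subaperture (60-pixel) image shift
THRESH = 0.003   # mm; residuals below this read as zero

def snap_translation(err_mm):
    """Round a z- or y-component of the translation error to the grid."""
    n = int(err_mm / STEP) * STEP      # largest multiple of 0.05 mm <= err
    residual = err_mm - n
    return round(n if residual < THRESH else n + STEP, 2)

print(snap_translation(0.052))   # -> 0.05 (residual 0.002 mm reads as zero)
print(snap_translation(0.003))   # -> 0.05 (at the threshold, reads as 0.05)
```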

Figure 13(a) shows the subaperture image when the MLA has an in-plane translation error of Er = 1.4 mm with Eβ = 30°. This translation error can be decomposed into a component Er cosEβ ≈ 1.212 mm in the y direction and Er sinEβ = 0.7 mm in the z direction; Figs. 13(b) and 13(c) respectively show the corresponding subaperture images. When the MLA has a translation error of Er = 1.4 mm and Eβ = 30°, the variation of the image produced by the light field camera can be considered as the superposition of a 1.212 mm translation error in the y direction and a 0.7 mm translation error in the z direction.

Fig. 13 Subaperture image after translation error decomposition.

3.3 Rotation error

Figures 14(b)–14(d) show partially enlarged error images with rotation error angles α of 10°, 30°, and 45°, respectively (clockwise around the optical axis). When the MLA rotates, each lens in the corresponding field rotates with it, and ultimately the faculae array produced by the light field camera changes. Compared with the standard image in Fig. 14(a), the overall image does not change; however, the faculae array rotates with the rotation of the microlenses.

Fig. 14 Partially enlarged rotation error image.

The image variation produced by the MLA rotation error can be read directly from the error images; therefore, the image quality evaluation indices are not required for further numerical analysis.

3.4 Tilt error

As shown in Fig. 15, the error angle Eθ between the vertical plane and the MLA plane is set to 1°, 3°, and 5°. The bottom of the image becomes increasingly indistinct as the angle Eθ grows. Considering the actual installation of the MLA in the application process, we selected a tilt error range Eθ ≤ 5° for the simulation. Table 2 lists the calculated objective evaluation index based on edge information Qθ and structural similarity evaluation index SSIMθ.

Fig. 15 Tilt error image.

Table 2. Tilt error image quality evaluation calculation result

By fitting the quality evaluation results in Table 2 against the corresponding tilt errors in MATLAB, the quality of the tilt error image as a function of the tilt error is obtained as follows:

$$Q_{\theta}(E_{\theta})=0.5e^{-0.7E_{\theta}}+0.2e^{-0.055E_{\theta}}\tag{17}$$
$$\mathrm{SSIM}_{\theta}(E_{\theta})=0.1872e^{-0.7771E_{\theta}}+1.909e^{-0.03347E_{\theta}}\tag{18}$$
Therefore, by studying the light field camera image quality under the different MLA assembly error conditions, a functional relationship between the assembly error and the image quality produced by the light field camera can be determined, which is defined as the error image quality function.

On the basis of the above calculations, we establish an assembly error correction model: the image quality evaluation indices are used to compare the quality of the standard and error images created by the light field camera, and the results are substituted into the error image quality function to calculate the corresponding MLA assembly error.
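The correction model thus amounts to inverting the fitted error image quality function over its valid error range. The sketch below illustrates this numerically for the tilt-error function SSIMθ(Eθ) of Eq. (18) using a bracketing root finder; the measured SSIM value is a made-up example. The same pattern applies to Qθ and to the coupling distance functions of Eqs. (15) and (16).

```python
# Sketch of the correction step: invert the fitted error image quality
# function to recover the assembly error from a measured quality index.
import numpy as np
from scipy.optimize import brentq

def ssim_theta(e):
    # Eq. (18); monotonically decreasing on [0, 5] degrees.
    return 0.1872 * np.exp(-0.7771 * e) + 1.909 * np.exp(-0.03347 * e)

def estimate_tilt(ssim_measured, lo=0.0, hi=5.0):
    """Solve SSIM_theta(E) = ssim_measured for E in [lo, hi] degrees."""
    return brentq(lambda e: ssim_theta(e) - ssim_measured, lo, hi)

# Example: an error image whose SSIM against the standard image is 1.80.
print(estimate_tilt(1.80))   # estimated tilt error, degrees
```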

4. Verification of quality evaluation function

To verify the validity of the image quality functions, various MLA assembly errors are selected, and the objective evaluation index based on edge information and the structural similarity model are used to evaluate the generated images. Then, treating the assembly error as unknown, the image quality evaluation results are substituted into the corresponding image quality function, and the deviation Δ between the computed error and the set assembly error is obtained, which corroborates the correctness of the error image quality function. Figure 16 shows the flow chart for the verification of the error image quality function.

Fig. 16 Verification flow chart for error image quality function.

4.1 Coupling distance error

As mentioned in Section 3.1, when a coupling distance error exists, the error image quality and the error obtained by the light field camera satisfy the coupling distance error image quality functions QΔd(EΔd) and SSIMΔd(EΔd). In this section, coupling distance errors of EΔd = 0.2, 0.4, 0.6, and 0.8 mm are simulated, and the objective evaluation index based on edge information and the structural similarity evaluation index are computed. The coupling distance error is then calculated by substituting the computed indices into QΔd(EΔd) and SSIMΔd(EΔd). The evaluation results and the derived errors are listed in Table 3.

Table 3. Coupling distance error image quality calculation result

The deviation between the derived coupling distance error and the actual error is |Δ| ≤ 0.06 mm. It is thus proposed that the coupling distance error affects the image quality in the range |EΔd| ≤ 1 mm while satisfying the correlations QΔd(EΔd) and SSIMΔd(EΔd). When the coupling distance error between the MLA and the detector is unknown, the quality of an image captured by the light field camera can be evaluated using the image quality evaluation indices, and the evaluation results can be substituted into the coupling distance error image quality function to ascertain the coupling distance error of the MLA.

4.2 Translation error

Considering the MLA translation error Er = 0.3 mm with Eβ = 80° as a verification example, Figs. 17(a) and 17(b) show the subaperture images produced by the light field camera under standard conditions, and Fig. 17(c) shows the subaperture image produced under the translation error. Taking the subaperture image in the red square in Fig. 17 as a reference and comparing Fig. 17(c) with the standard images in Figs. 17(a) and 17(b), it is evident that the subaperture image moves horizontally to the right by six subaperture distances, i.e., 360 pixels, and vertically downward by one subaperture distance, i.e., 60 pixels. Therefore, because the period of the translation error is dmic = 0.5 mm, it can be deduced that the MLA has translation errors of 0.3 mm and 0.05 mm in the z and y directions, respectively. Moreover, for the set translation error (Er = 0.3 mm, Eβ = 80°), the components in the z and y directions are Er × sinEβ = 0.295 mm and Er × cosEβ = 0.052 mm, which are close to the deduced translation errors. Hence, the translation error can be evaluated using the subaperture image created by the light field camera.

Fig. 17 Translation error verification for subaperture image.

When the translation error between the MLA and the detector is unknown, it can thus be obtained from the horizontal and vertical excursions of the subaperture image created by the light field camera.

4.3 Tilt error

As mentioned in Section 3.4, if the MLA has a tilt error, the error image quality and the tilt error satisfy the functions Qθ(Eθ) and SSIMθ(Eθ). To verify the correctness of this relation, different tilt errors (Eθ = 1.5°, 2.5°, 3.5°, and 4.5°) are selected for the calculation. The objective evaluation index based on edge information and the structural similarity evaluation index are computed, and the results are substituted into Qθ(Eθ) and SSIMθ(Eθ) to determine the corresponding tilt error. The tilt error image quality evaluation results and the derived errors are listed in Table 4.

Table 4. Tilt error image quality calculation result

The deviation between the derived tilt error and the actual error is |Δ| ≤ 0.09°. It is thus proposed that the tilt error affects the image quality in the range |Eθ| ≤ 5° while satisfying the correlations Qθ(Eθ) and SSIMθ(Eθ). When the tilt error between the MLA and the detector is unknown, the quality of the image captured by the light field camera can be evaluated using the image quality evaluation indices, and the evaluation results can be substituted into the tilt error image quality function to ascertain the tilt error of the MLA.

5. Conclusions

This paper presented a fused image quality evaluation system that applies quality evaluation indices (the objective evaluation index based on edge information and the structural similarity evaluation index) to light field camera error images, and analyzed the change in image quality with respect to the MLA assembly error.

An assembly error correction model was presented that compares the error image and the standard image. We evaluated several types of errors, with the following findings. For the coupling distance error with |EΔd| ≤ 1 mm, the image quality changed with the coupling distance error in a manner that satisfies the coupling distance error image quality function. The translation error was evaluated using the objective evaluation index based on edge information and the structural similarity evaluation index; because the subaperture image variation produced by the light field camera can be split into the superposition of horizontal and vertical translation errors, an unknown translation error can be obtained from the horizontal and vertical shifts of the subaperture image obtained by the light field camera. For a tilt error with |Eθ| ≤ 5°, the deviation between the tilt error Eθ′ calculated from the tilt error image quality function and the set tilt error is within |Δ| ≤ 0.09°; thus, the image quality can be considered to change in the range |Eθ| ≤ 5° in a manner that satisfies the tilt error image quality function.

Funding

National Natural Science Foundation of China (NSFC) (51327803, 51406041); China Postdoctoral Science Special Foundation (2015T80347)

Acknowledgments

We would like to specially acknowledge the editors and referees who made important comments that helped us to improve this paper.

References and links

1. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005–02 (Stanford University, 2005).

2. T. Georgiev and C. Intwala, “Light field camera design for integral view photography,” Adobe Systems, Inc., Technical Report (2006), p. 1.

3. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (1996), pp. 31–42.

4. K. Maeno, H. Nagahara, A. Shimada, and R. Taniguchi, “Light field distortion feature for transparent object recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2786–2793. [CrossRef]  

5. G. Wetzstein, D. Roodnick, W. Heidrich, and R. Raskar, “Refractive shape from light field distortion,” in IEEE Conference on Computer Vision (2011), pp. 1180–1186.

6. F. Apelt, D. Breuer, Z. Nikoloski, M. Stitt, and F. Kragler, “Phytotyping(4D) : a light-field imaging system for non-invasive and accurate monitoring of spatio-temporal plant growth,” Plant J. 82(4), 693–706 (2015). [CrossRef]   [PubMed]  

7. S. Kim, Y. Ban, and S. Lee, “Face liveness detection using a light field camera,” Sensors (Basel) 14(12), 22471–22499 (2014). [CrossRef]   [PubMed]  

8. C. Zhou and S. K. Nayar, “Computational cameras: convergence of optics and processing,” IEEE Trans. Image Process. 20(12), 3322–3340 (2011).

9. C. Cui and K. N. Ngan, “Plane-based external camera calibration with accuracy measured by relative deflection angle,” Signal Process. Image Commun. 25(3), 224–234 (2010). [CrossRef]  

10. C. S. Fraser, “Digital camera self-calibration,” ISPRS J. Photogramm. Remote Sens. 52(4), 149–159 (1997).

11. C. S. Xydeas and V. Petrovic, “Objective image fusion performance measure,” Electron. Lett. 36(4), 308–309 (2000). [CrossRef]  

12. J. Sun, C. Xu, B. Zhang, M. M. Hossain, S. Wang, H. Qi, and H. Tan, “Three-dimensional temperature field measurement of flame using a single light field camera,” Opt. Express 24(2), 1118–1132 (2016). [CrossRef]   [PubMed]  

13. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers (IEEE, 2003), pp. 1398–1402. [CrossRef]

14. G. Piella and H. Heijmans, “A new quality metric for image fusion,” in IEEE International Conference on Image Processing (2003), pp. 173–176. [CrossRef]

15. H. Chen and P. K. Varshney, “A human perception inspired quality metric for image fusion based on regional information,” Inf. Fusion 8(2), 193–207 (2007). [CrossRef]  

16. Y. Han, Y. Z. Cai, Y. Cao, and X. M. Xu, “A new image fusion performance metric based on visual information fidelity,” Inf. Fusion 14(2), 127–135 (2013). [CrossRef]  

17. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). [CrossRef]  

18. R. M. Zhang, P. Liu, D. J. Liu, and G. B. Su, “Reconstruction of refocusing and all-in-focus images based on forward simulation model of plenoptic camera,” Opt. Commun. 357, 1–6 (2015). [CrossRef]  

19. B. Liu, Y. Yuan, S. Li, Y. Shuai, and H. P. Tan, “Simulation of light-field camera imaging based on ray splitting Monte Carlo method,” Opt. Commun. 355, 15–26 (2015). [CrossRef]  

20. Y. Yan, Z. Yu, and H. H. Hu, “Registration error analysis for microlens array and photosensor in light field camera,” Guangzi Xuebao 39(1), 123–126 (2010). [CrossRef]  

21. L. M. Ruan, H. P. Tan, and Y. Y. Yan, “A Monte Carlo method applied to the medium with nongray absorbing-emitting-anisotropic scattering particles and gray approximation,” Numer. Heat Transfer Part A 42(3), 253–268 (2002).

22. A. Q. Wang, M. F. Modest, D. C. Haworth, and J. Y. Wang, “Monte Carlo simulation of radiative heat transfer and turbulence interactions in methane/air jet flames,” J. Quant. Spectrosc. Radiat. Transf. 109(2), 269–279 (2008). [CrossRef]  

23. T. Yun, N. Zeng, W. Li, D. Li, X. Jiang, and H. Ma, “Monte Carlo simulation of polarized photon scattering in anisotropic media,” Opt. Express 17(19), 16590–16602 (2009). [CrossRef]   [PubMed]  

24. Z. Gong, H.-T. Chen, S.-H. Xu, Y. M. Li, and L. Lou, “Monte-Carlo simulation of optical trap stiffness measurement,” Opt. Commun. 263(2), 229–234 (2006). [CrossRef]  

25. E. Witkowska, M. Gajda, and K. Rzazewski, “Monte Carlo method, classical fields and Bose statistics,” Opt. Commun. 283(5), 671–675 (2010). [CrossRef]  

26. M. Premuda, E. Palazzi, F. Ravegnani, D. Bortoli, S. Masieri, and G. Giovanelli, “MOCRA: a Monte Carlo code for the simulation of radiative transfer in the atmosphere,” Opt. Express 20(7), 7973–7993 (2012). [CrossRef]   [PubMed]  

27. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 021106 (2010). [CrossRef]  

28. X. Ji, Y. L. Yin, Q. Zhou, Z. X. Wang, and J. P. Yin, “Decelerating a pulsed subsonic molecular beam by a quasi-cw optical lattice: 3D Monte-Carlo simulations,” Opt. Commun. 287, 128–136 (2013). [CrossRef]  

29. H. Qi, Z. Z. He, S. Gong, and L. M. Ruan, “Inversion of particle size distribution by spectral extinction technique using the attractive and repulsive particle swarm optimization algorithm,” Therm. Sci. 19(6), 2151–2160 (2015). [CrossRef]  

30. Y. B. Qiao, H. Qi, Q. Chen, L. M. Ruan, and H. P. Tan, “Multi-start iterative reconstruction of the radiative parameter distributions in participating media based on the transient radiative transfer equation,” Opt. Commun. 351, 75–84 (2015). [CrossRef]  

31. H. Qi, Y. T. Ren, Q. Chen, and L. M. Ruan, “Fast method of retrieving the asymmetry factor and scattering albedo from the maximum time-resolved reflectance of participating media,” Appl. Opt. 54(16), 5234–5242 (2015). [CrossRef]   [PubMed]  

32. X. Guo, M. F. G. Wood, and A. Vitkin, “Monte Carlo study of pathlength distribution of polarized light in turbid media,” Opt. Express 15(3), 1348–1360 (2007). [CrossRef]   [PubMed]  

33. Z. Z. He, H. Qi, Y. Q. Wang, and L. M. Ruan, “Inverse estimation of spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique,” Opt. Commun. 328, 8–22 (2014). [CrossRef]  

34. T. Nakamura, R. Horisaki, and J. Tanida, “Computational phase modulation in light field imaging,” Opt. Express 21(24), 29523–29543 (2013). [CrossRef]   [PubMed]  

35. L. L. Yu, T. Lai, Y. J. Zhao, and J.-H. Chen, “Study on the phenomenon of SRA image edges aliasing,” J. Signal Process. 29(1), 127–134 (2013).

36. C. Rozé, T. Girasole, L. Méès, G. Gréhan, L. Hespel, and A. Delfour, “Interaction between ultra-short pulses and a dense scattering medium by Monte Carlo simulation: consideration of particle size effect,” Opt. Commun. 220(4–6), 237–245 (2003). [CrossRef]  

37. C. Calba, C. Rozé, T. Girasole, and L. Méès, “Monte Carlo simulation of the interaction between an ultra-short pulse and a strongly scattering medium: The case of large particles,” Opt. Commun. 265(2), 373–382 (2006). [CrossRef]  
