
Chromatic correction for super multi-view 3D display based on diffraction theory

Open Access

Abstract

The traditional analysis method for super multi-view 3D display, which is based on geometric optics and approximates the lenticular lenses as a series of pinhole structures, ignores chromatic aberration. In this paper, an optimization method based on diffraction theory is proposed for super multi-view 3D display, in which the wavefronts are evaluated accurately by forward propagation and the chromatic aberration of the synthetic viewpoint image is reduced dramatically by the backward reconstruction optimization method (BROM). An optical experiment is performed to verify the feasibility of the method, and its results are consistent with the numerical simulations. It is shown that the proposed method simulates the physical propagation process of super multi-view 3D display and improves the reconstructed image quality. In the future, it can be used to achieve super multi-view 3D light field technology with low crosstalk.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

3D display technology can comprehensively present the information of real scenes and accurately depict parallax information [1–3]. Depending on the principle, it can be divided into binocular stereoscopic display (BSD) and autostereoscopic 3D display. BSD is a display technology based on binocular disparity, but it suffers from the vergence-accommodation conflict (VAC) [4]. Autostereoscopic 3D display, also known as glasses-free 3D display, allows observers to directly perceive true 3D information without any wearable devices, and encompasses light field 3D display [5–7], volumetric 3D display [8,9] and holographic 3D display [10,11]. Super multi-view 3D display technology [12–14] is an important light field 3D display technology that utilizes a cylindrical lens array for subpixel modulation, providing continuous and clear motion parallax from different perspectives without glasses, thereby partially resolving the VAC and providing a more comfortable viewing experience [15].

In order to improve the image quality of super multi-view 3D display, many efforts have been made. Tilted lens structures and rearranged pixel filters have been adopted to solve the imbalance between horizontal and vertical resolution [16,17]. Subpixel rearrangement efficiently expands the field of view of super multi-view 3D displays [18]. Optimization of light field acquisition and reconstruction effectively mitigates the crosstalk in 3D display [19]. With a super multi-view 3D display utilizing a programmable directional backlight with vertical collimation, the depth of field can be expanded [20]. Multiplexing, such as multiple backlight units and projected dynamic backlights, is used to enlarge the viewing angle and achieve dense-viewpoint 3D display, which helps to increase the number of viewpoints through time multiplexing and to resolve the contradiction between field of view and resolution [15,21–23]. Methods based on lens tracking can achieve high image resolution and a wide viewing angle in 3D display [24].

However, the current simulation methods for super multi-view 3D display are based on geometric optics, such as cropping amplification and ray tracing [25–27], which approximate the lenticular lenses as a series of pinhole structures. Since geometric optics cannot simulate the dispersion of light, it is unable to address the chromatic aberration issue, so the results deviate and the image quality declines. In this paper, a diffraction-based optimization method is proposed to simulate the forward propagation in super multi-view 3D display and to dramatically reduce the chromatic aberration of the synthetic viewpoint image by the backward reconstruction optimization method (BROM). The feasibility of the proposed method is verified by numerical simulation and optical experiment. The backward reconstruction optimization method based on diffractive wave optics eliminates the chromatic aberration of light, thus enhancing the accuracy of the 3D simulation results.

In Section 2, the principle of the proposed backward reconstruction method is introduced. In Section 3, numerical and optical experiments are conducted to verify the feasibility of the proposed method, and the image quality of the proposed BROM is compared with that of the traditional synthetic image simulation (TSIS). In Section 4, the innovations and experimental results of the proposed method are summarized and a prospect for future development is given.

2. Proposed BROM method

The geometric optics simulation method is based on ray tracing and approximates the lenticular lens structure as a pinhole; it neither considers the differences in the propagation of the different color subpixels nor accounts for diffraction, thus producing inaccurate simulation results. In this paper, a diffraction-based numerical simulation method is proposed to simulate and analyze the causes of chromatic aberration in super multi-view 3D display. Figure 1 shows the operation process of the proposed method. In order to observe the ideal parallax effect from different perspectives, the ideal synthetic image at the display plane is reconstructed through backward propagation.


Fig. 1. Overall illustration of the proposed method: 1) Image processing, 2) Backward reconstruction, 3) 3D display.


First, the parallax images are obtained by rendering the 3D object from various perspectives in 3ds Max. Then, the result is simulated by inverse reconstruction. In the backward reconstruction, the ideal image is divided into its three RGB channels, and the ideal synthetic image for each channel is computed in three steps: 1) propagation to the front surface of the lenticular lens: the ideal parallax image is propagated backward based on wave-optics diffraction theory; 2) phase superposition: the field propagates through the lenticular lens array (LLA); 3) superposition at the display plane: the reconstructed ideal synthetic image is obtained by reconstructing and superimposing the ideal parallax images of the RGB channels. Finally, reconstruction verification is performed to simulate and verify the observation results from different viewpoints.

2.1 Simulation of the traditional synthetic images based on wave optics

Figure 2(a) shows the process of simulating the synthesis of different viewpoint images based on wave-optics diffraction theory. The complex amplitude of the synthetic image at the front surface of the lenticular lens array, obtained by simulating the propagation from the display plane to the lens surface with the point-source method, is given by:

$${U_0}({x,y} )= \textrm{exp} \left( {jk\sqrt {{x^2} + {y^2} + {g^2}} } \right) ,$$
where $g$ is the distance from the display screen to the front surface of the lenticular lens array. The phase distribution of a single lenticular lens is given by:
$$\Phi (x )= k[{d + ({n - 1} )d(x )} ], $$
where n is the refractive index, d is the distance from the front surface to the back surface of the lenticular lens, and d(x) is the thickness of the lens material traversed at lateral position x, which can be expressed as:
$$d(x )= d - [ {r - \sqrt {{r^2} - {x^2}} }], $$
where r is the radius of curvature of the lenticular lens, $k = 2\pi /\lambda $ is the wave number, and $\lambda $ is the wavelength of light. The wavelengths used in this paper are 637 nm, 520 nm, and 457 nm for the R, G, and B channels, respectively. The periodic phase profile of the lenticular lens array is given by:
$${d_{LLA}}(x )= d - \left\{ {r - \sqrt {{r^2} - {{[{\bmod ({x,p} )} ]}^2}} } \right\}, $$
where p is the lens pitch. Substituting into Eq. (2) above gives:
$$\begin{aligned} {\Phi _{LLA}}(x )&= k[{d + ({n - 1} ){d_{LLA}}(x )} ]\\ &= k\left\{ {nd - ({n - 1} )\left[ {r - \sqrt {{r^2} - {{[{\bmod ({x,p} )} ]}^2}} } \right]} \right\} \end{aligned}, $$
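For concreteness, the per-wavelength phase delay of Eqs. (2)–(5) can be evaluated numerically. The following Python sketch is illustrative only: the parameter names are ours, the local coordinate is centered within each pitch (the equations above place the lens vertex at mod(x, p) = 0), and no value is taken from Table 1.

```python
import numpy as np

def lla_phase(x, wavelength, pitch, r, d, n):
    """Phase delay Phi_LLA(x) = k[d + (n-1) d_LLA(x)] of the lens array, Eq. (5).

    x is a 1D array of lateral coordinates; all lengths are in meters.
    Assumes r >= pitch / 2 so the sag stays real across a full pitch.
    """
    k = 2 * np.pi / wavelength                 # wave number, k = 2*pi/lambda
    x_local = np.mod(x, pitch) - pitch / 2     # coordinate within one lens (centered)
    sag = r - np.sqrt(r**2 - x_local**2)       # surface sag of the cylindrical lens
    d_lla = d - sag                            # Eq. (4): glass thickness traversed
    return k * (d + (n - 1) * d_lla)           # Eq. (5): k[nd - (n-1)*sag]
```

Because both the wave number k and the refractive index n depend on the wavelength, the same lateral position acquires a different phase in each RGB channel, which is the dispersion mechanism discussed below.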


Fig. 2. Schematic of the propagation process. (a) Simulation process of the traditional synthetic images based on wave optics theory. (b) Chromatic aberration caused by different wavelengths in the traditional method.


The complex amplitude of the light wave at the back surface of the lenticular lens array is given by:

$${U_1}({x,y} )= {U_0}({x,y} )\textrm{exp} [{j{\Phi _{LLA}}(x )} ], $$

The propagation of the light waves from the back surface of the lenticular lens to the observation plane is simulated with angular-spectrum diffraction. The complex amplitude distribution at the observation plane is given by:

$$\begin{aligned} {U_z} &= \int\!\!\!\int {{u_1}({{f_\xi },{f_\eta }} )\textrm{exp} \left[ {jkz\sqrt {1 - {\lambda^2}f_\xi^2 - {\lambda^2}f_\eta^2} } \right]\textrm{exp} [{j2\pi ({{f_\xi }x + {f_\eta }y} )} ]d{f_\xi }d{f_\eta }} \\ &= \int\!\!\!\int {{U_1}({\xi ,\eta } )d\xi d\eta \int\!\!\!\int {\textrm{exp} \left[ {jkz\sqrt {1 - {\lambda^2}f_\xi^2 - {\lambda^2}f_\eta^2} } \right]} } \textrm{exp} [{ - j2\pi ({{f_\xi }\xi + {f_\eta }\eta } )} ]\\ &\quad \times \textrm{exp} [{j2\pi ({{f_\xi }x + {f_\eta }y} )} ]d{f_\xi }d{f_\eta } \end{aligned}, $$
where z is the distance from the back surface of the lens to the observation plane. As shown in Fig. 2(b), when the actual diffraction of different wavelengths is calculated with wave-optics diffraction theory, the RGB channels of the light have different wavelengths, so chromatic aberration appears when the three channels are combined. The chromatic aberration can be represented as:
$$\begin{aligned} D &= {L_z}[{\tan ({{\theta_{r2}}} )- \tan ({{\theta_{r1}}} )} ]= {L_z}[{\tan ({{\theta_r} - {\theta_i}} )- \tan ({{\theta_{r1}}} )} ]\\ &= {L_z}\left[ {\tan \left( {{{\sin }^{ - 1}}\frac{{{n_\lambda }\sin {\theta_i}}}{n} - {\theta_i}} \right) - \tan ({{\theta_{r1}}} )} \right] \end{aligned}, $$
where ${\theta _i}$ is the incident angle, ${\theta _{r1}}$ and ${\theta _{r2}}$ are the refraction angles for the different colors, ${L_z}$ is the propagation distance of the light in air, and ${n_\lambda }$ and n represent the refractive indices of the propagation medium for different wavelengths and of air, respectively. Therefore, when subpixels at the same relative position with respect to the lenticular lens array pass through the lens, the superimposed phases differ.
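The angular-spectrum propagation of Eq. (7) maps directly onto a pair of FFTs. The sketch below is a minimal, hedged implementation: the sampling, the square pixel grid, and the crude suppression of evanescent components are our assumptions, not details given in the paper.

```python
import numpy as np

def angular_spectrum(u0, wavelength, z, dx):
    """Propagate the sampled field u0 over a distance z, following Eq. (7)."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies f_xi
    fy = np.fft.fftfreq(ny, d=dx)              # spatial frequencies f_eta
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    arg = np.clip(arg, 0, None)                # drop evanescent components (assumption)
    H = np.exp(1j * (2 * np.pi / wavelength) * z * np.sqrt(arg))  # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Running this propagator once per channel with λ = 637 nm, 520 nm, and 457 nm makes the wavelength dependence of the kernel explicit; combining the three results reproduces the chromatic aberration of Fig. 2(b).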

2.2 Image rendering

In this paper, the 3D scene model and light field viewpoint images are obtained in 3ds Max. The virtual camera array set up in the software is shown in Fig. 3, which demonstrates the process of obtaining two-dimensional image information of the target 3D object from different viewing angles. A series of virtual cameras is evenly arranged in a one-dimensional direction within the range of the viewing angle. By rendering while rotating, 23 element images with a resolution of 1920 × 1080 are obtained, representing the distribution of ideal viewpoint observations from various angles. The target model is a spinal cord structure with a red plus sign at the center, which helps to observe the disparity information more clearly.
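As an illustration of the acquisition geometry, the lateral camera positions for evenly covering a viewing angle can be computed as below. The field-of-view and distance values are placeholders, since the exact camera parameters used in 3ds Max are not given in the paper; only the number of views (23) comes from the text.

```python
import numpy as np

def camera_offsets(n_views=23, fov_deg=20.0, distance=1.0):
    """Lateral offsets of n_views cameras evenly spanning fov_deg at `distance`."""
    angles = np.linspace(-fov_deg / 2, fov_deg / 2, n_views)   # view angles in degrees
    return distance * np.tan(np.radians(angles))               # offsets in meters
```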


Fig. 3. Virtual image acquisition.


2.3 Backward reconstruction process

The simulation method proposed above analyzes and explains the causes of the chromatic aberration. On this basis, the proposed backward reconstruction process simulates the synthetic viewpoint image at the ideal display plane in order to improve the display effect at the observation plane.

As shown in Fig. 4, the ideal disparity images obtained from backward simulation rendering based on wave-optics diffraction theory are propagated to the display plane. The optical transfer function of the backward propagation process describes the propagation of the ideal disparity images from the observation plane to the surface of the lenticular lens array. The complex amplitude distribution at the surface of the lenticular lens is given by:

$$\begin{aligned} {U_1}({x,y} )&= \int\!\!\!\int {{u_1}({{f_\xi },{f_\eta }} )\textrm{exp} [{j2\pi ({{f_\xi }x + {f_\eta }y} )} ]d{f_\xi }d{f_\eta }} \\ &= \int\!\!\!\int {{U_z}({\xi ,\eta } )d\xi d\eta \int\!\!\!\int {\textrm{exp} \left( {jkz\sqrt {1 - {\lambda^2}f_\xi^2 - {\lambda^2}f_\eta^2} } \right)} \textrm{exp} [{ - j2\pi ({{f_\xi }\xi + {f_\eta }\eta } )} ]} \\ &\quad \times \textrm{exp} [{j2\pi ({{f_\xi }x + {f_\eta }y} )} ]d{f_\xi }d{f_\eta } \end{aligned}, $$
where z is the distance from the observation plane to the surface of the lenticular lens array, ${u_1}({{f_\xi },{f_\eta }} )$ is the spectral distribution of the light wave at the surface of the lenticular lens array, ${u_z}({{f_\xi },{f_\eta }} )$ is the spectral distribution of the ideal parallax images processed at the observation plane, and ${U_z}({\xi ,\eta } )$ is the spatial-domain distribution of the ideal parallax images at the observation plane. The optical transfer function of the propagation process can be represented as:
$$H({{f_\xi },{f_\eta }} )= \textrm{exp} \left( {jkz\sqrt {1 - {\lambda^2}f_\xi^2 - {\lambda^2}f_\eta^2} } \right), $$


Fig. 4. Backward reconstruction process.


The complex amplitude distribution at the front surface of the lenticular lens array, obtained by backward propagation from the back surface of the lenticular lens, can be represented by superimposing the opposite phase of the forward propagation process:

$${U_2}({x,y} )= {U_1}({x,y} )\textrm{exp} [{ - j{\Phi _{LLA}}(x )} ], $$

According to the optical transfer function, the complex amplitude distribution at the display plane can be given by:

$$\begin{aligned} {U_0}({x,y} )&= \int\!\!\!\int {{u_2}({{f_\xi },{f_\eta }} )H({{f_\xi },{f_\eta }} )} \textrm{exp} [{j2\pi ({{f_\xi }x + {f_\eta }y} )} ]d{f_\xi }d{f_\eta }\\ &= \int\!\!\!\int {{U_2}({\xi ,\eta } )d\xi d\eta \int\!\!\!\int {\textrm{exp} \left( {jkg\sqrt {1 - {\lambda^2}f_\xi^2 - {\lambda^2}f_\eta^2} } \right)} \textrm{exp} [{ - j2\pi ({{f_\xi }\xi + {f_\eta }\eta } )} ]} \\ &\textrm{exp} [{j2\pi ({{f_\xi }x + {f_\eta }y} )} ]d{f_\xi }d{f_\eta } \end{aligned}, $$
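Putting Eqs. (9)–(12) together, one pass of the backward reconstruction for a single color channel could look like the following sketch, which reuses angular_spectrum() and lla_phase() from above. The sign of the propagation distance encodes the backward direction here; the paper's own sign convention and sampling may differ.

```python
import numpy as np

def backward_reconstruct(u_ideal, wavelength, z, g, dx, pitch, r, d, n):
    """Field at the display plane for one RGB channel (illustrative sketch)."""
    # Eq. (9): observation plane -> back surface of the lens array.
    u1 = angular_spectrum(u_ideal, wavelength, -z, dx)
    # Eq. (11): superimpose the opposite phase of the forward pass.
    nx = u1.shape[1]
    x = (np.arange(nx) - nx // 2) * dx
    phi = lla_phase(x, wavelength, pitch, r, d, n)
    u2 = u1 * np.exp(-1j * phi)[np.newaxis, :]
    # Eq. (12): lens array -> display plane over the distance g.
    return angular_spectrum(u2, wavelength, -g, dx)
```

The three channel results are then superimposed into the synthetic image at the display plane, as in step 3 of Fig. 1.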

Figure 5 shows the backward reconstruction process, which takes the wavelength differences in the propagation into account. The backward simulation calculates the ideal synthetic image for the desired viewpoints, which better matches the physical propagation of light and effectively eliminates chromatic aberration.


Fig. 5. Backward reconstruction process. The RGB channels are considered separately in the simulation.


2.4 Reconstruction validation

Complementing the proposed viewpoint image reconstruction by backward propagation, a validation method based on wave optics is proposed to confirm the elimination of the chromatic aberration. It simulates and validates the ideal synthetic viewpoint image reconstructed by backward propagation. Figure 6 shows the schematic diagram of the reconstruction validation. The entire propagation process is based on wave-optics diffraction theory, using the optical transfer function of the propagation process to simulate the viewpoint images at various angles at the observation plane.
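Under the same illustrative assumptions, the validation pass simply re-propagates the reconstructed display-plane field forward through the lens array. Note that the paper uses the point-source method for the display-to-lens leg of the forward process (Eq. (1)), whereas this sketch reuses the angular-spectrum propagator throughout for brevity.

```python
import numpy as np

def validate(u_display, wavelength, z, g, dx, pitch, r, d, n):
    """Forward re-propagation of the reconstructed field to the observation plane."""
    # Display plane -> front surface of the lens array.
    u_front = angular_spectrum(u_display, wavelength, g, dx)
    # Eq. (6): superimpose the forward lens phase.
    nx = u_front.shape[1]
    x = (np.arange(nx) - nx // 2) * dx
    phi = lla_phase(x, wavelength, pitch, r, d, n)
    u_back = u_front * np.exp(1j * phi)[np.newaxis, :]
    # Eq. (7): back surface of the lens array -> observation plane.
    return angular_spectrum(u_back, wavelength, z, dx)
```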


Fig. 6. Schematic diagram of the validation process.


3. Numerical simulations and optical experiments

3.1 Numerical simulations

In order to verify the feasibility of the proposed method, numerical simulations are carried out to compare it with the traditional synthetic image simulation method. The ideal synthetic viewpoint image at the display plane is obtained by backward reconstruction of the ideal viewpoint images from different viewing angles, and the viewpoint images from different angles are then obtained through reconstruction validation. The parameters of the numerical simulations are shown in Table 1.


Table 1. Parameters of the numerical simulations

As shown in Fig. 7, in order to validate the reconstruction quality of the proposed backward reconstruction optimization method, the quality of the viewpoint images obtained from both methods is compared at five different viewing angles. PSNR and SSIM are used to evaluate the quality of the reconstructed images. Additionally, to verify that the method effectively eliminates chromatic aberration, an evaluation index of chromatic aberration is proposed. The index can be represented as:

$$\Delta E = 1 - \frac{{\sum\limits_{i = 1}^3 {SSIM(i )} }}{3}, $$
where $SSIM(i )$ represents the structural similarity index of the simulated image in the i-th of the RGB channels.
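As a sketch, the index of Eq. (13) can be computed with the per-channel SSIM from scikit-image; the assumption here is that both images are float RGB arrays scaled to [0, 1].

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def delta_e(simulated, reference):
    """Chromatic-aberration index of Eq. (13): 1 minus the mean per-channel SSIM.

    simulated, reference -- float RGB arrays of shape (H, W, 3) in [0, 1].
    """
    scores = [ssim(simulated[..., c], reference[..., c], data_range=1.0)
              for c in range(3)]
    return 1.0 - float(np.mean(scores))
```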


Fig. 7. Numerical simulation results. (a) The backward propagation results of each viewpoint. (b) The simulation results of the traditional method for each viewpoint. (c) The reconstruction verification results of the proposed method from different perspectives.


The comparison results of PSNR and SSIM for the two methods are shown in Figs. 8(a) and 8(b). As shown in Fig. 8(c), the proposed method effectively eliminates the chromatic aberration.


Fig. 8. Comparison of image quality. (a, b) Reconstruction quality: PSNR and SSIM values of the two methods. The proposed method achieves high reconstruction quality. (c) Comparison of chromatic aberration.


3.2 Optical experiments

To validate the consistency between the numerical simulations and the actual display results, optical experiments are conducted in which the synthetic viewpoint image is reconstructed according to the proposed method. The reconstructed images are then used to verify the observation effects from different viewpoints. Figure 9 demonstrates the 3D disparity effects obtained with the proposed method and the traditional method for five different viewing angles. The results show that the proposed method accurately displays the disparity information from various viewpoints. BROM and TSIS use the same synthesis algorithm: both synthesize viewpoint images by subpixel allocation to obtain the synthetic viewpoint image.


Fig. 9. Optical experiment results. Observed images from different perspectives of (a) BROM and (b) TSIS, respectively. (c) Partially enlarged results of BROM compared with TSIS.


In order to verify the effectiveness of the proposed method for chromatic correction, magnified images from the two methods are compared, as shown in Fig. 9. The experiments indicate that both methods can correctly display parallax information. The upper image in Fig. 9(c) shows the partial enlarged image obtained by BROM, and the image below is the partial enlarged image from TSIS. As shown in Fig. 9(c), the proposed BROM successfully reduces the chromatic aberration, thus improving the display quality from various viewpoints, which is consistent with the conclusions of the numerical simulations.

4. Conclusion

In conclusion, in order to reduce the chromatic aberration and improve the image quality of super multi-view 3D display, a diffraction-based optimization method is proposed that accurately evaluates the forward propagation and dramatically reduces the chromatic aberration of the synthetic viewpoint image through the proposed backward reconstruction optimization method. To confirm the feasibility of the proposed method, optical experiments are conducted, and their results are consistent with the numerical simulations. Compared with the traditional simulation, the proposed method effectively reduces chromatic aberration and improves image quality. With the development of 3D display technology, the proposed method has great potential for achieving low-crosstalk multi-viewpoint 3D light field display. This work may pave a new avenue for light field 3D display, with applications in virtual reality devices and next-generation display devices such as 3D televisions, telepresence systems, and 3D desktop computers.

Funding

Beijing Municipal Science and Technology Commission, Administrative Commission of Zhongguancun Science Park (Z211100004821012); National Natural Science Foundation of China (61975014, 62035003, U22A2079); National Key Research and Development Program of China (2021YFB3600500).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).

2. V. A. Ezhov, “Toward the physical-information fundamentals of three-dimensional displays,” J. Display Technol. 12(11), 1344–1351 (2016).

3. J. Hong, Y. Kim, H.-J. Choi, et al., “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

4. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484 (2014).

5. N. Kim, M. A. Alam, L. T. Bang, et al., “Advances in the light field displays based on integral imaging and holographic techniques,” Chin. Opt. Lett. 12, 060005 (2014).

6. M. Yamaguchi, “Light-field and holographic three-dimensional displays [Invited],” J. Opt. Soc. Am. A 33(12), 2348–2364 (2016).

7. X. Su, X. Yu, D. Chen, et al., “Regional selection-based pre-correction of lens aberrations for light-field displays,” Opt. Commun. 505, 127510 (2022).

8. C. Blackwell, C. Can, J. Khan, et al., “Volumetric 3D display in real space using a diffractive lens, fast projector, and polychromatic light source,” Opt. Lett. 44(19), 4901–4904 (2019).

9. D. Smalley, T.-C. Poon, H. Gao, et al., “Volumetric displays turning 3-D inside-out,” Opt. Photonics News 29(6), 26–33 (2018).

10. Y. Pan, J. Liu, X. Li, et al., “A review of dynamic holographic three-dimensional display: algorithms, devices, and systems,” IEEE Trans. Ind. Inf. 12(4), 1599–1610 (2016).

11. D. Pi, J. Wang, J. Liu, et al., “Color dynamic holographic display based on complex amplitude modulation with bandwidth constraint strategy,” Opt. Lett. 47(17), 4379–4382 (2022).

12. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006).

13. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010).

14. Y. Takaki, “Development of super multi-view displays,” ITE Transactions on Media Technology and Applications 2(1), 8–14 (2014).

15. T. Ueno and Y. Takaki, “Super multi-view near-eye display to solve vergence–accommodation conflict,” Opt. Express 26(23), 30703–30715 (2018).

16. C. V. Berkel, D. Parker, and A. Franklin, “Multiview 3D LCD,” Proc. SPIE 2653, 32–39 (1996).

17. C. V. Berkel and J. Clarke, “Characterization and optimization of 3D-LCD module design,” Proc. SPIE 3012, 179–186 (1997).

18. M.-K. Kang, H.-P. Nguyen, D. Kang, et al., “Adaptive viewing distance in super multi-view displays using aperiodic 3-D pixel location and dynamic view indices,” Opt. Express 26(16), 20661–20679 (2018).

19. L. Yang, X. Sang, X. Yu, et al., “A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction,” Opt. Express 26(26), 34412 (2018).

20. P. Wang, X. Sang, X. Yu, et al., “Demonstration of a low-crosstalk super multi-view light field display with natural depth cues and smooth motion parallax,” Opt. Express 27(23), 34442–34453 (2019).

21. B. Liu, X. Sang, X. Yu, et al., “Time-multiplexed light field display with 120-degree wide viewing angle,” Opt. Express 27(24), 35728–35739 (2019).

22. B. Liu, X. Sang, X. Yu, et al., “Analysis and removal of crosstalk in a time-multiplexed light-field display,” Opt. Express 29(5), 7435–7452 (2021).

23. B. Zhao, R. Huang, and G. Lü, “Micro-projection dynamic backlight for multi-view 3D display,” Chin. Opt. Lett. 19, 092201 (2021).

24. T. Huang, B. Han, X. Zhang, et al., “High-performance autostereoscopic display based on the lenticular tracking method,” Opt. Express 27(15), 20421–20434 (2019).

25. S.-M. Jung and I.-B. Kang, “Three-dimensional modeling of light rays on the surface of a slanted lenticular array for autostereoscopic displays,” Appl. Opt. 52(23), 5591–5599 (2013).

26. S.-M. Jung, S.-C. Lee, and K.-M. Lim, “Two-dimensional modeling of optical transmission on the surface of a lenticular array for autostereoscopic displays,” Curr. Appl. Phys. 13(7), 1339–1343 (2013).

27. S.-M. Jung, J.-H. Jang, H.-Y. Kang, et al., “Optical modeling of lenticular array for autostereoscopic displays,” Proc. SPIE 8648, 1 (2013).
