Abstract

In angular-spectrum-method-based computer-generated holography, the zero-padding method is used to convert circular convolution into linear convolution. However, it significantly increases the calculation time and memory usage. In this paper, a fast and simple method based on the intermediate angular-spectrum method is therefore proposed to solve the numerical-convolution issue in hologram generation. By replacing the numerical Fourier transform with an optical Fourier transform in the hologram generation, the calculation speed is approximately 6 times faster than that of the zero-padding method. Owing to the scaling factors introduced by the Fourier lens and the absence of a cropping operation, the reconstruction quality of the proposed method is improved significantly compared with that of the zero-padding method. Moreover, the optical reconstruction system is more compact than the 4-$f$ filter system in on-axis holographic reconstruction. Both numerical simulations and optical experiments have validated the effectiveness of the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holography is regarded as a promising technology for three-dimensional (3-D) display since it can reconstruct the whole optical wave field of a 3-D scene and thus provide all of the 3-D information [1,2]. For electronic holographic display, the hologram can be calculated with the technique of the computer-generated hologram (CGH). Owing to its high diffraction efficiency, the phase-only spatial light modulator (SLM) has become one of the major devices for displaying CGHs [3–5].

However, the pixelated structure of the SLM causes direct current (DC) noise in the reconstructed result, which is harmful to the holographic display [6,7]. To reduce its influence on the reconstructed results, an off-axis holographic setup is often used to filter the DC noise by adding a phase grating [8–10]. However, because most of the energy is concentrated in the DC component, placing the reconstructed image away from the center results in a non-uniform energy distribution in the reconstructed image [11]. Therefore, achieving on-axis holographic reconstruction is helpful for improving the quality of the holographic display. To filter the DC component from the signal, a 4-$f$ optical filter system is usually needed [12–14]. Nevertheless, the optical reconstruction system then becomes bulky and heavy.

Another main factor that restricts the application of the holographic display is the calculation time of the CGH. Many methods have been proposed to accelerate hologram generation [15–22]. In the angular-spectrum method (ASM) based hologram, the fast Fourier transform (FFT) can be used to accelerate the diffraction calculation. Due to the periodicity of the FFT, the FFT-based convolution is a circular convolution. It results in an error in the diffraction field [23,24] and causes a periodic repetition phenomenon in the CGH reconstructed result, which degrades the quality of the reconstructed image. To solve this problem, zero-padding, which surrounds the input image with zeros to expand the calculation window, is usually employed in the diffraction calculation [25–27]. After zero-padding, the circular convolution is converted into a linear convolution, and the correct diffraction and reconstruction results can be obtained. The zero-padding process is also useful in long-distance propagation simulations, where it yields a more accurate diffraction result [28]. However, it increases the calculation time and memory usage in the CGH generation. Moreover, when the zero-padding method is used to generate the hologram, a cropping operation on the diffraction field is usually needed to fit the resolution of the SLM. Since the leftover area also contains some information of the object, the quality of the reconstructed image is degraded.

In this paper, a simple and effective method is proposed to accelerate the generation of the hologram and simplify the optical filter system in on-axis holographic reconstruction. To avoid the numerical circular convolution and the zero-padding, the intermediate angular-spectrum method (IASM) is used to calculate the diffraction field, and the optical Fourier transform is used to replace the numerical FFT, so the complete hologram is obtained at the optical Fourier plane. Because scaling factors are introduced by the optical Fourier lens and the hologram is generated without the cropping operation, a larger and more detailed reconstructed image can be obtained. Moreover, the size of the optical filter system can be reduced since only a Fourier lens and a DC filter are used to filter the DC noise in the optical Fourier plane. Both numerical simulations and optical experiments are performed to demonstrate the effectiveness of the proposed method.

2. Method

2.1 Angular-spectrum method

The ASM decomposes the optical wavefront into many plane waves with different spatial frequencies and superimposes them in the observation plane. The scheme and expression of the ASM are given in Fig.  1 and Eq. (1), respectively.

$$U_z(x_1,\;y_1)=IFFT\Big\{FFT{\bigg\{}U_0(x,\;y){\bigg\}}\cdot H_f(f_x,\;f_y)\Big\},$$
where $U_0(x,\;y)$ and $U_z(x_1,\;y_1)$ represent the amplitude of the input image in the input plane and the complex amplitude distribution in the output plane, respectively. $IFFT$ represents the inverse FFT algorithm. $H_f(f_x,\;f_y)$ is the angular spectrum transform function. The expression of $H_f(f_x,\;f_y)$ is expressed by:
$$H_f(f_x,\;f_y)=exp\Big( ikz\sqrt{1-(\lambda f_x)^2-(\lambda f_y)^2}\Big),$$
where $k$ represents the wave number and equals $2\pi /\lambda$, $\lambda$ is the light wavelength, $z$ is the diffraction distance, and $f_x$, $f_y$ are the spatial frequencies. Assume the sampling numbers of $U_0(x,\;y)$ and $U_z(x_1,\;y_1)$ are both $N\times N$, and denote the sampling intervals of $H_f$ by $\Delta f_x$, $\Delta f_y$. In Eq. (1), the discrete FFT-based convolution is a circular convolution, which results in an error in the diffraction field and the optical reconstruction.
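As a concrete illustration, Eqs. (1) and (2) can be sketched in a few lines of NumPy (a minimal sketch assuming a square $N\times N$ grid with sampling interval `dx`; the function name and signature are ours, not from the paper):

```python
import numpy as np

def asm_propagate(u0, wavelength, z, dx):
    """Angular-spectrum propagation of Eq. (1).

    u0: N x N complex input field, dx: sampling interval (m),
    z: diffraction distance (m). The FFT-based product implements a
    circular convolution, hence the wrap-around error discussed above."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies f_x, f_y
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    # Eq. (2); evanescent components (arg < 0) are suppressed
    hf = np.where(arg >= 0,
                  np.exp(1j * 2 * np.pi / wavelength * z
                         * np.sqrt(np.maximum(arg, 0.0))),
                  0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * hf)
```

Because $H_f$ is a pure phase factor on the propagating band, propagating forward over $z$ and then backward over $-z$ recovers the input field, which is a convenient sanity check of the implementation.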

 

Fig. 1. Angular-spectrum method.


The zero-padding method solves this problem by expanding the calculation window to $2N\times 2N$, which converts the circular convolution into a linear convolution. However, the calculation time and memory usage increase significantly as the resolution of the input image increases. Moreover, the cropping operation in the zero-padding-method-based hologram generation degrades the quality of the reconstructed image. The scheme of the zero-padding-method-based phase-only hologram (POH) generation is shown in Fig. 2.

Firstly, the input image is multiplied by a random phase $exp(i\varphi _R)$ with a range of $[0,2\pi ]$, which is usually used to help encode the amplitude information into the phase-only hologram [29], and the calculation window is extended to $2N\times 2N$ by zero-padding. Next, the diffraction field is calculated by the ASM. Then, the diffraction field is cropped to the $N\times N$ area inside the light red dashed box. After the phase extraction process, the POH inside the light green dashed box is obtained and loaded on the SLM. Because the random phase spreads the object information, the leftover area (blue slashes) also contains some information of the object. Therefore, the cropping operation degrades the quality of the reconstructed image.
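For later comparison, the pipeline of Fig. 2 can be sketched as follows (our own minimal NumPy sketch; the centered placement of the padded image and the evanescent-band handling are assumptions, not taken from the paper):

```python
import numpy as np

def poh_zero_padding(img, wavelength, z, dx, seed=0):
    """Fig. 2 pipeline: random phase -> pad to 2N x 2N -> ASM ->
    crop back to N x N -> extract the phase as the POH."""
    n = img.shape[0]
    rng = np.random.default_rng(seed)
    u0 = img * np.exp(1j * 2 * np.pi * rng.random(img.shape))  # random phase
    pad = np.zeros((2 * n, 2 * n), dtype=complex)
    pad[n // 2:n // 2 + n, n // 2:n // 2 + n] = u0             # zero-padding
    fx = np.fft.fftfreq(2 * n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 - (wavelength * fxx) ** 2
                     - (wavelength * fyy) ** 2, 0.0)
    hf = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(arg))
    uz = np.fft.ifft2(np.fft.fft2(pad) * hf)
    crop = uz[n // 2:n // 2 + n, n // 2:n // 2 + n]  # discards leftover area
    return np.angle(crop)                            # phase-only hologram
```

Note that the discarded leftover area still carries object information spread by the random phase, which is exactly the source of the quality loss discussed above.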

 

Fig. 2. Generation of POH with zero-padding method.


2.2 Intermediate angular-spectrum method

In order to avoid the numerical circular convolution and zero-padding in Eq. (1), we use the optical Fourier transform to substitute the $IFFT$ process. The scheme of the IASM is shown in Fig.  3, and the expression of the IASM is given by Eq. (3).

$$U_i(f_{xi},\;f_{yi})=FFT{\bigg\{}U_0(x,\;y){\bigg\}}\cdot H_{fi}(f_{xi},\;f_{yi}),$$
where $U_0(x,\;y)$ and $U_i(f_{xi},\;f_{yi})$ are the amplitude of the input image in the input plane and the complex amplitude distribution in the IASM result plane, respectively; the sampling numbers of $U_0(x,\;y)$ and $U_i(f_{xi},\;f_{yi})$ are both $N \times N$. $H_{fi}(f_{xi},\;f_{yi})$ represents the transform function of the IASM, and $f_{xi}$, $f_{yi}$ are the spatial frequencies of $H_{fi}$. The expression of $H_{fi}(f_{xi},\;f_{yi})$ is written as:
$$H_{fi}(f_{xi},\;f_{yi})=exp\Big(ikz\sqrt{1-(\lambda f_{xi})^2-(\lambda f_{yi})^2}\Big).$$
The difference between $H_f$ and $H_{fi}$ is that the sampling intervals $\Delta f_{xi}$, $\Delta f_{yi}$ are related to the sampling intervals $\Delta x_{SLM}$, $\Delta y_{SLM}$ of the SLM, the wavelength $\lambda$ of the input light source, and the focal length $f$ of the Fourier transform lens. The sampling intervals $\Delta f_{xi}$, $\Delta f_{yi}$ are defined by:
$$\begin{aligned} \Delta f_{xi}&=\frac{\Delta x_{SLM}}{\lambda f}, \nonumber \\ \Delta f_{yi}&=\frac{\Delta y_{SLM}}{\lambda f}. \end{aligned}$$
Since the transform function $H_{fi}$ is sampled, the aliasing error may be introduced in $H_{fi}$. To consider the aliasing condition of the transform function and for simplicity, the one-dimensional expression of the local spatial frequency $f_l$ of the transform function is given by [25,30]:
$$\begin{aligned}&\phi(f_{xi})=kz\sqrt{1-(\lambda f_{xi})^2}, \nonumber \\ &f_{l}=\frac{1}{2\pi}\frac{\partial \phi}{\partial f_{xi}}=\frac{f_{xi}z}{\sqrt{[\lambda^{{-}2}-f_{xi}^{2}]}}. \end{aligned}$$
The sampling interval $\Delta f_{xi}$ of the transform function $H_{fi}$ should meet the Nyquist theorem:
$$\frac{1}{\Delta f_{xi}}\ge 2|f_{l}|.$$
According to Eqs. (6) and (7), the diffraction distance $z$ is bounded by:
$$z\leq \frac{\sqrt{\lambda^{{-}2}-f_{xi}^2}}{2\Delta f_{xi} f_{xi}},$$
When $f_{xi}$ takes its largest value, $\frac {N\Delta f_{xi}}{2}$, the tightest bound on $z$ is obtained. Combining Eqs. (5) and (8), we obtain the range of $z$:
$$z\leq \frac{\lambda f\cdot\sqrt{4f^2-\Delta x_{SLM}^{2}N^2}}{2\Delta x_{SLM}^{2}N}.$$
To avoid the aliasing error in the transform function, the diffraction distance $z$ should satisfy this condition.
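As a quick check of Eq. (9), plugging in the parameter values used later in the experiments (assumed here: $\lambda$ = 671.0 nm, $f$ = 300.0 mm, an 8.0 µm SLM pixel pitch, and $N$ = 1024) gives the alias-free range of $z$:

```python
import numpy as np

wavelength = 671e-9   # m
f = 300e-3            # focal length of the Fourier lens, m
dx_slm = 8e-6         # SLM sampling interval, m
N = 1024

# Eq. (9): largest diffraction distance free of aliasing in H_fi
z_max = (wavelength * f * np.sqrt(4 * f ** 2 - (dx_slm * N) ** 2)
         / (2 * dx_slm ** 2 * N))
print(round(z_max, 3))   # 0.921 (m), so z = 0.30 m used later is safe
```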

 

Fig. 3. Intermediate angular-spectrum method.


2.3 Generation of the intermediate phase-only hologram and the reconstruction system

The entire process of the intermediate phase-only hologram (IPOH) generation and reconstruction is shown in Fig.  4.

Firstly, the input image is multiplied by the random phase $\varphi _R(x,\;y)$ with a range of $[0,2\pi ]$. Next, the complex amplitude distribution in the IASM result plane is calculated by using the IASM, and the IPOH is obtained by extracting the phase component of the IASM result. Here, a blazed grating is used to achieve off-axis and on-axis holographic reconstruction. Then, the IPOH is loaded on the SLM; after the optical Fourier transform, the complete hologram is obtained in the Fourier plane (DC filter plane). Finally, after the DC filter and further diffraction, a clear image is obtained by the on-axis reconstruction. Besides, the IPOH of a 3-D object can be obtained easily by superposing the IASM results with different $z$.
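The generation step of this pipeline can be sketched as follows (a minimal NumPy sketch of Eqs. (3)–(5); the centered frequency grid and the omission of the blazed grating are our own simplifications):

```python
import numpy as np

def generate_ipoh(img, wavelength, z, f, dx_slm, seed=0):
    """IPOH of Eq. (3): random phase -> one N x N FFT -> multiply by
    H_fi sampled at Delta_f = dx_slm / (lambda * f) -> extract phase.
    No zero-padding, no inverse FFT, no cropping."""
    n = img.shape[0]
    rng = np.random.default_rng(seed)
    u0 = img * np.exp(1j * 2 * np.pi * rng.random(img.shape))   # random phase
    dfi = dx_slm / (wavelength * f)                              # Eq. (5)
    fi = (np.arange(n) - n // 2) * dfi                           # centered grid
    fxx, fyy = np.meshgrid(fi, fi)
    arg = np.maximum(1.0 - (wavelength * fxx) ** 2
                     - (wavelength * fyy) ** 2, 0.0)
    hfi = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(arg)) # Eq. (4)
    ui = np.fft.fftshift(np.fft.fft2(u0)) * hfi                  # Eq. (3)
    return np.angle(ui)                                          # IPOH
```

A multi-plane 3-D object would then be handled by summing the complex IASM results for each depth $z$ before the phase extraction.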

 

Fig. 4. Generation of IPOH and the reconstruction system.


In Fig.  4, the sampling intervals $\Delta x_F$, $\Delta y_F$ at the optical Fourier plane (DC filter plane) are defined by the following expressions:

$$\begin{aligned}\Delta x_{F}&=\frac{\lambda f }{\Delta x_{SLM}\cdot N}, \nonumber \\ \Delta y_{F}&=\frac{\lambda f }{\Delta y_{SLM}\cdot N}, \end{aligned}$$
where $\Delta x_F$, $\Delta y_F$ are the sampling intervals of the IASM-recorded object. The sampling intervals of the ASM-recorded object are $\Delta x$, $\Delta y$, with $\Delta x=\Delta x_{SLM}$, $\Delta y=\Delta y_{SLM}$. According to Eq. (10), the reconstructed image of the IASM is scaled compared with that of the ASM since the expressions of $\Delta x_F$, $\Delta y_F$ and $\Delta x$, $\Delta y$ are different. The scaling factors $S_{x}$, $S_{y}$ are expressed as follows:
$$\begin{aligned}S_{x}&=\frac{\lambda f }{\Delta x_{SLM}\cdot \Delta x\cdot N}=\frac{\lambda f }{(\Delta x_{SLM})^2\cdot N}, \nonumber \\ S_{y}&=\frac{\lambda f }{\Delta y_{SLM}\cdot \Delta y\cdot N}=\frac{\lambda f }{(\Delta y_{SLM})^2\cdot N}. \end{aligned}$$
A scaling factor larger (or smaller) than 1.0 means that the reconstructed image is enlarged (or shrunk) compared with that of the ASM.
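For the experimental parameters used later in Sec. 3 (assumed here), Eq. (11) evaluates to:

```python
# Scaling factor of Eq. (11) with lambda = 671.0 nm, f = 300.0 mm,
# an 8.0-um SLM pixel pitch, and N = 1024 (values assumed from Sec. 3)
wavelength, f, dx_slm, N = 671e-9, 300e-3, 8e-6, 1024
S = wavelength * f / (dx_slm ** 2 * N)
print(round(S, 2))   # 3.07: the reconstructed image is about 3x larger
```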

2.4 Comparison between angular-spectrum method and intermediate angular-spectrum method

Compared with the ASM, the IASM avoids circular convolution without zero-padding. The IASM involves only one $N\times N$ FFT, so the computational complexity of Eq. (3) is proportional to $N^2log_{2}N$, whereas the computational complexity of Eq. (1) with zero-padding is proportional to $8N^2log_{2}2N$. Therefore, the hologram generation speed can be improved significantly.
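These operation counts can be compared directly (a back-of-the-envelope sketch; constant factors and non-FFT work are ignored, which is why the theoretical ratio exceeds the roughly 6-fold measured speed-up reported below):

```python
import math

def ops_zero_padding(n):
    """Forward + inverse FFT on the padded 2N x 2N grid: 8 N^2 log2(2N)."""
    return 8 * n ** 2 * math.log2(2 * n)

def ops_iasm(n):
    """A single N x N FFT: N^2 log2(N)."""
    return n ** 2 * math.log2(n)

ratio = ops_zero_padding(1024) / ops_iasm(1024)
print(ratio)   # 8.8
```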

Moreover, in the optical reconstruction system of the IASM, the Fourier lens likewise plays an important role in the on-axis reconstruction: the DC noise can be easily filtered by one Fourier lens and a DC filter, while the ASM usually needs a 4-$f$ filter system to remove the DC noise in on-axis holographic reconstruction. Compared with the 4-$f$ filter system, the optical filter system of the proposed method is more compact.

In addition, the reconstructed image quality of the IASM should be better than that of the ASM in holographic reconstruction. The reason is that the generation process of the IPOH has no cropping operation to crop the diffraction field to fit the resolution of the SLM.

In the next section, the numerical simulations and optical reconstruction are given to demonstrate the effectiveness of the proposed method.

3. Results

3.1 Numerical simulations

Figure 5 and Table 1 show the calculation times of the zero-padding method and the proposed method in hologram generation. As the resolution of the input image increases, the gap between the two methods grows, as shown in Fig. 5(a). When the resolution of the input image reaches $4096\times 4096$, the zero-padding method consumes about 30 s, while the proposed method takes only about 5 s. Figure 5(b) shows the time ratio of the zero-padding method to the proposed method, which is defined as:

$$Ratio = \frac{T_{zeropadding}}{T_{proposed}}.$$
According to Fig. 5(b), the proposed method is approximately 6 times faster than the zero-padding method. We used a computer with an Intel Core i5 4200M CPU, 16 GB of memory, and the Microsoft Windows 10 operating system; the programming language is Python 3.7. For comparison, the time ratio $Ratio_{zd}$ of the zero-padding method to the double-step Fresnel diffraction method from Ref. [24] is also given in Table 1.

 

Fig. 5. The calculation time and the $Ratio$ of the proposed method and the zero-padding method with different resolution of the input image.



Table 1. Calculation Times and Time Ratio

We combined the random phase integration (RPI) method [31] with the proposed method and the zero-padding method to compare the quality of the reconstructed images. Figure 6(a) is the input image. The numerical simulation conditions of $\Delta x_{SLM}$, $\Delta y_{SLM}$, $\lambda$, and focal length $f$ are 8.0 µm, 8.0 µm, 671.0 nm, and 300.0 mm, respectively. The resolution of the input image and the diffraction distance are $1024\times 1024$ and 0.30 m, respectively. In the RPI method, the sub-holograms are generated by using different random phases, and $N_s$ denotes the number of generated sub-holograms. The reconstruction result is obtained by integrating the sub-hologram reconstructed images, and the number of integrated sub-holograms equals $N_s$. Figures 6(b) and 6(c) show the reconstructed images of the zero-padding method and the proposed method with $N_s=1$, respectively.

 

Fig. 6. Numerical reconstruction results: (a) input image, reconstructed image with (b) zero-padding method and (c) proposed method.


From the red and blue boxes in Figs. 6(b) and 6(c), we can see that the proposed method has a better reconstruction quality. This is because the cropping operation on the diffraction field in the zero-padding method degrades the quality of the reconstructed image, while the proposed method has no cropping operation. Note that the simulation results of the proposed method and the zero-padding method did not consider the influence of the DC filter; the DC filter will decrease the reconstructed image quality of both methods.

The peak signal-to-noise ratio (PSNR) and the speckle noise contrast $C$ are used to evaluate the quality of the reconstructed image. The PSNR for an 8-bit gray-level image is defined as:

$$PSNR=10log_{10}\Bigg\{\frac{255^2}{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}{\bigg(}I_0(i,\;j)-I_r(i,\;j){\bigg)}^2}\Bigg\},$$
where $M$ and $N$ are the horizontal and vertical numbers of pixels; in this case, $M=N$. $I_0$ and $I_r$ are the original and reconstructed images, respectively. A higher PSNR generally indicates a higher reconstruction quality [9]. The speckle noise contrast $C$ is defined by:
$$C=\frac{\sigma}{\mu},$$
where $\sigma$ and $\mu$ are the standard deviation and the mean value of the amplitude distribution in the red box (as shown in Figs. 6(b) and 6(c)) of the reconstructed image, respectively. A lower value of $C$ usually indicates a better reconstruction result [32].
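Both metrics are straightforward to compute (a NumPy sketch; the function names are ours):

```python
import numpy as np

def psnr(i0, ir):
    """PSNR of Eq. (13) for 8-bit gray-level images."""
    mse = np.mean((i0.astype(float) - ir.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def speckle_contrast(amp):
    """Speckle contrast C of Eq. (14): std / mean of the amplitude."""
    amp = np.asarray(amp, dtype=float)
    return np.std(amp) / np.mean(amp)
```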

Figures 7(a) and 7(b) show the PSNR and $C$ of the reconstructed images with the proposed method and the zero-padding method as $N_s$ increases. From Figs. 7(a) and 7(b), we can see that the image quality of the proposed method is improved significantly compared with the zero-padding method, and the quality improves as $N_s$ increases. Figures 7(c) and 7(d) show the calculation time and the time ratio of the proposed method and the zero-padding method as $N_s$ increases, respectively. The calculation speed of the proposed method is approximately 4 times faster than that of the zero-padding method when combined with the RPI method.

 

Fig. 7. Numerical results: (a) PSNR and (b) speckle contrast of the reconstructed results of zero-padding method and proposed method with different $N_s$. (c) The calculation time and (d) time ratio of the zero-padding method and proposed method with different $N_s$.


3.2 Optical experiments

The optical reconstruction system is shown in Fig. 8. The phase-only SLM is provided by the Xi’an Institute of Optics and Precision Mechanics. The sampling interval and resolution of the SLM are 8.0 µm and $1920\times 1080$ (we use only a $1024\times 1024$ area), respectively. The frame rate and the phase modulation range of the SLM are 60 Hz and [0, 2$\pi$], respectively. The wavelength of the input light source is 671.0 nm, and the focal length of the Fourier lens is 300.0 mm. The DC filter is cross-shaped because the DC component in the DC filter plane is cross-shaped; the white and black areas of the DC filter block and pass the light, respectively. We display the pre-calculated sub-holograms on the SLM sequentially and use a complementary metal-oxide semiconductor (CMOS) camera (Nikon D810 without lens) to capture the reconstructed image.

 

Fig. 8. Schematic of optical setup.


Figure 9 shows the optical results of the proposed method and the zero-padding method in on-axis reconstruction with $N_s=1$. The camera’s exposure time and international organization for standardization (ISO) speed are 1/60 s and 64, respectively, and the reconstruction distance is $z=0.3$ m. Figures 9(a) and 9(b) show the on-axis reconstruction results of the zero-padding method without the 4-$f$ filter system and the proposed method without the DC filter, respectively. Figures 9(c) and 9(d) show the on-axis reconstruction results of the zero-padding method with the 4-$f$ filter system and the proposed method with the DC filter, respectively. From Fig. 9, we can see that both the 4-$f$ filter system and the DC filter can remove the DC component completely. Because of the scaling factors $S_x=S_y=3.07$, the reconstructed image of the proposed method is about 3 times larger than that of the zero-padding method. From the green and blue boxes in Figs. 9(c) and 9(d), we can see that the quality of the proposed method is better than that of the zero-padding method.

 

Fig. 9. Optical results of the zero-padding method and proposed method. (a) and (c) are the reconstruction results of the zero-padding method without and with 4-$f$ filter system, respectively. (b) and (d) are the reconstruction results of the proposed method without and with DC filter, respectively.


Figure 10 shows the reconstructed images of the zero-padding method and the proposed method in on-axis reconstruction when $N_s$ is 10, 20, and 30. The green and blue boxes show the details of the reconstruction results, and the reconstruction distance is $z=0.3$ m. Figures 10(a)–10(c) and Figs. 10(d)–10(f) show the reconstructed images of the zero-padding method and the proposed method, respectively. The camera’s exposure times for columns 1–3 of Fig. 10 are 1/6 s, 1/3 s, and 1/2 s, respectively, and the camera’s ISO speed is 64.

 

Fig. 10. Optical results of the zero-padding method and proposed method with RPI method. (a)-(c) Reconstruction results of the zero-padding method when the exposure time is 1/6 s, 1/3 s, 1/2 s, respectively. (d)-(f) Reconstruction results of the proposed method when exposure time is 1/6 s, 1/3 s, 1/2 s, respectively.


From Fig. 10, we can see that the quality of the proposed method improves as $N_s$ increases and is much better than that of the zero-padding method. In Figs. 10(a)–10(c), there are some aberrations in the reconstructed image, which are probably caused by the 4-$f$ filter system, while in Figs. 10(d)–10(f) the aberrations are small since a more compact optical reconstruction system is used.

Figure 11 shows the 3-D reconstructed results of the proposed method in on-axis holographic reconstruction with $N_s=30$. The camera’s exposure time and ISO speed are 1/2 s and 64, respectively. The reconstruction distance $z$ of Figs. 11(a)–11(d) is 0.26 m, 0.28 m, 0.30 m, and 0.32 m, respectively. As shown in Fig. 11, the 3-D object is successfully reconstructed in on-axis holographic reconstruction, the DC noise is fully removed by the DC filter, and the details of the reconstructed image are clear.

 

Fig. 11. 3-D reconstructed results of the proposed method in on-axis holographic reconstruction with the reconstruction distance: (a) $z=0.26$ m, (b) $z=0.28$ m, (c) $z=0.30$ m and (d) $z=0.32$ m.


4. Discussion

As shown above, the proposed method can obtain a better reconstruction result compared with the zero-padding method. However, the quality of the proposed method relies on the focal length of the Fourier lens: different focal lengths lead to different reconstruction results. Figure 12 shows the quality of the reconstructed results of the proposed method with different focal lengths when the complex amplitude is used in the reconstruction. The PSNR and the structural similarity (SSIM) are used to measure the quality of the reconstructed image; the details of the SSIM can be found in Ref. [33].

 

Fig. 12. Reconstruction quality of the proposed method with different focal length: (a) PSNR and (b) SSIM of the reconstruction results.


From Figs. 12(a) and 12(b), we can see that a larger focal length yields a better reconstruction result, and the reconstruction quality decreases as the diffraction distance $z$ increases. To explain the reason, assume there are two objects with different sampling intervals $\Delta x_1$, $\Delta x_2$ and equal numbers of sampling points, where object 2 is larger than object 1, as shown in Fig. 13. The maximal spatial frequency $f_{max}$ of the input object is given by [17]:

$$\begin{aligned}f_{max} &= \frac{sin\theta}{\lambda} = \frac{1}{2\Delta x}, \nonumber \\ sin\theta &= \frac{\lambda}{2\Delta x}. \end{aligned}$$
According to Eq. (15), different sampling intervals correspond to different maximal light spread angles $\theta$ of the object. Object 1 has a larger $\theta$ than object 2, so the diffraction plane must be close enough to object 1 to receive all of its information; as the diffraction distance increases, more object information is lost, as shown in Fig. 13(a). Object 2 has a smaller $\theta$ and a larger size; therefore, its diffraction distance can be set longer than that of object 1 while still receiving the high spatial frequencies, as shown in Fig. 13(b).

According to Eq. (10), a different focal length leads to a different sampling interval and size of the recorded object. Therefore, according to the above analysis, a larger focal length leads to a better reconstruction quality.

 

Fig. 13. Objects with different sampling intervals.


However, to keep the size of the system as small as possible, we cannot use a very large focal length, since the system size increases as the focal length increases. Moreover, in off-axis holographic reconstruction, the additional optical Fourier lens means that the size of the optical reconstruction system is no longer an advantage. In addition, the diffraction distance $z$ needs to satisfy Eq. (9) and should not be chosen very close to 0, since the DC filter is set up in the plane of $z=0$ m.

Besides, the IASM has the characteristic that the amplitude distribution of the IASM result is similar to that of the Fourier transform, which has very high contrast. Thus, to load the hologram on an 8-bit gray-level SLM, the input image needs to be multiplied by a random phase to smooth the amplitude distribution in the result plane. This characteristic means that the proposed method cannot be combined with optimization methods that do not require a random phase, such as the double-phase method [9]. However, it can be combined with optimization methods that require a random phase, such as the pixel separation method [32], to further improve the quality of the reconstructed image.

5. Conclusion

In this work, we have proposed a fast and simple hologram generation method based on the IASM. The proposed method accelerates the hologram generation by approximately 6 times. With a Fourier lens, the quality of the reconstructed image is improved significantly compared with the zero-padding method. Moreover, the size of the DC filter system is reduced in on-axis holographic reconstruction. Furthermore, other convolution-based diffraction calculations, such as scaled diffraction algorithms, can likewise use the proposed method to improve the calculation speed. In future works, the process could be further sped up by combining it with a graphics-processing unit (GPU), which is useful for real-time high-resolution holographic display. Additionally, the distance between the SLM and the Fourier lens could be further reduced by illuminating with converging light, which would benefit integrating the holographic display system into a more compact device.

Funding

National Key Research and Development Program of China (2017YFB1002900); National Natural Science Foundation of China (U1933132); Chengdu Science and Technology Program (2019-GH02-00070-HZ).

Acknowledgments

The authors would like to thank Prof. Yi-Ping Cao for some helpful suggestions.

References

1. F. Yaras, H. Kang, and L. Onural, “State of the Art in Holographic Displays: A Survey,” J. Disp. Technol. 6(10), 443–454 (2010).

2. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

3. L. B. Lesem, P. M. Hirsch, and J. A. Jordan, “The Kinoform: A New Wavefront Reconstruction Device,” IBM J. Res. Dev. 13(2), 150–155 (1969).

4. Y.-Z. Liu, J.-W. Dong, Y.-Y. Pu, B.-C. Chen, H.-X. He, and H.-Z. Wang, “High-speed full analytical holographic computations for true-life scenes,” Opt. Express 18(4), 3345–3351 (2010).

5. H. Dammann and K. Görtler, “High-efficiency in-line multiple imaging by means of multiple phase holograms,” Opt. Commun. 3(5), 312–315 (1971).

6. D. Palima and V. R. Daria, “Holographic projection of arbitrary light patterns with a suppressed zero-order beam,” Appl. Opt. 46(20), 4197–4201 (2007).

7. P. Sun, S. Chang, S. Liu, X. Tao, C. Wang, and Z. Zheng, “Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm,” Opt. Express 26(8), 10140–10151 (2018).

8. H. Zhang, J. Xie, J. Liu, and Y. Wang, “Elimination of a zero-order beam induced by a pixelated spatial light modulator for holographic projection,” Appl. Opt. 48(30), 5834–5841 (2009).

9. Y. Qi, C. Chang, and J. Xia, “Speckleless holographic display by complex modulation based on double-phase method,” Opt. Express 24(26), 30368–30378 (2016).

10. C. Chen, J. Wang, D. Xiao, and Q.-H. Wang, “Fast method for ringing artifacts reduction in random phase-free kinoforms,” Appl. Opt. 58(5), A13–A20 (2019).

11. J. Cho, S. Kim, S. Park, B. Lee, and H. Kim, “DC-free on-axis holographic display using a phase-only spatial light modulator,” Opt. Lett. 43(14), 3397–3400 (2018).

12. M. Agour, E. Kolenovic, C. Falldorf, and C. von Kopylow, “Suppression of higher diffraction orders and intensity improvement of optically reconstructed holograms from a spatial light modulator,” J. Opt. A: Pure Appl. Opt. 11(10), 105405 (2009).

13. H. Zhang, Q. Tan, and G. Jin, “Holographic display system of a three-dimensional image with distortion-free magnification and zero-order elimination,” Opt. Eng. 51(7), 075801 (2012).

14. X. Wang, H. Zhang, L. Cao, and G. Jin, “Generalized single-sideband three-dimensional computer-generated holography,” Opt. Express 27(3), 2612–2620 (2019).

15. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993).

16. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008).

17. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009).

18. T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, Y. Endo, T. Kakue, T. Shimobaba, and T. Ito, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. 1(4), 254–259 (2018).

19. N. Masuda, T. Ito, T. Tanaka, A. Shiraki, and T. Sugie, “Computer generated holography using a graphics processing unit,” Opt. Express 14(2), 603–608 (2006).

20. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015).

21. J.-S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015).

22. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013).

23. T. Shimobaba, T. Takahashi, Y. Yamamoto, T. Nishitsuji, A. Shiraki, N. Hoshikawa, T. Kakue, and T. Ito, “Efficient diffraction calculations using implicit convolution,” OSA Continuum 1(2), 642–650 (2018).

24. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013).

25. K. Matsushima and T. Shimobaba, “Band-Limited Angular Spectrum Method for Numerical Simulation of Free-Space Propagation in Far and Near Fields,” Opt. Express 17(22), 19662–19673 (2009).

26. J.-P. Liu, “Controlling the aliasing by zero-padding in the digital calculation of the scalar diffraction,” J. Opt. Soc. Am. A 29(9), 1956–1964 (2012).

27. J. Jia, J. Si, and D. Chu, “Fast two-step layer-based method for computer generated hologram using sub-sparse 2D fast Fourier transform,” Opt. Express 26(13), 17487–17497 (2018).

28. X. Yu, T. Xiahui, Q. Y. xiong, P. Hao, and W. Wei, “Wide-window angular spectrum method for diffraction propagation in far and near field,” Opt. Lett. 37(23), 4943–4945 (2012).

29. T. Shimobaba, T. Kakue, Y. Endo, R. Hirayama, D. Hiyama, S. Hasegawa, Y. Nagahama, M. Sano, M. Oikawa, T. Sugie, and T. Ito, “Random phase-free kinoform for large objects,” Opt. Express 23(13), 17269–17274 (2015).

30. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996), chap. 2.2.

31. M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, “Simple holographic projection in color,” Opt. Express 20(22), 25130–25136 (2012).

32. M. Makowski, “Minimized speckle noise in lens-less holographic projection by pixel separation,” Opt. Express 21(24), 29205–29216 (2013). [CrossRef]  

33. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

1. F. Yaras, H. Kang, and L. Onural, “State of the Art in Holographic Displays: A Survey,” J. Disp. Technol. 6(10), 443–454 (2010).

2. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

3. L. B. Lesem, P. M. Hirsch, and J. A. Jordan, “The Kinoform: A New Wavefront Reconstruction Device,” IBM J. Res. Dev. 13(2), 150–155 (1969).

4. Y.-Z. Liu, J.-W. Dong, Y.-Y. Pu, B.-C. Chen, H.-X. He, and H.-Z. Wang, “High-speed full analytical holographic computations for true-life scenes,” Opt. Express 18(4), 3345–3351 (2010).

5. H. Dammann and K. Görtler, “High-efficiency in-line multiple imaging by means of multiple phase holograms,” Opt. Commun. 3(5), 312–315 (1971).

6. D. Palima and V. R. Daria, “Holographic projection of arbitrary light patterns with a suppressed zero-order beam,” Appl. Opt. 46(20), 4197–4201 (2007).

7. P. Sun, S. Chang, S. Liu, X. Tao, C. Wang, and Z. Zheng, “Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm,” Opt. Express 26(8), 10140–10151 (2018).

8. H. Zhang, J. Xie, J. Liu, and Y. Wang, “Elimination of a zero-order beam induced by a pixelated spatial light modulator for holographic projection,” Appl. Opt. 48(30), 5834–5841 (2009).

9. Y. Qi, C. Chang, and J. Xia, “Speckleless holographic display by complex modulation based on double-phase method,” Opt. Express 24(26), 30368–30378 (2016).

10. C. Chen, J. Wang, D. Xiao, and Q.-H. Wang, “Fast method for ringing artifacts reduction in random phase-free kinoforms,” Appl. Opt. 58(5), A13–A20 (2019).



Figures (13)

Fig. 1. Angular-spectrum method.
Fig. 2. Generation of POH with the zero-padding method.
Fig. 3. Intermediate angular-spectrum method.
Fig. 4. Generation of IPOH and the reconstruction system.
Fig. 5. The calculation time and the $Ratio$ of the proposed method and the zero-padding method with different resolutions of the input image.
Fig. 6. Numerical reconstruction results: (a) input image; reconstructed images with (b) the zero-padding method and (c) the proposed method.
Fig. 7. Numerical results: (a) PSNR and (b) speckle contrast of the reconstructed results of the zero-padding method and the proposed method with different $N_s$. (c) Calculation time and (d) time ratio of the zero-padding method and the proposed method with different $N_s$.
Fig. 8. Schematic of the optical setup.
Fig. 9. Optical results of the zero-padding method and the proposed method. (a) and (c) are the reconstruction results of the zero-padding method without and with the 4-$f$ filter system, respectively. (b) and (d) are the reconstruction results of the proposed method without and with the DC filter, respectively.
Fig. 10. Optical results of the zero-padding method and the proposed method with the RPI method. (a)-(c) Reconstruction results of the zero-padding method with exposure times of 1/6 s, 1/3 s, and 1/2 s, respectively. (d)-(f) Reconstruction results of the proposed method with exposure times of 1/6 s, 1/3 s, and 1/2 s, respectively.
Fig. 11. 3-D reconstruction results of the proposed method in on-axis holographic reconstruction at reconstruction distances: (a) $z=0.26$ m, (b) $z=0.28$ m, (c) $z=0.30$ m, and (d) $z=0.32$ m.
Fig. 12. Reconstruction quality of the proposed method with different focal lengths: (a) PSNR and (b) SSIM of the reconstruction results.
Fig. 13. Objects with different sampling intervals.

Tables (1)

Table 1. Calculation Times and Time Ratio

Equations (15)


$$U_z(x_1,y_1)=\mathrm{IFFT}\left\{\mathrm{FFT}\left\{U_0(x,y)\right\}H_f(f_x,f_y)\right\},\tag{1}$$
$$H_f(f_x,f_y)=\exp\!\left(ikz\sqrt{1-(\lambda f_x)^2-(\lambda f_y)^2}\right),\tag{2}$$
$$U_i(f_{xi},f_{yi})=\mathrm{FFT}\left\{U_0(x,y)\right\}H_{fi}(f_{xi},f_{yi}),\tag{3}$$
$$H_{fi}(f_{xi},f_{yi})=\exp\!\left(ikz\sqrt{1-(\lambda f_{xi})^2-(\lambda f_{yi})^2}\right).\tag{4}$$
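Equations (1)–(2) can be sketched directly in NumPy; the function and variable names below are illustrative, not the paper's code. With `zero_pad=True` the field is embedded in a grid twice as large before the FFT, so the circular convolution behaves as a linear one (the zero-padding method the paper compares against); `zero_pad=False` skips that step.

```python
import numpy as np

def angular_spectrum(u0, wavelength, z, dx, zero_pad=True):
    """Angular-spectrum propagation of a square complex field u0 over
    distance z, per Eqs. (1)-(2). Sketch only; names are illustrative."""
    n0 = u0.shape[0]
    if zero_pad:
        # Embed in a 2x grid so the FFT convolution is linear, not circular
        n = 2 * n0
        u = np.zeros((n, n), dtype=complex)
        u[n0 // 2:n0 // 2 + n0, n0 // 2:n0 // 2 + n0] = u0
    else:
        n, u = n0, u0
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)           # spatial frequencies, FFT order
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    # Transfer function H_f; evanescent components (arg < 0) are discarded
    hf = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.abs(arg))), 0.0)
    uz = np.fft.ifft2(np.fft.fft2(u) * hf)
    if zero_pad:
        uz = uz[n0 // 2:n0 // 2 + n0, n0 // 2:n0 // 2 + n0]  # crop back
    return uz
```

Without zero-padding a uniform plane wave is an eigenfield of the transfer function, which gives a quick sanity check: the output amplitude stays uniform and only a global phase $e^{ikz}$ is applied.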
$$\Delta f_{xi}=\frac{\Delta x_{SLM}}{\lambda f},\qquad \Delta f_{yi}=\frac{\Delta y_{SLM}}{\lambda f}.\tag{5}$$
$$\phi(f_{xi})=kz\sqrt{1-(\lambda f_{xi})^2},\qquad f_l=\frac{1}{2\pi}\frac{\partial\phi}{\partial f_{xi}}=\frac{-f_{xi}z}{\sqrt{\lambda^{-2}-f_{xi}^2}}.\tag{6}$$
$$\frac{1}{\Delta f_{xi}}\geq 2\left|f_l\right|.\tag{7}$$
$$z\leq\frac{\sqrt{\lambda^{-2}-f_{xi}^2}}{2\,\Delta f_{xi}\,f_{xi}},\tag{8}$$
$$z\leq\frac{\lambda f\sqrt{4f^2-\Delta x_{SLM}^2N^2}}{2\,\Delta x_{SLM}^2N}.\tag{9}$$
$$\Delta x_F=\frac{\lambda f}{\Delta x_{SLM}N},\qquad \Delta y_F=\frac{\lambda f}{\Delta y_{SLM}N},\tag{10}$$
$$S_x=\frac{\lambda f}{\Delta x_{SLM}\,\Delta x\,N}=\frac{\lambda f}{(\Delta x_{SLM})^2N},\qquad S_y=\frac{\lambda f}{\Delta y_{SLM}\,\Delta y\,N}=\frac{\lambda f}{(\Delta y_{SLM})^2N}.\tag{11}$$
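Equations (9)–(11) are closed-form expressions that can be evaluated directly. The sketch below uses hypothetical parameter values (wavelength, focal length, SLM pitch, and sample count are example numbers, not taken from the paper); note that the scaling factor of Eq. (11) is just the Fourier-plane interval of Eq. (10) divided by the SLM pitch.

```python
import numpy as np

# Hypothetical example parameters (illustration only)
wavelength = 532e-9   # wavelength lambda (m)
f = 0.3               # focal length of the Fourier lens (m)
dx_slm = 8e-6         # SLM pixel pitch (m)
N = 1080              # number of samples per side

# Eq. (9): upper bound on the propagation distance z for alias-free sampling
z_max = wavelength * f * np.sqrt(4 * f**2 - (dx_slm * N)**2) \
        / (2 * dx_slm**2 * N)

# Eq. (10): sampling interval on the Fourier plane
dx_f = wavelength * f / (dx_slm * N)

# Eq. (11): scaling factor introduced by the Fourier lens
s_x = wavelength * f / (dx_slm**2 * N)
```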
$$Ratio=\frac{T_{zero\text{-}padding}}{T_{proposed}}.\tag{12}$$
$$PSNR=10\log_{10}\!\left\{\frac{255^2}{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left(I_0(i,j)-I_r(i,j)\right)^2}\right\},\tag{13}$$
$$C=\frac{\sigma}{\mu},\tag{14}$$
$$f_{max}=\frac{\sin\theta}{\lambda}=\frac{1}{2\Delta x},\qquad \sin\theta=\frac{\lambda}{2\Delta x}.\tag{15}$$
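The quality metrics of Eqs. (13)–(14) translate into a few lines of NumPy; this is a minimal sketch assuming 8-bit intensity images (peak value 255), with illustrative function names.

```python
import numpy as np

def psnr(i0, ir):
    """Peak signal-to-noise ratio per Eq. (13), for 8-bit images."""
    mse = np.mean((np.asarray(i0, float) - np.asarray(ir, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def speckle_contrast(img):
    """Speckle contrast C = sigma / mu per Eq. (14)."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()
```

A perfectly uniform reconstruction gives a speckle contrast of zero, so lower values of $C$ indicate less speckle noise.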
