
Complex amplitude field reconstruction in atmospheric turbulence based on deep learning


Abstract

In this paper, we use deep neural networks (DNNs) to simultaneously reconstruct the amplitude and phase information of a complex light field transmitted through atmospheric turbulence. The amplitude and phase reconstruction results of four different training methods are compared comprehensively. The obtained results indicate that the complex amplitude field is reconstructed most accurately when the amplitude and phase pattern pairs are fed into the neural network as two channels during training.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

When light transmits through a random medium such as atmospheric turbulence, the output field becomes disordered. In atmospheric turbulence, propagating light is affected by inhomogeneous disturbances of the atmospheric refractive index, caused by small random changes of temperature and other factors, which lead to random distortions of the field. Atmospheric turbulence effects include phase fluctuation, beam wander, beam scintillation, beam spreading, and so on; they cause energy loss and diffusion of the light field, leaving the output field chaotic. Although these output light fields look entirely different from the input light fields, the information is still contained in the output light field, from which the input light field can be reconstructed. The amplitude and phase information of the light field are of great significance in many applications, such as biomedical imaging [1–4], optical sensing [5,6], and three-dimensional imaging [7,8]. As the information that can be directly captured by human eyes, the importance of amplitude information is self-evident, while the phase carries deeper information, such as contour and internal structure.

Several traditional approaches have been proposed to reconstruct the information of a light field distorted by atmospheric turbulence or to compensate for atmospheric turbulence effects. For instance, the transmission matrix (TM) method [9] characterizes the input–output relation of a fixed scattering medium as a linear shift-variant matrix to reconstruct the light field, yet it is limited by the difficulty of large-scale TM measurements. The phase conjugation method [10], which reverses the phase of the distorted wavefront to compensate for the distorted light field, is also efficient, although it requires a critical alignment process. Adaptive optics (AO) systems, including wavefront sensor (WFS)-based systems [11] and WFS-less systems [12], are capable of compensating for turbulence-distorted light fields. However, WFS-based systems suffer from the high cost and low resolution of the WFS, while WFS-less systems generally apply iterative wavefront-retrieval algorithms to detect the wavefront of the distorted field and therefore require many iterations. Fortunately, deep learning alleviates these shortcomings to a certain extent and provides a new way to reconstruct or compensate distorted light fields. Remarkably, since computational imaging based on deep learning was first proposed, it has been developed and applied in diverse research areas, such as phase recovery [13–15], scattering imaging [16–20], multi-mode fiber imaging [21–25], digital holography [26,27], computational ghost imaging [28,29], and biomedical imaging [30,31].

Many researchers have used deep learning to reconstruct distorted light fields. Although different training methods have been applied to achieve high reconstruction performance [32–34], deep learning has mostly been used for image reconstruction, that is, amplitude reconstruction or phase reconstruction alone. In this paper, we reconstruct the complex amplitude field after atmospheric turbulence transmission based on deep learning. To the best of our knowledge, there is no clear conclusion on which training method achieves the better complex amplitude field reconstruction. We use four different training methods to simultaneously reconstruct the amplitude and phase information, and compare the amplitude and phase reconstruction comprehensively. Our work may provide guidance on the choice of a proper training method for deep-learning-based reconstruction of turbulence-distorted complex amplitude fields.

2. Methods

2.1 Data acquisition

The binary MNIST handwritten digits database [35] and the EMNIST letters database [36] are used as the amplitude and phase information of the input light field, respectively. The modulated complex input light field is transmitted through atmospheric turbulence simulated in MATLAB, and the amplitude and phase information of the output light field are collected. These complex input–output field pairs constitute the dataset used to train the DNN model.

The refractive index of the atmosphere changes randomly due to random variations of temperature, pressure, and other factors, resulting in an uneven refractive index distribution. This is why the light field is prone to distortion during atmospheric turbulence transmission. Because of the randomness of the atmospheric refractive index, turbulence is usually described by statistical characteristics. When atmospheric turbulence is considered locally isotropic, the refractive index structure function of atmospheric turbulence ${D_n}(r )$ can be expressed as [37]

$${D_n}(r )= C_n^2{r^{2/3}}\; \; \; \; ({{l_0} \ll r \ll {L_0}} ), $$
where $C_n^2$ is the atmospheric refractive index structure constant, which characterizes the turbulence intensity, r is the distance between two points in the turbulence, and ${l_0}$ and ${L_0}$ are the inner and outer scales of the turbulence, respectively. From this, the Kolmogorov power spectral density function follows [38]
$${\Phi _n}(\kappa )= 0.033C_n^2{\kappa ^{ - 11/3}}\; \; \; ({1/{L_0} \ll \kappa \ll 1/{l_0}} ), $$
where $\kappa $ represents the spatial wave number.

In atmospheric turbulence transmission, although both amplitude and phase are distorted, the influence of phase distortion is much stronger than that of amplitude distortion. Therefore, we use random phase screens to simulate atmospheric turbulence. Based on the power spectral density function of atmospheric turbulence, the Fourier transform method [39] is commonly used to generate turbulence phase screens. First, a complex Gaussian random number matrix with zero mean and unit variance is filtered by the atmospheric phase power spectrum function ${\Phi _n}({{\kappa _x},{\kappa _y}} )$, and the random phase screen is then obtained by an inverse Fourier transform. The atmospheric phase power spectrum function ${\Phi _n}({{\kappa _x},{\kappa _y}} )$ is related to ${\Phi _n}(\kappa )$ by

$${\Phi _n}({{\kappa _x},{\kappa _y}} )= 2\pi k_0^2\Delta z{\Phi _n}(\kappa ), $$
where $\Delta z$ is the thickness of the turbulent layer represented by one phase screen and ${k_0}$ is the wave number of the field. However, the random phase screen generated by the Fourier transform method lacks low-frequency content. To solve this problem, a low-frequency subharmonics method has been proposed [40]. The subharmonics method reduces the sampling interval in the low-frequency region to obtain a subharmonic phase screen, and it compensates for the missing low frequencies by adding this subharmonic screen to the phase screen generated by the Fourier transform method alone.
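To make the recipe concrete, the following is a minimal NumPy sketch of the Fourier transform method described above. The grid size `N`, grid spacing `dx`, and the spectral normalization convention are assumptions for illustration, and the low-frequency subharmonic correction [40] is omitted for brevity.

```python
import numpy as np

def kolmogorov_phase_screen(N=256, dx=0.01, Cn2=1.5e-13, dz=320.0,
                            wl=632.5e-9, rng=None):
    """One turbulence phase screen via the Fourier transform method [39].

    Complex Gaussian noise is filtered by the phase power spectrum
    Phi_n(kx, ky) = 2*pi*k0^2*dz * 0.033*Cn2*kappa^(-11/3)
    and inverse-FFT'd back to real space. Subharmonics [40] are omitted.
    """
    rng = rng or np.random.default_rng()
    k0 = 2 * np.pi / wl                       # optical wave number
    dk = 2 * np.pi / (N * dx)                 # spectral grid spacing (rad/m)
    k = np.fft.fftfreq(N, d=dx) * 2 * np.pi
    kxx, kyy = np.meshgrid(k, k)
    kappa = np.hypot(kxx, kyy)
    kappa[0, 0] = 1.0                         # placeholder; DC removed below
    phi = 2 * np.pi * k0**2 * dz * 0.033 * Cn2 * kappa**(-11 / 3)
    phi[0, 0] = 0.0                           # spectrum undefined at DC
    noise = (rng.standard_normal((N, N)) +
             1j * rng.standard_normal((N, N))) / np.sqrt(2)
    # Unnormalized inverse DFT of the filtered noise; the scaling below is
    # one common convention -- normalizations differ between references.
    screen = np.fft.ifft2(noise * np.sqrt(phi)) * N**2 * dk
    return screen.real                        # phase screen in radians
```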

We use the Kolmogorov power spectrum inversion method together with the low-frequency subharmonics method to generate atmospheric turbulence phase screens. For the overall transmission, we numerically simulate atmospheric turbulence with the multiple phase-screen method. Under the assumption that the phase change caused by refractive index fluctuations is small enough, free-space propagation and the turbulence-induced phase change can be treated as two independent processes within each step. The propagation of light in atmospheric turbulence is thus simplified into two alternating parts, i.e., propagation in free space and a phase change caused by turbulence, and the output light field is obtained by repeating these two steps, as shown in Fig. 1.
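A split-step sketch of this multiple phase-screen procedure, building on the `kolmogorov_phase_screen` helper above, is shown below. The angular spectrum method is one standard choice for the free-space legs; the grid spacing `dx` is again an assumed illustrative value.

```python
def angular_spectrum(field, dz, dx, wl):
    """Free-space propagation over distance dz (angular spectrum method)."""
    N = field.shape[0]
    k0 = 2 * np.pi / wl
    k = np.fft.fftfreq(N, d=dx) * 2 * np.pi
    kxx, kyy = np.meshgrid(k, k)
    kz = np.sqrt(np.maximum(k0**2 - kxx**2 - kyy**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def transmit(field, n_screens=5, dz=320.0, final_leg=100.0,
             dx=0.01, wl=632.5e-9, Cn2=1.5e-13):
    """Alternate free-space legs and phase screens (Fig. 1(e)):
    five 320 m legs, each followed by a screen, then 100 m to the output."""
    for _ in range(n_screens):
        field = angular_spectrum(field, dz, dx, wl)
        field = field * np.exp(1j * kolmogorov_phase_screen(
            field.shape[0], dx, Cn2, dz, wl))
    return angular_spectrum(field, final_leg, dx, wl)
```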

Fig. 1. Schematic diagram of complex amplitude field reconstruction in atmospheric turbulence based on deep learning. (a) Amplitude of the modulated input field. (b) Phase of the modulated input field. (c) Amplitude of the output field. (d) Phase of the output field. (e) Simulation of atmospheric turbulence propagation. The multiple phase-screen method simplifies the atmospheric turbulence transmission into two alternating parts: propagation in free space and a phase change (phase screen). The modulated input field ((a), (b)) is transmitted through atmospheric turbulence to obtain the output field ((c), (d)). The output field is fed into the neural network, whose output is the reconstructed field.


The simulated output field is then used as the input of the neural network, and the amplitude and phase patterns of the modulated input field are the targets of the network output. After training on the simulated dataset, the neural network attempts to reconstruct the amplitude and phase patterns of the modulated input field.

The original MNIST handwritten digits and EMNIST letters are 28×28 pixels. For the reconstruction, data with higher resolution are preferred, so we resize the data from 28×28 to 256×256 pixels and reconstruct at 256×256 pixels. The dataset contains 12000 input–output field pairs in total, of which 10000 pairs are selected for model training; of these, 90% form the training set and 10% the validation set. The remaining 2000 input–output field pairs form the test set used to evaluate the final trained model.
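As a concrete illustration of this preprocessing, the sketch below resizes one digit/letter pair into a complex field and splits the index set. The bilinear resampling and the phase modulation depth of π are assumptions, since the paper does not specify them.

```python
import numpy as np
from PIL import Image

def prepare_input_field(digit_28, letter_28):
    """Resize a 28x28 MNIST digit (amplitude) and EMNIST letter (phase)
    to 256x256 and combine them into one complex modulated input field."""
    up = lambda a: np.asarray(
        Image.fromarray(a).resize((256, 256), Image.BILINEAR),
        dtype=float) / 255.0
    return up(digit_28) * np.exp(1j * np.pi * up(letter_28))

# 12000 pairs total: 10000 for training (90% train / 10% validation),
# the remaining 2000 held out as the test set.
idx = np.random.permutation(12000)
train_idx, val_idx, test_idx = idx[:9000], idx[9000:10000], idx[10000:]
```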

2.2 DNN architecture and four different training methods

The DNN uses the U-net architecture, and its input is a preprocessed amplitude or phase pattern of 256×256 pixels. The architecture consists of two main parts, a contracting path and an expansion path, as illustrated in Fig. 2. The input first passes through the contracting path to extract features, and the resolution is then restored through the expansion path. The last layer is a convolution layer that maps the output to the target size with 1×1 filters. In this network structure, the number of input and output channels can be changed flexibly. We use the mean-square error (MSE) as the loss function and Adam as the optimizer.
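The paper does not publish its network code, so the following PyTorch sketch is only a minimal U-net in the spirit of Fig. 2: a contracting path, an expansion path with skip connections, a final 1×1 convolution, and flexible input/output channel counts. The depth and channel widths are assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolution + ReLU layers, as in a standard U-net stage."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class UNet(nn.Module):
    """Minimal U-net: contracting path, expansion path with skips,
    and a final 1x1 convolution to the target channel count."""
    def __init__(self, c_in=2, c_out=2, base=32):
        super().__init__()
        self.enc1 = ConvBlock(c_in, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.enc3 = ConvBlock(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ConvBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ConvBlock(base * 2, base)
        self.head = nn.Conv2d(base, c_out, 1)   # 1x1 filter sets output size
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```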


Fig. 2. U-net structure. The left side of the dotted line is the contracting path and the right side is the expansion path. The number of channels in the input and output layers can be changed according to the needs of the training method.


The amplitude and phase patterns of the output field transmitted through atmospheric turbulence are the inputs of the neural network model, and the network output is the reconstructed amplitude and phase of the modulated input field. The handwritten digits and letters used to modulate the input field serve as the ground truth. Four training methods are compared below; a data-preparation sketch follows the list.

Method 1: The amplitude and phase pattern pairs of the output field transmitted through atmospheric turbulence are fed into the DNN model simultaneously as two input channels, and the output of the DNN model also has two channels, corresponding to the amplitude and phase pattern pairs of the reconstructed field.

Method 2: The amplitude and phase patterns are mixed into a single dataset, which is shuffled before training the DNN model; the amplitude and phase patterns are then reconstructed indiscriminately by the same model.

Method 3: The amplitude and phase patterns are reconstructed independently. The amplitude pattern of the output field is fed into one DNN model, while the phase pattern is fed into a second DNN model with the same structure. These two identical models reconstruct the amplitude and phase patterns of the input field, respectively.

Method 4: The amplitude and phase patterns of the output field are spliced horizontally, from 256×256 pixels to 256×512 pixels. The handwritten digits and letters used to modulate the input field are spliced horizontally in the same way to serve as the ground truth. The output of the DNN model is divided horizontally into two parts, which are the simultaneously reconstructed amplitude and phase patterns of the input field.
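As referenced above, the sketch below shows how the training tensors could be assembled for each method. The array names are hypothetical, and `N` is kept small here for illustration (the paper uses 10000 training pairs).

```python
import numpy as np

# Hypothetical arrays: distorted output-field patterns (network inputs) and
# the modulating digit/letter patterns (ground truth), shape (N, 256, 256).
N = 8                                            # small here; paper uses 10000
amp_out, pha_out = np.random.rand(N, 256, 256), np.random.rand(N, 256, 256)
amp_gt,  pha_gt  = np.random.rand(N, 256, 256), np.random.rand(N, 256, 256)

# Method 1: amplitude and phase stacked as two channels -> (N, 2, 256, 256)
x1, y1 = np.stack([amp_out, pha_out], 1), np.stack([amp_gt, pha_gt], 1)

# Method 2: one shuffled single-channel dataset mixing both pattern types
perm = np.random.permutation(2 * N)
x2 = np.concatenate([amp_out, pha_out])[perm, None]
y2 = np.concatenate([amp_gt, pha_gt])[perm, None]

# Method 3: two independent single-channel datasets for two separate models
x3a, y3a = amp_out[:, None], amp_gt[:, None]     # amplitude model
x3b, y3b = pha_out[:, None], pha_gt[:, None]     # phase model

# Method 4: horizontal splice -> single channel of shape (N, 1, 256, 512)
x4 = np.concatenate([amp_out, pha_out], axis=2)[:, None]
y4 = np.concatenate([amp_gt, pha_gt], axis=2)[:, None]
```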

To compare the four training methods fairly, the DNN model structure (except the number of channels in the input and output layers) and the hyperparameters, including the batch size, number of training epochs, and learning rate, are kept the same.
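A matching training-loop sketch with the MSE loss and Adam optimizer follows. The batch size, learning rate, and epoch count are hypothetical, as the paper states only that they were kept identical across the four methods; the sketch assumes the `UNet` class and the Method 1 arrays `x1`, `y1` from above.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

model = UNet(c_in=2, c_out=2)                  # Method 1 channel configuration
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # hypothetical lr
loss_fn = torch.nn.MSELoss()
loader = DataLoader(
    TensorDataset(torch.as_tensor(x1, dtype=torch.float32),
                  torch.as_tensor(y1, dtype=torch.float32)),
    batch_size=4, shuffle=True)                # hypothetical batch size

for epoch in range(100):                       # hypothetical epoch count
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)          # MSE between output and target
        loss.backward()
        optimizer.step()
```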

3. Results and discussions

The incident light wavelength of the simulated datasets is 632.5 nm, and the complex amplitude field propagates 1.6 km through atmospheric turbulence. A random phase screen is placed every 320 m to simulate the phase fluctuation, i.e., $\Delta z$ is equal to 320 m. The phase screens are 256×256 pixels, and the distance from the last phase screen to the output plane is 100 m, giving a total transmission distance of 1.7 km.

As shown in Fig. 3, we simulate the transmission at two different turbulence intensities and obtain two datasets, one with turbulence intensity $C_n^2 = 1.5 \times {10^{ - 13}}{m^{ - 2/3}}$ and the other with $C_n^2 = 3 \times {10^{ - 13}}{m^{ - 2/3}}$. In these two datasets, the input fields are modulated identically, but the output fields are completely different owing to the different turbulence intensities.

Fig. 3. Two datasets simulated for different turbulence intensities. (a) Amplitude of the modulated input field. (b) Phase of the modulated input field. (c) Amplitude of the output field when $C_n^2 = 1.5 \times {10^{ - 13}}{m^{ - 2/3}}$. (d) Phase of the output field when $C_n^2 = 1.5 \times {10^{ - 13}}{m^{ - 2/3}}$. (e) Amplitude of the output field when $C_n^2 = 3 \times {10^{ - 13}}{m^{ - 2/3}}$. (f) Phase of the output field when $C_n^2 = 3 \times {10^{ - 13}}{m^{ - 2/3}}$.


The training and validation loss curves of the four training methods are depicted in Fig. 4. As the number of training epochs increases, both the training and validation losses decrease, and the loss curves tend toward stable values.

Fig. 4. Training and validation loss curves of the four training methods. (a) Training loss curves on the dataset with $C_n^2 = 1.5 \times {10^{ - 13}}{m^{ - 2/3}}$. (b) Validation loss curves on the dataset with $C_n^2 = 1.5 \times {10^{ - 13}}{m^{ - 2/3}}$. (c) Training loss curves on the dataset with $C_n^2 = 3 \times {10^{ - 13}}{m^{ - 2/3}}$. (d) Validation loss curves on the dataset with $C_n^2 = 3 \times {10^{ - 13}}{m^{ - 2/3}}$.


We use the test sets to evaluate the network models trained by the four different training methods. Some reconstruction results from the test sets of the two turbulence intensities are shown in Fig. 5. Comparing the DNN reconstructions with the ground truth shows that, whether the turbulence intensity is $1.5 \times {10^{ - 13}}{m^{ - 2/3}}$ or $3 \times {10^{ - 13}}{m^{ - 2/3}}$, all four training methods are able to reconstruct the amplitude and phase patterns.

Fig. 5. Reconstruction results from the two test sets. (a) Reconstruction results from the test set with $C_n^2 = 1.5 \times {10^{ - 13}}{m^{ - 2/3}}$. (b) Reconstruction results from the test set with $C_n^2 = 3 \times {10^{ - 13}}{m^{ - 2/3}}$.


Visually, although the two simulated datasets differ, the comparative conclusions across the methods are similar. For amplitude reconstruction, comparing the details of the reconstructed patterns shows that Method 1 is the most accurate. For phase reconstruction, Method 1 is again the most accurate, with Method 3 second.

As shown in Fig. 5, phase reconstruction is more difficult than amplitude reconstruction, and the gap between the four methods is more obvious for phase. Moreover, as the turbulence intensity increases, the overall reconstruction difficulty increases and the gap between the four methods widens further. In short, when the reconstruction task is easy, all four methods perform well; when it becomes hard, the differences between the methods become significant.

The Pearson correlation coefficient (PCC) [41] and multi-scale structural similarity (MS-SSIM) index [42] are used to evaluate the reconstruction fidelity of the four methods quantitatively. The PCC measures reconstruction fidelity by computing the degree of correlation between the ground truth and the reconstructed pattern, while MS-SSIM computes the structural similarity between the ground truth and the reconstructed pattern at multiple scales, which makes it closer to the subjective evaluation of human eyes. The higher the similarity, the closer the PCC and MS-SSIM values are to 1. The average PCC and MS-SSIM indices of the amplitude and phase reconstructed on the test sets by the models trained with the four methods are summarized in Table 1.
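A sketch of how these two indices could be computed is given below. The PCC is implemented directly in NumPy, while MS-SSIM uses the third-party `pytorch_msssim` package, which is an assumed dependency (the paper does not name its implementation).

```python
import numpy as np
import torch
from pytorch_msssim import ms_ssim   # assumed third-party dependency

def pcc(gt, rec):
    """Pearson correlation coefficient between two patterns."""
    g = gt.ravel() - gt.mean()
    r = rec.ravel() - rec.mean()
    return float(g @ r / (np.linalg.norm(g) * np.linalg.norm(r)))

def ms_ssim_index(gt, rec):
    """Multi-scale SSIM between two patterns scaled to [0, 1]."""
    t = lambda a: torch.as_tensor(a, dtype=torch.float32)[None, None]
    return float(ms_ssim(t(gt), t(rec), data_range=1.0))
```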


Table 1. PCC and MS-SSIM Indices of Four Training Methods for Two Turbulence Intensities

The comparison of the PCC and MS-SSIM indices is consistent with the visual comparison. Combining all the results, Method 1 shows the best reconstruction performance, Method 3 ranks second, and Method 4 is slightly better than Method 2.

In both simulated datasets, Method 1 gives the best reconstruction of both amplitude and phase. During light-field transmission, the amplitude and phase are not completely independent: the amplitude information affects the output phase, and the phase information affects the output amplitude. Method 1 feeds the amplitude and phase into the model as two channels, so the model receives both kinds of information at the same time. In each convolution layer, the two channels carrying amplitude and phase information are convolved with a filter to produce two feature maps, which are summed to form that filter's output feature map. This multichannel convolution mixes the amplitude and phase information thoroughly, so each influences the reconstructed amplitude and phase, as the sketch below illustrates. Method 1 is therefore more consistent with the actual transmission process, which we believe is why it achieves the best reconstruction of both amplitude and phase.
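As a minimal illustration of this channel mixing, the snippet below verifies that a two-channel convolution equals the sum of the per-channel convolutions, so every feature map blends amplitude and phase information. The tensors are random stand-ins.

```python
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1, bias=False)
amp_ch = torch.randn(1, 1, 8, 8)      # stand-in amplitude channel
pha_ch = torch.randn(1, 1, 8, 8)      # stand-in phase channel
both = torch.cat([amp_ch, pha_ch], dim=1)

# Each filter convolves both channels and sums the two feature maps,
# so amplitude and phase information mix in every output feature map:
mixed = (F.conv2d(amp_ch, conv.weight[:, :1], padding=1) +
         F.conv2d(pha_ch, conv.weight[:, 1:], padding=1))
assert torch.allclose(conv(both), mixed, atol=1e-6)
```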

In Method 2, both amplitude and phase reconstruction affect the training of the model parameters, but the amplitude and phase information do not influence each other's pattern reconstruction.

In Method 3, the amplitude and phase reconstructions are fully independent, handled by two separate models that cannot influence each other at all. However, Method 3 uses two DNN models and thus requires more resources, which may be why it only ranks second.

In Method 4, the amplitude and phase information enter the model concurrently as in Method 1, but because they are spliced horizontally, most of the information is not mixed; there is little information exchange between amplitude and phase.

4. Conclusion

In summary, we simultaneously reconstruct the amplitude and phase of a light field transmitted through atmospheric turbulence, using four different deep learning training methods. The training method that reconstructs the complex amplitude field most accurately is to feed the amplitude and phase pattern pairs into the DNN model as two channels (Method 1). Using two independent models to reconstruct the amplitude and phase separately (Method 3) ranks second and outperforms training on patterns spliced horizontally from amplitude and phase (Method 4). The least accurate is training on a mixed amplitude-and-phase dataset (Method 2), which reconstructs amplitude and phase without distinction. The obtained results provide a useful basis for choosing a proper training method for deep-learning-assisted reconstruction of turbulence-distorted complex amplitude fields. Considering the reported atmospheric turbulence transmission experiment [43], a further experimental demonstration of the proposed complex amplitude field reconstruction is expected with future improvements.

Funding

National Natural Science Foundation of China (62125503); Key R&D Program of Hubei Province of China (2020BAA007, 2020BAB001); Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20200109114018750); Fundamental Research Funds for the Central Universities (2019kfyRCPY037).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. X. Chen, Y. Zhu, M. Sun, D. Li, Q. Mu, and L. Xuan, “Apodized coherent transfer function constraint for partially coherent Fourier ptychographic microscopy,” Opt. Express 27(10), 14099 (2019). [CrossRef]  

3. J. Sun, Y. Zhang, C. Zuo, Q. Chen, S. Feng, Y. Hu, and J. Zhang, “Coded multi-angular illumination for Fourier ptychography based on Hadamard codes,” Proc. SPIE 9524, 95242C (2015). [CrossRef]  

4. G. Popescu, “The power of imaging with phase, not power,” Phys. Today 70(5), 34–40 (2017). [CrossRef]  

5. J. Liang, B. Grimm, S. Goelz, and J. F. Bille, “Objective measurement of wave aberrations of the human eye with the use of a Hartmann-Shack wave-front sensor,” J. Opt. Soc. Am. A 11(7), 1949–1957 (1994). [CrossRef]  

6. P. Artal, “Optics of the eye and its impact in vision: A tutorial,” Adv. Opt. Photonics 6(3), 340–367 (2014). [CrossRef]  

7. G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in three-dimensional imaging,” Adv. Opt. Photonics 4(4), 472–553 (2012). [CrossRef]  

8. N. Chen, C. Zuo, E. Lam, and B. Lee, “3D Imaging Based on Depth Measurement Technologies,” Sensors 18(11), 3711 (2018). [CrossRef]  

9. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]  

10. I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, “Focusing and scanning light through a multimode optical fiber using digital phase conjugation,” Opt. Express 20(10), 10583–10590 (2012). [CrossRef]  

11. Y. Ren, G. Xie, H. Huang, C. Bao, Y. Yan, N. Ahmed, M. P. J. Lavery, B. I. Erkmen, S. Dolinar, M. Tur, M. A. Neifeld, M. J. Padgett, R. W. Boyd, J. H. Shapiro, and A. E. Willner, “Adaptive optics compensation of multiple orbital angular momentum beams propagating through emulated atmospheric turbulence,” Opt. Lett. 39(10), 2845–2848 (2014). [CrossRef]  

12. Y. Liang, X. Su, C. Cai, L. Wang, J. Liu, H. Wang, and J. Wang, “Adaptive turbulence compensation and fast auto-alignment link for free-space optical communications,” Opt. Express 29(24), 40514–40523 (2021). [CrossRef]  

13. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

14. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

15. Y. Xue, S. Cheng, Y. Li, and L. Tian, “Reliable deep-learning-based phase imaging with uncertainty quantification,” Optica 6(5), 618–629 (2019). [CrossRef]  

16. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5(10), 1181–1190 (2018). [CrossRef]  

17. B. Rahmani, D. Loterie, E. Kakkava, N. Borhani, U. Teğin, D. Psaltis, and C. Moser, “Actor neural networks for the robust control of partially measured nonlinear systems showcased for image propagation through diffuse media,” Nat. Mach. Intell. 2(7), 403–410 (2020). [CrossRef]  

18. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5(7), 803–813 (2018). [CrossRef]  

19. Y. Li, S. Cheng, Y. Xue, and L. Tian, “Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network,” Opt. Express 29(2), 2244–2257 (2021). [CrossRef]  

20. M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photonics 1(3), 036002 (2019). [CrossRef]  

21. P. Caramazza, O. Moran, R. Murray-Smith, and D. Faccio, “Transmission of natural scene images through a multimode fibre,” Nat. Commun. 10(1), 2029 (2019). [CrossRef]  

22. J. Zhao, X. Ji, M. Zhang, and X. Wang, “High-fidelity Imaging through Multimode Fibers via Deep Learning,” J. Phys. Photonics 3(1), 015003 (2021). [CrossRef]  

23. L. Zhang, R. Xu, H. Ye, K. Wang, B. Xu, and D. Zhang, “High definition images transmission through single multimode fiber using deep learning and simulation speckles,” Opt. Lasers Eng. 140, 106531 (2021). [CrossRef]  

24. B. Rahmani, D. Loterie, G. Konstantinou, D. Psaltis, and C. Moser, “Multimode optical fiber transmission with a deep learning network,” Light Sci. Appl. 7(1), 69 (2018). [CrossRef]  

25. P. Fan, T. Zhao, and L. Su, “Deep learning the high variability and randomness inside multimode fibers,” Opt. Express 27(15), 20241–20258 (2019). [CrossRef]  

26. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

27. E. Y. Lam and T. Zeng, “Computational Imaging in Digital Holographic Reconstruction with Machine Learning,” in Proceedings of IEEE International Conference on Computational Electromagnetics (IEEE, 2020), pp. 77–78.

28. J. Li, M. Le, J. Wang, W. Zhang, B. Li, and J. Peng, “Object identification in computational ghost imaging based on deep learning,” Appl. Phys. B 126(10), 166 (2020). [CrossRef]  

29. S. Liu, X. Meng, Y. Yin, H. Wu, and W. Jiang, “Computational ghost imaging based on an untrained neural network,” Opt. Lasers Eng. 147, 106744 (2021). [CrossRef]  

30. R. Manwar, X. Li, S. Mahmoodkalayeh, E. Asano, D. Zhu, and K. Avanaki, “Deep Learning Protocol for Improved Photoacoustic Brain Imaging,” J. Biophotonics 13(10), e202000212 (2020). [CrossRef]  

31. M. Mirbagheri, A. Jodeiri, N. Hakimi, V. Zakeri, and S. K. Setarehdan, “Accurate Stress Assessment based on functional Near Infrared Spectroscopy using Deep Learning Approach,” in Proceedings of 26th National and 4th International Iranian Conference on Biomedical Engineering (IEEE, 2019), pp. 4–10.

32. J. Liu, P. Wang, X. Zhang, Y. He, X. Zhou, H. Ye, Y. Li, S. Xu, S. Chen, and D. Fan, “Deep learning based atmospheric turbulence compensation for orbital angular momentum beam distortion and communication,” Opt. Express 27(12), 16671–16688 (2019). [CrossRef]  

33. Y. Zhai, S. Fu, J. Zhang, X. Liu, H. Zhou, and C. Gao, “Turbulence aberration correction for vector vortex beams using deep neural networks on experimental data,” Opt. Express 28(5), 7515–7527 (2020). [CrossRef]  

34. H. Wang, X. Yang, Z. Liu, J. Pan, Y. Meng, Z. Shi, Z. Wan, H. Zhang, Y. Shen, X. Fu, and Q. Liu, “Deep-learning-based recognition of multi-singularity structured light,” Nanophotonics 11(4), 779–786 (2022). [CrossRef]  

35. Y. LeCun, C. Cortes, and C. Burges, “MNIST handwritten digit database,” http://yann.lecun.com/exdb/mnist/.

36. G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “EMNIST: an extension of MNIST to handwritten letters,” arXiv:1702.05373v1 (2017).

37. A. N. Kolmogorov, “The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers,” Proc. R. Soc. Lond. A 434(1890), 9–13 (1991). [CrossRef]  

38. L. C. Andrews and R. L. Phillips, Laser Beam Propagation Through Random Media, 2nd ed. (SPIE, 2005).

39. B. L. McGlamery, “Restoration of Turbulence-Degraded Images,” J. Opt. Soc. Am. 57(3), 293–297 (1967). [CrossRef]  

40. R. Frehlich, “Simulation of laser propagation in a turbulent atmosphere,” Appl. Opt. 39(3), 393–397 (2000). [CrossRef]  

41. A. A. Goshtasby, Image Registration: Principles, Tools and Methods (Springer, 2012).

42. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers (IEEE, 2003), pp. 1398–1402.

43. S. Zhao, H. Yang, Y. Li, F. Cao, Y. Sheng, W. Cheng, and L. Gong, “The influence of atmospheric turbulence on holographic ghost imaging using orbital angular momentum entanglement: Simulation and experimental studies,” Opt. Commun. 294, 223–228 (2013). [CrossRef]  
