Optica Publishing Group

Experimental optical encryption based on random mask encoding and deep learning

Open Access

Abstract

We present an experimental scheme for optical encryption based on random mask encoding and a deep learning technique. A phase image is encrypted into a speckle pattern by random amplitude modulation during optical transmission. Before decryption, a neural network model is used to learn the mapping between the pure-phase object and the speckle image, rather than characterizing the filter film used in the scheme explicitly or parametrically. The random binary mask is made from a polyethylene terephthalate film, and 2500 object–speckle pairs are used for training. The experimental results demonstrate that the proposed deep-learning-based scheme can be used as a random-binary-mask encrypted image processor, which quickly outputs the primary image with high quality from the cyphertext.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

22 March 2022: A correction was made to the funding section.

1. Introduction

Optical image encryption techniques based on double random phase encoding (DRPE) play an important role in the field of information security due to their inherent advantage of high-speed parallel processing of complex multi-dimensional data [1–8]. To improve the performance of the encryption system, some researchers have proposed fully-phase image encryption methods to encode pure-phase images [9–13], and computer simulations demonstrate that fully phase-based encryption can outperform traditional DRPE-based encryption under noisy conditions [9,10]. However, DRPE-based techniques are vulnerable to various attacks because of the linearity and symmetry of these encryption systems. Phase-truncated Fourier transform (PTFT) based encryption systems have also been proposed to further enhance security [14]. Unfortunately, some DRPE-based cryptosystems, especially the fully-phase encryption methods, are experimentally difficult to verify because they usually require a complex interferometer configuration with spatial light modulators (SLMs) for displaying the phase-encoded image and random phase masks. Besides, the decoding process of these DRPE-based cryptosystems usually requires very precise alignment [2,3].

Recently, deep learning techniques have been employed to analyze the security of optical image encryption systems. Some conventional encryption systems based on DRPE [15], computer-generated holograms [16], interference-based encryption [17], joint transform correlator encryption [18] and ghost imaging [19] were found to be vulnerable to neural networks trained on ciphertext–plaintext pairs [20–29]. It should be pointed out that different attack methods not only test the security of cryptosystems but also provide new ideas for designing new optical systems for image encryption [30]. In addition, some experimental setups for traditional optical image encryption are cumbersome and sensitive to vibrations. For example, a complex holographic scheme is needed to record the complex-valued encrypted data in the DRPE system, and precise alignment of the phase code is required for decryption. Thus, it is worth considering how to take advantage of this new technique for practical and secure image encryption [31,32].

In this paper, we propose an experimental optical image encryption scheme based on deep learning. In the optical experiment, the pure-phase objects are placed behind a random binary mask (RBM) made of polyethylene terephthalate (PET), and the speckle pattern, i.e., the cyphertext, is captured by a charge-coupled device (CCD) after the object beam passes through the RBM. In the decryption process, a modified end-to-end convolutional neural network (CNN) model based on a U-type network is used to recover the original image from the cyphertext. The CNN model, trained on a series of object and encrypted image pairs, allows high-quality recovery of object images from cyphertexts. In the proposed cryptosystem, the keys used for encryption differ from the keys used for decryption, and the decryption process is not the inverse of the encryption process. Unlike most previously proposed traditional methods, a deep learning technique is adopted to obtain a pre-trained model that achieves high-quality decryption and a high level of security.

The rest of this paper is organized as follows. The principle of the proposed method is described in Section 2. The experimental results and discussions are given in Section 3. Finally, concluding remarks are given in Section 4.

2. Principle and method

The optical experimental setup we built for fully-phase encryption and decryption is shown in Fig. 1. A spatial light modulator (DHC, GCI-770402) is illuminated by a collimated and expanded beam from a pumped semiconductor laser (MGL-III-532, wavelength: 532 nm); the sampling interval and resolution of the SLM are $6.3\,\mathrm{\mu m} \times 6.3\,\mathrm{\mu m}$ and $1280 \times 720$, respectively. A pure-phase object image $\textrm{exp} [{j\mathrm{\pi }o({x,y} )} ]$ is loaded onto the SLM to modulate the incident beam. The light beam carrying the pure-phase object information then propagates over a distance $d_1$ in free space and passes through a PET film. A CCD detector placed at a distance $d_2$ on the other side captures the speckle patterns diffracted by the PET film.
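The paper does not give an explicit propagation model, but the forward encryption channel (SLM phase object, free-space propagation over $d_1$, random binary mask, free-space propagation over $d_2$, intensity detection) can be simulated numerically. The following is a minimal numpy sketch using the angular-spectrum method; the grid size, test object, and random mask are illustrative assumptions, not the authors' actual data.

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Free-space propagation of field u0 over distance z (metres)
    via the angular-spectrum method; evanescent components are cut off."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))

# forward model sketch: phase object -> d1 -> binary mask -> d2 -> |.|^2
n, dx, wl = 256, 6.3e-6, 532e-9                       # SLM pixel pitch, 532 nm
obj = np.zeros((n, n)); obj[96:160, 96:160] = 1.0     # toy binary object o(x,y)
u = np.exp(1j * np.pi * obj)                          # pure-phase object on SLM
u = angular_spectrum(u, wl, dx, 0.20)                 # d1 = 20 cm
rng = np.random.default_rng(1)
mask = rng.integers(0, 2, size=(n, n)).astype(float)  # random binary mask (RBM)
u = angular_spectrum(u * mask, wl, dx, 0.10)          # d2 = 10 cm
speckle = np.abs(u) ** 2                              # intensity cyphertext
```

The distances match the values reported in Section 3; the mask statistics of the real PET film are unknown, so a 0/1 Bernoulli mask stands in for it here.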


Fig. 1. Optical experimental arrangement for fully-phase encryption and decryption.


In the experiment, images of $28 \times 28$ pixels from the MNIST and Fashion-MNIST databases [33,34] are used as pure-phase object images, since they are readily available and widely used in deep learning research. Before being fed to the SLM, these images are resized to $512 \times 512$ and zero-padded. At the receiver end, a CCD camera digitally records the diffraction patterns. We use the central $512 \times 512$ portion of the data captured by the CCD.

Information recovery from noisy optical transmission is challenging because the collected data are usually ambiguous and strongly ill-conditioned. With the development of computing hardware and algorithms, deep neural networks (DNNs) have in recent years been applied successfully to various inverse problems in imaging, overcoming the stagnation and slow convergence of traditional phase retrieval algorithms [35,36]. In this work, an end-to-end CNN based on U-net is used to recover the primary pure-phase objects from the cyphertexts. The overall architecture follows the encoder–decoder U-net framework, with some modifications. The modified network architecture is depicted in Fig. 2(a). We replace the convolutional layers with dense blocks to strengthen feature propagation. Besides, an additional convolution layer at the output further increases the depth and nonlinearity of the network. In the training process, the cyphertexts and original images are set as the inputs and outputs of the proposed model, respectively. The input layer generates feature maps of size $64 \times 64 \times 24$ from the input speckle pattern by a convolution with kernel size $3 \times 3$ and 24 filters. Each convolved image is successively decimated in the encoder path, which consists of six dense blocks connected by transition blocks. Each dense block contains four layers, and each layer consists of batch normalization (BN), a rectified linear unit (ReLU), and a convolutional layer (Conv) with filter size $3 \times 3$, as shown in Fig. 2(b). Let $[{{F_{i,0}},{F_{i,1}}, \cdots ,{F_{i,c - 1}}} ]$ denote the channel-wise concatenation of the feature maps of layers 0 through $c-1$ into one tensor. The $c$th layer of the dense block can then be described as

$${F_{i,c}} = {H_{i,c}}({[{{F_{i,0}},{F_{i,1}}, \cdots ,{F_{i,c - 1}}} ]} ),$$
where ${H_{i,c}}({\cdot} )$ stands for a non-linear transformation, i.e., a combination of three consecutive operations: BN, ReLU and Conv.
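To illustrate the concatenation rule in Eq. (1), the sketch below builds a four-layer dense block in numpy. The learned BN + ReLU + $3 \times 3$ Conv transform $H_{i,c}$ is replaced by a fixed random $1 \times 1$ projection (`toy_layer`, an assumption made purely to keep the example self-contained and light), so only the channel bookkeeping matches the real network.

```python
import numpy as np

def toy_layer(x, growth=24, seed=0):
    """Stand-in for H_{i,c} (BN + ReLU + 3x3 Conv): maps the concatenated
    feature stack (H, W, C_in) to `growth` new channels with a fixed
    random 1x1 projection followed by ReLU."""
    rng = np.random.default_rng(seed)
    h, w, c_in = x.shape
    proj = rng.standard_normal((c_in, growth)) / np.sqrt(c_in)
    out = x.reshape(-1, c_in) @ proj                   # 1x1 "conv" as matmul
    return np.maximum(out, 0.0).reshape(h, w, growth)  # ReLU

def dense_block(x0, num_layers=4, growth=24):
    """F_c = H([F_0, ..., F_{c-1}]): each layer sees all earlier maps."""
    feats = [x0]
    for c in range(num_layers):
        concat = np.concatenate(feats, axis=-1)        # channel concatenation
        feats.append(toy_layer(concat, growth, seed=c))
    return np.concatenate(feats, axis=-1)

rng0 = np.random.default_rng(42)
x0 = rng0.standard_normal((8, 8, 24))                  # toy 24-channel input
y = dense_block(x0)                                    # 24 + 4*24 = 120 channels
```

With four layers and a growth rate of 24, the block's output carries the input's 24 channels plus $4 \times 24$ new ones, which is the feature-reuse property dense connectivity provides.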


Fig. 2. The structure of CNN. (a) The architecture of the CNN. (b) The 4-layer dense block.


The transition block consists of BN, ReLU, and a convolutional layer with filter size $3 \times 3$, followed by an average pooling layer with kernel size $2 \times 2$ and stride 2. The successively decimated signal then passes through the decoder path and additional “BN + convolution” blocks. In the decoder path, there are six dense blocks connected by upsampling convolutional layers, each increasing the spatial dimensions of its input by a factor of 2. Finally, a convolutional layer followed by the last layer produces the network outputs.
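The spatial bookkeeping of the encoder's $2 \times 2$ average pooling and the decoder's factor-2 upsampling can be sketched as follows (numpy, illustration only; in the real network these resampling steps are wrapped by learned convolutions):

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling with stride 2 on an (H, W, C) stack,
    as in the transition blocks of the encoder path."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling by a factor of 2, mirroring the
    dimension increase in the decoder path."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16.0).reshape(4, 4, 1)  # toy single-channel feature map
p = avg_pool2(x)                      # (2, 2, 1): each entry is a 2x2 block mean
u = upsample2(p)                      # back to (4, 4, 1)
```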

Let the input of the $i$th layer in the proposed CNN structure be ${x_{i - 1}}$. The resultant output of this layer can be expressed as

$${x_i} = \textrm{ReLU}[{\textrm{BN}({{w_i}{x_{i - 1}} + {b_i}} )} ],$$
where ${w_i}$ is the weight and ${b_i}$ is the bias. The activation function ReLU is given by $R(x) = \max(0, x)$. In this work, the mean squared error (MSE) is used as the loss function to evaluate the similarity between the output and the ground truth; its mathematical expression is
$$L(\Theta )= \frac{1}{K}\sum\limits_{k = 1}^K {{\big \|}{{y^k} - Y({x_0^k;\Theta } )} {\big \|}} _2^2,$$
where $x_0^k$ and ${y^k}$ represent the encrypted image and the ground truth, respectively, and $K$ is the number of images in each minibatch. $\Theta$ denotes the weights and biases of the neural network, i.e., $\Theta = \{{{w_i},{b_i}}\}$, $i = 1,2,\ldots,15$. $Y({\cdot} )$ represents the output image of the last layer of the neural network.
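Equation (3) translates directly into a minibatch MSE; a short numpy sketch:

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """L(Theta) = (1/K) * sum_k ||y^k - Y(x_0^k; Theta)||_2^2
    over a minibatch of K images (first axis)."""
    k = y_true.shape[0]
    diff = (y_true - y_pred).reshape(k, -1)
    return float(np.sum(diff ** 2) / k)

# toy check: two 3x3 images, prediction off by 1 everywhere
y_true = np.zeros((2, 3, 3))
y_pred = np.ones((2, 3, 3))
loss = mse_loss(y_true, y_pred)   # 9 squared errors per image, averaged over K=2
```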

The adaptive moment estimation (Adam) optimization algorithm [37] is applied in the training process, with a constant learning rate of 0.001. The neural network is implemented in the TensorFlow framework, and training is performed on an NVIDIA GTX 1650 GPU.
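For reference, a single Adam update [37] with the paper's learning rate of 0.001 can be sketched in numpy as follows; the 1-D quadratic objective is a toy stand-in, not the actual network training.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba): exponentially decayed first/second
    moment estimates with bias correction at step t (t starts at 1)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimise f(theta) = theta^2 (gradient 2*theta) from theta = 5
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 20001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

Because the bias-corrected step magnitude is roughly the learning rate while the gradient sign is consistent, the iterate drifts toward the minimum at about 0.001 per step, which is why a small constant learning rate still trains quickly in practice.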

3. Results and discussion

Optical experiments were carried out to verify the feasibility of the proposed method for fully-phase encryption and decryption. The propagation distances are ${d_1} = 20\textrm{ cm}$ and ${d_2} = 10\textrm{ cm}$. 2500 pure-phase objects were selected randomly from the MNIST handwritten digit database, resized to $512 \times 512$, and zero-padded. We sequentially loaded the pure-phase objects onto the SLM and captured their corresponding speckle patterns with the CCD detector. The central $512 \times 512$ portion of the captured data is further reduced by a factor of 4, so the input and output images of the neural network are both $128 \times 128$ pixels. In the training process, 2000 object–speckle pattern pairs are used for training and the other 500 pairs are used as test data. The number of training epochs is set to 100, and training the CNN model takes about 6.0 hours.
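The paper does not spell out the exact resizing code, so the pipeline below is one plausible reading, sketched in numpy: the $28 \times 28$ object is upscaled by pixel repetition (factor 16, giving $448 \times 448$) and zero-padded to $512 \times 512$, and the central $512 \times 512$ camera crop is reduced by a factor of 4 via $4 \times 4$ block averaging.

```python
import numpy as np

def prepare_object(img28, up=16, out=512):
    """Upscale a 28x28 image by pixel repetition and zero-pad to out x out.
    The repetition factor and symmetric padding are assumptions."""
    big = img28.repeat(up, axis=0).repeat(up, axis=1)  # 448 x 448
    pad = (out - big.shape[0]) // 2                    # 32 pixels per side
    return np.pad(big, pad)

def prepare_speckle(cam512, factor=4):
    """Reduce the central 512x512 camera crop by block averaging
    (one common way to downsample by an integer factor)."""
    n = cam512.shape[0] // factor
    return cam512.reshape(n, factor, n, factor).mean(axis=(1, 3))

obj = prepare_object(np.ones((28, 28)))      # network ground-truth side
spk = prepare_speckle(np.ones((512, 512)))   # network input side, 128 x 128
```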

To verify the effectiveness of the designed CNN model, the test sets are fed into the corresponding pre-trained models. The prediction results are shown in Fig. 3. The diffraction patterns obtained without the PET RBM are shown in Fig. 3(b), and the encrypted images resulting from the PET film and the reconstructed images are shown in Fig. 3(c) and Fig. 3(d), respectively. It is obvious that high-quality images can be reconstructed with the proposed CNN model. We calculated the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) for all test images; the SSIM and PSNR evaluate the structural similarity and average difference between the ground truth and the predicted image, respectively. The average SSIM and PSNR of the MNIST test set are 0.97 and 25.96, and those of Fashion-MNIST are 0.91 and 22.87, respectively.
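Both metrics are easy to compute; the PSNR below is standard, while the SSIM is a single-window global variant kept short for illustration (library implementations such as scikit-image's use a sliding window, which gives slightly different values).

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB; assumes gt != pred."""
    mse = np.mean((gt - pred) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over the whole image as a single window,
    with the usual stabilising constants c1, c2."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.linspace(0.0, 1.0, 100).reshape(10, 10)  # toy "ground truth"
gt = np.zeros((8, 8))
pred = np.full((8, 8), 0.1)                     # uniform error of 0.1
```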


Fig. 3. Input primary images and decrypted results. (a) Plaintext images, (b) the diffraction results without using RBM, (c) the encrypted results, (d) the reconstructed images by using the proposed CNN model, (e) decryption results with wrong CNN model. The decryption images obtained (f) when the eavesdropping rate of wi is 99.7%, (g) when the eavesdropping rate of bi is 99.75%.


The trained learning model and its parameters, e.g., the kernel size, the number of kernels, the activation function, and the matrices ${w_i}$ and vectors ${b_i}$, can be used as security keys in the proposed scheme. For simplicity and illustration purposes, we test only the performance of ${w_i}$ and ${b_i}$ in the security analysis. When all of the keys except ${w_i}$ are correct, the decrypted images cannot be identified with the naked eye. Figure 3(f) shows the decrypted images obtained when the eavesdropping rate of ${w_i}$ is lower than 99.80%. When the eavesdropping rate of ${b_i}$ is below 99.85%, the resulting decrypted images are shown in Fig. 3(g). These results imply that the security of the proposed deep-learning-based optical cryptosystem can be guaranteed.
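One way to model the eavesdropping rate is to reveal a fraction of the key entries and randomize the rest; the numpy sketch below is hypothetical (the authors' exact perturbation procedure is not specified), but captures the test: with fewer than ~99.8% of the weights correct, the key is effectively useless.

```python
import numpy as np

def eavesdrop(weights, rate, rng):
    """Return a key where a fraction `rate` of the entries match the true
    weights and the remainder are re-drawn at random, modelling a
    partial key leak. Hypothetical model of the eavesdropping test."""
    stolen = weights.copy()
    n = weights.size
    wrong = rng.choice(n, size=int(round((1 - rate) * n)), replace=False)
    flat = stolen.ravel()                       # view into `stolen`
    flat[wrong] = rng.standard_normal(wrong.size)
    return stolen

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000)             # toy stand-in for w_i
stolen = eavesdrop(weights, 0.90, rng)          # 90% eavesdropping rate
```

In the paper's experiment even a 99.7% rate for $w_i$ yields unrecognizable decryptions, i.e. the network's millions of parameters act as a very long key.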

We further consider the effect of the axial displacement of the CCD on the quality of the reconstructed image. In general, axial displacement of the CCD decreases the correlation of the scattered-intensity measurements. In traditional optical encryption schemes, deviations in the positions of the optical components during encryption may lead to seriously degraded decryption. For example, in the Fourier-domain DRPE system, the encrypted image is recorded in the back focal plane of the second lens, i.e., the imaging plane of the system, and a position deviation at the encryption stage will undoubtedly degrade the quality of the decrypted images. Fortunately, DNNs can be applied to this problem, since they have been shown to make high-quality object predictions in various inverse problems in imaging [35,36]. Our experimental results show that high-quality decrypted images can still be obtained with the proposed CNN model even when the CCD deviates from the set position by up to 30 mm.

The measurement principle is illustrated in Fig. 4(a), where the initial position of the CCD is ${Z_4}$. The CCD is successively moved along the Z axis to the left (right) by 1 cm, 2 cm and 3 cm, i.e., to the positions ${Z_3}({{Z_5}})$, ${Z_2}({{Z_6}})$ and ${Z_1}({{Z_7}})$. When these measurements are input to the pre-trained neural network model, the outputs are given in Figs. 4(b) and 4(c), which demonstrate remarkable stability and visual quality. The error bars for the average PSNR and SSIM of the MNIST and Fashion-MNIST data are shown in Fig. 5. As we can see, the mean SSIM and PSNR of the reconstructed images for the two test datasets are greater than 0.65 and 15.5, respectively, which implies that the proposed method offers a high position tolerance for the CCD detector. As shown in Fig. 1, the scheme is simple and flexible. The tolerance of the detector position further ensures that the system can provide an acceptable quality of decrypted images. As mentioned above, many traditional systems require precise alignment, and the optically decrypted image usually has poor quality. The combination of the deep learning technique and random mask encoding provides an alternative for solving the problems of low position tolerance and poor image quality in optical image encryption.


Fig. 4. Experimental investigation on the position tolerance of the CCD camera. (a) Schematic diagram of deviation in the CCD position, the testing results at different locations with respect to (b) Mnist data and (c) Fashion-mnist data.



Fig. 5. Evaluation of the image quality. The mean values and the standard deviations of the (a) SSIM, (b) PSNR of 50 reconstructed images.


Data may be lost during transmission. We investigate the robustness of the proposed method against data loss by randomly setting pixel values of the encrypted image to zero. Figure 6 shows the images decrypted with the pre-trained model from encrypted images with different percentages of data loss: Figs. 6(a)–6(e) are the reconstructions for data loss of 10%, 20%, 30%, 40% and 50%, respectively. The mean values of the SSIM and PSNR between the ground truth and the reconstructed images are shown in Fig. 7. It should be noted that all these high-quality reconstructions are obtained without retraining the CNN model.
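The data-loss simulation described above amounts to zeroing a random fraction of ciphertext pixels, e.g.:

```python
import numpy as np

def drop_pixels(cipher, loss, rng):
    """Zero a random fraction `loss` of the ciphertext pixels,
    simulating data loss during transmission."""
    out = cipher.copy()
    mask = rng.random(out.shape) < loss   # each pixel dropped w.p. `loss`
    out[mask] = 0.0
    return out

rng = np.random.default_rng(0)
cipher = np.ones((128, 128))              # toy ciphertext at network resolution
damaged = drop_pixels(cipher, 0.3, rng)   # ~30% of pixels lost
```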


Fig. 6. Robustness test against data loss. (a)-(e) The images decrypted from the cyphertext with percent of data loss of 10%, 20%, 30%, 40% and 50%, respectively.



Fig. 7. Evaluation of the quality of the decrypted images using the encrypted images with data loss. The data points and the error bars in (a) and (b) represent the mean values and the standard deviations of the SSIM/PSNR of 50 reconstructed images.


4. Conclusion

In conclusion, we have proposed an experimental approach towards a flexible realization of optical encryption using a PET RBM and a deep learning technique. The modified CNN model is built according to the densely connected CNN architecture and trained on a series of primary and encrypted image pairs. The plaintext can be efficiently decrypted from the cyphertext with high quality. The position tolerance and the robustness against data loss have been investigated in detail. The proposed optical scheme can be directly used as an RBM encrypted image processor. The presented considerations might also be helpful for applying deep learning to the imaging of pure-phase objects.

Funding

National Natural Science Foundation of China (61975185, 61575178); Natural Science Foundation of Zhejiang Province (LY19F030004); Scientific Research and Developed Fund of Zhejiang University of Science and Technology (F701108L03).

Acknowledgments

The authors would like to thank the anonymous reviewers for their helpful suggestions which have improved the quality of this paper.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995). [CrossRef]  

2. B. Javidi, G. Zhang, and J. Li, “Encrypted optical memory using double-random phase encoding,” Appl. Opt. 36(5), 1054–1058 (1997). [CrossRef]  

3. A. Alfalou and C. Brosseau, “Optical image compression and encryption methods,” Adv. Opt. Photonics 1(3), 589 (2009). [CrossRef]  

4. W. Chen, B. Javidi, and X. Chen, “Advances in optical security systems,” Adv. Opt. Photonics 6(2), 120–155 (2014). [CrossRef]  

5. B. Javidi, A. Carnicer, M. Yamaguchi, T. Nomura, E. Pérez-Cabré, M. S. Millán, N. K. Nishchal, R. Torroba, J. F. Barrera, W. He, X. Peng, A. Stern, Y. Rivenson, A. Alfalou, C. Brosseau, C. Guo, J. T. Sheridan, G. Situ, M. Naruse, T. Matsumoto, I. Juvells, E. Tajahuerce, J. Lancis, W. Chen, X. Chen, P. W. H. Pinkse, A. P. Mosk, and A. Markman, “Roadmap on optical security,” J. Opt. 18(8), 083001 (2016). [CrossRef]  

6. Z. Liu, Q. Guo, L. Xu, M. A. Ahmad, and S. Liu, “Double image encryption by using iterative random binary encoding in gyrator domains,” Opt. Express 18(11), 12033–12043 (2010). [CrossRef]  

7. Y. Qin, Q. Gong, Z. Wang, and H. Wang, “Optical multiple-image encryption in diffractive-imaging-based scheme using spectral fusion and nonlinear operation,” Opt. Express 24(23), 26877–26886 (2016). [CrossRef]  

8. X. Wang and S. Mei, “Information authentication using an optical dielectric metasurface,” J. Phys. D: Appl. Phys. 50(36), 36LT02 (2017). [CrossRef]  

9. N. Towghi, B. Javidi, and Z. Luo, “Fully phase encrypted image processor,” J. Opt. Soc. Am. A 16(8), 1915–1927 (1999). [CrossRef]

10. X. Tan, O. Matoba, T. Shimura, K. Kuroda, and B. Javidi, “Secure optical storage that uses fully phase encryption,” Appl. Opt. 39(35), 6689–6694 (2000). [CrossRef]  

11. N. K. Nishchal, J. Joseph, and K. Singh, “Fully phase encrypted memory using cascaded extended fractional Fourier transform,” Opt. Lasers Eng. 42(2), 141–151 (2004). [CrossRef]  

12. X. Wang and D. Zhao, “Fully phase multiple-image encryption based on superposition principle and the digital holographic technique,” Opt. Commun. 285(21-22), 4280–4284 (2012). [CrossRef]

13. R. A. Muhammad, “Fully phase multiple information encoding based on superposition of two beams and Fresnel-transform domain,” Opt. Commun. 356, 306–324 (2015). [CrossRef]  

14. W. Qin and X. Peng, “Asymmetric cryptosystem based on phase-truncated Fourier transforms,” Opt. Lett. 35(2), 118–120 (2010). [CrossRef]  

15. S. Xi, X. Wang, L. Song, Z. Zhu, B. Zhu, S. Huang, and H. Wang, “Experimental study on optical image encryption with asymmetric double random phase and computer-generated hologram,” Opt. Express 25(7), 8212–8222 (2017). [CrossRef]  

16. Y. Zhang and B. Wang, “Optical image encryption based on interference,” Opt. Lett. 33(21), 2443–2445 (2008). [CrossRef]  

17. T. Nomura and B. Javidi, “Optical encryption using a joint transform correlator architecture,” Opt. Eng. 39(8), 2031–2035 (2000). [CrossRef]  

18. P. Clemente, V. Durán, V. Torres-Company, E. Tajahuerce, and J. Lancis, “Optical encryption based on computational ghost imaging,” Opt. Lett. 35(14), 2391–2393 (2010). [CrossRef]  

19. H. Hai, S. Pan, M. Liao, D. Lu, W. He, and X. Peng, “Cryptanalysis of random-phase-encoding-based optical cryptosystem via deep learning,” Opt. Express 27(15), 21204–21213 (2019). [CrossRef]  

20. W. He, S. Pan, M. Liao, D. Lu, Q. Xing, and X. Peng, “A learning-based method of attack on optical asymmetric cryptosystems,” Opt. Lasers Eng. 138, 106415 (2021). [CrossRef]  

21. H. Wu, X. Meng, X. Yang, P. Wang, W. He, and H. Chen, “Ciphertext-only attack on optical cryptosystem with spatially incoherent illumination based deep-learning correlography,” Opt. Lasers Eng. 138, 106454 (2021). [CrossRef]  

22. L. Wang, Q. Wu, and G. Situ, “Chosen-plaintext attack on the double random polarization encryption,” Opt. Express 27(22), 32158–32167 (2019). [CrossRef]  

23. L. Zhou, Y. Xiao, and W. Chen, “Learning-based attacks for detecting the vulnerability of computer-generated hologram based optical encryption,” Opt. Express 28(2), 2499–2510 (2020). [CrossRef]  

24. S. Jiao, Y. Gao, T. Lei, and X. Yuan, “Known-plaintext attack to optical encryption systems with space and polarization encoding,” Opt. Express 28(6), 8085–8097 (2020). [CrossRef]  

25. L. Zhou, Y. Xiao, and W. Chen, “Machine-learning attacks on interference-based optical encryption: experimental demonstration,” Opt. Express 27(18), 26143–26154 (2019). [CrossRef]  

26. L. Chen, B. Peng, W. Gan, and Y. Liu, “Plaintext attack on joint transform correlation encryption system by convolutional neural network,” Opt. Express 28(19), 28154–28163 (2020). [CrossRef]  

27. L. Zhou, Y. Xiao, and W. Chen, “Vulnerability to machine learning attacks of optical encryption based on diffractive imaging,” Opt. Lasers Eng. 125, 105858 (2020). [CrossRef]  

28. S. Yuan, L. Wang, X. Liu, and X. Zhou, “Forgery attack on optical encryption based on computational ghost imaging,” Opt. Lett. 45(14), 3917–3920 (2020). [CrossRef]  

29. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]  

30. X. Wang, Y. Chen, Q. Chao, and D. Zhao, “Discussion and a new attack of the optical asymmetric cryptosystem based on phase truncated Fourier transform,” Appl. Opt. 53(2), 208–213 (2014). [CrossRef]  

31. L. Zhou, Y. Xiao, and W. Chen, “Learning-based optical authentication in complex scattering media,” Opt. Lasers Eng. 141, 106570 (2021). [CrossRef]  

32. X. Wang, W. Wang, H. Wei, B. Xu, and C. Dai, “Holographic and speckle encryption using deep learning,” Opt. Lett. 46(23), 5794–5797 (2021). [CrossRef]  

33. Y. LeCun, C. Cortes, and C. J. Burges, “MNIST handwritten digit database,” AT&T Labs (2010).

34. H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms,” arXiv:1708.07747 (2017).

35. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5(10), 1181–1190 (2018). [CrossRef]  

36. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921 (2019). [CrossRef]  

37. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

