Optica Publishing Group

Information security scheme using deep learning-assisted single-pixel imaging and orthogonal coding

Open Access

Abstract

Providing secure and efficient transmission for multiple optical images has been an important issue in the field of information security. Here we present a hybrid image compression, encryption and reconstruction scheme based on deep learning-assisted single-pixel imaging (SPI) and orthogonal coding. In the optical SPI-based encryption, two-dimensional images are encrypted into one-dimensional bucket signals, which will be further compressed by a binarization operation. By overlaying orthogonal coding on the compressed signals, we obtain the ciphertext that allows multiple users to access with the same privileges. The ciphertext can be decoded back to the binarized bucket signals with the help of orthogonal keys. To enhance reconstruction efficiency and quality, a deep learning framework based on DenseNet is employed to retrieve the original optical images. Numerical and experimental results have been presented to verify the feasibility and effectiveness of the proposed scheme.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical image encryption techniques have attracted widespread attention due to their high-speed parallel processing and the freedom of multi-dimensional space [1–5]. Many methods have been proposed based on random phase mask (RPM) encryption [6–10], the optical interference principle [11,12], phase retrieval algorithms [13,14], and computer holography [15–17]. To increase encoding capability, a number of methods for multiple-image encryption have also been proposed. For example, Chen proposed an optical multiple-image encryption using three-dimensional space [18]. Wu et al. designed a new method for multiple-image encryption based on a phase retrieval algorithm and the fractional Talbot effect with improved security and capacity [19]. Huang et al. combined a nonlinear multi-image encryption scheme with a chaotic system and the two-dimensional linear canonical transform to circumvent the complexity of large-scale image processing [20]. Chen et al. proposed an asymmetric multi-image encryption method based on compressed sensing and feature fusion to avoid the loss of high-frequency information [21].

Recently, single-pixel imaging (SPI) has proved to be useful in optical image encryption [22–31]. Unlike traditional real-time imaging, SPI extracts two-dimensional (2D) spatial information from one-dimensional (1D) bucket signals [32,33], where a single-pixel detector replaces the pixelated sensor array for signal detection. SPI usually needs to capture sufficient bucket signals to ensure good reconstruction performance. However, transmitting the large number of illumination patterns required for decryption may create a security issue and increase the transmission load. Meanwhile, several SPI approaches have been proposed to reduce acquisition time and improve reconstruction quality based on compressive sensing [34,35], the orthogonal Hadamard basis [36,37], and the Fourier basis [38]. It is worth mentioning that deep learning (DL) has emerged as a powerful approach for image reconstruction in SPI. DL-based SPI can retrieve a low-noise image from a small number of measurements by optimizing the neural network structure and parameters [39–44].

In this paper, we propose a hybrid scheme for multiple-image compression, encryption and reconstruction based on deep learning-assisted SPI and orthogonal coding. The 1D bucket signals detected with an SPI architecture are further compressed by a binarization operation [45]. To provide spreading of the signals, the compressed signals are encrypted individually using orthogonal codes, and the encoded data are subsequently merged into the ciphertext. During decryption, the ciphertext is decoded back into the binarized bucket signals, which are then fed into the input layer of a pre-trained neural network. In our proposal, the optical images are greatly compressed, and the orthogonal coding enables multiple users to access and use the secret data for image reconstruction. The use of DenseNet, a DL architecture originally proposed for computer vision applications, further enhances the quality and efficiency of image reconstruction. The feasibility of this scheme and its robustness against potential attacks such as noise and occlusion attacks are verified by both computer simulations and experiments. The proposed scheme provides an alternative strategy for secure data transmission, and it can also significantly reduce the burden of data acquisition and transmission.

The rest of this paper is organized as follows. The principle of our proposed optical image encryption is described in Section 2. Simulation and experimental results are given in Section 3. Finally, a conclusion is given in Section 4.

2. Principle and method

2.1 Optical image compression and encryption

The flowchart of the hybrid scheme is illustrated in Fig. 1. During the encryption process, the optical images are first recorded by an SPI architecture with Hadamard basis patterns for illumination. The Hadamard matrix is an orthogonal binary matrix with elements +1 or −1, and higher-order Hadamard matrices can be generated from lower-order ones

$${H_{{2^k}}} = \left[ {\begin{array}{cc} {{H_{{2^{k - 1}}}}}&{{H_{{2^{k - 1}}}}}\\ {{H_{{2^{k - 1}}}}}&{ - {H_{{2^{k - 1}}}}} \end{array}} \right] = {H_2} \otimes {H_{{2^{k - 1}}}}$$
where ${H_2} = \frac{1}{{\sqrt 2 }}\left[ {\begin{array}{cc} 1&1\\ 1&{ - 1} \end{array}} \right]$ and ${\otimes}$ denotes the Kronecker product. $N$ rows of the permuted Hadamard matrix are divided into complementary pairs to generate the illumination patterns ${H_i}\ ({i = 1,2, \cdots ,N} )$, calculated as ${H_i} = {H_{i + }} - {H_{i - }}$, where ${H_{i + }} = {{({{H_i} + 1} )} / 2}$ and ${H_{i - }} = 1 - {H_{i + }}$. A set of Hadamard patterns is successively projected onto the target object $O$, and the light intensities reflected from the object are collected by a single-pixel detector. The detected bucket signals ${D_i}\ ({i = 1,2, \cdots ,N} )$ can be expressed as
$${D_i} = \int\!\!\!\int {{H_i}} ({\mu ,\nu } )O({\mu ,\nu } )\textrm{d}\mu \textrm{d}\nu$$
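Equations (1) and (2) together define the measurement model. The sketch below is a minimal pure-Python illustration (not the authors' code; the $1/\sqrt{2}$ normalization is dropped so all entries stay ±1) of the Sylvester construction, the complementary pattern split $H_i = H_{i+} - H_{i-}$, and a discrete bucket signal:

```python
def hadamard(k):
    """Sylvester construction of H_{2^k} = H_2 (x) H_{2^(k-1)}, entries +1/-1."""
    H = [[1]]
    for _ in range(k):
        # [[H, H], [H, -H]] doubles the order at each step
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def complementary_pair(row):
    """Split a +1/-1 row into non-negative patterns H_{i+} = (H_i + 1)/2
    and H_{i-} = 1 - H_{i+}, so that H_i = H_{i+} - H_{i-}."""
    h_plus = [(v + 1) // 2 for v in row]
    h_minus = [1 - v for v in h_plus]
    return h_plus, h_minus

def bucket_signal(pattern, obj):
    """Discrete form of Eq. (2): D_i = sum over pixels of H_i * O."""
    return sum(h * o for h, o in zip(pattern, obj))
```

Distinct rows of `hadamard(k)` have zero inner product, which is the orthogonality the reconstruction of Eq. (3) relies on.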

Fig. 1. Flowchart of (a) the compression, and (b) encryption algorithms.

The object information can be calculated by the second-order correlation between bucket signals and illumination patterns

$$\hat{O} = \frac{1}{N}\sum\nolimits_{i = 1}^N {\left( {{D_i} - \left\langle D \right\rangle } \right){H_i}} $$
where $\left\langle \cdot \right\rangle $ denotes the ensemble average operation.
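As an illustrative check (a toy pure-Python sketch under our own assumptions, not the authors' implementation), a fully sampled Hadamard SPI measurement can be inverted exactly with Eq. (3); only the first pixel carries a residual DC offset from the $\langle D \rangle$ subtraction:

```python
def hadamard(k):
    """Sylvester construction of H_{2^k}, entries +1/-1."""
    H = [[1]]
    for _ in range(k):
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def reconstruct(patterns, buckets):
    """Eq. (3): O_hat(x) = (1/N) * sum_i (D_i - <D>) H_i(x)."""
    N = len(buckets)
    mean_d = sum(buckets) / N
    n_pix = len(patterns[0])
    return [sum((d - mean_d) * p[x] for d, p in zip(buckets, patterns)) / N
            for x in range(n_pix)]

# Fully sampled toy example: the 16 rows of H_16 as 1x16 patterns.
patterns = hadamard(4)
obj = [float(i % 5) for i in range(16)]          # toy 16-"pixel" object
buckets = [sum(h * o for h, o in zip(p, obj)) for p in patterns]
o_hat = reconstruct(patterns, obj and buckets)
# Row orthogonality makes o_hat[x] match obj[x] for all x >= 1;
# pixel 0 can differ by the subtracted mean <D>.
```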

As shown in Fig. 1(b), different plaintext images are compressed into 1D signals with the help of the same set of Hadamard patterns. These signals are further compressed using a two-level quantization with levels 0 and 1. Since the normalized detected signals ${D_1},{D_2}, \cdots ,{D_M}$ take values in the range $[-1, 1]$, the simplest binarization is thresholding: an element is set to 1 if its value is greater than or equal to zero. The binarized bucket signals ${B_i}$ are thus given by

$${B_i} = \left\{ {\begin{array}{cc} 1&{{D_i} \ge 0}\\ 0&{\textrm{otherwise}} \end{array}} \right.$$

The binarization step in the flowchart, introduced for further compression, enables fast orthogonal coding of the signals. In this method, orthogonal code sequences composed of $n$-bit bipolar codes $({ + 1/{-} 1} )$ are used as the keys, where $n$ is even and should be larger than $M$ when encrypting $M$ binary signals. Two different code sequences are mutually orthogonal (their inner product is 0), which prevents crosstalk during signal transmission. For example, suppose we have two sequences to be encrypted and transmitted; the first sequence is $[{1,1,0,1} ]$ and its code sequence is $[{1,1} ]$. During encoding, each bit 1 is replaced by the code $[{1,1} ]$, while each bit 0 is replaced by its negation $[{ - 1, - 1} ]$. The resultant encoded signal is ${L_1} = [{1,\textrm{ }1,\textrm{ }1,\textrm{ }1,\textrm{ } - 1,\textrm{ } - 1,\textrm{ }1,\textrm{ }1} ]$. Likewise, the encoded result of the sequence $[{0,1,0,0} ]$ with another code sequence $[{1, - 1} ]$ is ${L_2} = [{ - 1,\textrm{ }1,\textrm{ }1,\textrm{ } - 1,\textrm{ } - 1,\textrm{ }1,\textrm{ } - 1,\textrm{ }1} ]$. ${L_1}$ and ${L_2}$ are superimposed to obtain the ciphertext $L = [{0,\textrm{ }2,\textrm{ }2,\textrm{ }0,\textrm{ } - 2,\textrm{ }0,\textrm{ }0,\textrm{ }2} ]$.
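The worked example above can be reproduced in a few lines of Python (a minimal sketch, not the authors' code):

```python
def encode(bits, code):
    """Orthogonal coding: each bit 1 becomes the bipolar code,
    each bit 0 becomes the negated code."""
    out = []
    for b in bits:
        out.extend(code if b == 1 else [-c for c in code])
    return out

L1 = encode([1, 1, 0, 1], [1, 1])    # -> [1, 1, 1, 1, -1, -1, 1, 1]
L2 = encode([0, 1, 0, 0], [1, -1])   # -> [-1, 1, 1, -1, -1, 1, -1, 1]
L = [a + b for a, b in zip(L1, L2)]  # superimposed ciphertext, Eq. (5)
# L == [0, 2, 2, 0, -2, 0, 0, 2]
```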

Consequently, M binary signals, ${B_1},{B_2}, \cdots ,{B_M}$, can be encoded into M encoded sequences, ${L_1},{L_2}, \cdots ,{L_M}$, by independently using the orthogonal code sequences ${\ell _1},{\ell _2}, \cdots ,{\ell _M}$ as $Key1,Key2, \cdots ,KeyM$. When all the sequences are superimposed, we get the final encrypted sequence L.

$$L = \sum\limits_{j = 1}^M {{L_j}} $$

2.2 Sequence decryption and image reconstruction

Figure 2 shows the schematic diagram of sequence decryption and image reconstruction. Each receiver can recover an original binarized signal from the ciphertext sequence $L$ by using the corresponding key sequence: each $N$-bit binarized signal $B$ is recovered from the ciphertext $L$ with its corresponding $n$-bit bipolar code $\ell$.

$$B = \frac{{L\cdot \ell }}{{{{|\ell |}^2}}}$$
where the subscripts are omitted for simplicity, and each $-1$ in the decoded signal is mapped to 0 to recover the binarized signal.
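A minimal Python sketch of this decoding step (our own illustration, applied to the ciphertext from the worked example in Section 2.1):

```python
def decode(ciphertext, code):
    """Eq. (6): B = (L . l) / |l|^2 over each n-sample chunk;
    a decoded -1 is mapped back to bit 0."""
    n = len(code)
    norm = sum(c * c for c in code)  # |l|^2 = n for bipolar codes
    bits = []
    for start in range(0, len(ciphertext), n):
        chunk = ciphertext[start:start + n]
        val = sum(x * c for x, c in zip(chunk, code)) / norm
        bits.append(1 if val > 0 else 0)
    return bits

L = [0, 2, 2, 0, -2, 0, 0, 2]   # ciphertext from the text's worked example
# decode(L, [1, 1])  -> [1, 1, 0, 1]
# decode(L, [1, -1]) -> [0, 1, 0, 0]
```

Because the two codes are orthogonal, each user recovers only the signal encoded with their own key, with no crosstalk from the other.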

Fig. 2. Schematic of the sequence decryption and image reconstruction.

The orthogonal coding applied to the individual binarized bucket signals allows multiple users to access the ciphertext simultaneously and decode without mutual interference. According to Eq. (6), the binary signals can be retrieved from the ciphertext using their corresponding bipolar codes, which enables multi-user detection and interference cancellation on the optically encrypted and compressed data. However, the optically recorded signal is binarized and the number of measurements is limited; this increases the efficiency of information transmission but discards information. Consequently, the conventional SPI algorithm is a poor choice for recovering the original images from binarized bucket signals with insufficient measurements, as the reconstructed image quality is seriously degraded. We therefore turn to deep learning to decode the retrieved binary bucket signals.

Figure 3 illustrates the network architecture used for image reconstruction, which is inspired by DenseNet [46] and U-Net [47]. The inputs to the network model are $1 \times N$ binary signals. The input stage has two fully connected layers that extract features from each signal. Dense blocks and transition blocks are used for downsampling. After being downsampled by six "dense block + transition block" stages, the feature map propagates to a dense block, then successively passes through six "two convolution layers + dense block" stages. The downsampling and upsampling layers are concatenated in the upsampling path via skip connections, which pass high-frequency information down the network. The upsampled feature map finally passes to a convolution layer with a sigmoid activation function.

Fig. 3. The network architecture for image reconstruction from 1D binarized signals.

Let $R$ represent the network mapping function and $\theta$ denote the network parameters; the prediction of the network can then be expressed as ${R_\theta }({{B_i}} )\ ({i = 1,2, \cdots ,\tau } )$. In the training process, the network parameters $\theta$ are continuously optimized to decrease the mean squared error (MSE) between the output of the network, ${R_\theta }({{B_i}} )$, and its corresponding ground truth (GT) ${y_i}$. The loss function, MSE, is defined as

$$\textrm{MSE} = \frac{1}{\tau }\sum\limits_{i = 1}^\tau {{{\|{{y_i} - {R_\theta }({{B_i}} )} \|}^2}}$$
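The loss of Eq. (7) amounts to the following (a pure-Python sketch for illustration; the actual training uses TensorFlow):

```python
def mse_loss(targets, predictions):
    """Eq. (7): squared-error norm per sample, averaged over a batch
    of tau samples; each sample is a flat list of pixel values."""
    tau = len(targets)
    total = 0.0
    for y, y_hat in zip(targets, predictions):
        total += sum((a - b) ** 2 for a, b in zip(y, y_hat))
    return total / tau
```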

In this method, we train the model using the Adam optimizer with a learning rate of 0.02. A dropout rate of 0.2 is set to prevent overfitting, and the number of training epochs is set to 50. All programs run in a Python 3.6 environment with the TensorFlow 1.11 framework, accelerated by an NVIDIA GeForce RTX 2070 GPU. Once training is completed, a decoded binary sequence can be directly deciphered into a plaintext image using the trained model; we thus get

$${O^{\prime}_i} = {R_\theta }({{{B^{\prime}}_i}} )$$

3. Numerical simulation and analysis

3.1 Numerical simulations

Numerical simulations have been carried out to verify the feasibility of the proposed image compression, encryption, and reconstruction scheme. As shown in Fig. 4(a), four plaintext images selected from the MNIST testing dataset [48] are to be encrypted. We first resize all selected images to the same size of $64 \times 64$ pixels. The number of Hadamard patterns is set to 410, corresponding to a sampling ratio of 10%. In the SPI system, the Hadamard patterns are projected onto each plaintext image and a single-pixel detector collects the bucket signals, which are further compressed by the binarization operation. The binarized bucket signals are illustrated in Fig. 4(b). Figure 4(c) shows the four orthogonal code sequences of length 8 bits, by which the four plaintext images are independently encoded into sequences. The final ciphertext, obtained by superimposing the four encoded sequences, is shown in Fig. 4(d). With the help of the code sequences, the ciphertext sequence can be separately decoded back into the binary bucket signals, as shown in Fig. 4(e). Once these recovered signals are input into the pre-trained model, the original secret images are quickly output with high visual quality, as shown in Fig. 4(f).

Fig. 4. Simulation results. (a) Plaintext images. (b) Binarized bucket signals. (c) Code sequences. (d) Ciphertext. (e) Recovered binary signals. (f) Reconstructed plaintext.

The proposed network model is pre-trained using 10,000 "plaintext-ciphertext" pairs. 1000 images, entirely distinct from the training data, are used as the testing data. We demonstrate the effectiveness of our method on two popular image datasets, MNIST and Fashion-MNIST [49]. Figure 5 presents the results reconstructed from decoded binary bucket signals under different sampling ratios, showing that model performance remains good even at a very low sampling ratio. It can also be seen that, in the proposed scheme, increasing the sampling ratio yields substantial improvements in the sharpness of output images only up to a certain point.

Fig. 5. Reconstruction performance on two popular image datasets.

To show the good performance of our deep learning-assisted SPI (DLSPI), we compare it with the traditional SPI reconstruction algorithm and the total variation (TV) regularization-based SPI (TVSPI) algorithm [35]. The reconstruction performances are investigated under the same sampling ratio (SR), and the peak signal-to-noise ratio (PSNR) is used as the performance measure for image reconstruction. We calculate the average PSNR values of 100 reconstructed images randomly selected from the MNIST test set and the Fashion-MNIST test set, respectively. The corresponding results are shown in Fig. 6(a) and Fig. 6(b). DLSPI outperforms the other two methods in imaging quality, with traditional SPI second and TVSPI worst. It is clear that DLSPI has an obvious advantage in recovering images from binary signals at the same SR.

Fig. 6. The average PSNR curves obtained by SPI, TVSPI, and DLSPI at different SRs with (a) the MNIST dataset, (b) the Fashion-MNIST dataset.

We further investigate the security of the proposed cryptosystem. Figure 7(a) shows the images to be encrypted, and Figs. 7(b)-(e) show the images retrieved from binary bucket sequences decrypted with wrong 8-bit code sequences. Note that only one bit is changed at a time in this test: Fig. 7(b) corresponds to an incorrect first bit, Fig. 7(c) to an incorrect second bit, and Figs. 7(d) and 7(e) follow likewise. No useful information about the original secret images is revealed. The simulation results show that the proposed cryptosystem based on orthogonal coding is able to ensure the security of data transmission.

Fig. 7. (a) The images to be encrypted. (b)-(e) The images reconstructed from incorrectly decrypted binary bucket sequences with wrong codes.

During information measurement and transmission, the system will inevitably be subject to noise interference. However, binarization processing not only compresses the information but also improves the noise resistance of the system. Figure 8 shows the PSNR curves for the three reconstruction methods under different noise intensities. As the noise intensity increases from 0 dB to 30 dB, all the PSNR curves remain flat without large fluctuations, so the influence of noise can be neglected. As expected, DLSPI has the highest PSNR value at any given noise intensity, meaning that images reconstructed with DLSPI experience minimal noise disturbance. Thus, the encryption system has strong noise resistance thanks to binarization processing and DLSPI-based reconstruction.

Fig. 8. Comparison of different SPI algorithms in noise resistance.

We analyze the sensitivity to occlusion attacks by computing the linear correlation coefficient (CC) between reconstructed and GT images when part of the ciphertext is lost. Here the sampling ratio of the encrypted images is set to 10% and the loss percentage increases from 0 to 80% in 10% intervals. Figure 9 shows the relationship between the CC values and the loss rate, which exhibits a downward trend. The visual clarity of the reconstructed images gradually decreases as the loss rate increases. However, it is worth noting that the CC value remains above 0.9 when the loss rate is less than 30%, which implies that a successful reconstruction can still be achieved with partial loss of the ciphertext during transmission.

Fig. 9. The CC curve with different loss rates.

3.2 Optical experiment results

An SPI experimental setup is used to test the performance of the proposed method. Figure 10 shows the configuration of the experimental setup for image compression and encryption. A digital light projector (DLP, EPSON CB-2055) controlled by a computer produces a series of illumination patterns. A photodetector (Thorlabs PDA100A2) collects the light reflected from the object scene, and the intensity data are acquired and transferred to a computer by a data acquisition (DAQ) card (USB-6341, National Instruments).

Fig. 10. Configuration of experimental setup.

Fig. 11. Experimental results. (a) Plaintext images. (b) Binarized measurements. (c) Code sequences. (d) Ciphertext. (e) Reconstructed binary signals. (f) Reconstructed images.

For convenience, the four images used in the simulations are selected as plaintexts in our experiment, as shown in Fig. 11(a). The bucket signals recorded with 410 Hadamard illumination patterns are binarized before coding; the DLP projects patterns at a rate of 5 frames per second, and the total time for pattern projection and bucket-signal collection is 164 seconds. The resultant binary sequences are given in Fig. 11(b). Figure 11(c) presents the code sequences used for the encryption of the binary signals. The ciphertext obtained by superimposing the encoded sequences is shown in Fig. 11(d). Figure 11(e) illustrates the binary signals decrypted from the ciphertext with the orthogonal code sequences. To reduce the time spent on data preparation, the DenseNet model pre-trained on simulation data is used for image reconstruction. When the retrieved binary signals are fed into the pre-trained network model, the reconstructed images of Fig. 11(f) are obtained. Despite the binarization step and the low sampling ratio, which critically affect reconstruction performance in traditional SPI algorithms, a high-quality reconstruction can still be achieved. Thanks to DL-assisted reconstruction, there is no distinct difference in visual quality between the experimental and simulation results.

The security of the proposed scheme is also verified experimentally. Similarly, Fig. 12(a) shows the four plaintext images and Figs. 12(b)-(e) show the four decrypted images corresponding to different wrong key sequences. The results prove again that the proposed encryption scheme is very sensitive to the keys, ensuring the security of data transmission.

Fig. 12. (a) The images to be encrypted. (b)-(e) The images reconstructed from incorrectly decrypted binary bucket sequences with wrong codes.

The CC curve for different loss rates of the ciphertext, computed from the experimentally collected data, is presented in Fig. 13. When the loss rate increases from 0 to 80%, the corresponding CC value drops from 0.96 to 0.28. Even when the loss rate reaches 40%, the CC value is close to 0.8 and the images reconstructed from the decrypted binary signals are relatively clear. Only when the loss rate reaches 60% does the reconstructed image quality degrade significantly. The results verify the robustness of the system against occlusion attacks.

Fig. 13. The CC curve with different loss rates of experimental data.

4. Conclusion

In conclusion, we have proposed an efficient hybrid image compression, encryption and reconstruction scheme based on deep learning-assisted SPI and orthogonal coding, in which optical information is compressed and encrypted into sequence data, and several independent users access a common communications channel. The orthogonal coding provides the spreading of the signals and ensures the security of the optical information, while the deep learning-assisted SPI not only greatly increases reconstruction efficiency but also offers the desired reconstruction quality even at a low sampling ratio. The combination of compressive imaging and the multiplexing technique allows multi-user detection and interference cancellation on the optically encrypted and compressed data, increasing the efficiency and security of data storage and transmission. We believe that the scheme of optical information processing presented here could play an important part in stemming the tide of information fraud and data theft.

Funding

National Natural Science Foundation of China (61975185, 61575178); Scientific Research and Developed Fund of Zhejiang University of Science and Technology (F701108L03).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. Refrégier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995). [CrossRef]  

2. W. Chen, B. Javidi, and X. Chen, “Advances in optical security systems,” Adv. Opt. Photon. 6(2), 120–155 (2014). [CrossRef]  

3. N. K. Nishchal, “Optical asymmetric encryption schemes and attack analysis,” Proc. SPIE 10795, Electro-Optical and Infrared Systems: Technology and Applications XV, 107950A (2018). [CrossRef]  

4. N. K. Nishchal, Optical Cryptosystems, (IOP Publishing Ltd., London, 2019).

5. O. Matoba, T. Nomura, E. Perez-Cabre, M. S. Millan, and B. Javidi, “Optical Techniques for Information Security,” Proc. IEEE 97(6), 1128–1148 (2009). [CrossRef]  

6. G. Situ and J. Zhang, “Double random-phase encoding in the Fresnel domain,” Opt. Lett. 29(14), 1584–1586 (2004). [CrossRef]  

7. S. Yu, N. Zhou, L. Gong, and Z. Nie, “Optical image encryption algorithm based on phase-truncated short-time fractional Fourier transform and hyper-chaotic system,” Opt. Lasers Eng. 124, 105816 (2020). [CrossRef]  

8. Y. Wang, A. Markman, C. Quan, and B. Javidi, “Double-random-phase encryption with photon counting for image authentication using only the amplitude of the encrypted image,” J. Opt. Soc. Am. A 33(11), 2158–2165 (2016). [CrossRef]  

9. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013). [CrossRef]  

10. M. Liao, S. Zheng, S. Pan, D. Lu, and W. He, “Deep-learning-based ciphertext-only attack on optical double random phase encryption,” Opto-Electron Adv. 4(5), 200016 (2021). [CrossRef]  

11. Y. Zhang and B. Wang, “Optical image encryption based on interference,” Opt. Lett. 33(21), 2443–2445 (2008). [CrossRef]  

12. L. Liu, M. Shan, Z. Zhong, L. Yu, and B. Liu, “Compressive interference-based image encryption via sparsity constraints,” Opt. Lasers Eng. 134, 106297 (2020). [CrossRef]  

13. Z. Liu, C. Guo, J. Tan, Q. Wu, L. Pan, and S. Liu, “Iterative phase-amplitude retrieval with multiple intensity images at output plane of gyrator transforms,” J. Opt. 17(2), 025701 (2015). [CrossRef]  

14. H. Wei and X. Wang, “Optical multiple-image authentication and encryption based on phase retrieval and interference with sparsity constraints,” Optics & Laser Technology 142, 107257 (2021). [CrossRef]  

15. I. Muniraj, C. Guo, R. Malallah, J. Ryle, J. Healy, B. Lee, and J. Sheridan, “Low photon count based digital holography for quadratic phase cryptography,” Opt. Lett. 42(14), 2774–2777 (2017). [CrossRef]  

16. W. Wang, X. Wang, B. Xu, and J. Chen, “Optical image encryption and authentication using phase-only computer-generated hologram,” Opt. Lasers Eng. 146, 106722 (2021). [CrossRef]  

17. J. Li, L. Chen, W. Cai, J. Xiao, J. Zhu, Y. Hu, and K. Wen, “Holographic encryption algorithm based on bit-plane decomposition and hyperchaotic Lorenz system,” Opt. Laser Technol. 152, 108127 (2022). [CrossRef]  

18. W. Chen, “Optical Multiple-image encryption using three-dimensional space,” IEEE Photonics J. 8(2), 1–8 (2016). [CrossRef]  

19. J. Wu, J. Wang, Y. Nie, and L. Hu, “Multiple-image optical encryption based on phase retrieval algorithm and fractional Talbot effect,” Opt. Express 27(24), 35096–35107 (2019). [CrossRef]  

20. Z. Huang, S. Cheng, L. Gong, and N. Zhou, “Nonlinear optical multi-image encryption scheme with two-dimensional linear canonical transform,” Opt. Lasers Eng. 124, 105821 (2020). [CrossRef]  

21. X. Chen, Q. Liu, J. Wang, and Q. Wang, “Asymmetric encryption of multi-image based on compressed sensing and feature fusion with high quality image reconstruction,” Optics & Laser Technology 107, 302–312 (2018). [CrossRef]  

22. P. Clemente, V. Durán, and E. Tajahuerce, “Optical encryption based on computational ghost imaging,” Opt. Lett. 35(14), 2391–2393 (2010). [CrossRef]  

23. L. Zhang, Z. Pan, L. Wu, and X. Ma, “High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform,” Opt. Lasers Eng. 86, 329–337 (2016). [CrossRef]  

24. W. Chen, “Optical cryptosystem based on single-pixel encoding using the modified Gerchberg-Saxton algorithm with a cascaded structure,” J. Opt. Soc. Am. A 33(12), 2305–2311 (2016). [CrossRef]  

25. X. Li, X. Meng, X. Yang, Y. Wang, Y. Yin, X. Peng, W. He, G. Dong, and H. Chen, “Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme,” Opt. Lasers Eng. 102, 106–111 (2018). [CrossRef]  

26. Y. Kang, L. Zhang, H. Ye, M. Zhao, S. Kanwal, C. Bai, and D. Zhang, “One-to many optical information encryption transmission method based on temporal ghost imaging and code division multiple access,” Photon. Res. 7(12), 1370 (2019). [CrossRef]  

27. J. Xiong, P. Zheng, Z. Gao, and H. Liu, “Algorithm-Dependent Computational Ghost Encryption and Imaging,” Phys. Rev. Applied 18(3), 034023 (2022). [CrossRef]  

28. P. Zheng, J. Li, Z. Li, M. Ge, S. Zhang, G. Zheng, and H. Liu, “Compressive Imaging Encryption with Secret Sharing Metasurfaces,” Adv. Optical Mater. 10(15), 2200257 (2022). [CrossRef]  

29. Y. Liu, P. Zheng, and H. Liu, “Anti-loss-compression image encryption based on computational ghost imaging using discrete cosine transform and orthogonal patterns,” Opt. Express 30(9), 14073–14087 (2022). [CrossRef]  

30. A. Zhu, S. Lin, and X. Wang, “Optical color ghost cryptography and steganography based on multi-discriminator generative adversarial network,” Opt. Commun. 512, 128032 (2022). [CrossRef]  

31. S. Lin, X. Wang, A. Zhu, J. Xue, and B. Xu, “Steganographic optical image encryption based on single-pixel imaging and an untrained neural network,” Opt. Express 30(20), 36144–36154 (2022). [CrossRef]  

32. G. Gibson, S. Johnson, and M. Padgett, “Single-pixel imaging 12 years on: a review,” Opt. Express 28(19), 28190–28208 (2020). [CrossRef]  

33. J. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

34. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

35. L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A 35(1), 78–87 (2018). [CrossRef]  

36. M. Sun, L. Meng, M. Edgar, M. Padgett, and N. Radwell, “A Russian dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017). [CrossRef]  

37. L. Wang and S. Zhao, “Fast reconstructed and high-quality ghost imaging with fast Walsh-Hadamard transform,” Photon. Res. 4(6), 240–244 (2016). [CrossRef]  

38. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

39. F. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]  

40. S. Rizvi, J. Cao, and K. Zhang, “Deringing and denoising in extremely under-sampled Fourier single pixel imaging,” Opt. Express 28(5), 7360–7374 (2020). [CrossRef]  

41. H. Wu, R. Wang, Z. Huang, H. Xiao, J. Liang, D. Wang, X. Tian, T. Wang, and L. Cheng, “Online adaptive computational ghost imaging,” Opt. Laser Eng. 128, 106028 (2020). [CrossRef]  

42. M. Lyu, W. Wang, and H. Wang, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017). [CrossRef]  

43. H. Wu, R. Wang, G. Zhao, H. Xiao, D. Wang, J. Liang, X. Tian, L. Cheng, and X. Zhang, “Sub-Nyquist computational ghost imaging with deep learning,” Opt. Express 28(3), 3846–3853 (2020). [CrossRef]  

44. X. Yang, P. Jiang, M. Jiang, L. Xu, L. Wu, C. Yang, W. Zhang, J. Zhang, and Y. Zhang, “High imaging quality of Fourier single pixel imaging based on generative adversarial networks at low sampling rate,” Opt. Laser Eng. 140, 106533 (2021). [CrossRef]  

45. S. Yuan, L. Wang, X. Liu, and X. Zhou, “Forgery attack on optical encryption based on computational ghost imaging,” Opt. Lett. 45(14), 3917–3920 (2020). [CrossRef]  

46. G. Huang, Z. Liu, G. Pleiss, L. V. Der Maaten, and K. Q. Weinberger, “Convolutional Networks with Dense Connectivity,” IEEE Trans. Pattern Anal. Mach. Intell. 44(12), 8704–8716 (2022). [CrossRef]  

47. O. Ronneberger, P. Fischer, and T. Brox, “U-net:convolutional networks for biomedical image segmentation,” Med. Image Comput. Comput. Assist. Interv. 9351, 234–241 (2015). [CrossRef]  

48. https://github.com/abhi9716/handwritten-MNIST-digit-recognition.

49. X. Han, R. Kashif, and V. Roland, “Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms,” arXiv, arXiv:1708.07747 (2017). [CrossRef]  
