Table 1.
Pixel Value Change Rate at Each Convolutional Layer under
Different Neural Network Architectures
Layer | Only Convolution | Add Instance Normalization | Add Instance Normalization, Tanh, ReLU | Add Instance Normalization, Tanh, ReLU, Upsampling | Add Instance Normalization, Tanh, ReLU, Upsampling, Addition
---|---|---|---|---|---
1 | 4.3442e-07 | 4.7576e-07 | 4.7219e-07 | 4.4036e-07 | 4.6977e-07 |
2 | 5.1594e-07 | 1.3970e-06 | 1.3246e-06 | 1.3726e-06 | 1.3651e-06 |
3 | 7.4517e-07 | 2.1353e-06 | 2.4067e-06 | 2.4082e-06 | 2.4235e-06 |
4 | 1.0990e-06 | 3.3669e-06 | 4.5878e-06 | 4.6336e-06 | 4.6305e-06 |
5 | 1.3661e-06 | 4.2884e-06 | 6.9319e-06 | 6.9454e-06 | 7.1858e-06 |
6 | 1.7046e-06 | 5.0537e-06 | 9.4087e-06 | 1.0147e-05 | 8.0937e-06 |
7 | 1.9277e-06 | 5.7124e-06 | 1.3016e-05 | 1.3484e-05 | 1.1663e-05 |
8 | 2.2478e-06 | 6.2956e-06 | 1.6898e-05 | 1.7517e-05 | 1.1211e-05 |
9 | 2.5574e-06 | 6.8228e-06 | 2.2944e-05 | 2.3807e-05 | 1.6308e-05 |
10 | 2.7574e-06 | 7.2916e-06 | 2.8593e-05 | 2.9814e-05 | 1.3941e-05 |
11 | 2.9773e-06 | 7.7333e-06 | 3.7236e-05 | 3.9003e-05 | 2.0639e-05 |
12 | 3.1002e-06 | 8.1280e-06 | 4.7824e-05 | 4.9572e-05 | 1.7046e-05 |
13 | 3.1901e-06 | 8.5063e-06 | 5.9773e-05 | 6.1400e-05 | 2.5305e-05 |
14 | 3.4531e-06 | 8.8553e-06 | 7.4531e-05 | 7.8961e-05 | 2.0234e-05 |
15 | 3.6922e-06 | 9.1933e-06 | 9.5103e-05 | 9.5515e-05 | 2.9950e-05 |
16 | 3.8857e-06 | 9.5027e-06 | 0.00011571 | 0.00012063 | 2.3155e-05 |
17 | 4.0373e-06 | 9.8137e-06 | 0.00014264 | 0.00014930 | 3.4711e-05 |
18 | 4.1053e-06 | 1.0118e-05 | 0.00018212 | 0.00019641 | 2.6290e-05 |
19 | 4.2295e-06 | 1.0395e-05 | 0.00021878 | 0.00022856 | 3.8430e-05 |
20 | 4.2432e-06 | 1.0666e-05 | 0.00027659 | 0.00028847 | 2.8444e-05 |
21 | 4.3959e-06 | 1.0934e-05 | 0.00034346 | 0.00036212 | 4.4064e-05 |
22 | 4.3726e-06 | 1.1195e-05 | 0.00042764 | 0.00042295 | 3.0909e-05 |
23 | 4.6582e-06 | 1.1428e-05 | 0.00052727 | 0.00054838 | 4.4469e-05 |
24 | 4.3361e-06 | 1.2389e-05 | 0.00057682 | 0.00070261 | 5.8946e-05 |
Output | 4.3361e-06 | 1.2389e-05 | 0.00035246 | 0.00044919 | 3.8195e-05 |
Uint8 | / | / | 0.00063184 | 0.00180263 | 7.9417e-05 |
Table 2.
Changed-Pixel Rate at Each Convolutional Layer
under Different Neural Network Architectures
Layer | Only Convolution | Add Instance Normalization | Add Instance Normalization, Tanh, ReLU | Add Instance Normalization, Tanh, ReLU, Upsampling | Add Instance Normalization, Tanh, ReLU, Upsampling, Addition
---|---|---|---|---|---
1 | 0.00072839 | 0.00074750 | 0.00073694 | 0.00074127 | 0.00074745 |
2 | 0.00117913 | 0.93041396 | 0.91892546 | 0.92170795 | 0.91850448 |
3 | 0.00243137 | 0.96561842 | 0.96409534 | 0.96422722 | 0.96407282 |
4 | 0.00631174 | 0.96986568 | 0.97557610 | 0.97584826 | 0.97581293 |
5 | 0.01193708 | 0.97537113 | 0.98299635 | 0.98296583 | 0.98332281 |
6 | 0.01933226 | 0.97870115 | 0.98714268 | 0.98786503 | 0.98477831 |
7 | 0.02838684 | 0.98109053 | 0.99043459 | 0.99069668 | 0.98913168 |
8 | 0.03889650 | 0.98290016 | 0.99254780 | 0.99277304 | 0.98871616 |
9 | 0.05076242 | 0.98432005 | 0.99441971 | 0.99459860 | 0.99208193 |
10 | 0.06402901 | 0.98540510 | 0.99552389 | 0.99568966 | 0.99092588 |
11 | 0.07865742 | 0.98634514 | 0.99653816 | 0.99668547 | 0.99372637 |
12 | 0.09417774 | 0.98710438 | 0.99728703 | 0.99738340 | 0.99259157 |
13 | 0.11072437 | 0.98780676 | 0.99783712 | 0.99790037 | 0.99486012 |
14 | 0.12852476 | 0.98841549 | 0.99825556 | 0.99834938 | 0.99371113 |
15 | 0.14742535 | 0.98893694 | 0.99863094 | 0.99865259 | 0.99564032 |
16 | 0.16713364 | 0.98940828 | 0.99888851 | 0.99892841 | 0.99451908 |
17 | 0.18778992 | 0.98981171 | 0.99909424 | 0.99914261 | 0.99622229 |
18 | 0.20929787 | 0.99021000 | 0.99929035 | 0.99933691 | 0.99514990 |
19 | 0.23106195 | 0.99053637 | 0.99941364 | 0.99944439 | 0.99661896 |
20 | 0.25345884 | 0.99085546 | 0.99953426 | 0.99956516 | 0.99554260 |
21 | 0.27664907 | 0.99112972 | 0.99962237 | 0.99964945 | 0.99702314 |
22 | 0.30036716 | 0.99141975 | 0.99969696 | 0.99970370 | 0.99591154 |
23 | 0.32382751 | 0.99160141 | 0.99976135 | 0.99978136 | 0.99729753 |
24 | 0.39915690 | 0.99427897 | 0.99980794 | 0.99985209 | 0.99826528 |
Output | 0.39915690 | 0.99427897 | 0.19661784 | 0.24746857 | 0.19319448 |
Uint8 | / | / | 0.02611816 | 0.03280396 | 0.00529622 |
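Tables 1 and 2 track how a single-pixel perturbation of the input diffuses through successive layers. The source does not reproduce the exact definitions, so the following is a plausible reading under our own assumptions: the "value rate" as a mean absolute activation difference normalized by the activation magnitude, and the "changed-pixel rate" as the fraction of activations that changed at all. The function name and the normalization choice are ours, not the paper's.

```python
import numpy as np

def pixel_change_rates(act_a, act_b, tol=0.0):
    """Compare one layer's activations for two inputs differing in a single pixel.

    Returns (value_rate, amount_rate):
      value_rate  - mean absolute difference, normalized by the larger
                    absolute activation magnitude (our assumption),
      amount_rate - fraction of activations that changed by more than tol.
    """
    a = act_a.astype(np.float64)
    b = act_b.astype(np.float64)
    diff = np.abs(a - b)
    # Guard against an all-zero activation map.
    scale = max(np.abs(a).max(), np.abs(b).max(), 1e-12)
    return diff.mean() / scale, float((diff > tol).mean())
```

Feeding both inputs through the network and calling this on each layer's output would yield per-layer curves of the same shape as Tables 1 and 2: the value rate stays small while the changed-pixel rate saturates quickly once instance normalization mixes channels.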
Table 3.
Neural Network Structure of the Encryption Network
Convolution Layer Name | Number | Convolution Kernel Size | Input Channels | Output Channels | Parameters | Total Parameters |
---|---|---|---|---|---|---
Convolution1 | 1 | | 6 | 64 | 18,880 | 18,880 |
Convolution2 | 1 | | 64 | 128 | 73,856 | 92,736 |
Convolution3 | 1 | | 128 | 256 | 295,168 | 387,904 |
Residual Blocks | 27 | | 256 | 256 | 590,080 | 16,320,064 |
Deconvolution1 | 1 | | 256 | 128 | 295,040 | 16,615,104 |
Deconvolution2 | 1 | | 128 | 64 | 73,792 | 16,688,896 |
Deconvolution3 | 1 | | 64 | 3 | 9411 | 16,698,307 |
Table 4.
Neural Network Structure of the Decryption Network
Convolution Layer Name | Number | Convolution Kernel Size | Input Channels | Output Channels | Parameters | Total Parameters |
---|---|---|---|---|---|---
Convolution1 | 1 | | 6 | 64 | 9472 | 9472 |
Convolution2 | 1 | | 64 | 128 | 73,856 | 83,328 |
Convolution3 | 1 | | 128 | 256 | 295,168 | 378,496 |
Residual blocks | 27 | | 256 | 256 | 590,080 | 16,310,656 |
Deconvolution1 | 1 | | 256 | 128 | 295,040 | 16,605,696 |
Deconvolution2 | 1 | | 128 | 64 | 73,792 | 16,679,488 |
Deconvolution3 | 1 | | 64 | 3 | 9411 | 16,688,899 |
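The per-layer parameter counts in Table 3 follow the standard formula for a k x k convolution with bias, k*k*c_in*c_out + c_out. The kernel-size column is blank in the source; the sizes below (7 for the first and last layers, 3 elsewhere) are our inference, chosen because they make every count match the table exactly. A sketch that reproduces the encryption network's totals:

```python
def conv_params(k, c_in, c_out):
    """Parameters of one k x k convolution with bias: k*k*c_in*c_out + c_out."""
    return k * k * c_in * c_out + c_out

# Encryption network of Table 3. Kernel sizes are inferred, not stated
# in the source; they are the unique values consistent with the counts.
layers = [
    ("Convolution1",    1,  conv_params(7, 6, 64)),     # 18,880
    ("Convolution2",    1,  conv_params(3, 64, 128)),   # 73,856
    ("Convolution3",    1,  conv_params(3, 128, 256)),  # 295,168
    ("Residual Blocks", 27, conv_params(3, 256, 256)),  # 590,080 each
    ("Deconvolution1",  1,  conv_params(3, 256, 128)),  # 295,040
    ("Deconvolution2",  1,  conv_params(3, 128, 64)),   # 73,792
    ("Deconvolution3",  1,  conv_params(7, 64, 3)),     # 9,411
]
total = sum(count * params for _, count, params in layers)  # 16,698,307
```

The same arithmetic does not close for the decryption network's Convolution1 (9472 parameters with 6 input channels admits no integer kernel size with bias), so the corresponding cell in Table 4 is left as given.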
Table 5.
SSIM Index between Pairs of Images Encrypted with Different Encryption Keys (Key1)
Image | Image 1 | Image 2 | Image 3 | Image 4 |
---|---|---|---|---
Image 1 | 1 | 0.005 | 0.010 | 0.001 |
Image 2 | 0.005 | 1 | 0.009 | 0.002 |
Image 3 | 0.010 | 0.009 | 1 | 0.002 |
Image 4 | 0.001 | 0.002 | 0.002 | 1 |
Table 6.
SSIM Index between Pairs of Images Decrypted with Different Decryption Keys (Key1)
Image | Image 1 | Image 2 | Image 3 | Image 4 |
---|---|---|---|---
Image 1 | 1 | 0.261 | 0.028 | 0.049 |
Image 2 | 0.261 | 1 | 0.026 | 0.025 |
Image 3 | 0.028 | 0.026 | 1 | 0.036 |
Image 4 | 0.049 | 0.025 | 0.036 | 1 |
Table 7.
Quantitative Evaluation Results of Different Methods
Metric | Cycle-GAN | Ref. [17] | Model Setting Batch Size to 2 | Ours
---|---|---|---|---
SSIM (encrypted) | 0.6581 | 0.01 | 0.0070 | 0.0073 |
PSNR (encrypted) | 13.6499 | / | 7.4737 | 7.4770 |
PSNR (decrypted) | 31.0617 | 37.43 | 33.4834 | 33.1800 |
SSIM (decrypted) | 0.9750 | 0.93 | 0.9345 | 0.9360 |
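The PSNR rows in Table 7 follow the standard definition for 8-bit images, 10*log10(peak^2 / MSE). A minimal sketch:

```python
import numpy as np

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Read this way, the low encrypted-image PSNR (about 7.5 dB) indicates the ciphertext retains almost no visible resemblance to the plaintext, while the decrypted-image PSNR above 33 dB indicates near-faithful recovery.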
Table 8.
Image Entropy Comparisons among Different Methods
Method | Image Entropy |
---|---
Original image | 6.4780 |
Cycle-GAN | 6.6521 |
Model without further diffusion | 7.7675 |
Ref. [17] | 7.95 |
Model setting batch size to 2 | 7.9973 |
Ref. [18] | 7.9986 |
Ref. [7] | 7.9995 |
Ours | 7.9972 |
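Image entropy in Table 8 is the Shannon entropy of the 8-bit gray-level histogram; an ideal ciphertext approaches the maximum of 8 bits per pixel, which is why the stronger methods cluster near 7.99. A minimal sketch, assuming a single-channel uint8 image:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits per pixel) of a uint8 image's gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```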
Table 9.
Correlation Coefficients of Two Adjacent Pixels in Ciphertext Images Obtained by Different Methods in Different Directions
Scan direction | Horizontal | Diagonal | Vertical |
---|---|---|---
Original image | 0.9733 | 0.9701 | 0.9867 |
Cycle-GAN | 0.5112 | 0.3881 | 0.4575 |
Model without further diffusion | −0.0050 | 0.0011 | −0.0022 |
Model setting batch size to 2 | | | |
Ref. [18] | 0.0386 | 0.2259 | 0.1158 |
Ref. [7] | | | |
Ours | | | |
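The coefficients in Table 9 correlate each pixel with its neighbor in a given scan direction; values near 1 (the original image) mean neighbors are highly predictable, while values near 0 mean the ciphertext has destroyed that local structure. A minimal NumPy sketch (the function and argument names are ours):

```python
import numpy as np

def adjacent_correlation(img, direction="horizontal"):
    """Correlation coefficient of adjacent pixel pairs in one scan direction."""
    x = img.astype(np.float64)
    if direction == "horizontal":
        a, b = x[:, :-1], x[:, 1:]
    elif direction == "vertical":
        a, b = x[:-1, :], x[1:, :]
    else:  # treat anything else as the main diagonal
        a, b = x[:-1, :-1], x[1:, 1:]
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```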
Table 10.
NPCR and UACI Values of the Encrypted Images Obtained by Different Methods
Method | NPCR | UACI |
---|---|---
Cycle-GAN | 23.04% | 0.12% |
Model without further diffusion | 1.5% | 0.006% |
Ref. [17] | 94.21% | / |
Model setting batch size to 2 | 98.16% | 33.05% |
Ref. [18] | 99.59% | 23.19% |
Ref. [7] | 99.61% | 33.48% |
Ours | 99.64% | 33.49% |
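NPCR (number of pixels change rate) and UACI (unified average changing intensity) compare two ciphertexts whose plaintexts differ in one pixel; for an ideal 8-bit cipher the expected values are about 99.61% and 33.46%, which the best rows of Table 10 approach. A minimal sketch of the standard definitions:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of pixel positions that differ between ciphertexts.
    UACI: mean absolute intensity difference relative to 255, as a percentage."""
    a = c1.astype(np.float64)
    b = c2.astype(np.float64)
    npcr = 100.0 * np.mean(a != b)
    uaci = 100.0 * np.mean(np.abs(a - b) / 255.0)
    return npcr, uaci
```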
Table 11.
Quantitative Evaluation Results of Noise and Data Loss Attacks
Noise/loss data | PSNR | SSIM |
---|---|---
5% salt and pepper noise | 21.2763 | 0.6300 |
10% salt and pepper noise | 18.6249 | 0.4576 |
Data losses in all three color planes | 15.6588 | 0.7716 |
Two data losses in all three color planes | 14.9911 | 0.6518 |
Data losses in all three color planes | 14.8815 | 0.4941 |