Geometric invariant blind image watermarking by invariant Tchebichef moments


Abstract

Many proposed image watermarking techniques are sensitive to geometric distortions such as rotation, scaling and translation. Geometric distortions, even by a slight amount, can disable the watermark decoder. In this paper, a geometric invariant blind image watermarking scheme is designed by utilizing invariant Tchebichef moments. The detailed construction of the invariant Tchebichef moments is described. The watermark is generated independently of the original image and inserted into the perceptually significant invariant Tchebichef moments of the original image. The watermark decoder extracts the watermark blindly utilizing Independent Component Analysis (ICA). The computational aspects of the proposed watermarking are also discussed in detail. Experimental results demonstrate that the proposed watermarking technique is robust against geometric distortions and other attacks performed by the popular watermark benchmark Stirmark, such as filtering, image compression, additive noise, and random geometric distortions.

©2007 Optical Society of America

1. Introduction

Digital watermarking is an effective means of resolving copyright protection and information security problems by embedding watermark information into the protected digital media. In order for a watermark to be useful, it must be robust against image processing operations and various possible attacks by pirates [1].

In watermarking applications, robustness against common signal processing and geometric attacks is essential. Geometric distortions inevitably damage the synchronization that is crucial for correct watermark detection: even a minor geometric manipulation of the watermarked image can dramatically reduce the detector's ability to find the watermark. It is well known that many existing image watermarking algorithms are vulnerable to geometric attacks [2]. Despite the efforts and progress made in this direction, robustness against geometric distortions has not been well addressed and remains an open problem. The effect of geometric attacks can be better understood by making an analogy between a watermarking system and a communication system: the principle of synchronization between encoder and decoder in a communication system holds in the watermarking system as well. Synchronization is a serious problem for any watermarking scheme, especially for blind watermarking techniques.

Some research has been done to deal with the watermark's vulnerability to geometric distortions. Various methods have been proposed [3,4,5], which can be summarized and categorized as follows. The first category is based on distortion inversion. A registration pattern is inserted into the host signal along with the watermark [6], or the watermark is designed with a special structure [7], so that in the watermark detection stage the geometric distortions involved can be identified and measured; the distortions can then be removed by an inversion process. The nature of this approach is the incorporation of two watermarks, one for the data payload and the other for distortion detection. The second category is based on image normalization [8]. An image can be normalized to a certain position, orientation and size that are invariant to image translation, rotation and scaling. The host image is normalized prior to watermark insertion, and the watermarked image is de-normalized back to its original appearance after watermark insertion. At the watermark detector, the test image must undergo the same normalization process before watermark detection. A notable disadvantage of this approach is that the image undergoes transformations twice in the watermarking process, which inevitably causes extra quality degradation on top of the watermark-induced distortion. The third category is based on the invariance properties of some image features. Different image features have different invariance properties; image features that have been used for invariant watermarking include Fourier-Mellin transform domain coefficients [9], geometric moment invariants and Zernike moments [10].

Moments, owing to their ability to represent global features, have found extensive application in the field of image processing. Geometric moments of digital images are used to capture global features for applications in pattern recognition, image analysis and object classification. Moments of orthogonal basis functions, such as the Legendre and Zernike polynomials introduced by Teague [11], can be used to represent an image by a set of mutually independent descriptors with a minimal amount of information redundancy [12]. However, these moments present several problems, the most important of which is their inaccuracy due to quantization errors induced by the discrete approximation of the continuous integrals and of the coordinate space. Additionally, these approximations impose a very large computational burden. Tchebichef moments were proposed by Mukundan et al. [13] to address these problems. They are discrete orthogonal moments and present a number of advantages over moments of continuous orthogonal bases [14]. Mukundan's study showed that the implementation of Tchebichef moments does not involve any numerical approximation, since the basis set is orthogonal in the discrete domain of the image coordinate space. This property makes Tchebichef moments superior to the conventional continuous orthogonal moments in terms of preserving the analytical properties needed to keep information redundancy in a moment set minimal.

Recently, blind source separation by ICA has received much attention because of its potential applications in signal processing, such as speech recognition systems, medical signal processing and telecommunications. ICA is a general-purpose statistical technique which, given a set of observed data, extracts a linear transformation such that the resulting variables are as statistically independent as possible.

In this paper, a novel geometric invariant blind image watermarking scheme is proposed with the introduction of invariant Tchebichef moments. The construction of the invariant Tchebichef moments is described in detail. The watermark is generated randomly, independently of the original image, and embedded by modifying perceptually significant invariant Tchebichef moments of the original image. ICA is utilized by the watermark detector to extract the watermark blindly. The accuracy of the watermark detector depends on the secret key used in the embedding process and on the statistical independence between the original image and the watermark. The computational aspects of the proposed watermarking are also discussed in detail. Experimental results demonstrate that the proposed watermarking is robust against various attacks performed by Stirmark.

2. Methodology of Invariant Tchebichef Moments Construction

2.1 The Definition of Tchebichef Moments

The discrete Tchebichef moments with order p + q of image intensity function f(x,y) are defined as [13]:

$$T_{pq}=\frac{1}{\rho(p,N)\,\rho(q,N)}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}t_{p}(x)\,t_{q}(y)\,f(x,y),\qquad p,q=0,1,2,\ldots,N-1 \tag{1}$$

where $\rho(n,N)=\sum_{x=0}^{N-1}\{t_{n}(x)\}^{2}=\frac{N(N^{2}-1)(N^{2}-2^{2})\cdots(N^{2}-n^{2})}{2n+1}=(2n)!\binom{N+n}{2n+1}$.

The discrete Tchebichef polynomials are expressed as:

$$t_{n}(x)=n!\sum_{k=0}^{n}(-1)^{n-k}\binom{N-1-k}{n-k}\binom{n+k}{n}\binom{x}{k} \tag{2}$$

The inverse moment transform can be defined as:

$$f(x,y)=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1}T_{pq}\,t_{p}(x)\,t_{q}(y) \tag{3}$$
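For concreteness, the following Python sketch (our own illustrative code, not from the paper; helper names such as tcheb_poly are assumptions) evaluates Eqs. (1)-(3) by their direct sums. The direct evaluation is only practical for the low orders used later in the watermarking scheme, since the polynomial values and norms grow rapidly with the order.

```python
# Minimal sketch of Eqs. (1)-(3); illustrative only, practical for low orders.
import math
import numpy as np

def tcheb_poly(n, x, N):
    """Discrete Tchebichef polynomial t_n(x) of Eq. (2)."""
    return math.factorial(n) * sum(
        (-1) ** (n - k) * math.comb(N - 1 - k, n - k) * math.comb(n + k, n) * math.comb(x, k)
        for k in range(n + 1))

def tcheb_norm(n, N):
    """Squared norm rho(n, N) = (2n)! C(N + n, 2n + 1)."""
    return math.factorial(2 * n) * math.comb(N + n, 2 * n + 1)

def tcheb_moments(f, max_order):
    """Forward transform of Eq. (1) up to max_order; f is indexed as f[x, y]."""
    N = f.shape[0]
    t = np.array([[tcheb_poly(n, x, N) for x in range(N)] for n in range(max_order + 1)], float)
    rho = np.array([tcheb_norm(n, N) for n in range(max_order + 1)], float)
    return (t @ f @ t.T) / np.outer(rho, rho)

def tcheb_reconstruct(T, N):
    """Inverse transform of Eq. (3); a low-order approximation of f when T is truncated."""
    k = T.shape[0]
    t = np.array([[tcheb_poly(n, x, N) for x in range(N)] for n in range(k)], float)
    return t.T @ T @ t
```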

2.2 Invariant Tchebichef Moments Construction

In order to construct the geometric invariant Tchebichef moments, the set of weighted Tchebichef polynomials $\{\tilde{t}_n(x)\}$ is written as:

$$\tilde{t}_{n}(x)=\frac{t_{n}(x)}{\beta(n,N)}=\sum_{k=0}^{n}c_{k,n,N}\,x^{k} \tag{4}$$

where $c_{k,n,N}=(-1)^{n-k}\,\frac{n!}{k!}\binom{N-1-k}{n-k}\binom{n+k}{n}$.

The explicit expressions of the weighted Tchebichef polynomials up to order 3 are as follows:

$$\begin{aligned}
\tilde{t}_{0}(x,N)&=1\\
\tilde{t}_{1}(x,N)&=\frac{1-N}{N}+\frac{2}{N}x\\
\tilde{t}_{2}(x,N)&=\frac{N^{2}-3N+2}{N^{2}}+\frac{6(1-N)}{N^{2}}x+\frac{6}{N^{2}}x^{2}\\
\tilde{t}_{3}(x,N)&=\frac{-N^{3}+6N^{2}-11N+6}{N^{3}}+\frac{12N^{2}-30N+22}{N^{3}}x+\frac{30(1-N)}{N^{3}}x^{2}+\frac{20}{N^{3}}x^{3}
\end{aligned} \tag{5}$$

The explicit expressions of the coefficients $\{c_{k,n,N}\}$ up to order 3 are as follows:

$$\begin{aligned}
&c_{0,0,N}=1\\
&c_{0,1,N}=\frac{1-N}{N},\qquad c_{1,1,N}=\frac{2}{N}\\
&c_{0,2,N}=\frac{N^{2}-3N+2}{N^{2}},\qquad c_{1,2,N}=\frac{6(1-N)}{N^{2}},\qquad c_{2,2,N}=\frac{6}{N^{2}}\\
&c_{0,3,N}=\frac{-N^{3}+6N^{2}-11N+6}{N^{3}},\qquad c_{1,3,N}=\frac{12N^{2}-30N+22}{N^{3}},\qquad c_{2,3,N}=\frac{30(1-N)}{N^{3}},\qquad c_{3,3,N}=\frac{20}{N^{3}}
\end{aligned} \tag{6}$$

According to Eq.(1) and Eq.(4), the Tchebichef moments can be written as:

$$T_{pq}=\frac{1}{\tilde{\rho}(p,N)\,\tilde{\rho}(q,N)}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\left(\sum_{i=0}^{p}c_{i,p,N}\,x^{i}\right)\left(\sum_{j=0}^{q}c_{j,q,N}\,y^{j}\right)f(x,y) \tag{7}$$

where $\tilde{\rho}(n,N)=\frac{\rho(n,N)}{\beta(n,N)^{2}}=\frac{N\left(1-\frac{1}{N^{2}}\right)\left(1-\frac{2^{2}}{N^{2}}\right)\cdots\left(1-\frac{n^{2}}{N^{2}}\right)}{2n+1}$, and $\beta(n,N)$ is a suitable constant independent of $x$; the simplest choice is $\beta(n,N)=N^{n}$.

If the geometric moments of order p + q of an image f(x,y) are expressed as [15]:

$$m_{pq}=\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}x^{p}y^{q}f(x,y) \tag{8}$$

Then the Tchebichef moments can be simplified as:

$$T_{pq}=\frac{1}{\tilde{\rho}(p,N)\,\tilde{\rho}(q,N)}\sum_{i=0}^{p}\sum_{j=0}^{q}c_{i,p,N}\,c_{j,q,N}\,m_{ij} \tag{9}$$

Note that Eq. (9) transforms the non-orthogonal geometric moments into the orthogonal Tchebichef moments. It can be seen that Tchebichef moments depend on geometric moments up to the same order. The explicit expressions of the Tchebichef moments in terms of geometric moments up to order 2 (for β(n,N) = Nⁿ) are as follows:

$$\begin{aligned}
T_{00}&=\frac{m_{00}}{N^{2}}\\
T_{10}&=\frac{6m_{10}+3(1-N)m_{00}}{N(N^{2}-1)},\qquad T_{01}=\frac{6m_{01}+3(1-N)m_{00}}{N(N^{2}-1)}\\
T_{20}&=\frac{30m_{20}+30(1-N)m_{10}+5(1-N)(2-N)m_{00}}{(N^{2}-1)(N^{2}-2^{2})},\qquad T_{02}=\frac{30m_{02}+30(1-N)m_{01}+5(1-N)(2-N)m_{00}}{(N^{2}-1)(N^{2}-2^{2})}\\
T_{11}&=\frac{36m_{11}+18(1-N)(m_{10}+m_{01})+9(1-N)^{2}m_{00}}{(N^{2}-1)^{2}}
\end{aligned} \tag{10}$$

In order to construct geometric invariant Tchebichef moments, the center of the image f(x,y) is first translated to its centroid $(\bar{x},\bar{y})$ to obtain $f(x+\bar{x},\,y+\bar{y})$, where $\bar{x}=m_{10}/m_{00}$ and $\bar{y}=m_{01}/m_{00}$. The translated image is then rotated by an angle θ to obtain $f(\bar{x}+x\cos\theta-y\sin\theta,\;\bar{y}+y\cos\theta-x\sin\theta)$, where $\theta=\frac{1}{2}\tan^{-1}\frac{2\mu_{11}}{\mu_{20}-\mu_{02}}$ and $\mu_{nm}$ is the central moment defined as [15]:

$$\mu_{nm}=\int\!\!\int(x-\bar{x})^{n}(y-\bar{y})^{m}f(x,y)\,dx\,dy \tag{11}$$

The value of θ obtained in this way is limited to −45° ≤ θ ≤ 45° in the experiments; for rotation angles in the range of 0° to 360°, a modification can be applied to bring θ back into −45° ≤ θ ≤ 45°. Finally, the image is scaled by a factor α to obtain the normalized image g(x,y), where $\alpha=\left(\frac{N^{2}}{2m_{00}}\right)^{1/2}$. Notice that g(x,y) obtained in this way does not fall inside the domain [0,N−1]×[0,N−1] required by the Tchebichef moments, so the center of the normalized image g(x,y) is translated to $\left(\frac{N}{2},\frac{N}{2}\right)$. The relationship between g(x,y) and f(x,y) is therefore:

$$g\!\left(x+\frac{N}{2},\,y+\frac{N}{2}\right)=f\!\left(\bar{x}+\frac{x\cos\theta-y\sin\theta}{\alpha},\;\bar{y}+\frac{y\cos\theta-x\sin\theta}{\alpha}\right) \tag{12}$$

The geometric moments of the normalized image g(x,y) are computed as:

$$\bar{m}_{nm}=\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\alpha^{2}f(x,y)\Big\{\alpha\big[(x-\bar{x})\cos\theta+(y-\bar{y})\sin\theta\big]+\frac{N}{2}\Big\}^{n}\times\Big\{\alpha\big[(y-\bar{y})\cos\theta-(x-\bar{x})\sin\theta\big]+\frac{N}{2}\Big\}^{m} \tag{13}$$

Note that the geometric moments $\{\bar{m}_{nm}\}$ are rotation, scale and translation invariant [15]. A new set of Tchebichef moments, which inherits this invariance, can be formed by replacing the regular geometric moments $\{m_{nm}\}$ in Eq. (9) with their invariant counterparts $\{\bar{m}_{nm}\}$. The invariant Tchebichef moments $\bar{T}_{pq}$ can thus be expressed as:

$$\bar{T}_{pq}=\frac{1}{\tilde{\rho}(p,N)\,\tilde{\rho}(q,N)}\sum_{i=0}^{p}\sum_{j=0}^{q}c_{i,p,N}\,c_{j,q,N}\,\bar{m}_{ij} \tag{14}$$

Using Eq. (13), the invariant moments can be written directly in terms of f(x,y):

$$\bar{T}_{pq}=\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\frac{N^{2}}{2m_{00}}f(x,y)\Big\{\big[(x-\bar{x})\cos\theta+(y-\bar{y})\sin\theta\big]\sqrt{\tfrac{N^{2}}{2m_{00}}}+\frac{N}{2}\Big\}^{p}\Big\{\big[(y-\bar{y})\cos\theta-(x-\bar{x})\sin\theta\big]\sqrt{\tfrac{N^{2}}{2m_{00}}}+\frac{N}{2}\Big\}^{q} \tag{15}$$
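As a sketch of the normalization just described, the following Python fragment (our own code; the name invariant_moments and the use of arctan2 in place of the restricted arctan are assumptions) computes the centroid, orientation angle and scale factor, and then the invariant moments of Eq. (15):

```python
# Hedged sketch of Eq. (15); f is indexed as f[x, y].
import numpy as np

def invariant_moments(f, max_order):
    N = f.shape[0]
    x, y = np.mgrid[0:N, 0:N]                    # x along the first axis, y along the second
    m00 = f.sum()
    xbar, ybar = (x * f).sum() / m00, (y * f).sum() / m00
    mu11 = ((x - xbar) * (y - ybar) * f).sum()
    mu20 = ((x - xbar) ** 2 * f).sum()
    mu02 = ((y - ybar) ** 2 * f).sum()
    # The paper restricts theta to [-45, 45] degrees; arctan2 is used here to avoid division by zero.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    alpha = np.sqrt(N ** 2 / (2 * m00))          # scale factor of the normalization
    u = alpha * ((x - xbar) * np.cos(theta) + (y - ybar) * np.sin(theta)) + N / 2
    v = alpha * ((y - ybar) * np.cos(theta) - (x - xbar) * np.sin(theta)) + N / 2
    T = np.empty((max_order + 1, max_order + 1))
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            T[p, q] = (alpha ** 2 * f * u ** p * v ** q).sum()
    return T
```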

3. Methodology of Geometric Invariant Watermarking

3.1 Watermark Embedding

In this section, the proposed geometric invariant watermarking scheme based on invariant Tchebichef moments is described. The watermark is imperceptibly embedded into the invariant Tchebichef moments of the original image, and the embedding intensity is determined by a Just Noticeable Difference model that incorporates the characteristics of the Human Visual System. The embedding steps of the proposed watermarking system are as follows:

Step 1: Create and preprocess the watermark. The watermark is generated independently of the original image and has the same size as the original image. Preprocessing the watermark before embedding (e.g., permuting it, cf. Fig. 1(c)) enhances its security.

Step 2: Embed the watermark. A set of low-order invariant Tchebichef moments $T_o=\{T^{o}_{m_1,n_1},\ldots,T^{o}_{m_k,n_k}\}$ and a set of Tchebichef moments $T_w=\{T^{w}_{m_1,n_1},\ldots,T^{w}_{m_k,n_k}\}$ are constructed from the original image f(x,y) and the watermark W, respectively. The watermark is embedded by adjusting the invariant Tchebichef moments of f(x,y) according to:

$$\hat{T}_{m_{i},n_{i}}=T^{o}_{m_{i},n_{i}}+a_{m_{i},n_{i}}\,T^{w}_{m_{i},n_{i}} \tag{16}$$

A set of adjusted invariant Tchebichef moments $\hat{T}=\{\hat{T}_{m_1,n_1},\ldots,\hat{T}_{m_k,n_k}\}$ is thus obtained. The parameter a is the embedding intensity of the watermark, chosen to incorporate the characteristics of the human visual system.

Step 3: Perform the inverse Tchebichef transform to retrieve the watermarked image.
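A minimal sketch of the embedding rule of Eq. (16) is given below; the selected moment orders, the intensity array a and the helper names are illustrative assumptions, and the moment routines refer to the earlier sketches.

```python
# Illustrative embedding per Eq. (16): T_hat = T_o + a * T_w on selected low-order moments.
import numpy as np

def embed_moments(T_orig, T_wm, orders, a):
    T_hat = T_orig.copy()
    for (m, n) in orders:                      # low-order moments carrying the payload
        T_hat[m, n] = T_orig[m, n] + a[m, n] * T_wm[m, n]
    return T_hat

# Example usage (shapes only; parameter values are arbitrary):
# T_o   = invariant_moments(image, max_order=3)
# T_w   = tcheb_moments(watermark, max_order=3)
# a     = 0.05 * np.ones_like(T_o)             # embedding intensity (JND-based in the paper)
# T_hat = embed_moments(T_o, T_w, orders=[(0, 1), (1, 0), (1, 1)], a=a)
```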

3.2 Watermark Detection

The ICA process is the core of the watermark detector and is accomplished by the FastICA algorithm [16]. Assume that n observed linear mixtures $x_1,\ldots,x_n$ of n independent components $s_1,\ldots,s_n$ can be expressed as:

$$x_{j}=a_{j1}s_{1}+a_{j2}s_{2}+\cdots+a_{jn}s_{n},\qquad j=1,\ldots,n \tag{17}$$

Without loss of generality, both the mixture variables and the independent components can be assumed to be zero-mean. The model can then be written in matrix form as X = AS, where S is the original source vector and A is the mixing matrix; both S and A are unknown. X represents all the observed data, and both A and S must be estimated from X. If the separating matrix L is known, the source signals can be estimated simply as S = LX. It is helpful to perform preprocessing such as centering and whitening to simplify the ICA process. The proposed watermark detector proceeds as follows:

Step 1: Preprocess the test image by centering and whitening. The observed variable x is centered by subtracting the mean vector m = E{x}, which makes x a zero-mean variable. This preprocessing is designed solely to simplify the ICA algorithm: after the mixing matrix A has been estimated from the centered data, the estimation can be completed by adding the mean vector of the original source signal back to the centered estimates of the source data. The other preprocessing step is to whiten the observed variables. Whitening means transforming the variable x linearly so that the new variable x̃ is white, i.e., its components are uncorrelated and their variances equal unity; in other words, the covariance matrix of x̃ equals the identity matrix, E[x̃x̃ᵀ] = I. Whitening can be computed from the eigenvalue decomposition of the covariance matrix E[xxᵀ] = EDEᵀ, where E is the orthogonal matrix of eigenvectors of E[xxᵀ] and D = diag(d₁, ⋯, dₙ) is the diagonal matrix of its eigenvalues. Note that E[xxᵀ] can be estimated in a standard way from the available samples of x.
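The centering and whitening of Step 1 can be sketched as follows (our own code; x is assumed to be a dimensions-by-samples data matrix):

```python
# Sketch of centering and whitening via eigen-decomposition of the covariance matrix.
import numpy as np

def center_and_whiten(x):
    x = x - x.mean(axis=1, keepdims=True)            # centering: subtract the mean vector
    d, E = np.linalg.eigh(np.cov(x))                  # E[x x^T] = E D E^T
    d = np.clip(d, 1e-12, None)                       # guard against near-zero eigenvalues
    whitener = E @ np.diag(1.0 / np.sqrt(d)) @ E.T    # E D^{-1/2} E^T
    return whitener @ x, whitener                     # whitened data has identity covariance
```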

Step 2: Perform ICA on the centered and whitened signal, i.e., find the separating matrix L:

  • 1. Choose an initial (e.g., random) weight vector L. Let L⁺ = E{yG(Lᵀy)} − E{G′(Lᵀy)}L and L = L⁺/∥L⁺∥, where E(·) is the mean function and G(·) is a non-linear function; the following choices of G(·) have proved very useful: $G_1(u)=\tanh(a_1 u)$, $G_2(u)=u\exp(-u^2/2)$. (A code sketch of this iteration is given after the list.)
  • 2. If the difference between successive iterates is less than the threshold, i.e., ∥L⁺ − L∥ < ε, the process is considered converged and the cycle is terminated; otherwise, return to 1 and iterate until convergence. The threshold ε can be defined by the user; ε = 10⁻⁶ is used in our experiments. If the result has still not converged after 3000 cycles, the process is forced to terminate and it is concluded that there is no independent component in the test image.

    If there are multiple watermarks in the test image, the extracted watermark must be subtracted before extracting the next one.
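The one-unit FastICA iteration described in the list above can be sketched as follows (our reading of [16] with the tanh nonlinearity G₁, not the authors' exact detector code); a sign-flip check is added because FastICA fixed points are defined only up to sign.

```python
# One-unit FastICA iteration on whitened data y (dimensions x samples).
import numpy as np

def fastica_one_unit(y, eps=1e-6, max_iter=3000, seed=0):
    rng = np.random.default_rng(seed)
    L = rng.standard_normal(y.shape[0])
    L /= np.linalg.norm(L)
    for _ in range(max_iter):
        u = L @ y                                          # projections L^T y
        g, g_prime = np.tanh(u), 1.0 - np.tanh(u) ** 2
        L_new = (y * g).mean(axis=1) - g_prime.mean() * L  # E{y G(L^T y)} - E{G'(L^T y)} L
        L_new /= np.linalg.norm(L_new)
        if min(np.linalg.norm(L_new - L), np.linalg.norm(L_new + L)) < eps:
            return L_new                                   # converged (up to sign)
        L = L_new
    return None                                            # not converged: no component found
```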

Step 3: Recover the watermark using the secret key that was used in the watermark embedding process.

4. Computation Aspects of Watermarking System

In this section, the computational aspects of the proposed watermarking system are discussed. In the watermark embedding process, the Tchebichef moments must be computed; since the invariant Tchebichef moments are the Tchebichef moments of the normalized image, most of the computation time is spent on Tchebichef moment computation. During watermark detection, ICA is performed with the fast FastICA algorithm.

The symmetry property can be used to considerably reduce the time required for computing the Tchebichef moments. The weighted Tchebichef polynomials satisfy the symmetry relation:

$$\tilde{t}_{n}(N-1-x)=(-1)^{n}\,\tilde{t}_{n}(x) \tag{18}$$

This relation suggests subdividing the domain of an N×N image (with N even) into four equal parts and computing the polynomials only in the first quadrant, where $0\le x,y\le\frac{N}{2}-1$. If N is odd, the image can be zero-padded to an even size. With the help of Eq. (18), the expression for the Tchebichef moments can be modified as:

$$T_{pq}=\frac{1}{\tilde{\rho}(p,N)\,\tilde{\rho}(q,N)}\sum_{x=0}^{\frac{N}{2}-1}\sum_{y=0}^{\frac{N}{2}-1}\tilde{t}_{p}(x)\,\tilde{t}_{q}(y)\Big\{f(x,y)+(-1)^{p}f(N-1-x,y)+(-1)^{q}f(x,N-1-y)+(-1)^{p+q}f(N-1-x,N-1-y)\Big\} \tag{19}$$

And the reconstruction formula can be modified to:

$$f(x,y)=\begin{cases}
\displaystyle\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}T_{mn}\,\tilde{t}_{m}(x)\,\tilde{t}_{n}(y), & x,y<\frac{N}{2}\\[6pt]
\displaystyle\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}(-1)^{m}\,T_{mn}\,\tilde{t}_{m}(N-1-x)\,\tilde{t}_{n}(y), & x\ge\frac{N}{2},\;y<\frac{N}{2}\\[6pt]
\displaystyle\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}(-1)^{n}\,T_{mn}\,\tilde{t}_{m}(x)\,\tilde{t}_{n}(N-1-y), & x<\frac{N}{2},\;y\ge\frac{N}{2}\\[6pt]
\displaystyle\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}(-1)^{m+n}\,T_{mn}\,\tilde{t}_{m}(N-1-x)\,\tilde{t}_{n}(N-1-y), & x,y\ge\frac{N}{2}
\end{cases} \tag{20}$$
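The four-fold symmetry of Eqs. (18)-(19) amounts to folding the image into one quadrant with parity-dependent signs before summing; a small sketch (our own code, N even, image indexed f[x, y]) is given below. The moment $T_{pq}$ is then obtained by summing $\tilde{t}_p(x)\tilde{t}_q(y)$ against this folded quadrant and dividing by $\tilde{\rho}(p,N)\tilde{\rho}(q,N)$, as in Eq. (19).

```python
# Fold an N x N image into its first quadrant per Eq. (19); only four distinct
# folds exist, one per parity of (p, q), so the folding cost is negligible.
import numpy as np

def folded_quadrant(f, p, q):
    h = f.shape[0] // 2
    A = f[:h, :h]                                    # f(x, y)
    B = np.flip(f[h:, :h], axis=0)                   # f(N-1-x, y)
    C = np.flip(f[:h, h:], axis=1)                   # f(x, N-1-y)
    D = np.flip(np.flip(f[h:, h:], axis=0), axis=1)  # f(N-1-x, N-1-y)
    return A + (-1) ** p * B + (-1) ** q * C + (-1) ** (p + q) * D
```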

The matrix-form representation is very useful, especially for fast computation and implementation in MATLAB. In matrix form the Tchebichef moments can be computed as:

$$\mathbf{T}=\mathbf{C}\,\mathbf{F}\,\mathbf{C}^{T} \tag{21}$$

where $\mathbf{T}=\{T_{ij}\}_{i,j=0}^{N-1}$ is the Tchebichef moment matrix, $\mathbf{F}=\{f_{ij}\}_{i,j=0}^{N-1}$ is the image matrix, and $\mathbf{C}=\{\tilde{t}_{i}(j)\}_{i,j=0}^{N-1}$ is the Tchebichef polynomial matrix.

The inverse moment transform is:

$$\mathbf{F}=\mathbf{C}^{T}\,\mathbf{T}\,\mathbf{C} \tag{22}$$
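The document mentions a MATLAB implementation; an equivalent NumPy sketch of the matrix form of Eqs. (21)-(22) is given below, assuming C holds the orthonormalized polynomial values (i.e., β(n,N)=√ρ(n,N)) so that C is orthogonal and Eq. (22) is the exact inverse.

```python
# Matrix form of the moment transform; C is the (orthonormal) polynomial matrix.
import numpy as np

def forward_moments(C, F):
    """Eq. (21): T = C F C^T."""
    return C @ F @ C.T

def inverse_moments(C, T):
    """Eq. (22): F = C^T T C (exact when C is orthogonal)."""
    return C.T @ T @ C
```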

Setting $\beta(n,N)=\sqrt{\rho(n,N)}$ further simplifies the computation. The direct calculation of a two-dimensional Tchebichef moment of order p + q from Eq. (1) requires $\frac{1}{6}N(p+q+2)(p+q+1)\big(4(p+q)+3N-9\big)$ multiplications and $\frac{1}{6}(p+q+2)(p+q+1)\big(2N(p+q)+3N^{2}-6N-9\big)$ additions. Since the term $\tilde{\rho}(n,N)$ needs to be calculated at most once per moment, a Tchebichef moment of order p + q then requires $\frac{1}{24}(p+q+2)(p+q+1)\big((p+q)^{2}+7(p+q)+24\big)$ multiplications and $\frac{1}{3}(p+q)^{3}+(p+q)^{2}+\frac{2}{3}(p+q)$ additions.

5. Experimental Results

This section presents the experimental results for the proposed watermarking scheme. In our experiments, all the attacks are performed by the popular watermark benchmark Stirmark. The constraint is set to $p_F \le 10^{-6}$, and experiments have been carried out on many images, including the 256×256 Lena image. The watermark is produced randomly with the same size as the original image, so that it is independent of the original image. Experimental results show that the proposed watermarking technique based on invariant Tchebichef moments can extract the watermark with good robustness without using the original image. Fig. 1 shows the original image, the watermark and the watermarked image. The Normalized Correlation (NC) is used to quantify the similarity between the original watermark w and the extracted watermark w′, and the Peak Signal-to-Noise Ratio (PSNR) is used to express the difference between the watermarked image z(x,y) and the original image f(x,y). The higher the NC, the more similar the extracted watermark is to the original watermark. The definitions of PSNR and NC are given below:

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{f_{\max}^{2}}{\frac{1}{N^{2}}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\big(f(x,y)-z(x,y)\big)^{2}}\right) \tag{23}$$
$$\mathrm{NC}(w,w')=\frac{\sum_{i}w(i)\,w'(i)}{\sqrt{\sum_{i}w^{2}(i)\,\sum_{i}w'^{2}(i)}} \tag{24}$$
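Both metrics are straightforward to compute; a NumPy transcription (our own helper names) is:

```python
# PSNR of Eq. (23) and normalized correlation of Eq. (24).
import numpy as np

def psnr(f, z):
    mse = np.mean((f.astype(float) - z.astype(float)) ** 2)
    return 10 * np.log10(float(f.max()) ** 2 / mse)

def nc(w, w_prime):
    return np.sum(w * w_prime) / np.sqrt(np.sum(w ** 2) * np.sum(w_prime ** 2))
```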

Fig. 1. Original image, watermark and watermarked image. (a)original image (b)watermark (c)permuted watermark (d)watermarked image


5.1 Robustness against additive noise

Generally, the lower-order Tchebichef moments capture the low-spatial-frequency features of an image, while the higher-order moments capture the high-frequency features. Since additive noise is concentrated in the high-frequency content of an image, reconstructing an image from just its lower-order moments can remove much of the additive noise.

The model of an image f(x,y) degraded by additive random noise is given by:

$$\hat{f}(x,y)=f(x,y)+v(x,y) \tag{25}$$

where v(x,y) represents signal-independent additive random noise uncorrelated with the image. Examples of sources of additive random noise include electronic circuit noise, amplitude quantization noise and faulty switching during the imaging process. In the experiments, Gaussian, salt-and-pepper and speckle noise are added to test the robustness of the proposed watermarking against additive noise. Gaussian noise follows the distribution:

$$p(V=v)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(v-\mu')^{2}}{2\sigma^{2}}\right) \tag{26}$$

where μ′ and σ² are the mean and variance, respectively. Salt-and-pepper noise corrupts image pixels with impulsive values. The model of speckle noise is given as:

$$v(x,y)=\lambda\,f(x,y) \tag{27}$$

where λ is uniformly distributed random noise with mean μ′ and variance σ². The experimental results listed in Table 1 show that even when the watermarked image is degraded by Gaussian, salt-and-pepper or speckle noise, the detector still obtains a high NC.
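The three noise attacks can be reproduced with a few lines of NumPy; the sketch below follows Eqs. (25)-(27), with parameter values chosen arbitrarily for illustration (Stirmark itself was used for the paper's experiments).

```python
# Additive noise models used in the robustness test; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(f, mu=0.0, sigma=10.0):
    return f + rng.normal(mu, sigma, f.shape)       # Eq. (26) noise added per Eq. (25)

def add_salt_and_pepper(f, density=0.02):
    g, r = f.copy(), rng.random(f.shape)
    g[r < density / 2] = 0                          # pepper
    g[r > 1 - density / 2] = f.max()                # salt
    return g

def add_speckle(f, mu=0.0, sigma=0.1):
    half = np.sqrt(3) * sigma                       # uniform lambda with mean mu, variance sigma^2
    lam = rng.uniform(mu - half, mu + half, f.shape)
    return f + lam * f                              # v = lambda * f, Eq. (27)
```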


Table 1. Robust to additive noise

5.2 Robustness against JPEG compression

JPEG compression causes the high-frequency components of an image to be attenuated. Tchebichef moments belong to the class of discrete orthogonal moments; therefore, their implementation does not involve any numerical approximation. Moreover, Tchebichef polynomials do not require coordinate space transformations. Since the image can be reconstructed from a partial set of lower-order Tchebichef moments, the proposed watermarking technique has good robustness against JPEG compression. Experiments were performed to examine the robustness of the proposed watermarking technique to the JPEG compression produced by Stirmark with quality factor Q ranging from 90 down to 40. The experimental results listed in Table 2 show good robustness against JPEG compression.


Table 2. Robust to JPEG compression produced by Stirmark

5.3 Robustness against geometric distortions

Experiments were also performed to test the robustness of the proposed watermarking against geometric attacks performed by Stirmark. Fig. 2(a) shows the NC under scaling attacks as a function of the scaling factor. Fig. 2(b) gives the NC under rotation attacks as a function of the rotation angle.


Fig. 2. Experimental results of robust against geometric distortions. (a)NC due to scaling attack (b) NC due to rotation attacks


5.4 Robustness against other attacks performed by Stirmark

Experiments were also performed to test the robustness of the proposed watermarking against other attacks performed by Stirmark. Fig. 3(a) shows the NC under low-pass filtering as a function of cut-off frequency; the number on the abscissa is the multiple of the maximum frequency of the host image. Fig. 3(b) shows the NC under median filtering as a function of filter length; the number on the abscissa is the length of the median filter. Table 3 lists the NC between the original watermark and the watermark extracted from test images attacked by Stirmark.


Fig. 3. Experimental results of robustness against filtering. (a)NC due to low pass filtering (b)NC due to median filtering



Table 3. Robustness against attacks produced by Stirmark

The experimental results listed in Table 4 compare the proposed method with ref. [17] and two commercial watermarking techniques.


Table 4. Experimental results compared to [17] and two commercial watermarking techniques

Experimental results are also compared with those of ref. [18], which adopted ICA in the spatial domain. Fig. 4 shows the comparison under scaling and rotation attacks.


Fig. 4. NC comparison (scaling and rotation).


6. Conclusions

Tchebichef moments are discrete orthogonal moments and present a number of advantages over moments of continuous orthogonal bases. The implementation of Tchebichef moments does not involve any numerical approximation, since the basis set is orthogonal in the discrete domain of the image coordinate space. A new geometric invariant blind image watermarking method based on algebraic invariants has been proposed with the introduction of the invariant Tchebichef moments. The watermark is generated randomly, independently of the original image, and embedded by modifying the invariant Tchebichef moments of the original image. ICA is utilized by the watermark detector, which can blindly extract the watermark rather than merely detect its presence. The mathematical description of the invariant moments has been given, and simulations have been performed. The approach proposed in this paper withstands rotation, scaling and translation by being invariant to these transformations, which avoids the need for an exhaustive search for the embedded watermark in a complicated multi-dimensional space. The computational aspects of the proposed watermarking system are also described. Experimental results show that the proposed image watermarking technique is robust against attacks performed by Stirmark, such as geometric distortions, filtering, additive noise and JPEG compression.

Acknowledgments

This work is partly supported by the National Natural Science Foundation of China (Grant No. 60502027) and the Foundation of the State Key Laboratory of Networking and Switching Technology.

References and Links

1. S. Voloshynovskiy, S. Pereira, T. Pun, J. J. Eggers, and J. K. Su, "Attacks on digital watermarks: classification, estimation-based attacks and benchmarks," IEEE Communications Magazine 8, 2-10 (2001).

2. C.-H. Lee and H.-K. Lee, "Geometric attack resistant watermarking in wavelet transform domain," Opt. Express 13, 1307-1321 (2005).

3. M. Alghoniemy and A. H. Tewfik, "Geometric invariants in image watermarking," IEEE Transactions on Image Processing 13, 145-153 (2004).

4. Y. Xin, S. Liao, and M. Pawlak, "Geometrically robust image watermarking via pseudo-Zernike moments," IEEE Canadian Conference on Electrical and Computer Engineering 2, 939-942 (2004).

5. Y. Xin, S. Liao, and M. Pawlak, "A multibit geometrically robust image watermark based on Zernike moments," International Conference on Pattern Recognition IV, 861-864 (2004).

6. S. Pereira and T. Pun, "Robust template matching for affine resistant image watermarks," IEEE Transactions on Image Processing 9, 1123-1129 (2000).

7. M. Kutter, "Performance improvement of spread spectrum based image watermarking schemes through M-ary modulation," Lecture Notes in Computer Science 1728, 238-250 (1999).

8. P. Dong and N. P. Galatsanos, "Affine transform resistant watermarking based on image normalization," IEEE Int. Conf. Image Processing, 480-492 (2002).

9. J. O'Ruanaidh and T. Pun, "Rotation, scale and translation invariant spread spectrum digital image watermarking," Signal Processing 66, 303-317 (1998).

10. H. S. Kim and H. K. Lee, "Invariant image watermark using Zernike moments," IEEE Trans. Circuits Syst. Video Technol. 13, 766-775 (2003).

11. M. R. Teague, "Image analysis via the general theory of moments," J. Opt. Soc. Am. 70, 920-930 (1980).

12. C. Kan and M. D. Srinath, "Invariant character recognition with Zernike and orthogonal Fourier-Mellin moments," Pattern Recognition 35, 143-154 (2002).

13. R. Mukundan, S. H. Ong, and P. A. Lee, "Image analysis by Tchebichef moments," IEEE Transactions on Image Processing 10, 1357-1364 (2001).

14. R. Mukundan, "Some computational aspects of discrete orthonormal moments," IEEE Transactions on Image Processing 13, 1055-1059 (2004).

15. M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Information Theory IT-8, 179-187 (1962).

16. A. Hyvärinen and E. Oja, "Independent component analysis: a tutorial," Notes for International Joint Conference on Neural Networks (IJCNN'99), Washington, D.C., http://www.cis.hut.fi/projects/ica/ (1999).

17. S. Pereira and T. Pun, "Robust template matching for affine resistant image watermarks," IEEE Transactions on Image Processing 9, 1123-1129 (2000).

18. D. Yu, F. Sattar, and K. K. Ma, "Watermark detection and extraction using independent component analysis method," EURASIP Journal on Applied Signal Processing 1, 92-104 (2002).
