CN116167905A - Anti-screen robust watermark embedding and extracting method and system based on feature point detection - Google Patents


Info

Publication number
CN116167905A
Authority
CN
China
Prior art keywords
image
watermark
matrix
feature
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310182215.9A
Other languages
Chinese (zh)
Inventor
马宾
杜凯欣
马睿和
王春鹏
李健
张立伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority to CN202310182215.9A
Publication of CN116167905A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • G06T 1/005 Robust watermarking, e.g. average attack or collusion attack resistant
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2201/00 General purpose image data processing
    • G06T 2201/005 Image watermarking
    • G06T 2201/0052 Embedding of the watermark in the frequency domain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2201/00 General purpose image data processing
    • G06T 2201/005 Image watermarking
    • G06T 2201/0065 Extraction of an embedded watermark; Reliable detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a method and a system for embedding and extracting screen-shot-resistant robust watermarks based on feature point detection, and relates to the technical field of digital media copyright protection. An image correction algorithm based on PCA and SURF is provided so that the distortion produced when the carrier image is photographed and the watermark is extracted can be well corrected, indirectly ensuring the robustness of the screen-shot watermark. The method corrects the photographed image to restore it to its on-screen state, and selects feature regions before and after correction to improve the watermark embedding effect and thereby the robustness of the screen-shot watermark. For embedding, DCT transformation is applied to the carrier image and the DCT coefficients in which the watermark is embedded are selected through designed experiments, ensuring both high robustness after watermark embedding and high invisibility.

Description

Anti-screen robust watermark embedding and extracting method and system based on feature point detection
Technical Field
The disclosure relates to the technical field of digital media copyright protection, in particular to a method and a system for embedding and extracting anti-screen robust watermarks based on feature point detection.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Existing privacy protection for the Internet of Things and for wireless transmission targets the more complicated leakage paths while neglecting the most basic one in daily life. Smartphones now combine ever better cameras with near-universal availability, and photographing is the simplest and most efficient way to transmit information: especially in daily business activities, staff with access can steal information simply by opening it on a screen and photographing it, leaving no record, which makes this channel difficult to prohibit by external means.
The screen-shot watermark is one kind of digital watermark, applied mainly to copyright protection. With the wide application of digital watermarking in this field, print-scan-resistant and scan-shoot-resistant watermarking algorithms have been researched and applied to some extent, but these two cross-channel watermarks are not ideal against screen shooting. Unlike the print-scan and scan-shoot processes, the deformation to be resisted in the screen-shot process is not a single geometric deformation but an affine transformation. A simpler screen watermarking scheme has been proposed for the screen-shot process, which embeds information by overlaying slightly bright or slightly dark regions on a computer screen; however, besides distorting the watermark information during screen shooting, this adds a new attack path for the covert photographer, namely a physical attack on the screen itself, so its robustness is unreliable. Most current robust watermarking research concentrates on digital channels and on print-scan- and scan-shoot-resistant algorithms; the robustness of screen-shot watermarks has received little attention, and existing work shows performance defects. With the rapid popularisation of smartphones, robust screen-shot watermarking is urgently needed. On the extraction side, owing to the nature of the screen-shot watermark, the carrier image suffers strong, hard-to-recover distortion when it is photographed and the watermark is extracted, and such deformation significantly impacts watermark extraction.
Disclosure of Invention
In order to solve the above problems, the disclosure provides a method and a system for embedding and extracting screen-shot-resistant robust watermarks based on feature point detection. Based on PCA and the SURF feature point detection algorithm, the distortion produced when the carrier image is photographed and the watermark is extracted can be corrected, indirectly ensuring the robustness of the screen-shot watermark.
According to some embodiments, the present disclosure employs the following technical solutions:
the anti-screen robust watermark embedding method based on feature point detection comprises the following steps:
acquiring an original image to be embedded and a watermark image, and cutting both without overlap into the same number of image blocks; dividing each image block of the original image into small blocks, performing DCT (discrete cosine transform) on the small blocks, and screening the DCT coefficients of each small block;
constructing coefficient matrixes by using DCT coefficients, carrying out SVD (singular value decomposition) on the coefficient matrixes, selecting an S matrix after SVD conversion of each coefficient matrix, constructing a feature matrix, and carrying out secondary SVD decomposition on the feature matrix;
processing the watermark image to obtain a new feature matrix, and sequentially replacing coefficients in the new feature matrix and coefficients after SVD (singular value decomposition) of the feature matrix to obtain a new partition;
and carrying out IDCT conversion on each new partition, and merging the partitions again to obtain the watermark-containing image.
According to some embodiments, the present disclosure employs the following technical solutions:
the anti-screen-shot robust watermark extraction method based on feature point detection comprises the following steps:
acquiring a screen shot image embedded with a watermark and an original image embedded with the watermark image;
performing feature point positioning on the screen shot image and the original image embedded with the watermark image;
acquiring feature point descriptors after feature point positioning, and performing dimension reduction on the feature point descriptors to finish correction of a screen shot image;
non-overlapping cutting is carried out on the images subjected to screen shot correction, and then each image block is segmented;
DCT transformation is carried out on each small block, coefficients are selected according to coefficient selection rules formulated during embedding, coefficient matrixes are formed again, SVD decomposition is carried out on each coefficient matrix, and values in an S matrix of each coefficient matrix are selected to form a feature matrix; and carrying out feature decomposition on the formed feature matrix to obtain a matrix, and carrying out inverse SVD (singular value decomposition) on watermark information on the newly obtained matrix to obtain a watermark image which is re-extracted after being embedded.
According to some embodiments, the present disclosure employs the following technical solutions:
an anti-screen robust watermark embedding system based on feature point detection, comprising:
the image acquisition module is used for acquiring an original image to be embedded and a watermark image to be subjected to non-overlapping cutting, and cutting the original image and the watermark image into image blocks with the same number; dividing each image block of an original image, performing DCT (discrete cosine transformation) on the divided small blocks, and screening DCT coefficients of each small block;
the embedding module is used for constructing coefficient matrixes by utilizing DCT coefficients, carrying out SVD (singular value decomposition) on the coefficient matrixes, selecting an S matrix after SVD conversion of each coefficient matrix, constructing a feature matrix, and then carrying out secondary SVD decomposition on the feature matrix; processing the watermark image to obtain a new feature matrix, and sequentially replacing coefficients in the new feature matrix and coefficients after SVD (singular value decomposition) of the feature matrix to obtain a new partition; and carrying out IDCT conversion on each new partition, and merging the partitions again to obtain the watermark-containing image.
According to some embodiments, the present disclosure employs the following technical solutions:
an anti-screen-shot robust watermark extraction system based on feature point detection, comprising:
the image acquisition module is used for acquiring the screen shot image embedded with the watermark and the original image embedded with the watermark image;
the correction module is used for positioning characteristic points of the screen shot image and the original image embedded with the watermark image; acquiring feature point descriptors after feature point positioning, and performing dimension reduction on the feature point descriptors to finish correction of a screen shot image;
the extraction module is used for carrying out non-overlapping cutting on the images subjected to screen shot correction, and then dividing each image block into blocks; DCT transformation is carried out on each small block, coefficients are selected according to coefficient selection rules formulated during embedding, coefficient matrixes are formed again, SVD decomposition is carried out on each coefficient matrix, and values in an S matrix of each coefficient matrix are selected to form a feature matrix; and carrying out feature decomposition on the formed feature matrix to obtain a matrix, and carrying out inverse SVD (singular value decomposition) on watermark information on the newly obtained matrix to obtain a watermark image which is re-extracted after being embedded.
According to some embodiments, the present disclosure employs the following technical solutions:
an electronic device comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer-readable storage medium is configured to store a plurality of instructions adapted to be loaded by the processor and executed to perform the anti-screen-shot robust watermark embedding method and the anti-screen-shot robust watermark extraction method based on feature point detection.
Compared with the prior art, the beneficial effects of the present disclosure are:
In terms of watermark extraction, owing to the nature of the screen-shot watermark, the carrier image suffers strong, hard-to-recover distortion when it is photographed and the watermark is extracted, and such deformation significantly impacts extraction. The image correction algorithm based on PCA and SURF therefore corrects the produced distortion well and indirectly ensures the robustness of the screen-shot watermark: it restores the photographed image to its on-screen state, while feature regions are selected before and after correction to improve the watermark embedding effect and thereby the robustness of the screen-shot watermark.
In terms of embedding the screen-shot watermark, DCT transformation is applied to the carrier image and the DCT coefficients in which the watermark is embedded are selected through designed experiments, ensuring both high robustness after watermark embedding and high invisibility.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the exemplary embodiments of the disclosure and together with the description serve to explain the disclosure, and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flowchart of an embedding method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of an extraction method according to an embodiment of the disclosure;
FIG. 3 is a flow chart of acquiring an embedded watermark image according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of acquiring a post-embedding re-extracted watermark image in accordance with an embodiment of the present disclosure;
fig. 5 is a flowchart of DCT transforming a panning image according to an embodiment of the present disclosure.
Detailed Description
the disclosure is further described below with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments in accordance with the present disclosure. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Example 1
An embodiment of the present disclosure provides a method for embedding a robust watermark for anti-panning based on feature point detection, including:
step 1: acquiring an original image to be embedded and a watermark image, and cutting both without overlap into the same number of image blocks; dividing each image block of the original image into small blocks, performing DCT (discrete cosine transform) on the small blocks, and screening the DCT coefficients of each small block;
step 2: constructing coefficient matrixes by using DCT coefficients, carrying out SVD (singular value decomposition) on the coefficient matrixes, selecting an S matrix after SVD conversion of each coefficient matrix, constructing a feature matrix, and carrying out secondary SVD decomposition on the feature matrix;
step 3: processing the watermark image to obtain a new feature matrix, and sequentially replacing coefficients in the new feature matrix and coefficients after SVD (singular value decomposition) of the feature matrix to obtain a new partition;
step 4: and carrying out IDCT conversion on each new partition, and merging the partitions again to obtain the watermark-containing image.
As an embodiment, in step 1, the original image to be embedded and the watermark image are acquired and cut without overlap as follows: both images undergo 64 × 64 non-overlapping cutting, so that the two images have the same number of blocks. Assume that after cutting the carrier image O is divided into m blocks of identical size, O = {o_1, o_2, ..., o_m}, and the watermark image W is likewise divided into m blocks of identical size, W = {w_1, w_2, ..., w_m}.
Each block of the original image is further divided into blocks of size 8 × 8, each 64 × 64 block being cut into n such blocks:

O = {o_1, o_2, ..., o_m} = {o_11, o_12, ..., o_1n, ..., o_mn}
the DCT transformation is performed on a total of m×n 8×8 blocks, and in order to improve the invisibility of the watermark embedded image, the DCT coefficient of each block needs to be screened according to a specific rule to reduce the influence on the original image after embedding the watermark.
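As an illustrative sketch (not the patent's reference implementation), the non-overlapping cutting and per-block DCT above can be expressed in Python; the 512 × 512 carrier size, and therefore m = n = 64, are assumptions made for the example:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix D, so that dct2(X) = D @ X @ D.T."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] *= np.sqrt(1.0 / n)
    D[1:, :] *= np.sqrt(2.0 / n)
    return D

def dct2(block):
    """2-D DCT-II of a square block."""
    D = dct_matrix(block.shape[0])
    return D @ block @ D.T

def split_blocks(img, size):
    """Cut img into non-overlapping size x size blocks, row-major order."""
    h, w = img.shape
    return [img[r:r + size, c:c + size]
            for r in range(0, h, size)
            for c in range(0, w, size)]

# 512 x 512 carrier -> m = 64 blocks of 64 x 64, each -> n = 64 blocks of 8 x 8
carrier = np.random.default_rng(0).random((512, 512))
big_blocks = split_blocks(carrier, 64)
small_blocks = [split_blocks(b, 8) for b in big_blocks]
dct_blocks = [[dct2(s) for s in blocks] for blocks in small_blocks]
```

For a constant 8 × 8 block the transform leaves a single DC coefficient, which is a quick sanity check on the basis normalisation.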
Three coefficient selection rules are formulated:
A) SVD is chosen before the DCT as the transform rule: watermark information is embedded by applying +1 or −1 to specific coefficients of the S matrix obtained from the SVD of the image, after which the modified image undergoes the inverse SVD. All three matrices produced by the SVD are modified according to the watermark information.
B) The DCT transform is introduced into the coefficient selection rule. After DCT of each 64 × 64 block, the 1/4-size coefficient matrix in the upper-left corner of each block is selected to construct the feature matrix; the watermark image undergoes DWT, and the LL component of the watermark information is selected for embedding.
C) Since the DC coefficient carries most of the energy (the low-frequency information) of the image, the new coefficient selection rule avoids the DC coefficients of the image matrix; coefficients are selected by zig-zag scanning, 952 coefficients in total being selected to construct the feature matrix. Meanwhile the LL component of the watermark's DWT, used previously, is abandoned in favour of the LH component, after which watermark embedding is performed.
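A minimal sketch of rule C's coefficient selection, assuming a plain zig-zag scan that skips the DC coefficient; the text does not detail how exactly 952 coefficients are reached, so the count is just a parameter here:

```python
import numpy as np

def zigzag_indices(n):
    """(row, col) pairs of an n x n matrix in zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def select_coeffs(dct_block, count):
    """Zig-zag scan, skip the DC coefficient at (0, 0), keep `count` values."""
    order = zigzag_indices(dct_block.shape[0])[1:]  # drop the DC position
    return np.array([dct_block[r, c] for r, c in order[:count]])

block = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in DCT block
picked = select_coeffs(block, 952)
```

The first selected coefficient is the one at (0, 1), i.e. the first AC coefficient after the skipped DC position.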
The discrete cosine transform (Discrete Cosine Transform, DCT) is the most common image transformation technique in digital watermarking technology, and can effectively transform a spatial domain signal into a frequency domain component, thereby realizing the effect of low-medium-high frequency coefficient separation. Therefore, proper frequency bands can be reasonably selected for subsequent operation according to different robustness requirements.
The DCT transform formula is shown as follows:

y(u,v) = c(u) · c(v) · Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} I(x,y) · cos[(2x+1)uπ / 2M] · cos[(2y+1)vπ / 2N]

c(u) = √(1/M) for u = 0, and c(u) = √(2/M) for u = 1, ..., M−1

c(v) = √(1/N) for v = 0, and c(v) = √(2/N) for v = 1, ..., N−1

wherein I is the spatial-domain image to be transformed, of size M × N, and y(u,v) is the corresponding DCT coefficient. The inverse DCT is computed likewise:

I(x,y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} c(u) · c(v) · y(u,v) · cos[(2x+1)uπ / 2M] · cos[(2y+1)vπ / 2N]
after the image is subjected to DCT, the obtained matrix size is the same as the original image size. In the transformed DCT coefficient matrix, the upper left corner represents the DC component, the rest of the components are AC components, and the coefficients from the upper left corner to the lower right corner correspond to the energy distribution from low frequency to high frequency, respectively. Since energy is mostly concentrated in low frequency coefficients, these coefficients are relatively stable in distortion, and most digital watermarking algorithms based on DCT transform operate correspondingly in medium and low frequency coefficients.
Furthermore, under these coefficient selection rules the watermark carrier image must be segmented, and the DCT is used to process each cut sub-image precisely because of its good decorrelation property. The choice of DCT coefficients influences the invisibility of the watermarked image, the error rate after watermark extraction and the embedding capacity of the algorithm, so several different DCT coefficient selection rules are formulated and the subsequent experiments verify which rule affects the algorithm most positively.
A new matrix is constructed from the screened coefficients; the new coefficient matrix is denoted H_i (i = 1, ..., m×n). SVD is applied to each H_i:

H_i = U_i × S_i × V_i^T

The S matrix from the SVD of each coefficient matrix is selected, and the coefficient at position (1,1) of each S matrix is extracted to form the feature matrix F, which is needed to realise watermark embedding and extraction. A second SVD is then applied to the feature matrix F:

F = U_F × S_F × V_F^T
The watermark image is processed: DWT (discrete wavelet transform) is applied to it, the transformed LH component is extracted, and SVD is applied to the LH component:

LH = U_lh × S_lh × V_lh^T

The S_lh matrix from the SVD of the watermark's LH component is added to the S_F matrix from the second SVD of the feature matrix, with α the embedding strength:

S' = S_F + α × S_lh
Using the newly obtained S' matrix together with U_F and V_F of the feature matrix, the inverse SVD yields the new feature matrix F':

F' = U_F × S' × V_F^T
The coefficients of the newly obtained feature matrix F' sequentially replace the (1,1) entries of the S_i matrices from the SVD of each H_i, giving new coefficient matrices; the coefficients of these new matrices then sequentially replace the DCT coefficients in the m × n blocks, giving new blocks.
IDCT is applied to each new block, and the blocks are merged again to obtain the new watermark-containing image.
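The singular-value embedding and its inverse can be sketched as a round trip. The matrix sizes, the random stand-ins for the feature matrix F and the watermark's LH band, and the embedding strength α = 0.05 are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.random((32, 32))      # stand-in for the feature matrix built from S_i
LH = rng.random((32, 32))     # stand-in for the watermark's DWT LH sub-band
alpha = 0.05                  # embedding strength (assumed value)

U_F, S_F, Vt_F = np.linalg.svd(F)
U_lh, S_lh, Vt_lh = np.linalg.svd(LH)

# Embedding: S' = S_F + alpha * S_lh, then rebuild the feature matrix
S_marked = S_F + alpha * S_lh
F_marked = U_F @ np.diag(S_marked) @ Vt_F

# Extraction: recover the embedded singular values and invert the SVD
S_rec = (np.linalg.svd(F_marked, compute_uv=False) - S_F) / alpha
LH_rec = U_lh @ np.diag(S_rec) @ Vt_lh   # inverse SVD with the stored U, V
```

Note that extraction here needs S_F, U_lh and V_lh as side information, which matches the scheme's reliance on the original image at extraction time.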
Example 2
An embodiment of the present disclosure provides an anti-screen-shot robust watermark extraction method based on feature point detection, including:
step one: acquiring a screen shot image embedded with a watermark and an original image embedded with the watermark image;
step two: performing feature point positioning on the screen shot image and the original image embedded with the watermark image;
step three: acquiring feature point descriptors after feature point positioning, and performing dimension reduction on the feature point descriptors to finish correction of a screen shot image;
step four: non-overlapping cutting is carried out on the images subjected to screen shot correction, and then each image block is segmented;
step five: DCT transformation is carried out on each small block, coefficients are selected according to coefficient selection rules formulated during embedding, coefficient matrixes are formed again, SVD decomposition is carried out on each coefficient matrix, and values in an S matrix of each coefficient matrix are selected to form a feature matrix; and carrying out feature decomposition on the formed feature matrix to obtain a matrix, and carrying out inverse SVD (singular value decomposition) on watermark information on the newly obtained matrix to obtain a watermark image which is re-extracted after being embedded.
Specifically, as an embodiment, the method for performing feature point positioning on the screen shot image and the original image embedded with the watermark image includes:
positioning the feature points with the SURF feature point detection algorithm: specifically, a Hessian matrix is first constructed to detect candidate feature points, and the scale space is built with box filtering; preliminary feature points are then acquired, sub-pixel-level feature points are obtained by three-dimensional linear interpolation, and finally the main direction of each feature point is determined and the descriptors are generated.
Specifically, before the scale space is constructed, the image is first Gaussian-filtered, and the Hessian matrix of the filtered image is built:

H(x, σ) = | L_xx(x, σ)  L_xy(x, σ) |
          | L_xy(x, σ)  L_yy(x, σ) |
In the above, L_xx(x, σ) is the second-order Gaussian derivative in the x direction; when the discriminant of the Hessian matrix takes a local maximum, the current point is judged brighter or darker than the other points in its surrounding neighbourhood, which localises the key point. In the discriminant, L(x, σ) is the Gaussian convolution of the original image; since the Gaussian kernel follows a normal distribution whose coefficients decrease outward from the centre, SURF approximates the Gaussian filtering by box filtering to increase the computation speed:

det(H) = D_xx · D_yy − (0.9 · D_xy)²

The weighting factor 0.9 in the above equation balances the error introduced by the box-filter approximation.
Box filtering converts the filtering of the image into additions and subtractions of pixel sums over different regions of the image; with an integral image, each such sum requires only a few simple lookups. In this way the efficiency of constructing the scale space is greatly improved.
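A minimal integral-image (summed-area table) sketch, showing how the sum over any box reduces to four table lookups regardless of the box size, which is what makes the box-filter approximation fast:

```python
import numpy as np

def integral_image(img):
    """Summed-area table padded with a zero row/column for simple lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four lookups into the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(36.0).reshape(6, 6)
ii = integral_image(img)
```

box_sum(ii, 1, 1, 4, 4) returns the same value as img[1:4, 1:4].sum() but in constant time, however large the box.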
After the scale space has been constructed, feature point localisation follows, as shown in figs. 3-4: each pixel processed by the Hessian matrix is compared with its 26 neighbours in the scale space (the adjacent points in its own scale and in the scales above and below), and it is kept as a preliminary feature point only if it is the maximum or the minimum among those 26 points.
Sub-pixel-level feature points are then obtained by three-dimensional linear interpolation; at the same time, points below a specific threshold are removed. Raising the threshold reduces the number of detected feature points, so that finally only the strongest points are detected.
Determining the main direction of the feature points: to guarantee rotation invariance, the SURF algorithm does not compute a gradient histogram but instead collects Haar wavelet features in the neighbourhood of the feature point. Within a neighbourhood of radius 6s centred on the feature point (s is the scale value of the feature point), the sums of the Haar wavelet responses of all points inside a 60° sector are computed in the horizontal (x) and vertical (y) directions, the Haar wavelet side length being 4s, and the response values are given Gaussian weight coefficients so that responses near the feature point contribute much and responses far from it contribute little. The responses within the 60° range are added to form a new vector, the whole circular region is traversed, and the direction of the longest vector is selected as the main direction of the feature point. This is then computed point by point to obtain the main direction of every feature point.
Next the feature descriptor is constructed. In SIFT this step extracts 4 × 4 area blocks around the feature point and counts eight gradient directions per small block, the resulting 4 × 4 × 8 = 128-dimensional vector serving as the SIFT feature descriptor. In the SURF algorithm of the present application a 4 × 4 rectangular region is likewise taken around the feature point, but the region is oriented along the main direction of the feature point. Each sub-region counts the Haar wavelet features of 25 pixels in the horizontal and vertical directions, both taken relative to the main direction; for each sub-region the sums ΣHaarX, ΣHaarY, Σ|HaarX| and Σ|HaarY| are computed and weighted-summed with a two-dimensional Gaussian, giving a vector of length 4, so that the 16 regions yield a 64-dimensional vector in total. After the 64-dimensional vector is obtained, the features of each region are convolved with the Gaussian template and finally normalised to obtain illumination invariance. Once the feature points have been obtained through this process, feature regions are divided according to the distribution of the feature points in the image, the size of a feature region being the same as that of the watermark information to be embedded.
As an embodiment, the dimension reduction process is as follows: first, all features are decentralized (the mean is removed); then the covariance matrix of the samples under the multidimensional features is computed; then the eigenvalues and corresponding eigenvectors of the covariance matrix are solved, and the eigenvalues are sorted from largest to smallest; finally, the original features are projected onto the selected eigenvectors to obtain the features after dimension reduction.
Specifically, assume the data set consists of M samples {X_1, X_2, ..., X_M}, where each sample has N-dimensional features, X_i = (x_1^i, x_2^i, ..., x_N^i)^T, and each feature x_j takes its own value in every sample. The first step of the PCA algorithm is to center all features, i.e., remove the mean. For feature x_j, the mean over the M samples is:

$$\bar{x}_j = \frac{1}{M}\sum_{i=1}^{M} x_j^i$$
This step then subtracts the corresponding mean from each feature of every sample:

$$\tilde{x}_j^i = x_j^i - \bar{x}_j,\quad j = 1,\ldots,N,\ i = 1,\ldots,M$$
The second step is to compute the covariance matrix C. For the two-feature case it has the form:

$$C = \begin{pmatrix} \operatorname{cov}(x_1, x_1) & \operatorname{cov}(x_1, x_2) \\ \operatorname{cov}(x_2, x_1) & \operatorname{cov}(x_2, x_2) \end{pmatrix}$$
In the matrix above, the variances of the features x_1 and x_2 lie on the diagonal, and their covariance lies off the diagonal. A covariance greater than 0 means that x_1 and x_2 are positively correlated: if one increases, the other also increases; a value less than 0 means that one increases while the other decreases; when the covariance is 0, the two are independent. The larger the absolute value of the covariance, the greater the influence of the two features on each other, and vice versa. The variance term cov(x_1, x_1) is solved as:

$$\operatorname{cov}(x_1, x_1) = \frac{1}{M-1}\sum_{i=1}^{M}\left(x_1^i - \bar{x}_1\right)^2$$
Through the above calculation, the covariance matrix C of the M samples under the N-dimensional features is obtained.
The third step: solve for the eigenvalues λ and corresponding eigenvectors u of the covariance matrix C, where each eigenvalue corresponds to one eigenvector; sort the eigenvalues λ from largest to smallest, select the largest k of them, and take out the k eigenvectors corresponding to these eigenvalues.
The fourth step: project the original features onto the selected eigenvectors to obtain the new k-dimensional features after dimension reduction; projecting onto the eigenvectors of the first k largest eigenvalues is the dimension reduction. For each sample X_i, the original features are

$$X_i = (x_1^i, x_2^i, \ldots, x_N^i)^T$$

and the new features after projection are

$$Y_i = (y_1^i, y_2^i, \ldots, y_k^i)^T$$

where each new feature is computed as:

$$y_j^i = u_j^T X_i, \quad j = 1, 2, \ldots, k$$
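The four PCA steps above can be sketched compactly with NumPy. This is a minimal illustration of the standard procedure, not the application's code; the choice of `numpy.linalg.eigh` (valid because the covariance matrix is symmetric) is an implementation assumption.

```python
import numpy as np

def pca(X, k):
    """PCA dimension reduction following the four steps above.

    X : (M, N) array of M samples with N features each.
    k : number of principal components to keep.
    Returns the (M, k) projected samples and the (N, k) eigenvectors.
    """
    # Step 1: centre every feature (remove the mean).
    Xc = X - X.mean(axis=0)
    # Step 2: covariance matrix of the centred features.
    C = np.cov(Xc, rowvar=False)
    # Step 3: eigenvalues/eigenvectors of C, sorted largest to smallest.
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:k]
    U = vecs[:, order]
    # Step 4: project the centred features onto the top-k eigenvectors.
    return Xc @ U, U
```

By construction, the first projected coordinate carries at least as much variance as the second, since the eigenvalues are sorted in descending order.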
The watermark-containing image is captured from the screen and corrected, after which the watermark is extracted, specifically:
1) The screen-shot-corrected image is cut into non-overlapping 64×64 blocks, and each block is further divided into 8×8 sub-blocks.
2) A total of m×n sub-blocks are obtained. DCT transformation is performed on each sub-block, coefficients are selected according to the coefficient selection rules formulated during embedding, the coefficient matrices are recombined, and SVD decomposition is performed on each coefficient matrix:

H_i' = U_i' × S_i' × V_i'^T
The value at position (1, 1) in the S matrix of each coefficient matrix is selected to form the feature matrix F'.
3) SVD decomposition is performed on the formed feature matrix F' to obtain a matrix S'', and SVD transformation is performed on the feature matrix F stored during embedding to obtain S'_o. The embedding rule is then inverted:

S'_w = (S'' - S'_o) / α
4) With the newly obtained S'_w matrix, inverse SVD decomposition of the watermark information is performed to obtain the watermark image re-extracted after embedding.
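The S-matrix embedding rule and its inverse in steps 3) and 4) can be sketched as a round trip in NumPy. This is an illustrative sketch under stated assumptions: the additive rule S'' = S'_o + α·S_w, the embedding strength α = 0.05, and the random stand-in matrices are all assumptions for demonstration, not values fixed by the description above.

```python
import numpy as np

alpha = 0.05  # illustrative embedding strength (not specified above)

rng = np.random.default_rng(1)
F = rng.normal(size=(8, 8))  # stand-in for the feature matrix of (1,1) S values
# Stand-in SVD factors of the processed watermark image.
Uw, Sw, Vwt = np.linalg.svd(rng.normal(size=(8, 8)))

# Embedding: add the watermark singular values into F's singular values.
Uo, So, Vot = np.linalg.svd(F)
S_new = So + alpha * Sw
F_marked = Uo @ np.diag(S_new) @ Vot

# Extraction: SVD the marked feature matrix and invert the rule,
# i.e. S'_w = (S'' - S'_o) / alpha, then rebuild the watermark by inverse SVD.
_, S_pp, _ = np.linalg.svd(F_marked)
Sw_rec = (S_pp - So) / alpha
W_rec = Uw @ np.diag(Sw_rec) @ Vwt
```

Because the perturbed singular values remain non-negative and in descending order, the SVD of the marked matrix returns them exactly, so the watermark singular values are recovered up to numerical precision.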
Example 3
In one embodiment of the present disclosure, an anti-screen robust watermark embedding system based on feature point detection is provided, including:
the image acquisition module is used for acquiring an original image to be embedded and a watermark image to be subjected to non-overlapping cutting, and cutting the original image and the watermark image into image blocks with the same number; dividing each image block of an original image, performing DCT (discrete cosine transformation) on the divided small blocks, and screening DCT coefficients of each small block;
the embedding module is used for constructing coefficient matrixes by utilizing DCT coefficients, carrying out SVD (singular value decomposition) on the coefficient matrixes, selecting an S matrix after SVD conversion of each coefficient matrix, constructing a feature matrix, and then carrying out secondary SVD decomposition on the feature matrix; processing the watermark image to obtain a new feature matrix, and sequentially replacing coefficients in the new feature matrix and coefficients after SVD (singular value decomposition) of the feature matrix to obtain a new partition; and carrying out IDCT conversion on each new partition, and merging the partitions again to obtain the watermark-containing image.
Example 4
In one embodiment of the present disclosure, an anti-screen robust watermark extraction system based on feature point detection is provided, including:
the image acquisition module is used for acquiring the screen shot image embedded with the watermark and the original image embedded with the watermark image; the correction module is used for positioning characteristic points of the screen shot image and the original image embedded with the watermark image; acquiring feature point descriptors after feature point positioning, and performing dimension reduction on the feature point descriptors to finish correction of a screen shot image;
the extraction module is used for carrying out non-overlapping cutting on the images subjected to screen shot correction, and then dividing each image block into blocks; DCT transformation is carried out on each small block, coefficients are selected according to coefficient selection rules formulated during embedding, coefficient matrixes are formed again, SVD decomposition is carried out on each coefficient matrix, and values in an S matrix of each coefficient matrix are selected to form a feature matrix; and carrying out feature decomposition on the formed feature matrix to obtain a matrix, and carrying out inverse SVD (singular value decomposition) on watermark information on the newly obtained matrix to obtain a watermark image which is re-extracted after being embedded.
Example 5
In one embodiment of the present disclosure, an electronic device is provided, which includes a processor and a computer-readable storage medium; the processor is configured to implement instructions, and the computer-readable storage medium is configured to store a plurality of instructions adapted to be loaded by the processor and to execute the steps of the anti-screen robust watermark embedding method based on feature point detection and the anti-screen robust watermark extraction method based on feature point detection.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the specific embodiments of the present disclosure have been described above with reference to the drawings, it should be understood that the present disclosure is not limited to the embodiments, and that various modifications and changes can be made by one skilled in the art without inventive effort on the basis of the technical solutions of the present disclosure while remaining within the scope of the present disclosure.

Claims (10)

1. The anti-screen robust watermark embedding method based on feature point detection is characterized by comprising the following steps of:
acquiring an original image to be embedded and a watermark image, and performing non-overlapping cutting to obtain image blocks with the same number; dividing each image block of an original image, performing DCT (discrete cosine transformation) on the divided small blocks, and screening DCT coefficients of each small block;
constructing coefficient matrixes by using DCT coefficients, carrying out SVD (singular value decomposition) on the coefficient matrixes, selecting an S matrix after SVD conversion of each coefficient matrix, constructing a feature matrix, and carrying out secondary SVD decomposition on the feature matrix;
processing the watermark image to obtain a new feature matrix, and sequentially replacing coefficients in the new feature matrix and coefficients after SVD (singular value decomposition) of the feature matrix to obtain a new partition;
and carrying out IDCT conversion on each new partition, and merging the partitions again to obtain the watermark-containing image.
2. The anti-panning robust watermark embedding method based on feature point detection as claimed in claim 1, wherein the process of obtaining the original image to be embedded and the watermark image for non-overlapping cutting is: the original image and the watermark image are subjected to 64 x 64 non-overlapping cutting, and the number of blocks of the two images is the same.
3. The anti-panning robust watermark embedding method based on feature point detection as claimed in claim 1, wherein the processing of the watermark image to obtain a new feature matrix comprises: performing DWT on the watermark image, extracting a transformed LH component, performing SVD on the LH component, adding an S matrix after SVD on the watermark LH component and an S matrix after secondary SVD on the feature matrix, and performing inverse SVD on the newly obtained S feature to obtain a new feature matrix.
4. The anti-screen robust watermark extraction method based on feature point detection is characterized by comprising the following steps of:
acquiring a screen shot image embedded with a watermark and an original image embedded with the watermark image;
performing feature point positioning on the screen shot image and the original image embedded with the watermark image;
acquiring feature point descriptors after feature point positioning, and performing dimension reduction on the feature point descriptors to finish correction of a screen shot image;
non-overlapping cutting is carried out on the images subjected to screen shot correction, and then each image block is segmented;
DCT transformation is carried out on each small block, coefficients are selected according to coefficient selection rules formulated during embedding, coefficient matrixes are formed again, SVD decomposition is carried out on each coefficient matrix, and values in an S matrix of each coefficient matrix are selected to form a feature matrix; and carrying out feature decomposition on the formed feature matrix to obtain a matrix, and carrying out inverse SVD (singular value decomposition) on watermark information on the newly obtained matrix to obtain a watermark image which is re-extracted after being embedded.
5. The anti-screen robust watermark extraction method based on feature point detection as claimed in claim 4, wherein the feature point positioning on the screen shot image and the original image embedded with the watermark image is performed as follows:
the feature points are positioned by a SURF feature point detection algorithm; specifically, a Hessian matrix is first constructed to detect candidate feature points, and a scale space is then constructed by box filtering; the preliminary feature points are then acquired, feature points at the sub-pixel level are obtained by a three-dimensional linear interpolation method, and the main direction of each feature point is determined to generate the descriptors.
6. The robust watermark extraction method for anti-panning based on feature point detection as claimed in claim 5, wherein after the feature points are obtained, feature areas are divided according to the distribution of the feature points in the image, and the size of the feature areas is the same as the size of watermark information to be embedded.
7. The method for extracting the robust watermark of anti-panning based on feature point detection as claimed in claim 4, wherein the dimension reduction process is as follows: firstly, carrying out decentralization on all the features, then solving a covariance matrix of the sample under the multidimensional features, then solving the feature values and corresponding feature vectors of the covariance matrix, and sequencing the feature values in sequence from big to small; and projecting the original features onto the selected feature vectors to obtain the feature after dimension reduction.
8. The anti-screen-shot robust watermark embedding system based on feature point detection is characterized by comprising:
the image acquisition module is used for acquiring an original image to be embedded and a watermark image to be subjected to non-overlapping cutting, and cutting the original image and the watermark image into image blocks with the same number; dividing each image block of an original image, performing DCT (discrete cosine transformation) on the divided small blocks, and screening DCT coefficients of each small block;
the embedding module is used for constructing coefficient matrixes by utilizing DCT coefficients, carrying out SVD (singular value decomposition) on the coefficient matrixes, selecting an S matrix after SVD conversion of each coefficient matrix, constructing a feature matrix, and then carrying out secondary SVD decomposition on the feature matrix; processing the watermark image to obtain a new feature matrix, and sequentially replacing coefficients in the new feature matrix and coefficients after SVD (singular value decomposition) of the feature matrix to obtain a new partition; and carrying out IDCT conversion on each new partition, and merging the partitions again to obtain the watermark-containing image.
9. The anti-screen robust watermark extraction system based on feature point detection is characterized by comprising:
the image acquisition module is used for acquiring the screen shot image embedded with the watermark and the original image embedded with the watermark image;
the correction module is used for positioning characteristic points of the screen shot image and the original image embedded with the watermark image; acquiring feature point descriptors after feature point positioning, and performing dimension reduction on the feature point descriptors to finish correction of a screen shot image;
the extraction module is used for carrying out non-overlapping cutting on the images subjected to screen shot correction, and then dividing each image block into blocks; DCT transformation is carried out on each small block, coefficients are selected according to coefficient selection rules formulated during embedding, coefficient matrixes are formed again, SVD decomposition is carried out on each coefficient matrix, and values in an S matrix of each coefficient matrix are selected to form a feature matrix; and carrying out feature decomposition on the formed feature matrix to obtain a matrix, and carrying out inverse SVD (singular value decomposition) on watermark information on the newly obtained matrix to obtain a watermark image which is re-extracted after being embedded.
10. An electronic device, comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer-readable storage medium being configured to store a plurality of instructions adapted to be loaded by the processor and to execute the anti-screen robust watermark embedding method based on feature point detection as claimed in any one of claims 1 to 3 and the anti-screen robust watermark extraction method based on feature point detection as claimed in any one of claims 4 to 7.
CN202310182215.9A 2023-02-24 2023-02-24 Anti-screen robust watermark embedding and extracting method and system based on feature point detection Pending CN116167905A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310182215.9A CN116167905A (en) 2023-02-24 2023-02-24 Anti-screen robust watermark embedding and extracting method and system based on feature point detection

Publications (1)

Publication Number Publication Date
CN116167905A true CN116167905A (en) 2023-05-26

Family

ID=86414558


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116542840A * 2023-07-06 2023-08-04 云南日报报业集团 Robust watermarking method based on news text image
CN116542840B * 2023-07-06 2023-09-22 云南日报报业集团 Robust watermarking method based on news text image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination