CN111754441B - Image copying, pasting and forging passive detection method - Google Patents

Image copying, pasting and forging passive detection method

Info

Publication number
CN111754441B
CN111754441B CN202010606073.0A
Authority
CN
China
Prior art keywords
image
sub
suspected
fake
features
Prior art date
Legal status
Active
Application number
CN202010606073.0A
Other languages
Chinese (zh)
Other versions
CN111754441A (en)
Inventor
张驯
白万荣
王蓉
魏峰
宋曦
杨凡
田秀霞
周国帅
Current Assignee
STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE
State Grid Gansu Electric Power Co Ltd
Shanghai Electric Power University
Original Assignee
STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE
State Grid Gansu Electric Power Co Ltd
Shanghai Electric Power University
Priority date
Filing date
Publication date
Application filed by STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE, State Grid Gansu Electric Power Co Ltd, Shanghai Electric Power University filed Critical STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE
Priority to CN202010606073.0A priority Critical patent/CN111754441B/en
Publication of CN111754441A publication Critical patent/CN111754441A/en
Application granted granted Critical
Publication of CN111754441B publication Critical patent/CN111754441B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a passive detection method for image copy-paste forgery, which comprises the following steps: inputting a forged image to be detected and preprocessing it; obtaining suspected forged sub-images from the preprocessed image based on a self-selecting sub-image manner; extracting features from the suspected forged sub-images using an improved PCNN; and combining the features of the plurality of suspected forged sub-images in a dual-feature matching manner to locate the image forgery region in the image under detection, the forgery region comprising a copy region and a paste region. Compared with the prior art, the invention adopts the self-selecting sub-image manner and an improved PCNN, which increases detection speed and extracts features with high robustness, thereby improving the efficiency and accuracy of copy-paste image forgery detection.

Description

Image copying, pasting and forging passive detection method
Technical Field
The invention relates to the technical field of image evidence obtaining counterfeiting detection, in particular to a passive detection method for image copy-paste counterfeiting.
Background
With the development of 5G technology, internet digital media applications are becoming more and more widespread, and the security and authenticity of digital images are attracting increasing attention. Existing image editing software such as Photoshop is powerful, allowing digital images to be counterfeited easily and at extremely low cost without leaving perceptible traces. Copy-paste forgery is one of the most common forgery types: a portion of an image is copied and then pasted to another location in the same image for the purpose of hiding or duplicating important information.
Image copy-paste forgery detection techniques can be divided into two main categories: active detection and passive detection (i.e., blind detection). Active detection requires preprocessing the image when it is created, for example by embedding feature information such as a watermark of specific significance; at detection time, the integrity of the embedded information is verified to judge whether the image has been forged. Passive detection, by contrast, achieves forgery detection solely from the statistical information or physical characteristics of the image, without adding any feature information in advance. In practical application scenarios, most images cannot undergo operations such as feature embedding before being forged, and embedding feature information increases the workload of forgery detection. Active detection therefore has strong limitations, and passive detection has gradually become a research hotspot.
In recent years, many image copy-paste forgery detection methods have been proposed; they can be roughly classified into two types: point-based methods and block-based methods. Point-based methods detect forgery by extracting sparse feature points of an image, while block-based methods detect forgery by extracting dense feature information from image blocks. In copy-paste forgery detection, feature extraction is the key to ensuring an accurate detection result. In the prior art, to improve the reliability of feature extraction, a segmentation parameter with strong adaptability is often sought in advance so as to divide the whole image into overlapping blocks, which increases the workload of feature extraction and reduces detection efficiency.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a passive detection method for image copy-paste forgery so as to realize the purpose of high-efficiency and accurate detection.
The aim of the invention can be achieved by the following technical scheme: a passive detection method for image copy-paste forgery includes the following steps:
s1, inputting a fake image to be detected, and preprocessing the fake image to be detected;
s2, based on a self-selecting sub-image mode, a suspected fake sub-image is obtained from the preprocessed fake image to be detected;
s3, extracting features from the suspected counterfeit sub-images by adopting an improved PCNN (Pulse Coupled Neural Network);
s4, positioning in the image to be detected to obtain an image forging area by combining the features extracted in the step S3 in a double-feature matching mode, wherein the image forging area comprises a copying area and a pasting area.
Further, the preprocessing process in step S1 specifically includes:
s11, carrying out graying treatment on the fake image to be detected to obtain a corresponding gray image;
s12, gaussian filtering denoising is carried out on the gray level image, and the denoised gray level image is obtained.
Further, the step S2 specifically includes the following steps:
s21, carrying out binarization processing on the denoised gray scale image according to a preset pixel value threshold;
s22, performing morphological opening operation on the binarized image to eliminate tiny targets in the image, separate unnecessary connected domains and smooth the boundary of a larger target;
s23, carrying out contour extraction on the image subjected to morphological opening operation to obtain contours of a plurality of suspected fake areas;
s24, screening the outline of the suspected counterfeit area from the outlines of the suspected counterfeit areas to obtain the preferable outline of the suspected counterfeit area;
and S25, drawing boundary boxes for the preferable outlines of the suspected counterfeit areas to serve as the suspected counterfeit sub-images.
Further, the step S24 specifically includes the following steps:
s241, deleting useless contours from the contours of the plurality of suspected fake areas according to the contour hierarchy, and screening to obtain the contours that have neither child contours nor parent contours;
s242, further deleting useless contours from the contours obtained by screening in the step S241 according to the preset contour side length threshold range so as to obtain the preferable contours of the suspected fake areas.
Further, the contour side length threshold range in step S242 is specifically 0.5Cav to 1.5Cav, where Cav is the average contour side length of the contours obtained by the screening in step S241 (i.e., the contours without child or parent contours).
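The side-length screening of step S242 amounts to a simple interval filter. A minimal sketch (the helper name is hypothetical, and Cav is taken here as the mean over the candidate side lengths):

```python
def screen_by_side_length(side_lengths):
    """Keep only contours whose side length lies in [0.5*Cav, 1.5*Cav],
    where Cav is the mean side length of the candidate contours.
    Returns the indices of the retained contours."""
    cav = sum(side_lengths) / len(side_lengths)
    return [i for i, s in enumerate(side_lengths)
            if 0.5 * cav <= s <= 1.5 * cav]
```

With OpenCV, the side length of each contour could be obtained via `cv2.arcLength` before applying this filter.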
Further, the step S3 specifically includes the following steps:
s31, separating the suspected counterfeit sub-image into B, G and R channels to obtain corresponding B, G and R three-channel image matrixes;
s32, inputting B, G corresponding to the suspected counterfeit sub-image and an R three-channel image matrix into the improved PCNN to extract B, G and R three-channel characteristic information respectively;
and S33, connecting the extracted B, G and R channel characteristic information to obtain the characteristics of the suspected counterfeit sub-image.
Further, the modified PCNN in step S32 includes a connection unit, a feed-in unit, an internal state unit, and a pulse generation unit, where the connection unit and the feed-in unit are respectively connected to the internal state unit, and the internal state unit is connected to the pulse generation unit, and the connection unit is specifically:
L[n+1] = e^(-α_L)·L[n] + V_L·(W∗Y[n])

wherein L[n] is the previous connection state of the connection unit, L[n+1] is the connection state at the next time, α_L is the attenuation constant of the connection unit, V_L is a regularization constant, Y[n] is the output of the previous iteration, W is a weight matrix, and ∗ denotes convolution;
the feed-in unit specifically comprises:
F[n+1] = e^(-α_F)·F[n] + V_F·(G∗Y[n]) + I

wherein F[n] is the previous feed-in state of the feed-in unit, F[n+1] is the feed-in state at the next time, α_F is the attenuation constant of the feed-in unit, V_F is a regularization constant, Y[n] is the output of the previous iteration, G is a weight matrix, and I is the input image pixel matrix;
the internal state unit specifically comprises:
U[n+1]=F[n+1](1+βL[n+1])
wherein U [ n+1] is the internal state of each iteration, F [ n+1] and L [ n+1] are the feed-in state and the connection state of the corresponding iteration times respectively, and beta is the connection coefficient;
the pulse generating unit specifically comprises:
Y[n+1] = 1, if U[n+1] > Θ[n]; otherwise Y[n+1] = 0
Θ[n+1] = e^(-α_Θ)·Θ[n] + V_Θ·Y[n]

wherein Y[n+1] is the output of the next iteration, U[n+1] is the internal state for the corresponding iteration number, Θ[n] is the dynamic threshold of the previous iteration, Θ[n+1] is the dynamic threshold of the next iteration, α_Θ is the attenuation constant of the dynamic threshold, V_Θ is a regularization constant, and Y[n] is the output of the previous iteration.
Further, the step S32 specifically includes the following steps:
s321, setting internal neuron parameters of the improved PCNN;
s322, setting the extracted characteristic form as a time signal:
T[n]=∑Y[n]
wherein, T [ n ] is a time signal, Y [ n ] is the output of each iteration of the improved PCNN;
s323, setting iteration times n, inputting R, G and B three-channel image matrixes into the improved PCNN, continuously iterating, recording time signals after each iteration, and finally outputting the iteration times and the time signal waveform diagram as the characteristics extracted for each channel.
Further, the step S33 specifically includes the following steps:
s331, deleting the first iteration information of the waveform diagram characteristics of the two channels G and R;
and S332, connecting the waveform features of the B channel with the waveform features of the G and R channels after their first-iteration information has been deleted, in the order of B, G and R, so as to obtain a waveform feature of time signal versus iteration number containing the information of the B, G and R channels, namely the feature of the suspected fake sub-image.
Further, the step S4 specifically includes the following steps:
s41, screening two most similar features from the features of a plurality of suspected counterfeit sub-images according to the feature similarity calculation in a dual-feature matching mode to obtain two corresponding most similar suspected counterfeit sub-images, wherein the dual-feature matching mode comprises:
a. matching the peaks of the feature waveforms with the Euclidean distance, where a smaller Euclidean distance indicates that the two features are more similar:

dist(X,Y) = √( Σ_i (x_i - y_i)² ), x_i ∈ X, y_i ∈ Y

wherein dist(X,Y) is the Euclidean distance between X and Y, X and Y are the two features to be matched, and x_i and y_i are the i-th peaks of X and Y, respectively;
b. matching the iteration numbers corresponding to the peaks of the feature waveforms using the Jaccard metric, where a larger Jaccard metric indicates that the two features are more similar:

sim(X,Y) = |X ∩ Y| / |X ∪ Y|

wherein sim(X,Y) is the Jaccard metric value of X and Y;
the similarity calculation formula specifically comprises:
S(X,Y)=0.5*(1-dist(X,Y))+0.5*sim(X,Y)
wherein S (X, Y) is the calculated value of the similarity between X and Y, and the larger the calculated value of the similarity is, the more similar the two features are;
s42, calibrating in the original fake image to be detected according to the two closest suspected fake sub-images, and positioning the copy area and the paste area.
Compared with the prior art, the invention has the following advantages:
1. according to the method, the suspected counterfeit sub-image can be obtained from the image quickly by self-selecting the sub-image, the segmentation parameter with strong adaptability is not required to be searched, the workload of subsequent feature extraction is greatly reduced, and the detection efficiency is improved.
2. The invention can ensure that the extracted features have good robustness and enhance the anti-rotation, scaling and noise performance of the features by constructing the improved PCNN comprising the connecting unit, the feed-in unit, the internal state unit and the pulse generation unit, thereby improving the accuracy of the detection result.
3. The invention adopts a dual-feature matching mode, combines Euclidean distance and Jaccard measurement calculation, and can obtain two closest suspected counterfeit sub-images, thereby accurately positioning the copy area and the paste area in the image and further ensuring the accuracy of the detection result.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a detection process in an embodiment;
FIG. 3a is a counterfeit image to be detected in an embodiment;
FIG. 3b is a gray scale after denoising in an embodiment;
FIG. 3c is a binarized image of an embodiment;
FIG. 3d is an image after morphological open operation in an embodiment;
FIG. 3e is an extracted contour image in an embodiment;
FIG. 3f is a preferred profile image after screening in an embodiment;
FIG. 3g is a schematic diagram of a bounding box for a preferred outline in an embodiment;
FIG. 3h is a schematic diagram of a suspected counterfeit sub-image in an embodiment;
FIG. 4 is a block diagram of a single neuron of the improved PCNN in accordance with the present invention;
FIG. 5 is a schematic diagram of a process for improving PCNN extraction features in the present invention;
FIG. 6a is a schematic diagram of a suspected counterfeit sub-image and its signature in an embodiment;
FIG. 6b is a diagram of a suspected counterfeit sub-image and its signature after a translation attack in an embodiment;
FIG. 6c is a diagram of a suspected counterfeit sub-image and its signature after a rotational attack in an embodiment;
FIG. 6d is a schematic diagram of a suspected counterfeit sub-image and its signature after a zoom attack in an embodiment;
FIG. 6e is a diagram of a suspected counterfeit sub-image and its signature after a noise addition attack in an embodiment;
FIG. 6f is a diagram of a suspected counterfeit sub-image and its signature after JPEG compression attack in an embodiment;
FIG. 7a is a counterfeit image in an embodiment;
FIG. 7b is a schematic diagram of the detection result in the embodiment;
FIG. 7c is a truth area for forgery in an embodiment;
FIG. 8 is a graph showing the comparison of F1 score obtained by the method of the present invention and the conventional detection method.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples.
Examples
As shown in fig. 1, a passive detection method for image copy-and-paste forgery includes the following steps:
s1, inputting a fake image to be detected, and preprocessing the fake image to be detected;
s2, based on a self-selecting sub-image mode, a suspected fake sub-image is obtained from the preprocessed fake image to be detected;
s3, extracting features from the suspected counterfeit sub-images by adopting an improved PCNN (Pulse Coupled Neural Network);
s4, positioning in the image to be detected to obtain an image forging area by combining the features extracted in the step S3 in a double-feature matching mode, wherein the image forging area comprises a copying area and a pasting area.
The specific detection process is shown in fig. 2. The implementation platform of this embodiment is an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz with 32.0GB RAM, the experimental environment is OpenCV 3.4.2, and the implementation language is Python.
The preprocessing in step S1 includes:
graying the input color fake image (shown in fig. 3 a) to obtain a corresponding gray image;
a 3 x 3 gaussian mask is selected and the gray map is filtered to effect denoising, resulting in a gray map as shown in fig. 3 b.
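The two preprocessing operations can be sketched without OpenCV as follows (the embodiment itself uses cv2.cvtColor and cv2.GaussianBlur; the BT.601 graying weights and the 3×3 Gaussian kernel below are standard choices, stated here as assumptions):

```python
import numpy as np

def preprocess(rgb):
    """Gray the image, then denoise it with a 3x3 Gaussian mask (step S1)."""
    # ITU-R BT.601 luma weights for graying
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # 3x3 Gaussian mask (integer approximation, normalized to sum to 1)
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + gray.shape[0],
                                      dx:dx + gray.shape[1]]
    return out
```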
The specific process of step S2 includes:
setting a pixel value threshold value to perform binarization processing on the denoised gray image to obtain a binarized image shown in fig. 3c, wherein the pixel value threshold value is set to be 200, and the threshold value of the individual image can be slightly adjusted according to specific conditions;
performing morphological opening operation on the binarized image to eliminate fine targets in the image, separating unnecessary connected domains, smoothing the boundary of a larger target without obviously changing the area and the shape of the larger target, and obtaining the image after the morphological opening operation as shown in fig. 3d, wherein in the embodiment, the opening operation is realized by adopting 7×7 structural elements;
extracting contours from the morphological opening operation image to obtain contours of suspected fake areas in the fake image (shown in fig. 3 e);
setting screening criteria to delete useless contours, so as to obtain a small number of preferred contours of the suspected counterfeit areas (as shown in fig. 3f):
firstly, deleting useless contours according to a hierarchical structure to obtain contours without child contours and father contours;
then further deleting the useless profile according to the profile side length threshold value, wherein the threshold value is set to be 0.5 Cav-1.5 Cav, and the Cav is the average value of the side lengths of all the profiles;
bounding boxes (shown in fig. 3g) are then drawn for the obtained preferred contours of the suspected counterfeit areas, resulting in a plurality of suspected counterfeit sub-images as shown in fig. 3h.
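Steps S21 to S25 can be sketched with plain numpy as follows. The embodiment itself uses OpenCV (cv2.threshold, cv2.morphologyEx, cv2.findContours, cv2.boundingRect); here a connected-component pass stands in for contour extraction, and the hierarchy/side-length screening of step S24 is omitted for brevity:

```python
import numpy as np

def erode(b, k):
    """Binary erosion with a k x k square structuring element
    (outside the image is treated as foreground)."""
    p = k // 2
    pad = np.pad(b, p, mode="constant", constant_values=1)
    out = np.ones_like(b)
    for dy in range(k):
        for dx in range(k):
            out &= pad[dy:dy + b.shape[0], dx:dx + b.shape[1]]
    return out

def dilate(b, k):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    pad = np.pad(b, p, mode="constant", constant_values=0)
    out = np.zeros_like(b)
    for dy in range(k):
        for dx in range(k):
            out |= pad[dy:dy + b.shape[0], dx:dx + b.shape[1]]
    return out

def select_subimages(gray, thresh=200, k=7):
    """S21: binarize at `thresh` (200 in the embodiment); S22: morphological
    opening (erosion then dilation) with a k x k element (7x7 in the
    embodiment); S23/S25: bounding boxes (x0, y0, x1, y1) of the surviving
    connected regions, each delimiting one suspected forged sub-image."""
    binary = (gray > thresh).astype(np.uint8)
    opened = dilate(erode(binary, k), k)
    h, w = opened.shape
    seen = np.zeros_like(opened, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if opened[sy, sx] and not seen[sy, sx]:
                stack, ys, xs = [(sy, sx)], [], []
                seen[sy, sx] = True
                while stack:                       # flood-fill one region
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and opened[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs) + 1, max(ys) + 1))
    return boxes
```

Note how the opening removes any target smaller than the structuring element (such as a single noisy pixel) while leaving larger regions with their boundaries smoothed.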
The specific process of step S3 is as follows:
1. improving the PCNN model for feature extraction, the single neuron structure of the improved PCNN is shown in fig. 4, comprising:
1. a connection unit:
L[n+1] = e^(-α_L)·L[n] + V_L·(W∗Y[n])

wherein L[n] is the previous connection state of the connection unit, L[n+1] is the connection state at the next time, α_L is the attenuation constant of the connection unit, V_L is a regularization constant, Y[n] is the output of the previous iteration, W is a weight matrix, and ∗ denotes convolution;
2. feed-in unit:
F[n+1] = e^(-α_F)·F[n] + V_F·(G∗Y[n]) + I

wherein F[n] is the previous feed-in state of the feed-in unit, F[n+1] is the feed-in state at the next time, α_F is the attenuation constant of the feed-in unit, V_F is a regularization constant, Y[n] is the output of the previous iteration, G is a weight matrix, and I is the input image pixel matrix;
3. internal state:
U[n+1]=F[n+1](1+βL[n+1])
wherein U [ n+1] is the internal state of each iteration, F [ n+1] and L [ n+1] are the feed-in state and the connection state of the corresponding iteration times respectively, and beta is the connection coefficient;
4. a pulse generation unit:
Y[n+1] = 1, if U[n+1] > Θ[n]; otherwise Y[n+1] = 0
Θ[n+1] = e^(-α_Θ)·Θ[n] + V_Θ·Y[n]

wherein Y[n+1] is the output of the next iteration, U[n+1] is the internal state for the corresponding iteration number, Θ[n] is the dynamic threshold of the previous iteration, Θ[n+1] is the dynamic threshold of the next iteration, α_Θ is the attenuation constant of the dynamic threshold, V_Θ is a regularization constant, and Y[n] is the output of the previous iteration;
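A minimal numpy sketch of the iteration loop implied by the four units, outputting the time signal T[n] = ΣY[n] used later as the feature. The attenuation constants, regularization constants, connection coefficient β and the 3×3 linking kernel below are placeholders, since the Table 1 settings are not reproduced in this text:

```python
import numpy as np

def conv3(kernel, img):
    """Weighted 3x3 neighborhood sum (zero-padded), standing in for the
    W*Y[n] and G*Y[n] terms of the connection and feed-in units."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0],
                                      dx:dx + img.shape[1]]
    return out

def pcnn_iterate(I, n_iter, beta=0.2, aL=1.0, aF=0.1, aT=0.5,
                 VL=1.0, VF=0.5, VT=20.0):
    """Run n_iter PCNN iterations over pixel matrix I and return the time
    signal T[n] = sum(Y[n]) per iteration (placeholder parameters)."""
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    G = W.copy()                       # linking kernels (assumed equal)
    F = np.zeros(I.shape)
    L = np.zeros(I.shape)
    Theta = np.zeros(I.shape)
    Y = np.zeros(I.shape)
    T = []
    for _ in range(n_iter):
        L = np.exp(-aL) * L + VL * conv3(W, Y)       # connection unit
        F = np.exp(-aF) * F + VF * conv3(G, Y) + I   # feed-in unit
        U = F * (1.0 + beta * L)                     # internal state unit
        Ynew = (U > Theta).astype(float)             # pulse output vs. theta[n]
        Theta = np.exp(-aT) * Theta + VT * Y         # dynamic threshold update
        Y = Ynew
        T.append(float(Y.sum()))
    return T
```

On the first iteration every neuron with nonzero input fires (the threshold starts at zero), after which the dynamic threshold suppresses neurons until their internal state decays or is boosted by firing neighbors; the resulting firing-count sequence is the waveform feature.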
2. separating all suspected counterfeit sub-images obtained in the step S2 into B, G and R three channels (shown in FIG. 5) respectively and correspondingly;
3. inputting B, G and R three-channel image matrices of all suspected counterfeit sub-images into the modified PCNN for feature extraction (as shown in fig. 5):
firstly, setting values of related parameters in an improved PCNN model, wherein the values of the parameters are shown in a table 1;
TABLE 1 PCNN internal neuron parameters setting
L[0], F[0], U[0], Θ[0] and Y[0] are zero matrices of size w × h, where w and h are the width and height of the suspected counterfeit sub-image, respectively;
the extracted characteristic form is then set as a time signal:
T[n]=∑Y[n]
wherein, T [ n ] is a time signal, Y [ n ] is the output of each iteration of the improved PCNN;
setting the iteration number n as 21, inputting R, G and B three-channel pixel matrixes into the improved PCNN, continuously iterating, recording a time signal after each iteration, and finally outputting a waveform characteristic diagram of the iteration number and the time signal as the characteristic extracted from each channel;
4. the extracted characteristic information of the B, G and R channels is connected, and the characteristics of each suspected counterfeit sub-image are output (as shown in fig. 5):
deleting the first iteration information of the waveform diagram characteristics of the two channels G and R;
and connecting the features of the B channel with the trimmed G and R channel features in the order of B, G and R, and outputting a waveform feature of time signal versus iteration number containing the B, G and R channel information, which is the feature extracted from the suspected sub-image.
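Steps S31 to S33 can be sketched as below. The per-channel `time_signal` here is only a stand-in for the improved-PCNN time signal T[n] = ΣY[n] (any per-iteration scalar sequence illustrates the concatenation), and the channel order assumes an OpenCV-style BGR image:

```python
import numpy as np

def time_signal(channel, n_iter):
    """Stand-in for the improved-PCNN time signal T[n] = sum(Y[n]):
    one scalar per iteration, here a simple threshold sweep."""
    levels = np.linspace(float(channel.min()), float(channel.max()), n_iter)
    return [float((channel > t).sum()) for t in levels]

def subimage_feature(sub_bgr, n_iter=21):
    """S31: split into B, G, R channels; S32: extract one waveform per
    channel; S33: drop the first-iteration entry of the G and R waveforms
    and concatenate in B, G, R order to form the sub-image feature."""
    b, g, r = (sub_bgr[..., i] for i in range(3))
    fb = time_signal(b, n_iter)
    fg = time_signal(g, n_iter)
    fr = time_signal(r, n_iter)
    return fb + fg[1:] + fr[1:]
```

With n = 21 iterations as in the embodiment, the concatenated feature has 21 + 20 + 20 = 61 entries.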
The specific process of step S4 includes:
1. two feature matching criteria are adopted to find the two most similar sub-images, and the two feature matching comprises:
1. matching the peaks of the feature waveforms with the Euclidean distance, where a smaller Euclidean distance indicates that the two features are more similar:

dist(X,Y) = √( Σ_i (x_i - y_i)² ), x_i ∈ X, y_i ∈ Y

wherein dist(X,Y) is the Euclidean distance between X and Y, X and Y are the two features to be matched, and x_i and y_i are the i-th peaks of X and Y, respectively;
2. matching the iteration numbers corresponding to the peaks of the feature waveforms using the Jaccard metric, where a larger Jaccard metric indicates that the two features are more similar:

sim(X,Y) = |X ∩ Y| / |X ∪ Y|

wherein sim(X,Y) is the Jaccard metric value of X and Y;
the two most similar features can be obtained through screening by calculating the similarity, so that two corresponding most similar suspected fake sub-images are obtained, and the similarity calculation formula is as follows:
S(X,Y)=0.5*(1-dist(X,Y))+0.5*sim(X,Y)
wherein S (X, Y) is the calculated value of the similarity between X and Y, and the larger the calculated value of the similarity is, the more similar the two features are;
2. and calibrating the two most similar sub-images in the original fake image, and further positioning a copy area and a paste area in the copy-paste fake image.
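A sketch of the dual-feature matching score S(X,Y) = 0.5·(1 - dist) + 0.5·sim. The peak detection, the normalization of the Euclidean distance into [0, 1), and the treatment of peak iteration indices as sets for the Jaccard metric are all assumptions, since these details are left implicit in the text:

```python
import numpy as np

def dual_feature_similarity(X, Y):
    """Combine a normalized Euclidean distance over waveform peak values
    with a Jaccard metric over the iteration indices of those peaks."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)

    def peaks(v):
        # indices of local maxima of the time-signal waveform
        return [i for i in range(1, len(v) - 1)
                if v[i] > v[i - 1] and v[i] >= v[i + 1]]

    px, py = peaks(X), peaks(Y)
    if not px or not py:
        return 0.0
    n = min(len(px), len(py))
    xv, yv = X[px[:n]], Y[py[:n]]
    dist = float(np.sqrt(np.sum((xv - yv) ** 2)))
    dist = dist / (dist + 1.0)                 # squash into [0, 1) (assumed)
    inter = len(set(px) & set(py))
    union = len(set(px) | set(py))
    sim = inter / union                        # Jaccard over peak iterations
    return 0.5 * (1.0 - dist) + 0.5 * sim
```

Scoring every pair of sub-image features with this function and keeping the highest-scoring pair yields the two most similar suspected forged sub-images of step S41.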
In this embodiment, the standard image copy-and-paste forgery data set CoMoFoD is used to verify the method of the present invention.
First, the robustness of the features extracted by the improved PCNN is verified. In a copy-paste forgery, the copied region is often subjected to different attacks, such as translation, rotation, scaling, noise addition and JPEG compression, before being pasted into the paste region. Taking one forged picture in the data set as an example, as shown in fig. 6a to 6f, the sub-image (a1) of the framed copy region and the sub-images (b1) to (f1) of the corresponding paste regions are obtained, where (b1) is a translation attack, (c1) a rotation attack, (d1) a scaling attack, (e1) a noise-addition attack and (f1) a JPEG compression attack; features are extracted from each sub-image with the improved PCNN, and the corresponding feature waveforms are (a2) to (f2) in sequence. Although the paste-region sub-images (b1) to (f1) suffer different types of attack, their feature waveforms (b2) to (f2) remain very similar to the waveform (a2) extracted from the copy-region sub-image, which fully verifies that the improved PCNN feature extraction method of the invention is highly robust.
Next, as shown in fig. 7a to 7c, the forgery detection effect of the present invention is further verified, fig. 7a is a forgery image, fig. 7b is a detection effect of the present invention, and fig. 7c is a forgery truth area, and it can be seen by comparing that the present invention can accurately locate the copy and paste position in the copy-paste forgery image.
Finally, the forgery detection effect of the invention is evaluated, using the common F1 score as the evaluation criterion:

F1 = 2pr / (p + r)

wherein p is the precision, i.e., the proportion of true positives among the detected positives, and r is the recall, i.e., the proportion of detected positives among all true positives.
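The evaluation criterion, the harmonic mean of precision and recall, in one line:

```python
def f1_score(p, r):
    """F1 = 2*p*r / (p + r); returns 0.0 when both p and r are zero."""
    return 2.0 * p * r / (p + r) if (p + r) > 0 else 0.0
```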
The method is compared with several current conventional methods: SURF (Speeded-Up Robust Features), SIFT (Scale-Invariant Feature Transform), DCT (Discrete Cosine Transform) and FMZM. The comparison results are shown in fig. 8, where the abscissa corresponds to the four conventional methods and the present invention, the ordinate is the F1 score, and the five lines correspond to five attack types: translation, rotation, scaling, noise addition and JPEG compression. As can be seen from the figure, the F1 score of the method of the invention is the highest for every attack type, demonstrating the effectiveness of the method.

Claims (6)

1. The passive detection method for image copy-paste forgery is characterized by comprising the following steps:
s1, inputting a fake image to be detected, and preprocessing the fake image to be detected;
s2, based on a self-selecting sub-image mode, a suspected fake sub-image is obtained from the preprocessed fake image to be detected;
s3, extracting features from the suspected counterfeit sub-images by adopting the improved PCNN;
s4, positioning in the image to be detected to obtain an image forging area by combining the features extracted in the step S3 in a double-feature matching mode, wherein the image forging area comprises a copying area and a pasting area;
the step S2 specifically includes the following steps:
s21, carrying out binarization processing on the denoised gray scale image according to a preset pixel value threshold;
s22, performing morphological opening operation on the binarized image to eliminate tiny targets in the image, separate unnecessary connected domains and smooth the boundary of a larger target;
s23, carrying out contour extraction on the image subjected to morphological opening operation to obtain contours of a plurality of suspected fake areas;
s24, screening the outline of the suspected counterfeit area from the outlines of the suspected counterfeit areas to obtain the preferable outline of the suspected counterfeit area;
s25, respectively drawing boundary boxes for the preferable outlines of the suspected fake areas to serve as suspected fake sub-images;
the step S3 specifically comprises the following steps:
s31, separating the suspected counterfeit sub-image into B, G and R channels to obtain corresponding B, G and R three-channel image matrixes;
s32, inputting B, G corresponding to the suspected counterfeit sub-image and an R three-channel image matrix into the improved PCNN to extract B, G and R three-channel characteristic information respectively;
s33, connecting the extracted B, G and R channel characteristic information to obtain the characteristics of the suspected counterfeit sub-image;
the modified PCNN in step S32 includes a connection unit, a feed-in unit, an internal state unit, and a pulse generation unit, where the connection unit and the feed-in unit are respectively connected to the internal state unit, and the internal state unit is connected to the pulse generation unit, and the connection unit specifically includes:
L[n+1] = e^(-α_L)·L[n] + V_L·(W∗Y[n])

wherein L[n] is the previous connection state of the connection unit, L[n+1] is the connection state at the next time, α_L is the attenuation constant of the connection unit, V_L is a regularization constant, Y[n] is the output of the previous iteration, W is a weight matrix, and ∗ denotes convolution;
the feed-in unit specifically comprises:
F[n+1] = e^(-α_F)·F[n] + V_F·(G∗Y[n]) + I

wherein F[n] is the previous feed-in state of the feed-in unit, F[n+1] is the feed-in state at the next time, α_F is the attenuation constant of the feed-in unit, V_F is a regularization constant, Y[n] is the output of the previous iteration, G is a weight matrix, and I is the input image pixel matrix;
the internal state unit specifically comprises:
U[n+1]=F[n+1](1+βL[n+1])
wherein U [ n+1] is the internal state of each iteration, F [ n+1] and L [ n+1] are the feed-in state and the connection state of the corresponding iteration times respectively, and beta is the connection coefficient;
the pulse generating unit specifically comprises:
Y[n+1] = 1 if U[n+1] > Θ[n], otherwise Y[n+1] = 0

Θ[n+1] = exp(-α_Θ)·Θ[n] + V_Θ·Y[n]

wherein Y[n+1] is the output of the next iteration, U[n+1] is the internal state at the corresponding iteration, Θ[n] is the dynamic threshold of the previous iteration, Θ[n+1] is the dynamic threshold of the next iteration, α_Θ is the attenuation constant of the dynamic threshold, V_Θ is a normalization constant, and Y[n] is the output of the previous iteration;
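Under the four units above, one PCNN iteration can be sketched in pure NumPy. The 3×3 linking kernel K and all constant values below are illustrative choices of this sketch, not the patent's, and the threshold is refreshed with the previous output Y[n], matching the symbols listed in the claim:

```python
import numpy as np

def neighbor_sum(Y, K):
    """Weighted 3x3 neighborhood sum with zero padding,
    standing in for W*Y[n] / G*Y[n]."""
    P = np.pad(Y, 1)
    out = np.zeros_like(Y, dtype=float)
    H, Wd = Y.shape
    for dy in range(3):
        for dx in range(3):
            out += K[dy, dx] * P[dy:dy + H, dx:dx + Wd]
    return out

def pcnn_step(I, F, L, Y, theta, K,
              aL=0.3, aF=0.3, aT=0.2, VL=1.0, VF=0.5, VT=20.0, beta=0.1):
    """One iteration of the claimed units: connection, feed-in,
    internal state, and pulse generation."""
    L = np.exp(-aL) * L + VL * neighbor_sum(Y, K)       # connection unit
    F = np.exp(-aF) * F + VF * neighbor_sum(Y, K) + I   # feed-in unit
    U = F * (1.0 + beta * L)                            # internal state unit
    Y_new = (U > theta).astype(float)                   # pulse: fire if U > theta[n]
    theta = np.exp(-aT) * theta + VT * Y                # theta[n+1] from previous Y[n]
    return F, L, Y_new, theta
```

On the first iteration (all states zero), U = I and Θ = 0, so every neuron with a positive stimulus fires; the raised threshold then silences fired neurons until it decays below their internal state again.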
the step S4 specifically includes the following steps:
S41, screening the two most similar features from the features of the plurality of suspected forged sub-images by similarity calculation in a dual-feature matching mode, so as to obtain the two corresponding most similar suspected forged sub-images, wherein the dual-feature matching mode comprises:
a. matching the peaks of the waveform diagram features by Euclidean distance, the smaller the Euclidean distance, the more similar the two features:

dist(X, Y) = √( Σ_i (x_i − y_i)² ), x_i ∈ X, y_i ∈ Y

wherein dist(X, Y) is the Euclidean distance between X and Y, X and Y are the two features to be matched, and x_i and y_i are the i-th peaks of X and Y respectively;
b. matching the iteration numbers corresponding to the peaks of the waveform diagram features by the Jaccard metric, the larger the Jaccard metric, the more similar the two features:

sim(X, Y) = |X ∩ Y| / |X ∪ Y|

wherein sim(X, Y) is the Jaccard metric value of X and Y, the intersection and union being taken over the sets of peak iteration numbers of the two features;
the similarity calculation formula specifically comprises:
S(X,Y)=0.5*(1-dist(X,Y))+0.5*sim(X,Y)
wherein S(X, Y) is the similarity value of X and Y; the larger the value, the more similar the two features;
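The dual-feature score of S41 could be computed as below. This sketch assumes each feature is a pair of lists, peak magnitudes plus the iteration indices at which the peaks occur, with the magnitudes normalised so that 1 − dist stays in a meaningful range; both assumptions are this sketch's, not the claim's.

```python
import numpy as np

def dual_feature_similarity(peaks_x, peaks_y, iters_x, iters_y):
    """S(X, Y) = 0.5*(1 - dist(X, Y)) + 0.5*sim(X, Y) from the claim."""
    # Euclidean distance over peak magnitudes
    dist = float(np.sqrt(np.sum((np.asarray(peaks_x) - np.asarray(peaks_y)) ** 2)))
    # Jaccard metric over peak iteration numbers
    ix, iy = set(iters_x), set(iters_y)
    sim = len(ix & iy) / len(ix | iy)
    return 0.5 * (1.0 - dist) + 0.5 * sim

# identical features score 1.0, the maximum
print(dual_feature_similarity([0.2, 0.5], [0.2, 0.5], [3, 7], [3, 7]))  # 1.0
```

Weighting distance and Jaccard equally (0.5 each) follows the claim's formula; pairs of sub-images with the highest S(X, Y) are then taken as copy and paste candidates.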
S42, marking the two most similar suspected forged sub-images in the original image to be detected, thereby locating the copy region and the paste region.
2. The passive detection method for image copy-and-paste forgery according to claim 1, wherein the preprocessing process in step S1 specifically includes:
S11, performing grayscale conversion on the image to be detected to obtain the corresponding grayscale image;
S12, performing Gaussian filtering on the grayscale image to obtain a denoised grayscale image.
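A minimal sketch of the S11/S12 preprocessing: luminance grayscale followed by a separable Gaussian smoothing pass. The BT.601 luminance weights, kernel radius of 3σ, and edge padding are choices of this sketch, not fixed by the claim.

```python
import numpy as np

def preprocess(img_rgb, sigma=1.0):
    """S11: weighted grayscale; S12: separable Gaussian smoothing."""
    gray = img_rgb @ np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 weights
    r = max(1, int(round(3 * sigma)))
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()                                       # normalise kernel to sum 1
    # horizontal pass, then vertical pass (edge padding keeps borders stable)
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1,
                              np.pad(gray, ((0, 0), (r, r)), mode='edge'))
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0,
                               np.pad(tmp, ((r, r), (0, 0)), mode='edge'))
```

Because the kernel sums to one and the borders are edge-padded, a constant image passes through unchanged, which is a quick sanity check for the filter.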
3. The passive detection method for image copy-and-paste forgery as claimed in claim 1, wherein said step S24 specifically comprises the steps of:
S241, deleting useless contours from the contours of the plurality of suspected forged regions according to the contour hierarchy, screening out the contours having neither child contours nor parent contours;
S242, further deleting useless contours from the contours screened in step S241 according to a preset contour side length threshold range, so as to obtain the preferred contours of the suspected forged regions.
4. The passive detection method for image copy-and-paste forgery as claimed in claim 3, wherein the contour side length threshold range in the step S242 is specifically 0.5·Cav to 1.5·Cav, wherein Cav is the average contour side length over all the contours having neither child contours nor parent contours.
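Claim 4's screen reduces to a one-line filter once each contour is summarized by its side length. In this sketch the input is simply a list of per-contour side lengths (an assumption; the claim operates on contour objects):

```python
def filter_by_side_length(side_lengths):
    """Keep contours whose side length lies in [0.5*Cav, 1.5*Cav],
    Cav being the mean side length over all candidate contours."""
    cav = sum(side_lengths) / len(side_lengths)
    return [s for s in side_lengths if 0.5 * cav <= s <= 1.5 * cav]

# mean is 168, so the admissible band is [84, 252]: the tiny and the
# huge contour are discarded as useless
print(filter_by_side_length([10, 100, 110, 120, 500]))  # [100, 110, 120]
```

The symmetric band around the mean discards both tiny noise contours and oversized outliers in a single pass.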
5. The passive detection method for image copy-and-paste forgery as claimed in claim 1, wherein said step S32 specifically comprises the steps of:
s321, setting internal neuron parameters of the improved PCNN;
S322, defining the extracted feature in the form of a time signal:
T[n]=∑Y[n]
wherein, T [ n ] is a time signal, Y [ n ] is the output of each iteration of the improved PCNN;
S323, setting the number of iterations n, inputting the R, G and B three-channel image matrices into the improved PCNN, iterating continuously while recording the time signal after each iteration, and finally outputting the waveform diagram of the time signal versus the iteration number as the feature extracted for each channel.
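The S321–S323 loop can be illustrated with a deliberately simplified neuron (feed equals the stimulus, no linking term, illustrative constants), just to show how T[n] = ΣY[n] is recorded per iteration as the channel feature:

```python
import numpy as np

def time_signal_feature(I, n_iters=5, aT=0.2, VT=20.0):
    """Iterate a simplified PCNN over one channel matrix I and record
    the time signal T[n] = sum(Y[n]) after every iteration."""
    theta = np.zeros_like(I, dtype=float)
    T = []
    for _ in range(n_iters):
        U = I.astype(float)               # simplified internal state (feed only)
        Y = (U > theta).astype(float)     # pulse output against dynamic threshold
        theta = np.exp(-aT) * theta + VT * Y
        T.append(float(Y.sum()))
    return T

# every positive pixel fires once at the first iteration, then the
# raised threshold silences it for the remaining iterations
print(time_signal_feature(np.array([[0.5, 0.0], [1.0, 0.2]])))  # [3.0, 0.0, 0.0, 0.0, 0.0]
```

The returned list, paired with its iteration indices, is exactly the waveform (time signal versus iteration number) that later steps match between sub-images.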
6. The passive detection method for image copy-and-paste forgery as claimed in claim 5, wherein said step S33 specifically comprises the steps of:
S331, deleting the first-iteration information from the waveform diagram features of the G and R channels;
S332, concatenating the waveform diagram features of the B channel with the first-iteration-deleted waveform diagram features of the G and R channels, in the order B, G, R, so as to obtain a waveform diagram feature of time signal versus iteration number covering the B, G and R channel information, namely the feature of the suspected forged sub-image.
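Steps S331/S332 reduce to list slicing and concatenation. This sketch assumes each channel feature is the list of time-signal values per iteration, and that the G and R first entries are dropped because they duplicate the B channel's first entry (an assumption about the claim's rationale):

```python
def concat_channel_features(tb, tg, tr):
    """Concatenate per-channel time-signal features in B, G, R order,
    dropping the first-iteration entry of the G and R channels."""
    return list(tb) + list(tg[1:]) + list(tr[1:])

print(concat_channel_features([9, 1, 2], [9, 3, 4], [9, 5, 6]))  # [9, 1, 2, 3, 4, 5, 6]
```

The concatenated list is the final feature of one suspected forged sub-image, ready for the dual-feature matching of step S4.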
CN202010606073.0A 2020-06-29 2020-06-29 Image copying, pasting and forging passive detection method Active CN111754441B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010606073.0A CN111754441B (en) 2020-06-29 2020-06-29 Image copying, pasting and forging passive detection method


Publications (2)

Publication Number Publication Date
CN111754441A CN111754441A (en) 2020-10-09
CN111754441B true CN111754441B (en) 2023-11-21

Family

ID=72677998


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270330A (en) * 2020-11-05 2021-01-26 国网甘肃省电力公司电力科学研究院 Intelligent detection method for concerned target based on Mask R-CNN neural network
CN112651319B (en) * 2020-12-21 2023-12-05 科大讯飞股份有限公司 Video detection method and device, electronic equipment and storage medium
CN113033530B (en) * 2021-05-31 2022-02-22 成都新希望金融信息有限公司 Certificate copying detection method and device, electronic equipment and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345758A (en) * 2013-07-25 2013-10-09 南京邮电大学 Joint photographic experts group (JPEG) image region copying and tampering blind detection method based on discrete cosine transformation (DCT) statistical features
CN106327481A (en) * 2016-08-10 2017-01-11 东方网力科技股份有限公司 Image tampering detection method and image tampering detection device based on big data
CN107067389A (en) * 2017-01-05 2017-08-18 佛山科学技术学院 A kind of blind evidence collecting method of distorted image based on Sobel rim detections Yu image block brightness
CN107993230A (en) * 2017-12-18 2018-05-04 辽宁师范大学 Distorted image detection method based on triangle gridding comprehensive characteristics
CN108122225A (en) * 2017-12-18 2018-06-05 辽宁师范大学 Digital image tampering detection method based on self-adaptive features point
CN108335290A (en) * 2018-01-23 2018-07-27 中山大学 A kind of image zone duplicating and altering detecting method based on LIOP features and Block- matching
CN109360199A (en) * 2018-10-15 2019-02-19 南京工业大学 The blind checking method of image repeat region based on Wo Sesitan histogram Euclidean measurement
CN111008955A (en) * 2019-11-06 2020-04-14 重庆邮电大学 Multi-scale image block matching rapid copying pasting tampering detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Evaluation of Popular Copy-Move Forgery Detection Approaches; Vincent Christlein et al.; IEEE; pp. 1-26 *
Image copy-paste tampering detection method based on SIFT and perceptual hashing; Ma Weipeng et al.; Graphics and Images (《图形图像》); pp. 56-59 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant