CN117593311A - Deep synthetic image detection enhancement method and device based on generative adversarial network - Google Patents

Deep synthetic image detection enhancement method and device based on generative adversarial network

Info

Publication number
CN117593311A
CN117593311A CN202410081459.2A CN202410081459A
Authority
CN
China
Prior art keywords
image
true
false
depth
authenticity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410081459.2A
Other languages
Chinese (zh)
Other versions
CN117593311B (en)
Inventor
巴钟杰
郑乔木
程鹏
王庆龙
黄鹏
秦湛
任奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202410081459.2A priority Critical patent/CN117593311B/en
Publication of CN117593311A publication Critical patent/CN117593311A/en
Application granted granted Critical
Publication of CN117593311B publication Critical patent/CN117593311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep synthetic image detection enhancement method and device based on a generative adversarial network, belonging to the technical field of image detection and comprising the following steps: collecting a deep synthetic image dataset; training an authenticity detection model on the dataset; training a generative adversarial network to enhance the authenticity detection model's performance in detecting deep synthetic data; constructing a new dataset containing gradient information from the authenticity detection model; and retraining a third-party authenticity detection model on the new dataset. A deep synthesis service provider uses the generative adversarial network to enhance its deep synthetic pictures, so that the third-party authenticity detection model detects the enhanced pictures at a high detection rate. The method and device for deep synthetic image detection enhancement based on a generative adversarial network can improve the detection rate of deep synthetic images.

Description

Deep synthetic image detection enhancement method and device based on generative adversarial network
Technical Field
The invention relates to the technical field of image detection, and in particular to a deep synthetic image detection enhancement method and device based on a generative adversarial network.
Background
In the field of deep synthetic images, current passive detection approaches mainly construct deep neural networks to identify forgery traces left in the image by the forgery algorithm; such forgery detection models usually require a large amount of real and fake data for supervised training. Active detection approaches, in contrast, embed a specific watermark into the images produced by the generation model before they are distributed to users, so that deep synthetic images can be distinguished from real images.
In the field of passive image detection, the related art provides many image authenticity detection models with a certain degree of generalization to cope with unknown deep synthesis models. However, because different deep synthesis models differ greatly in structure, the images they generate also differ greatly in domain, so an image authenticity detection model has limited capability of detecting pictures generated by an unknown deep synthesis model. Existing image authenticity detection models that do achieve high generalization are, on the one hand, limited by the similarity of deep synthesis model backbones and, on the other hand, depend on image datasets generated by the corresponding target (unknown) deep synthesis model.
In the field of active image detection, the related art provides watermark embedding and extraction methods that make the identification of a deep synthetic image clear and unambiguous. However, active watermark detection follows a fixed signal-processing pipeline, which suffers from a long processing cycle and a single, fixed watermark payload. In addition, because watermark embedding is a reversible operation, the embedding procedure must be kept secret and cannot be distributed to third parties; otherwise the watermark content can easily be erased.
Disclosure of Invention
The invention aims to provide a deep synthetic image detection enhancement method and device based on a generative adversarial network. A deep synthetic image detection model is built from abundant forged image data and the corresponding real data; based on adversarial generation and adversarial example generation techniques, the detection accuracy on deep synthetic images is enhanced while the images remain visually unchanged to the naked eye; and distillation learning is used to improve the detection capability of third-party deep synthetic image detection models, with different structures and different training datasets, on deep synthetic images. As a result, any user can obtain a deep synthetic image detection model by distilling on the specified dataset, and that model achieves a high detection rate on the enhanced deep synthetic images.
To achieve the above object, the present invention provides a deep synthetic image detection enhancement method based on a generative adversarial network, the method being divided into a deep synthetic image enhancement stage and a third-party detection model training stage;
wherein the deep synthetic image enhancement stage comprises the following steps:
S1, acquiring a deep synthetic image dataset D1;
S2, randomly extracting N images from the image dataset D1 to form image data I1, and performing an image augmentation operation on the image data I1 to obtain N augmented images;
S3, inputting the N augmented images into an authenticity image classifier M1, determining the authenticity class of each image, and calculating a first authenticity classification loss parameter;
S4, according to the first authenticity classification loss parameter, taking reduction of the total loss as the optimization objective, using an optimization algorithm to update the parameters of the authenticity image classifier M1, obtaining an authenticity image classifier M11;
S5, using the authenticity image classifier M11 obtained in step S4, training a new autoencoder G1;
S6, inputting the image data I1 into the authenticity image classifier M11 and the autoencoder G1 respectively, obtaining an authenticity confidence θ1 and an embedded perturbation P1; truncating the embedded perturbation P1 to an upper bound and a lower bound and embedding it into the image data I1 to obtain an image I2;
truncating the image I2 to the range 0-1 and inputting it into the authenticity image classifier M11, calculating an authenticity confidence θ2;
S7, according to the authenticity confidences of the picture before and after the perturbation P1 is embedded, calculating a detection enhancement index parameter, a second authenticity classification loss parameter and a perturbation amplitude loss parameter, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the autoencoder G1; a code sketch of steps S6-S7 follows.
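For readability, the following is a minimal PyTorch-style sketch of one optimization step covering steps S6-S7. All names (classifier_m11, autoencoder_g1, the bounds eps_low/eps_up) and the concrete loss forms are illustrative assumptions; the patent fixes the losses only at the level described above.

```python
import torch
import torch.nn.functional as F

def enhancement_step(classifier_m11, autoencoder_g1, optimizer,
                     images_i1, eps_low=-0.01, eps_up=0.01):
    """One S6-S7 step: embed a truncated perturbation and update G1 (illustrative sketch)."""
    # S6: authenticity confidence of the clean images and the perturbation proposed by G1
    # (classifier_m11 is assumed frozen in this stage and to output one "fake" logit per image)
    theta1 = torch.sigmoid(classifier_m11(images_i1)).squeeze(1)  # probability of "fake"
    p1 = torch.clamp(autoencoder_g1(images_i1), eps_low, eps_up)  # truncate to [lower, upper] bound
    images_i2 = torch.clamp(images_i1 + p1, 0.0, 1.0)             # embed and truncate to 0-1
    theta2 = torch.sigmoid(classifier_m11(images_i2)).squeeze(1)

    # S7 (assumed loss forms): raise the post-perturbation confidence, keep the perturbation
    # small, and keep the perturbed image classified as fake.
    detection_enhancement = (theta1 - theta2).mean()              # falls as detectability rises
    amplitude_loss = p1.abs().mean()                              # perturbation amplitude loss
    classification_loss = F.binary_cross_entropy(theta2, torch.ones_like(theta2))

    total_loss = detection_enhancement + classification_loss + amplitude_loss
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

Here the optimizer would be, for example, torch.optim.Adam over autoencoder_g1.parameters(), matching the Adam choice stated in the embodiments.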
Preferably, the third-party detection model training stage comprises the following steps:
S8, according to the authenticity image classifier M11 from the deep synthetic image enhancement stage and the image dataset D1, constructing a new image dataset D2 for distillation learning: for each image It in the dataset, calculating the authenticity confidence θt for that picture and the gradient matrix Gradt of that confidence with respect to the picture, and constructing the triple (It, θt, Gradt), as sketched after step S10 below;
S9, selecting a new true and false image classifier M 2 A new image dataset D obtained according to step S8 2 Image classifier M for true or false 2 Training, wherein the training round is k, and calculating a third true-false classification loss parameter, a gradient matrix loss parameter and a confidence coefficient loss parameter, wherein k is>1;
S10, according to the third authenticity classification loss parameter, the gradient matrix loss parameter and the confidence loss parameter, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the authenticity image classifier M2.
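A minimal sketch of the dataset construction in step S8, assuming the classifier exposes a single "fake" logit and that the gradient matrix is taken with respect to the input image via automatic differentiation; function and variable names are illustrative.

```python
import torch

def build_distillation_sample(classifier_m11, image_it):
    """Build one (It, theta_t, Grad_t) triple for dataset D2 (illustrative sketch)."""
    x = image_it.detach().unsqueeze(0).clone().requires_grad_(True)  # (1, C, H, W)
    theta_t = torch.sigmoid(classifier_m11(x)).squeeze()             # authenticity confidence
    grad_t = torch.autograd.grad(theta_t, x)[0].squeeze(0)           # d(confidence) / d(image)
    return image_it, theta_t.detach(), grad_t
```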
Preferably, in step S1, the image dataset D1 includes forged image data and real image data corresponding to different generation models.
Preferably, in step S2, N is a positive integer, and the N images include at least one forged image and at least one real image.
Preferably, in step S3, the first authenticity classification loss parameter represents the difference between the authenticity class predicted by the authenticity image classifier M1 and the actual authenticity label.
Preferably, in step S5, the autoencoder G1 comprises two parts, an encoder E1 and a decoder F1, wherein the encoder E1 extracts features from the input image and the decoder F1 decodes the features into a perturbation variable of the same size as the original image;
the encoder E1 is composed of 6 convolution layers with ReLU activations, and the decoder F1 is composed of 6 deconvolution layers with ReLU activations (a sketch of this architecture follows).
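A sketch of the autoencoder under the stated constraints (6 convolution layers in E1, 6 deconvolution layers in F1, ReLU activations, output the same size as the input). Channel widths, kernel sizes and strides are assumptions, as is leaving the final decoder layer without ReLU so the perturbation can take negative values.

```python
import torch.nn as nn

class PerturbationAutoencoder(nn.Module):
    """Autoencoder G1: encoder E1 with 6 conv layers, decoder F1 with 6 deconv layers (sketch)."""
    def __init__(self, in_channels=3, base=32):
        super().__init__()
        chans = [in_channels, base, base * 2, base * 4, base * 4, base * 8, base * 8]
        # Encoder E1: 6 stride-2 convolutions, each followed by ReLU
        self.encoder = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 4, stride=2, padding=1), nn.ReLU())
            for i in range(6)
        ])
        rev = list(reversed(chans))
        # Decoder F1: 6 stride-2 transposed convolutions with ReLU between them;
        # the last layer is left linear so the perturbation can be negative (assumption)
        self.decoder = nn.Sequential(
            *[nn.Sequential(nn.ConvTranspose2d(rev[i], rev[i + 1], 4, stride=2, padding=1), nn.ReLU())
              for i in range(5)],
            nn.ConvTranspose2d(rev[5], rev[6], 4, stride=2, padding=1),
        )

    def forward(self, x):
        # The output perturbation has the same spatial size as the input image
        # (for inputs whose sides are divisible by 64, e.g. 256x256)
        return self.decoder(self.encoder(x))
```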
Preferably, in step S6, the image data I1 is data used to train the forged-image model, and the authenticity confidence is the probability with which the authenticity image classifier M11 judges a picture to be fake;
the authenticity confidences θ1 and θ2 are calculated as follows:
θ1 = M11(I1)
θ2 = M11(Clip(I1 + Clip(P1, εlow, εup), 0, 1))
the Clip function takes three arguments: the first is the variable to be constrained to an interval, the second is the lower bound, and the third is the upper bound;
wherein εup is a value greater than 0 representing the upper bound, εlow is a value less than 0 representing the lower bound, M11 is the authenticity image classifier, and I1 is a subset of the image dataset D1.
Preferably, in step S8, the gradient matrix Gradt is calculated as follows:
Gradt = ∂M11(It) / ∂It
wherein It is the input to the authenticity detection model.
The invention also provides a deep synthetic image detection enhancement device based on a generative adversarial network, comprising:
a first acquisition module for acquiring a deep synthetic image dataset D1;
an image augmentation module for randomly extracting N images from the image dataset D1 to form image data I1 and performing an image augmentation operation on the image data I1 to obtain N augmented images;
a first calculation module for inputting the N augmented images into the authenticity image classifier M1, determining the authenticity class of each image, and calculating a first authenticity classification loss parameter;
a first optimization module for, according to the first authenticity classification loss parameter, taking reduction of the total loss as the optimization objective and using an optimization algorithm to update the parameters of the authenticity image classifier M1, obtaining an authenticity image classifier M11;
a first detection module for repeatedly executing the steps of the image augmentation module through the first optimization module for a preset number of rounds to obtain a trained first authenticity detection model, the first authenticity detection model being used for authenticity detection of deep synthetic data;
a second calculation module for training a new autoencoder G1 using the authenticity image classifier M11;
a third calculation module for inputting the image data I1 into the authenticity image classifier M11 and the autoencoder G1 to obtain an authenticity confidence θ1 and an embedded perturbation P1, the perturbation being truncated to the upper and lower bounds and embedded into the initial image I1 to obtain an image I2, and the image I2 being truncated to the range 0-1 and input into the authenticity image classifier M11 to calculate an authenticity confidence θ2;
a second optimization module for calculating a detection enhancement index parameter and a second authenticity classification loss parameter according to the authenticity confidences of the pictures before and after the perturbation P1 is embedded, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the autoencoder G1;
a second acquisition module for constructing, according to the authenticity image classifier M11 in the first detection module and the image dataset D1, a new image dataset D2 for distillation learning, wherein for each image It in the dataset an authenticity confidence θt and a gradient matrix Gradt of that confidence with respect to the picture are computed to construct the triple (It, θt, Gradt);
a fourth calculation module for selecting a new authenticity image classifier M2 and training it on the image dataset obtained by the second acquisition module for k rounds, where k > 1, and calculating a third authenticity classification loss parameter, a gradient matrix loss parameter and a confidence loss parameter;
a third optimization module for, according to the third authenticity classification loss parameter, the gradient matrix loss parameter and the confidence loss parameter, taking reduction of the total loss as the optimization objective and using an optimization algorithm to update the parameters of the authenticity image classifier M2;
and a second detection module for repeatedly executing the steps of the fourth calculation module and the third optimization module for a preset number of rounds to obtain a trained second authenticity detection model, the second authenticity detection model being used for authenticity detection of deep synthetic data.
Therefore, the deep synthetic image detection enhancement method and device based on a generative adversarial network have the following technical effects:
(1) The detection generalization of the initial detection model to other target domains is improved by utilizing existing multi-model deep synthetic picture datasets, ensuring that the model has stronger universality over most existing multi-model deep synthetic datasets.
(2) Based on adversarial generation, an adversarial example generation technique enhances the detection success rate on the model's images with minimal perturbation. The perturbation is suitable to be added when a deep synthesis service provider delivers a picture to a user, so that the detection success rate on deep synthetic pictures becomes independent of the generation technology.
(3) Based on the constructed dataset containing gradients and soft labels, a third party can fine-tune a detection model of any structure through distillation learning on this dataset, thereby improving the detection success rate of the third-party detection model on the processed pictures.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a flow chart of the deep synthetic image detection enhancement method based on a generative adversarial network of the present invention;
FIG. 2 shows the initial accuracy of ResNet on the test set;
FIG. 3 shows the initial accuracy of EfficientNet on the test set;
FIG. 4 shows the initial accuracy of Xception on the test set;
FIG. 5 compares the accuracy of the ResNet model on the deep-forged images in the test datasets before and after the perturbation;
FIG. 6 compares the accuracy of the EfficientNet model on the deep-forged images in the test datasets before and after the perturbation;
FIG. 7 compares the accuracy of the Xception model on the deep-forged images in the test datasets before and after the perturbation.
Detailed Description
The technical scheme of the invention is further described below through the attached drawings and the embodiments.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs.
Example 1
As shown in FIG. 1, the deep synthetic image detection enhancement method based on a generative adversarial network is divided into a deep synthetic image enhancement stage and a third-party detection model training stage;
wherein the deep synthetic image enhancement stage comprises the following steps:
S1, acquiring a deep synthetic image dataset D1 covering multiple generation models;
The image data set D1 comprises fake image data and real image data corresponding to different generation models;
S2, randomly extracting N images from the image dataset D1 to form image data I1, and performing an image augmentation operation on the image data I1 to obtain N augmented images;
N is a positive integer, and the N images include at least one forged image and at least one real image; the number of generation models represented in the source image data equals the number represented in the target image data among the N images, and for the image data corresponding to each generation model the ratio of forged images to real images is fixed (see the sampling sketch below);
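One reading of this sampling constraint is sketched below: each generation model contributes the same number of images, with a fixed forged-to-real ratio per model. The dataset layout, the ratio value and the helper name are assumptions; the augmentation itself (e.g., flips or crops) is not specified in the text and is applied separately.

```python
import random

def sample_balanced_batch(dataset_by_generator, n_per_generator=4, fake_ratio=0.5):
    """Draw N images balanced across generation models with a fixed fake:real ratio (sketch)."""
    batch = []
    n_fake = int(n_per_generator * fake_ratio)
    for split in dataset_by_generator.values():          # e.g. {"progan": {"fake": [...], "real": [...]}}
        batch += random.sample(split["fake"], n_fake)
        batch += random.sample(split["real"], n_per_generator - n_fake)
    random.shuffle(batch)
    return batch                                          # image augmentation is applied afterwards
```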
S3, inputting the N augmented images into an authenticity image classifier M1, determining the authenticity class of each image, and calculating a first authenticity classification loss parameter;
in step S3, the first authenticity classification loss parameter represents the difference between the authenticity class predicted by the authenticity image classifier M1 and the actual authenticity label;
S4, according to the first authenticity classification loss parameter, taking reduction of the total loss as the optimization objective, using an optimization algorithm to update the parameters of the authenticity image classifier M1, obtaining an authenticity image classifier M11;
S5, using the authenticity image classifier M11 obtained in step S4, training a new autoencoder G1;
The autoencoder G1 comprises two parts, an encoder E1 and a decoder F1, wherein the encoder E1 extracts features from the input image and the decoder F1 decodes the features into a perturbation variable of the same size as the original image;
the encoder E1 is composed of 6 convolution layers with ReLU activations, and the decoder F1 is composed of 6 deconvolution layers with ReLU activations; the input is a source image, and the output is a perturbation of the same size as the original image.
S6, inputting the image data I1 into the authenticity image classifier M11 and the autoencoder G1 respectively, obtaining an authenticity confidence θ1 and an embedded perturbation P1; truncating the embedded perturbation P1 to the upper bound εup and the lower bound εlow and embedding it into the image data I1 to obtain an image I2;
truncating the image I2 to the range 0-1 and inputting it into the authenticity image classifier M11, calculating an authenticity confidence θ2;
wherein the image data I1 is data used to train the forged-image model, and the authenticity confidence is the probability with which the authenticity image classifier M11 judges a picture to be fake;
the authenticity confidences θ1 and θ2 are calculated as follows:
θ1 = M11(I1)
θ2 = M11(Clip(I1 + Clip(P1, εlow, εup), 0, 1))
the Clip function takes three arguments: the first is the variable to be constrained to an interval, the second is the lower bound, and the third is the upper bound;
wherein εup is a value greater than 0 representing the upper bound, εlow is a value less than 0 representing the lower bound, M11 is the authenticity image classifier, and I1 is a subset of the image dataset D1.
S7, according to the authenticity confidences of the picture before and after the perturbation P1 is embedded, calculating a detection enhancement index parameter, a second authenticity classification loss parameter and a perturbation amplitude loss parameter, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the autoencoder G1;
the detection enhancement index is calculated from the authenticity confidences, and the perturbation amplitude loss parameter is calculated from the perturbation output by the autoencoder G1, where N is the number of input images, θi is the confidence obtained from the authenticity image classifier M11 for the i-th image before the perturbation is added, θ'i is the confidence obtained from M11 for the i-th image after the perturbation is added, and Pi is the output of the autoencoder G1.
The optimization algorithm is Adam;
adding the detection enhancement index parameter, the second authenticity classification loss parameter and the perturbation amplitude loss parameter to obtain a total loss value;
or, multiplying the authenticity classification loss parameter by a weight factor λ and adding the product to the perturbation amplitude loss parameter and the detection enhancement index parameter to obtain a total loss value;
wherein the weight factor λ is calculated from a formula in which p is a variable that increases linearly with the number of training rounds, and λ gradually increases from 0 to 1 during training;
using the Adam optimization algorithm to update the parameters of the autoencoder G1 with the total loss value as the optimization objective (a sketch of the weight factor follows);
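The published formula for the weight factor is not legible in this text; the sketch below is an assumed stand-in with the stated qualitative behaviour, where p rises linearly from 0 to 1 over the training rounds and λ grows from 0 to 1.

```python
import math

def weight_factor(p):
    """Weight factor as a smooth function of p in [0, 1]; an assumed schedule, not the
    published formula, chosen only to grow gradually from 0 to 1 as training proceeds."""
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

# Weighted variant of the total loss described above:
# total = detection_enhancement + amplitude_loss + weight_factor(p) * classification_loss
```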
the third-party detection model training stage comprises the following steps:
S8, according to the authenticity image classifier M11 from the deep synthetic image enhancement stage and the image dataset D1, constructing a new image dataset D2 for distillation learning: for each image It in the dataset, calculating the authenticity confidence θt for that picture and the gradient matrix Gradt of that confidence with respect to the picture, and constructing the triple (It, θt, Gradt);
the calculation formula is:
Gradt = ∂M11(It) / ∂It
wherein M11 is the authenticity image classifier and It is the input to the authenticity detection model;
S9, selecting a new authenticity image classifier M2 and training it on the new image dataset D2 obtained in step S8 for k training rounds, where k > 1, and calculating a third authenticity classification loss parameter, a gradient matrix loss parameter and a confidence loss parameter;
the model backbones of the authenticity image classifier M11 and the authenticity image classifier M2 are ResNet-50, EfficientNetV2, Xception, or variant backbone models thereof;
S10, according to the third authenticity classification loss parameter, the gradient matrix loss parameter and the confidence loss parameter, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the authenticity image classifier M2.
In step S10, the optimization algorithm is Adam; the specific operation is the same as in step S7 (a sketch of one distillation training step follows).
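The following is a minimal sketch of one S9-S10 training step for the third-party classifier M2 on dataset D2. The concrete loss forms (binary cross-entropy plus two mean-squared-error terms, summed without weights) are assumptions; the patent names only a classification loss, a gradient-matrix loss and a confidence loss, optimized with Adam.

```python
import torch
import torch.nn.functional as F

def distillation_step(student_m2, optimizer, images, hard_labels, theta_teacher, grad_teacher):
    """One S9-S10 step: fit M2 to labels, teacher confidences and teacher gradients (sketch)."""
    images = images.clone().requires_grad_(True)
    theta_student = torch.sigmoid(student_m2(images)).squeeze(1)

    classification_loss = F.binary_cross_entropy(theta_student, hard_labels.float())
    confidence_loss = F.mse_loss(theta_student, theta_teacher)              # match soft labels
    grad_student = torch.autograd.grad(theta_student.sum(), images, create_graph=True)[0]
    gradient_loss = F.mse_loss(grad_student, grad_teacher)                  # match input gradients

    total = classification_loss + confidence_loss + gradient_loss
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```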
The technical effects of the method according to the present invention will be described below by way of specific examples.
To demonstrate the effectiveness of the method, a training dataset generated with ProGAN is first adopted, containing picture categories such as airplane, bicycle, bird and cat, with 12602 real pictures and 12602 deep synthetic pictures per category and 504079 pictures in the training dataset in total; a test set of 216034 pictures is generated in the same way. On this training dataset, three deep synthesis authenticity detection models with different network structures are trained, namely ResNet-50, EfficientNet-b5 and Xception. As shown in FIGS. 2, 3 and 4, the models converge during training and perform well on the ProGAN test set.
Further, the ResNet-50 deep synthesis authenticity detection model is used as the target model of the adversarial example generation step, a specific dataset is used as the training set, and the autoencoder that generates the perturbation is obtained by training. The enhancement effect of the perturbation generated by the autoencoder is then tested on the three models to measure the cross-dataset and cross-model capability of the method. The test data comprise deep synthetic datasets (containing no real images) generated by thirteen different methods: BigGAN, CRN (Cascaded Refinement Network), CycleGAN, Deepfake, GauGAN, IMLE (Implicit Maximum Likelihood Estimation), ProGAN, SAN (Second-order Attention Network), SeeingDark, StarGAN, StyleGAN, StyleGAN2 and WhichFaceIsReal. Using the WhichFaceIsReal dataset as the training set, the detection enhancement performance of the obtained autoencoder on forged pictures is shown in FIG. 5, where the target authenticity detection model is the same ResNet-50 used as the target model during training. As shown in FIG. 6 and FIG. 7, where the target authenticity detection model differs from the target model used during training, the accuracy on the ProGAN set homologous to the training of the deep synthesis authenticity detectors is improved by 2% and 12% respectively, and by 2% and 26% respectively over all datasets, which shows that the added perturbation improves the detection capability of deep synthesis authenticity detectors across datasets and across model structures.
To check the effect of adding the perturbation on the visual appearance of the picture, the amplitude of the perturbation is calculated and found to be lower than 0.001. Before and after the perturbation is added, the picture shows no obvious noise and its visual appearance is not noticeably affected (an evaluation sketch follows).
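An illustrative evaluation sketch matching the experiment above: it compares the detection accuracy on deep-forged images before and after the perturbation is embedded and reports the mean absolute perturbation amplitude. Loader and model names, and the bound values, are assumptions.

```python
import torch

@torch.no_grad()
def evaluate_enhancement(classifier, autoencoder, fake_loader, eps_low=-0.01, eps_up=0.01):
    """Accuracy on forged images before/after perturbation, plus mean amplitude (sketch)."""
    correct_before = correct_after = total = 0
    amplitude_sum = 0.0
    for images in fake_loader:                               # loader yields forged images only
        p = torch.clamp(autoencoder(images), eps_low, eps_up)
        perturbed = torch.clamp(images + p, 0.0, 1.0)
        pred_before = torch.sigmoid(classifier(images)).squeeze(1) > 0.5   # "fake" is positive
        pred_after = torch.sigmoid(classifier(perturbed)).squeeze(1) > 0.5
        correct_before += pred_before.sum().item()
        correct_after += pred_after.sum().item()
        amplitude_sum += p.abs().mean().item() * images.size(0)
        total += images.size(0)
    return correct_before / total, correct_after / total, amplitude_sum / total
```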
Example 2
The invention also provides a deep synthetic image detection enhancement device based on a generative adversarial network, comprising:
a first acquisition module for acquiring a deep synthetic image dataset D1;
an image augmentation module for randomly extracting N images from the image dataset D1 to form image data I1 and performing an image augmentation operation on the image data I1 to obtain N augmented images;
a first calculation module for inputting the N augmented images into the authenticity image classifier M1, determining the authenticity class of each image, and calculating a first authenticity classification loss parameter;
a first optimization module for, according to the first authenticity classification loss parameter, taking reduction of the total loss as the optimization objective and using an optimization algorithm to update the parameters of the authenticity image classifier M1, obtaining an authenticity image classifier M11;
a first detection module for repeatedly executing the steps of the image augmentation module through the first optimization module for a preset number of rounds to obtain a trained first authenticity detection model, the first authenticity detection model being used for authenticity detection of deep synthetic data;
a second calculation module for training a new autoencoder G1 using the authenticity image classifier M11;
a third calculation module for inputting the image data I1 into the authenticity image classifier M11 and the autoencoder G1 to obtain an authenticity confidence θ1 and an embedded perturbation P1, the perturbation being truncated to the upper and lower bounds and embedded into the initial image I1 to obtain an image I2, and the image I2 likewise being truncated to the range 0-1 and input into the authenticity image classifier M11 to calculate an authenticity confidence θ2;
a second optimization module for calculating a detection enhancement index parameter and a second authenticity classification loss parameter according to the authenticity confidences of the pictures before and after the perturbation P1 is embedded, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the autoencoder G1;
a second acquisition module for constructing, according to the authenticity image classifier M11 in the first detection module and the image dataset D1, a new image dataset D2 for distillation learning, wherein for each image It in the dataset an authenticity confidence θt and a gradient matrix Gradt of that confidence with respect to the picture are computed to construct the triple (It, θt, Gradt);
a fourth calculation module for selecting a new authenticity image classifier M2 and training it on the image dataset obtained by the second acquisition module for k rounds, where k > 1, and calculating a third authenticity classification loss parameter, a gradient matrix loss parameter and a confidence loss parameter;
a third optimization module for, according to the third authenticity classification loss parameter, the gradient matrix loss parameter and the confidence loss parameter, taking reduction of the total loss as the optimization objective and using an optimization algorithm to update the parameters of the authenticity image classifier M2;
and a second detection module for repeatedly executing the steps of the fourth calculation module and the third optimization module for a preset number of rounds to obtain a trained second authenticity detection model, the second authenticity detection model being used for authenticity detection of deep synthetic data.
Therefore, with the deep synthetic image detection enhancement method and device based on a generative adversarial network, a deep synthetic image detection model is constructed from abundant forged image data and the corresponding real data; based on adversarial generation and adversarial example generation techniques, the detection accuracy on deep synthetic images is enhanced while the pictures remain visually unchanged to the naked eye; and distillation learning improves the detection capability of third-party deep synthetic image detection models, with different structures and different training datasets, on deep synthetic images, so that any user can obtain a deep synthetic image detection model by distilling on the given dataset, and that model has a high detection rate on the enhanced deep synthetic images.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solution of the invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the invention.

Claims (9)

1. A deep synthetic image detection enhancement method based on a generative adversarial network, characterized by comprising a deep synthetic image enhancement stage and a third-party detection model training stage;
wherein the deep synthetic image enhancement stage comprises the following steps:
S1, acquiring a deep synthetic image dataset D1;
S2, randomly extracting N images from the image dataset D1 to form image data I1, and performing an image augmentation operation on the image data I1 to obtain N augmented images;
S3, inputting the N augmented images into an authenticity image classifier M1, determining the authenticity class of each image, and calculating a first authenticity classification loss parameter;
S4, according to the first authenticity classification loss parameter, taking reduction of the total loss as the optimization objective, using an optimization algorithm to update the parameters of the authenticity image classifier M1, obtaining an authenticity image classifier M11;
S5, using the authenticity image classifier M11 obtained in step S4, training a new autoencoder G1;
S6, inputting the image data I1 into the authenticity image classifier M11 and the autoencoder G1 respectively, obtaining an authenticity confidence θ1 and an embedded perturbation P1; truncating the embedded perturbation P1 to the upper bound εup and the lower bound εlow and embedding it into the image data I1 to obtain an image I2;
truncating the image I2 to the range 0-1 and inputting it into the authenticity image classifier M11, calculating an authenticity confidence θ2;
S7, according to the authenticity confidences of the picture before and after the perturbation P1 is embedded, calculating a detection enhancement index parameter, a second authenticity classification loss parameter and a perturbation amplitude loss parameter, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the autoencoder G1.
2. The deep synthetic image detection enhancement method based on a generative adversarial network according to claim 1, wherein the third-party detection model training stage comprises the following steps:
S8, according to the authenticity image classifier M11 from the deep synthetic image enhancement stage and the image dataset D1, constructing a new image dataset D2 for distillation learning: for each image It in the dataset, calculating the authenticity confidence θt for that picture and the gradient matrix Gradt of that confidence with respect to the picture, and constructing the triple (It, θt, Gradt);
S9, selecting a new authenticity image classifier M2 and training it on the new image dataset D2 obtained in step S8 for k training rounds, where k > 1, and calculating a third authenticity classification loss parameter, a gradient matrix loss parameter and a confidence loss parameter;
S10, according to the third authenticity classification loss parameter, the gradient matrix loss parameter and the confidence loss parameter, taking reduction of the total loss as the optimization objective, and using an optimization algorithm to update the parameters of the authenticity image classifier M2.
3. The deep synthetic image detection enhancement method based on a generative adversarial network according to claim 2, wherein, in step S1, the image dataset D1 includes forged image data and real image data corresponding to different generation models.
4. The deep synthetic image detection enhancement method based on a generative adversarial network according to claim 3, wherein, in step S2, N is a positive integer, and the N images include at least one forged image and at least one real image.
5. The deep synthetic image detection enhancement method based on a generative adversarial network according to claim 4, wherein, in step S3, the first authenticity classification loss parameter represents the difference between the authenticity class predicted by the authenticity image classifier M1 and the actual authenticity label.
6. The deep synthetic image detection enhancement method based on a generative adversarial network according to claim 5, wherein, in step S5, the autoencoder G1 comprises two parts, an encoder E1 and a decoder F1, wherein the encoder E1 extracts features from the input image and the decoder F1 decodes the features into a perturbation variable of the same size as the original image;
the encoder E1 is composed of 6 convolution layers with ReLU activations, and the decoder F1 is composed of 6 deconvolution layers with ReLU activations.
7. The deep synthetic image detection enhancement method based on a generative adversarial network according to claim 6, wherein, in step S6, the image data I1 is data used to train the forged-image model, and the authenticity confidence is the probability with which the authenticity image classifier M11 judges a picture to be fake;
the authenticity confidences θ1 and θ2 are calculated as follows:
θ1 = M11(I1)
θ2 = M11(Clip(I1 + Clip(P1, εlow, εup), 0, 1))
the Clip function takes three arguments: the first is the variable to be constrained to an interval, the second is the lower bound, and the third is the upper bound;
wherein εup is a value greater than 0 representing the upper bound, εlow is a value less than 0 representing the lower bound, M11 is the authenticity image classifier, and I1 is a subset of the image dataset D1.
8. The deep synthetic image detection enhancement method based on a generative adversarial network according to claim 7, wherein, in step S8, the gradient matrix Gradt is calculated as follows:
Gradt = ∂M11(It) / ∂It
wherein It is the input to the authenticity detection model.
9. An apparatus for implementing the deep synthetic image detection enhancement method based on a generative adversarial network as claimed in any one of claims 1 to 8.
CN202410081459.2A 2024-01-19 2024-01-19 Depth synthetic image detection enhancement method and device based on countermeasure generation network Active CN117593311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410081459.2A CN117593311B (en) 2024-01-19 2024-01-19 Depth synthetic image detection enhancement method and device based on countermeasure generation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410081459.2A CN117593311B (en) 2024-01-19 2024-01-19 Depth synthetic image detection enhancement method and device based on countermeasure generation network

Publications (2)

Publication Number Publication Date
CN117593311A true CN117593311A (en) 2024-02-23
CN117593311B CN117593311B (en) 2024-06-21

Family

ID=89917040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410081459.2A Active CN117593311B (en) 2024-01-19 2024-01-19 Depth synthetic image detection enhancement method and device based on countermeasure generation network

Country Status (1)

Country Link
CN (1) CN117593311B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020258667A1 (en) * 2019-06-26 2020-12-30 平安科技(深圳)有限公司 Image recognition method and apparatus, and non-volatile readable storage medium and computer device
CN113094566A (en) * 2021-04-16 2021-07-09 大连理工大学 Deep confrontation multi-mode data clustering method
CN115019370A (en) * 2022-06-21 2022-09-06 深圳大学 Depth counterfeit video detection method based on double fine-grained artifacts
CN117079354A (en) * 2023-07-10 2023-11-17 华中科技大学 Deep forgery detection classification and positioning method based on noise inconsistency
CN117218707A (en) * 2023-10-07 2023-12-12 南京信息工程大学 Deep face detection method based on positive disturbance

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020258667A1 (en) * 2019-06-26 2020-12-30 平安科技(深圳)有限公司 Image recognition method and apparatus, and non-volatile readable storage medium and computer device
CN113094566A (en) * 2021-04-16 2021-07-09 大连理工大学 Deep confrontation multi-mode data clustering method
CN115019370A (en) * 2022-06-21 2022-09-06 深圳大学 Depth counterfeit video detection method based on double fine-grained artifacts
CN117079354A (en) * 2023-07-10 2023-11-17 华中科技大学 Deep forgery detection classification and positioning method based on noise inconsistency
CN117218707A (en) * 2023-10-07 2023-12-12 南京信息工程大学 Deep face detection method based on positive disturbance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨志东; 鲁敬; 王科清; 刘淑丽; 郭翼翔: "Application of digital watermarking technology in electric power information security assurance", Information Technology, no. 12, 25 December 2016 (2016-12-25) *
苗壮 et al.: "Infrared target modeling method based on dual adversarial auto-encoder networks", Acta Optica Sinica, no. 11, 10 June 2020 (2020-06-10) *

Also Published As

Publication number Publication date
CN117593311B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN108228915B (en) Video retrieval method based on deep learning
CN112734775B (en) Image labeling, image semantic segmentation and model training methods and devices
CN109165688A (en) A kind of Android Malware family classification device construction method and its classification method
Huang et al. A visual–textual fused approach to automated tagging of flood-related tweets during a flood event
US20080091627A1 (en) Data Learning System for Identifying, Learning Apparatus, Identifying Apparatus and Learning Method
CN106530200A (en) Deep-learning-model-based steganography image detection method and system
CN110490265B (en) Image steganalysis method based on double-path convolution and feature fusion
CN112613552A (en) Convolutional neural network emotion image classification method combining emotion category attention loss
CN110826056B (en) Recommended system attack detection method based on attention convolution self-encoder
CN111695597A (en) Credit fraud group recognition method and system based on improved isolated forest algorithm
CN102938054A (en) Method for recognizing compressed-domain sensitive images based on visual attention models
CN112884802B (en) Attack resistance method based on generation
CN106599834A (en) Information pushing method and system
Lu et al. Steganalysis of content-adaptive steganography based on massive datasets pre-classification and feature selection
CN115456043A (en) Classification model processing method, intent recognition method, device and computer equipment
CN114241564A (en) Facial expression recognition method based on inter-class difference strengthening network
CN113010705A (en) Label prediction method, device, equipment and storage medium
Kaur et al. Feature selection using mutual information and adaptive particle swarm optimization for image steganalysis
CN111737688B (en) Attack defense system based on user portrait
CN112966728A (en) Transaction monitoring method and device
CN117593311B (en) Depth synthetic image detection enhancement method and device based on countermeasure generation network
CN116645562A (en) Detection method for fine-grained fake image and model training method thereof
CN110278189B (en) Intrusion detection method based on network flow characteristic weight map
CN114926885A (en) Strong generalization depth counterfeit face detection method based on local anomaly
Lou et al. Message estimation for universal steganalysis using multi-classification support vector machine

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant