CN114202592A - Recoloring image forensics method based on spatial correlation - Google Patents

Recoloring image forensics method based on spatial correlation

Info

Publication number
CN114202592A
Authority
CN
China
Prior art keywords
recoloring
image
images
training
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111392793.2A
Other languages
Chinese (zh)
Inventor
陈诺
张玉书
祁树仁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202111392793.2A
Publication of CN114202592A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a recoloring image forensics method based on spatial correlation, which comprises the following steps: step one: constructing a training set and a test set from images processed by a plurality of recoloring algorithms and the corresponding natural images; step two: extracting spatial correlation features by computing a co-occurrence matrix in four directions for each color channel of the image; step three: constructing a feature learning network; step four: training the designed model with the constructed training set and the corresponding label set; step five: predicting the test-set images with the saved optimal model weights to distinguish natural images from recolored images. The invention trains a preset network model to obtain a trained recoloring detection model, which accurately detects whether a picture has been recolored and tampered with, and can therefore play an important role in practical applications involving image security.

Description

Recoloring image forensics method based on spatial correlation
Technical Field
The invention relates to the technical field of image processing and information security, and in particular to a recoloring image forensics method based on spatial correlation.
Background
Images are widely used to distribute news and record events as an important information medium, and people have long been accustomed to taking them at face value. However, with the rapid spread of image editing software such as Adobe Photoshop and GIMP, anyone can easily edit digital images at very low cost, producing content that is quite difficult for humans to distinguish from the original. Beyond the most common editing operations, such as copy-paste, splicing, and inpainting, a new type of image editing technique, image recoloring, has emerged.
Unlike common operations that change the image content by adding or deleting regions of interest, image recoloring aims to change the theme or style of an image by tampering with color values without compromising detail. This visually plausible tampering is difficult for the human visual system to perceive, and it can cause false judgments when specific objects or scenes must be identified or tracked.
Fortunately, image forensic technology has developed vigorously over the past few decades. According to working mechanism and principle, image forgery detection methods can be divided into two major categories: active forensics and passive forensics. Active forensics, such as digital watermarking, pre-embeds identity information into images; in the forensics phase, if an undamaged version of the information cannot be extracted, the image can be assumed to have been tampered with. However, it requires embedding a watermark in an image before distribution, which limits its practical application. Passive forensics, also called blind forensics, does not rely on any prior information. It relies entirely on the multimedia content being analyzed and attempts to reveal anomalies that may indicate tampering. For example, camera source identification exploits features such as lens distortion, the color filter array (CFA), and pattern noise; forgery detection exploits the specific artifacts left by JPEG compression, contrast enhancement, resampling, copy-paste, splicing, inpainting, and other tampering operations. On the whole, existing forensic methods perform well at detecting traditional image manipulations such as copy-paste and splicing. However, owing to the difference in tampering mechanism, these methods cannot be used to detect recolored images. Although changing the colors of an image is one of the most common tasks in image processing, there is currently little forensic work specifically directed at image recoloring. Yan et al. [Y. Yan, W. Ren, and X. Cao, "Recolored image detection via a deep discriminative model," IEEE Transactions on Information Forensics and Security, 14.1 (2018): 5-17] first attempted to distinguish whether an image is recolored using two priors (inter-channel correlation and illumination consistency) together with the original input image. However, the hard-coded difference images (e.g., R-G) are not necessarily the best correlation representation, and the method was only validated on traditional image recoloring methods. Furthermore, no existing work specifically addresses deep-learning-based recoloring scenarios.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a recoloring image forensics method based on spatial correlation: a preset network model is trained to obtain a recoloring detection model that can accurately detect whether a picture has been recolored and tampered with, and which can therefore play an important role in practical applications involving image security.
In order to solve the technical problem, the invention provides a recoloring image forensics method based on spatial correlation, which comprises the following steps:
step one: constructing a training set and a test set by using images processed by a plurality of recoloring algorithms and the corresponding natural images;
step two: extracting spatial correlation features by computing a co-occurrence matrix in four directions (horizontal, vertical, diagonal, and anti-diagonal) for each color channel of the image;
step three: constructing a feature learning network. The network is based on ResNet18 and comprises the following modules: a convolutional layer, a max pooling (MaxPool) layer, four residual blocks (ResNet blocks), an average pooling layer, and a fully connected layer. Each convolutional layer is followed by a batch normalization (BatchNorm) layer and a ReLU activation function. Each residual block comprises two convolutional layers, with a skip connection before the ReLU operation of the second convolutional layer. The output dimension of the fully connected layer is 2, i.e., the probabilities that the image is a natural image and a recolored image are output, respectively;
step four: training the designed model by using the constructed training set and the corresponding label set;
step five: predicting the images of the test set by using the saved optimal model weights, and distinguishing the natural images from the recolored images.
Preferably, in step one, the training set D1 includes three subsets. The first two subsets are generated from the ImageNet validation dataset using two traditional recoloring methods, respectively; each of these two sub-training sets contains approximately 19000 pairs of natural images and corresponding recolored images, and it is ensured that no images overlap between the first and second sub-training sets. The third sub-training set consists of approximately 16000 image pairs generated with a deep-learning-based recoloring method, which also considers some semantic segmentation information of the images in the COCO validation dataset; the generated images therefore include not only globally recolored images but also images recolored for specific objects. The test set covers deep-learning and traditional recoloring scenarios. For the deep-learning-based scenario, a benchmark set D2 containing 240 natural images and corresponding recolored images is generated using 4 deep-learning-based recoloring methods. For traditional image recoloring, two published recoloring test sets are used, denoted D3 and D4.
Preferably, in step two, a co-occurrence matrix in four directions (horizontal, vertical, diagonal, and anti-diagonal) is computed for each color channel (R, G, or B) of the image to analyze the spatial correlation between adjacent pixels and thereby distinguish natural images from recolored images. For any picture V, the co-occurrence matrix is calculated using the following formula:
$$C_{\theta_1,\theta_2,\ldots,\theta_d} = \frac{1}{n}\sum_{x,y} I\left\{V_{x,y}=\theta_1,\; V_{x+\Delta x,\,y+\Delta y}=\theta_2,\; \ldots,\; V_{x+(d-1)\Delta x,\,y+(d-1)\Delta y}=\theta_d\right\}$$
wherein I{·} is an indicator function, n is a normalization factor, θ1, θ2, …, θd are the indices of the co-occurrence matrix, and Δx, Δy are the offsets between two adjacent pixels. The parameter d is set to 2 to obtain a two-dimensional co-occurrence matrix representing a two-dimensional histogram of the values of adjacent pixel pairs.
Preferably, in step three, the original input and output shapes of the ResNet18 network are modified for this task. The number of input channels of the first convolutional layer is changed from 3 to 12, because after the co-occurrence matrices are computed, the 12 matrices are stacked into one tensor of size 256 × 256 × 12. The output size of the last fully connected layer of the network is likewise changed to 2; this output is used to determine whether a given picture is a natural image or a recolored image.
Preferably, in step four, the designed model is trained using the constructed training set D1. The network is implemented with the PyTorch deep learning framework. Optimization is performed with the Adam optimizer, and the initial learning rate is set to 1 × 10⁻⁴. The learning rate is adjusted using the CosineAnnealingLR method, with the number of iterations of one learning-rate cycle set to 64. L2 regularization is also used, with the weight decay set to 1 × 10⁻³. Since an image of any size yields a co-occurrence matrix of size 256 × 256, the images are not resized, and the batch size is set to 128. The convolution kernel weights are initialized with the He normal initialization method. During training, 80% of the training set is used to learn and update the network parameters and the remaining 20% is used for validation; the whole network is trained for 20 epochs with cross-entropy loss as the loss function, and the data are shuffled at the beginning of each epoch. An early-stopping strategy based on the validation accuracy is adopted: if the accuracy does not improve for 5 consecutive epochs, training stops, and the model with the highest validation accuracy is saved as the final model.
Preferably, in step five, the saved optimal model weights are used to predict each image in test sets D2, D3, and D4, respectively, distinguishing the natural images from the recolored images in the test sets. To provide better visual interpretability of the recoloring detection model of the invention, the heat map of the last convolutional layer of the network is displayed using Grad-CAM++. To verify whether the proposed spatial correlation features improve the final recoloring detection performance, t-SNE visualization is used to display the feature space of the penultimate layer of the proposed network.
The invention has the following beneficial effects. The preset network model generates a classification probability for each picture in the training set; the model parameters are corrected according to the classification probability and the true label of each picture, and this training step is repeated until the trained recoloring detection model is obtained. The images in the test set are then detected with the trained model to judge whether each is a natural image or a recolored image. Training the preset network model thus yields a recoloring detection model that can accurately detect whether a picture has been recolored and tampered with, and which can therefore play an important role in practical applications involving image security.
Drawings
FIG. 1 is a schematic diagram of the recoloring detection model of the invention.
FIG. 2(a) shows a natural image and the heat map of the last convolutional layer of the designed network.
FIG. 2(b) shows the corresponding recolored image and the heat map of the last convolutional layer of the designed network.
FIG. 3(a) shows the feature space of the penultimate layer of the designed network, visualized with t-SNE, when the input is the spatial correlation features.
FIG. 3(b) shows the feature space of the penultimate layer of the designed network, visualized with t-SNE, when the input is the original RGB image.
Detailed Description
A recoloring image forensics method based on spatial correlation, comprising the steps of:
Step one: constructing the training set and the test set. Before constructing the training set, we divided the ImageNet validation dataset into its 1000 categories and then randomly split the pictures in each category into two parts, used as style pictures and content pictures, respectively. Because images in the same category have more related content, this process avoids excessive artifacts in the dataset caused by irrelevant or weakly correlated style and content images. The training set D1 includes three subsets. First, we generated a first sub-training set containing about 19000 pairs of natural images and corresponding recolored images using a traditional recoloring method [E. Reinhard, M. Adhikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Comput. Graph. Appl., vol. 21, no. 5, pp. 34-41, 2001]. This generation process randomly selects one picture from the style images and one from the content images each time and ensures that no used image is reused. Then, using another traditional recoloring method [F. Pitié, A. C. Kokaram, and R. Dahyot, "Automated colour grading using colour distribution transfer," Comput. Vis. and Image Understand., vol. 107, no. 1-2, pp. 123-137, 2007], we obtained a second sub-training set of the same size through a similar generation process, which ensures that there are no overlapping images between the first and second subsets by swapping the style images and the content images. The third sub-training set consists of approximately 16000 image pairs. Unlike the above process, a deep-learning-based recoloring method [J. Yoo, Y. Uh, S. Chun, B. Kang, and J.-W. Ha, "Photorealistic style transfer via wavelet transforms," in Proc. of the IEEE/CVF Int. Conf. on Comput. Vis., 2019, pp. 9036-9045] is used, and some semantic segmentation information of the images in the COCO validation dataset is also considered. The generated images therefore include not only globally recolored images but also images recolored for specific objects.
The test set is divided into deep-learning and traditional recoloring scenarios. For the deep-learning-based scenario, we generated a benchmark set D2 containing 240 natural images and corresponding recolored images using 4 deep-learning-based recoloring methods. We first selected original photos from the COCO validation dataset, the ImageNet validation dataset, and images grabbed from websites, and then randomly recolored 60 natural images with each of the four methods. The generation process ensures that the natural images used by each method are not repeated and that the images in the test set and the training set do not overlap. For traditional image recoloring, two published recoloring test sets are used, denoted D3 and D4. D3 includes 100 real photographs (grabbed from websites) and recolored images generated by various traditional recoloring methods. D4 contains 80 manually edited recolored photographs, made either with mobile-phone applications or downloaded from websites such as Photoshop tutorial sites.
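For concreteness, the following is a minimal PyTorch-style sketch of how such a paired natural/recolored dataset could be organized for training; the directory layout, file naming, and class name are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of a paired natural/recolored dataset (label 0 = natural,
# label 1 = recolored). Directory layout and names are assumptions.
import os
from PIL import Image
from torch.utils.data import Dataset

class RecoloringPairDataset(Dataset):
    def __init__(self, natural_dir, recolored_dir, transform=None):
        natural = [(os.path.join(natural_dir, f), 0) for f in sorted(os.listdir(natural_dir))]
        recolored = [(os.path.join(recolored_dir, f), 1) for f in sorted(os.listdir(recolored_dir))]
        self.samples = natural + recolored
        self.transform = transform  # e.g. the co-occurrence feature extraction

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, label
```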
Step two: extracting the spatial correlation features. As shown in the "Preprocessing" part of FIG. 1, we compute a co-occurrence matrix in four directions (horizontal, vertical, diagonal, and anti-diagonal) for each color channel (R, G, or B) of the image to analyze the spatial correlation between adjacent pixels and thereby distinguish natural images from recolored images. For any picture V, the co-occurrence matrix is calculated using the following formula:
$$C_{\theta_1,\theta_2,\ldots,\theta_d} = \frac{1}{n}\sum_{x,y} I\left\{V_{x,y}=\theta_1,\; V_{x+\Delta x,\,y+\Delta y}=\theta_2,\; \ldots,\; V_{x+(d-1)\Delta x,\,y+(d-1)\Delta y}=\theta_d\right\}$$
wherein I{·} is an indicator function, n is a normalization factor, θ1, θ2, …, θd are the indices of the co-occurrence matrix, and Δx, Δy are the offsets between two adjacent pixels. In our implementation, we set the parameter d to 2 to obtain a two-dimensional co-occurrence matrix, which represents a two-dimensional histogram of the values of adjacent pixel pairs: the vertical axis of the histogram represents the first value, and the horizontal axis represents the second value. This has the advantage that the obtained co-occurrence matrix does not need to be truncated to reduce computational complexity. For an image of any size, assuming 8-bit pixel depth, this always produces a co-occurrence matrix of size 256 × 256, which avoids resizing the images when training and testing the network. Furthermore, although every pixel not on an edge has 8 possible neighbors, we consider only four directions (horizontal, vertical, diagonal, and anti-diagonal), since the other four are redundant.
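This preprocessing step can be illustrated with a short NumPy sketch; the function name and array conventions are assumptions, and the normalization simply divides by the number of pixel pairs n.

```python
# Sketch of the d = 2 co-occurrence feature described above, assuming an
# 8-bit RGB image of shape (H, W, 3). Produces 12 matrices of size 256x256
# (3 channels x 4 directions), stacked into one tensor.
import numpy as np

# Offsets (dx, dy) for horizontal, vertical, diagonal, anti-diagonal neighbors.
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def cooccurrence_features(img: np.ndarray) -> np.ndarray:
    h, w, _ = img.shape
    mats = []
    for c in range(3):                      # R, G, B channels
        chan = img[:, :, c].astype(np.int64)
        for dx, dy in DIRECTIONS:
            # Crop so that (x, y) and (x+dx, y+dy) both lie inside the image.
            x0, x1 = max(0, -dx), h - max(0, dx)
            y0, y1 = max(0, -dy), w - max(0, dy)
            first = chan[x0:x1, y0:y1].ravel()
            second = chan[x0 + dx:x1 + dx, y0 + dy:y1 + dy].ravel()
            # 2-D histogram of adjacent pixel-pair values (theta1, theta2),
            # normalized by the number of pairs n.
            mat = np.zeros((256, 256), dtype=np.float32)
            np.add.at(mat, (first, second), 1.0)
            mats.append(mat / first.size)
    return np.stack(mats, axis=0)           # shape (12, 256, 256)
```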
Step three: learning features with a CNN. For any RGB image, we compute the co-occurrence matrix of each color channel in four directions, yielding 12 matrices of size 256 × 256. Previous approaches mainly used co-occurrence matrices to train support vector machine classifiers, but directly using such manually constructed features with machine learning classifiers may not be the best solution. To distinguish natural images from recolored images, effective discriminative features must be gathered from the co-occurrence matrices. Because CNNs can learn feature representations automatically and are widely applied to computer vision tasks such as image classification, a CNN-based feature extraction module is constructed. As shown in the "Recoloring detection" part of FIG. 1, the feature extraction module of the invention is built on ResNet18 and comprises the following modules: a convolutional layer, a max pooling (MaxPool) layer, four residual blocks (ResNet blocks), an average pooling layer, and a fully connected layer. Each convolutional layer is followed by a batch normalization (BatchNorm) layer and a ReLU activation function. Each residual block comprises two convolutional layers with a skip connection before the ReLU operation of the second convolutional layer; through the cross-layer skip connection, the input can propagate forward faster. We modify the original input and output shapes of the ResNet18 network for this task: the number of input channels of the first convolutional layer is changed from 3 to 12, because after computing the co-occurrence matrices we stack the 12 matrices into one tensor of size 256 × 256 × 12, and the output size of the last fully connected layer is changed to 2. This output is used to determine whether a given picture is a natural image or a recolored image.
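A brief sketch of such a modified ResNet18 follows, built here on torchvision's implementation for compactness (the patent describes an equivalent hand-built network); torchvision's default Kaiming (He) normal initialization happens to match the initialization named in step four.

```python
# Sketch of the feature-learning network: a standard ResNet18 with the first
# convolution widened to 12 input channels (3 color channels x 4 directions)
# and the final fully connected layer reduced to 2 classes.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_recoloring_detector() -> nn.Module:
    net = resnet18(weights=None)  # train from scratch, no pretrained weights
    net.conv1 = nn.Conv2d(12, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, 2)  # natural vs. recolored
    return net

# A (batch, 12, 256, 256) tensor of co-occurrence matrices maps to 2 logits.
logits = build_recoloring_detector()(torch.randn(1, 12, 256, 256))
print(logits.shape)  # torch.Size([1, 2])
```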
Step four: training the designed model with the constructed training set D1. The network is implemented with the PyTorch deep learning framework. We optimize with the Adam optimizer and set the initial learning rate to 1 × 10⁻⁴. The learning rate is adjusted using the CosineAnnealingLR method, with the number of iterations of one learning-rate cycle set to 64. We also use L2 regularization and set the weight decay to 1 × 10⁻³. Since an image of any size yields a co-occurrence matrix of size 256 × 256, we do not resize the images, and we set the batch size to 128. The convolution kernel weights are initialized with the He normal initialization method. During training, 80% of the training set is used to learn and update the network parameters and the remaining 20% is used for validation; the whole network is trained for 20 epochs with cross-entropy loss as the loss function, and the data are shuffled at the beginning of each epoch. An early-stopping strategy based on the validation accuracy is adopted: if the accuracy does not improve for 5 consecutive epochs, training stops, and the model with the highest validation accuracy is saved as the final model. Training and testing were performed on a computer equipped with an NVIDIA RTX 2080 Ti GPU.
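The training procedure just described can be sketched as follows; the loader variables are assumed to yield batches of stacked co-occurrence tensors and labels, and the function name is illustrative.

```python
# Sketch of the training configuration above: Adam (lr 1e-4, weight decay
# 1e-3 as L2 regularization), CosineAnnealingLR with a 64-iteration cycle,
# cross-entropy loss, 20 epochs, early stopping after 5 stale epochs.
import copy
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, device, epochs=20, patience=5):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=64)
    best_acc, best_state, stale = 0.0, None, 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:          # shuffling happens in the loader
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
            scheduler.step()               # one 64-iteration cosine cycle
        # Validation accuracy drives the early-stopping strategy.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in val_loader:
                pred = model(x.to(device)).argmax(dim=1).cpu()
                correct += (pred == y).sum().item()
                total += y.numel()
        acc = correct / total
        if acc > best_acc:
            best_acc, best_state, stale = acc, copy.deepcopy(model.state_dict()), 0
        else:
            stale += 1
            if stale >= patience:          # stop after 5 epochs without gain
                break
    return best_state, best_acc
```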
Step five: predicting each image in test sets D2, D3, and D4 with the saved optimal model weights to distinguish the natural images from the recolored images in the test sets; the detection results are shown in Table 1. To provide better visual interpretability of the recoloring detection model of the invention, we display the heat map of the last convolutional layer of the network using Grad-CAM++. Although a natural image and its corresponding recolored image look very similar, their heat maps are quite different, as shown in FIG. 2(a) and FIG. 2(b): the upper-left region of the natural image's heat map has a higher negative correlation response (a lower gray value at the contour center), while the lower-right region of the corresponding recolored image's heat map has a higher negative correlation response. The heat-map visualizations show that the invention can learn further from the extracted spatial correlation feature set and can distinguish natural images from recolored images.
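Heat maps like those in FIG. 2 can be produced with the third-party pytorch-grad-cam package (pip install grad-cam); the snippet below is a sketch under the assumption of that package's current API, not the inventors' exact tooling.

```python
# Sketch of Grad-CAM++ heat-map visualization with the third-party
# pytorch-grad-cam package; its API here is an assumption.
import torch
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = build_recoloring_detector()        # sketched above
model.eval()
# Target the last convolutional stage of the modified ResNet18.
cam = GradCAMPlusPlus(model=model, target_layers=[model.layer4[-1]])
features = torch.randn(1, 12, 256, 256)    # stacked co-occurrence tensor
# Class 1 = "recolored"; the result is a (256, 256) map scaled to [0, 1].
heatmap = cam(input_tensor=features, targets=[ClassifierOutputTarget(1)])[0]
```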
TABLE 1. Test results (the table is provided as an image in the original publication)
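Inference with the saved optimal weights (step five) then reduces to a few lines; the checkpoint and image file names below are placeholders, and the helpers are the sketches given above.

```python
# Minimal inference sketch: load the saved best weights and classify one
# image as natural (0) or recolored (1). File names are placeholders.
import numpy as np
import torch
from PIL import Image

model = build_recoloring_detector()
model.load_state_dict(torch.load("best_model.pt", map_location="cpu"))
model.eval()

img = np.asarray(Image.open("suspect.png").convert("RGB"))
features = torch.from_numpy(cooccurrence_features(img)).unsqueeze(0)  # (1, 12, 256, 256)
with torch.no_grad():
    prob = torch.softmax(model(features), dim=1)[0]
print(f"P(natural) = {prob[0].item():.3f}, P(recolored) = {prob[1].item():.3f}")
```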
To verify whether the proposed spatial correlation features improve the final recoloring detection performance, we use t-SNE visualization to display the feature space of the penultimate layer of the proposed network. As shown in FIG. 3(a), the input to the network is the set of computed spatial correlation features; the filled circles (●) represent natural images and the filled triangles (▲) represent recolored images. The circles and triangles cluster into two groups with a clear boundary. In contrast, the input of FIG. 3(b) is the original RGB image, and the circles and triangles are mixed together, so the network cannot distinguish them well. These visualizations show that spatial correlation is indeed a distinguishing characteristic between natural and recolored images, and that it can be used to detect recolored images effectively.
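A t-SNE plot in the style of FIG. 3 can be produced from penultimate-layer activations; extracting them with a forward hook on the average-pooling layer is an implementation assumption, as is the test_loader variable.

```python
# Sketch of the t-SNE visualization: embed penultimate-layer features of
# natural and recolored test images into 2-D.
import matplotlib.pyplot as plt
import torch
from sklearn.manifold import TSNE

feats, labels = [], []
hook = model.avgpool.register_forward_hook(
    lambda m, inp, out: feats.append(out.flatten(1).cpu()))
with torch.no_grad():
    for x, y in test_loader:       # loader over D2/D3/D4 is assumed
        model(x)
        labels.append(y)
hook.remove()

emb = TSNE(n_components=2).fit_transform(torch.cat(feats).numpy())
lab = torch.cat(labels).numpy()
plt.scatter(emb[lab == 0, 0], emb[lab == 0, 1], marker="o", label="natural")
plt.scatter(emb[lab == 1, 0], emb[lab == 1, 1], marker="^", label="recolored")
plt.legend()
plt.show()
```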
By extracting the spatial correlation features of adjacent image pixels and learning discriminative features with the strong learning capability of a CNN, the invention achieves the following: extensive experimental results in the two recoloring scenarios show that the spatial correlation features have strong discriminative power, the method achieves state-of-the-art performance in detecting recolored images generated by various methods, and it generalizes well when handling unknown recoloring methods.

Claims (6)

1. A recoloring image forensics method based on spatial correlation, comprising the steps of:
step one: constructing a training set and a test set by using images processed by a plurality of recoloring algorithms and the corresponding natural images;
step two: extracting spatial correlation features, and calculating a co-occurrence matrix in four directions for each color channel of the image, wherein the four directions are horizontal, vertical, diagonal, and anti-diagonal;
step three: constructing a feature learning network; the network is based on the ResNet18 network and comprises: a convolutional layer, a max pooling layer, four residual blocks, an average pooling layer, and a fully connected layer; each convolutional layer is followed by a batch normalization layer and a ReLU activation function; each residual block comprises two convolutional layers with a skip connection before the ReLU operation of the second convolutional layer; the output dimension of the fully connected layer is 2, i.e., the probabilities that the image is a natural image and a recolored image are output, respectively;
step four: training the designed model by using the constructed training set and the corresponding label set;
step five: predicting the images of the test set by using the saved optimal model weights, and distinguishing the natural images from the recolored images.
2. The recoloring image forensics method based on spatial correlation according to claim 1, wherein in step one, the training set D1 comprises three subsets, wherein the first two subsets are generated from the ImageNet validation dataset using two traditional recoloring methods, respectively, the two sub-training sets each comprising 19000 pairs of natural images and corresponding recolored images with no overlapping images between the first and second sub-training sets, and the third sub-training set consists of 16000 image pairs generated with a deep-learning-based recoloring method that also considers some semantic segmentation information of the images in the COCO validation dataset; the test sets are divided into a deep-learning and a traditional recoloring scenario, wherein in the deep-learning-based scenario 4 deep-learning-based recoloring methods are used to generate a benchmark test set D2 containing 240 natural images and corresponding recolored images, and traditional image recoloring uses the two published recoloring test sets, denoted D3 and D4.
3. The recoloring image forensics method based on spatial correlation according to claim 1, wherein in step two, a co-occurrence matrix in four directions is calculated for each color channel of the image to analyze the spatial correlation between adjacent pixels and thereby distinguish a natural image from a recolored image, the co-occurrence matrix being calculated for an arbitrary picture V using the following formula:
$$C_{\theta_1,\theta_2,\ldots,\theta_d} = \frac{1}{n}\sum_{x,y} I\left\{V_{x,y}=\theta_1,\; V_{x+\Delta x,\,y+\Delta y}=\theta_2,\; \ldots,\; V_{x+(d-1)\Delta x,\,y+(d-1)\Delta y}=\theta_d\right\}$$
wherein I{·} is an indicator function, n is a normalization factor, θ1, θ2, …, θd are the indices of the co-occurrence matrix, Δx, Δy are the offsets between two adjacent pixels, and the parameter d is set to 2 to obtain a two-dimensional co-occurrence matrix representing a two-dimensional histogram of the values of adjacent pixel pairs.
4. The recoloring image forensics method based on spatial correlation according to claim 1, wherein in step three, the original input and output shapes of the ResNet18 network are modified for the task; the number of input channels of the first convolutional layer is changed from 3 to 12, and the 12 resulting matrices are stacked into one tensor of size 256 × 256 × 12; the output size of the last fully connected layer of the network is likewise changed to 2, this output being used to determine whether a given picture is a natural image or a recolored image.
5. The recoloring image forensics method based on spatial correlation according to claim 1, wherein in step four, the designed model is trained with the constructed training set D1, the network is implemented with the PyTorch deep learning framework, the Adam optimizer is used for optimization with the initial learning rate set to 1 × 10⁻⁴, the learning rate is adjusted with the CosineAnnealingLR method with the number of iterations of one learning-rate cycle set to 64, L2 regularization is used with the weight decay set to 1 × 10⁻³, the batch size is set to 128, and the kernel weights are initialized with the He normal initialization method; during training, 80% of the training set is used to learn and update the network parameters and the remaining 20% is used for validation, cross-entropy loss is used as the loss function of the network, the whole network is trained for 20 epochs with data shuffling at the beginning of each epoch, an early-stopping strategy is adopted by observing the accuracy on the validation data, and if the accuracy does not improve for 5 consecutive epochs, the training stops and the model with the highest validation accuracy is saved as the final model.
6. The recoloring image forensics method based on spatial correlation according to claim 1, wherein in step five, the saved optimal model weights are used to predict each image in test sets D2, D3, and D4, respectively, distinguishing the natural images from the recolored images in the test sets, the heat map of the last convolutional layer of the network is displayed with Grad-CAM++, and the feature space of the penultimate layer of the network is visualized with t-SNE.
CN202111392793.2A 2021-11-23 2021-11-23 Recoloring image forensics method based on spatial correlation Pending CN114202592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111392793.2A CN114202592A (en) Recoloring image forensics method based on spatial correlation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111392793.2A CN114202592A (en) Recoloring image forensics method based on spatial correlation

Publications (1)

Publication Number Publication Date
CN114202592A (en) 2022-03-18

Family

ID=80648512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111392793.2A Pending CN114202592A (en) Recoloring image forensics method based on spatial correlation

Country Status (1)

Country Link
CN (1) CN114202592A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115795370A (en) * 2023-02-10 2023-03-14 南昌大学 Electronic digital information evidence obtaining method and system based on resampling trace


Similar Documents

Publication Publication Date Title
CN112818862B (en) Face tampering detection method and system based on multi-source clues and mixed attention
Quan et al. Image inpainting with local and global refinement
CN111368342B (en) Image tampering identification model training method, image tampering identification method and device
CN110349136A (en) A kind of tampered image detection method based on deep learning
US11392800B2 (en) Computer vision systems and methods for blind localization of image forgery
CN106650670A (en) Method and device for detection of living body face video
CN113112416B (en) Semantic-guided face image restoration method
US8503768B2 (en) Shape description and modeling for image subscene recognition
CN115063373A (en) Social network image tampering positioning method based on multi-scale feature intelligent perception
CN112150450A (en) Image tampering detection method and device based on dual-channel U-Net model
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN111696080A (en) Face fraud detection method, system and storage medium based on static texture
Liu et al. Overview of image inpainting and forensic technology
CN114202592A (en) Recoloring image forensics method based on spatial correlation
CN113592693B (en) Digital watermarking method, device and system based on Y-Net
CN113850284B (en) Multi-operation detection method based on multi-scale feature fusion and multi-branch prediction
CN117975577A (en) Deep forgery detection method and system based on facial dynamic integration
CN117292117A (en) Small target detection method based on attention mechanism
CN112560734A (en) Method, system, device and medium for detecting reacquired video based on deep learning
CN114743148A (en) Multi-scale feature fusion tampering video detection method, system, medium, and device
CN115188039A (en) Depth forgery video technology tracing method based on image frequency domain information
Theerthagiri et al. Deepfake Face Detection Using Deep InceptionNet Learning Algorithm
Abrahim et al. Image Splicing Forgery Detection Scheme Using New Local Binary Pattern Varient
Zhang et al. Detecting recolored image by spatial correlation
CN118097566B (en) Scene change detection method, device, medium and equipment based on deep learning

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination