CN112116565B - Method, device and storage medium for generating adversarial samples of tampered recaptured images - Google Patents

Method, device and storage medium for generating adversarial samples of tampered recaptured images

Info

Publication number
CN112116565B
Authority
CN
China
Prior art keywords
image
noise reduction
network
original
tampered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010920099.2A
Other languages
Chinese (zh)
Other versions
CN112116565A (en)
Inventor
陈昌盛
赵麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN202010920099.2A
Publication of CN112116565A
Application granted
Publication of CN112116565B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses a method for generating adversarial samples of tampered recaptured images, which comprises the following steps: processing an image to be processed based on the tampered region corresponding to a tampered-region image in the image to be processed, so as to obtain a first image and an original-region image; processing the tampered-region image, the first image and the original-region image with a halftone restoration network to obtain a second image; and determining a recaptured-image adversarial sample set based on the second image. The invention also discloses an adversarial sample generation device and a storage medium for tampered recaptured images. Because the halftone restoration network removes the halftone dots in the images, image detection algorithms have difficulty detecting the images in the recaptured-image adversarial sample set, which increases the recapture-attack threat posed by the sample set; in turn, training a recapture detection model on this adversarial sample set improves the detection performance and generalization ability of the model.

Description

Method, device and storage medium for generating adversarial samples of tampered recaptured images
Technical Field
The present invention relates to the field of image processing technology, and in particular to a method, a device and a storage medium for generating adversarial samples of tampered recaptured images.
Background
With the development of communication technology, network transmission speeds keep increasing, and this high-speed propagation has caused multimedia applications in cyberspace to explode. Part of this multimedia data is private, such as face images or videos and document images of certificates. Powerful editing software now makes tampering with media content very convenient, so the content security of multimedia data faces new challenges. Document images carry a very large amount of information, and many of them represent identity information, so illegal tampering creates security problems in practical applications. For example, in order to impersonate someone else's identity, an attacker prints a certificate scan stolen from the network onto paper, photographs the printed certificate and uploads the resulting image to a system to complete the attack. The image uploaded to the system has actually gone through two imaging processes; such an attack is called a recapture attack.
At present, document images can be screened by a recapture detection algorithm, which distinguishes legitimately scanned documents from recaptured documents by learning their different characteristics. To counter this detection mechanism, existing recapture attacks often add an image restoration step, i.e., the recaptured image is restored toward the original image, so as to attack the recapture detection system.
However, in an adversarial environment, an attacker can forge specified target content with image generation techniques during editing, and the recapture operation can effectively mask the tampering traces by exploiting the noise introduced by printing and scanning. Once tampering is involved, the tampered region is edited directly in the digital domain while the untampered region comes from a scanner. Because the tampered and untampered regions have different sources, the tampered region exhibits distortion different from that of an ordinary recaptured image. Existing recapture detection algorithms cannot obtain a large number of recaptured-image adversarial samples, so they are insufficient to distinguish these two types of distortion, and the detection of tampered recaptured images is therefore inaccurate.
The foregoing is provided merely to facilitate understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The main purpose of the present invention is to provide a method, a device and a storage medium for generating adversarial samples of tampered recaptured images, aiming to solve the technical problem that existing recapture detection algorithms detect tampered recaptured images inaccurately.
To achieve the above object, the present invention provides a method for generating adversarial samples of tampered recaptured images, comprising the following steps:
acquiring a tampered-region image, and processing an image to be processed based on the tampered region corresponding to the tampered-region image in the image to be processed, so as to obtain a first image in which the tampered region has been removed from the image to be processed and an original-region image corresponding to the tampered region;
processing the tampered-region image, the first image and the original-region image with a halftone restoration network to obtain a second image;
determining a recaptured-image adversarial sample set based on the second image.
Further, the step of processing the tampered-region image, the first image and the original-region image with a halftone restoration network to obtain a second image comprises:
determining a trained preset channel model based on the original-region image and the first image, and inputting the tampered-region image into the trained preset channel model so as to reduce the color distortion and pixel distortion of the tampered-region image and obtain a third image;
inputting the first image and the third image into the halftone restoration network so as to stitch the first image and the third image and obtain the second image.
Further, the step of determining a trained preset channel model based on the original-region image and the first image, and inputting the tampered-region image into the trained preset channel model so as to reduce the color distortion and pixel distortion of the tampered-region image and obtain a third image comprises:
performing noise reduction on the original-region image and the first image to obtain a noise-reduced original-region image and a noise-reduced first image;
inputting the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image into a preset channel model for model training to obtain the trained preset channel model;
inputting the tampered-region image into the trained preset channel model to obtain the third image.
Further, the step of inputting the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image into a preset channel model for model training to obtain the trained preset channel model comprises:
performing data augmentation on the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image respectively to obtain a first image dataset;
inputting the first image dataset into the preset channel model for model training to obtain the trained preset channel model.
Further, the preset channel model comprises a U-net network; an L1 term is determined from the original-region image and the first image together with the noise-reduced original-region image and the noise-reduced first image, and the loss function of the preset channel model is determined from this L1 term and an L2 term over the VGG features of the original-region image and the first image and the VGG features of the noise-reduced original-region image and the noise-reduced first image.
Further, the step of inputting the first image and the third image into the halftone restoration network so as to stitch the first image and the third image and obtain the second image comprises:
acquiring, based on the first image, the noise-reduced first image and the edge image of the noise-reduced first image, and acquiring the noise-reduced original-region image and the edge image of the noise-reduced original-region image;
performing data augmentation on the noise-reduced original-region image, the noise-reduced first image, the edge image of the noise-reduced original-region image and the third image respectively to obtain a third image dataset;
inputting the third image dataset into the halftone restoration network so as to stitch the first image and the third image and obtain the second image.
Further, the halftone restoration network comprises a coarse reconstruction network, an edge contour network and a detail enhancement network, and the step of inputting the third image dataset into the halftone restoration network so as to stitch the first image and the third image and obtain the second image comprises:
inputting the third image dataset into the coarse reconstruction network of the halftone restoration network to obtain a coarse reconstruction image set, and inputting the third image dataset into the edge contour network of the halftone restoration network to obtain an edge contour image set;
inputting the edge contour image set and the coarse reconstruction image set into the detail enhancement network of the halftone restoration network to obtain the second image.
Further, the step of determining a recaptured-image adversarial sample set based on the second image comprises:
performing scanning and recapture operations on each second image to obtain the recaptured-image adversarial sample set.
In addition, to achieve the above object, the present invention also provides a device for generating adversarial samples of tampered recaptured images, the device comprising: a memory, a processor, and an adversarial sample generation program for tampered recaptured images that is stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the above method for generating adversarial samples of tampered recaptured images.
In addition, to achieve the above object, the present invention also provides a storage medium storing an adversarial sample generation program for tampered recaptured images which, when executed by a processor, implements the steps of the above method for generating adversarial samples of tampered recaptured images.
According to the invention, a tampered-region image is acquired, and the image to be processed is processed based on the tampered region corresponding to the tampered-region image in the image to be processed, so as to obtain a first image in which the tampered region has been removed from the image to be processed and an original-region image corresponding to the tampered region; the tampered-region image, the first image and the original-region image are then processed with a halftone restoration network to obtain a second image; and a recaptured-image adversarial sample set is determined based on the second image. Because the halftone restoration network removes most of the halftone dots in the images, image detection algorithms have difficulty detecting the images in the recaptured-image adversarial sample set, which increases the recapture-attack threat of the sample set; in turn, training a recapture detection model on this adversarial sample set improves the detection performance and generalization ability of the tamper-recapture detection model.
Drawings
FIG. 1 is a schematic structural diagram of a device for generating adversarial samples of tampered recaptured images in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of the method for generating adversarial samples of tampered recaptured images according to the present invention;
FIG. 3 is a schematic functional block diagram of an embodiment of the device for generating adversarial samples of tampered recaptured images according to the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a device for generating adversarial samples of tampered recaptured images in a hardware operating environment according to an embodiment of the present invention.
The device for generating adversarial samples of tampered recaptured images in the embodiment of the present invention may be a PC, or a mobile terminal device with a display function such as a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player or a portable computer.
As shown in fig. 1, the device for generating adversarial samples of tampered recaptured images may include a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory. The memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001.
Optionally, the device for generating adversarial samples of tampered recaptured images may also include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors include, for example, light sensors and motion sensors. Of course, the device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be described here again.
Those skilled in the art will appreciate that the structure shown in fig. 1 does not constitute a limitation of the device for generating adversarial samples of tampered recaptured images, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
As shown in fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module and an adversarial sample generation program for tampered recaptured images.
In the device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and exchanging data with it; the user interface 1003 is mainly used for connecting to a client (user side) and exchanging data with it; and the processor 1001 may be used to invoke the adversarial sample generation program for tampered recaptured images stored in the memory 1005.
In the present embodiment, the device for generating adversarial samples of tampered recaptured images includes: a memory 1005, a processor 1001, and an adversarial sample generation program for tampered recaptured images that is stored in the memory 1005 and executable on the processor 1001; when the processor 1001 invokes the program stored in the memory 1005, the following operations are performed:
acquiring a tampered-region image, and processing the image to be processed based on the tampered region corresponding to the tampered-region image in the image to be processed, so as to obtain a first image in which the tampered region has been removed from the image to be processed and an original-region image corresponding to the tampered region;
processing the tampered-region image, the first image and the original-region image with a halftone restoration network to obtain a second image;
determining a recaptured-image adversarial sample set based on the second image.
Further, the processor 1001 may invoke the adversarial sample generation program for tampered recaptured images stored in the memory 1005, and further perform the following operations:
determining a trained preset channel model based on the original-region image and the first image, and inputting the tampered-region image into the trained preset channel model so as to reduce the color distortion and pixel distortion of the tampered-region image and obtain a third image;
inputting the first image and the third image into the halftone restoration network so as to stitch the first image and the third image and obtain the second image.
Further, the processor 1001 may invoke the adversarial sample generation program for tampered recaptured images stored in the memory 1005, and further perform the following operations:
performing noise reduction on the original-region image and the first image to obtain a noise-reduced original-region image and a noise-reduced first image;
inputting the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image into a preset channel model for model training to obtain the trained preset channel model;
inputting the tampered-region image into the trained preset channel model to obtain the third image.
Further, the processor 1001 may invoke the adversarial sample generation program for tampered recaptured images stored in the memory 1005, and further perform the following operations:
performing data augmentation on the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image respectively to obtain a first image dataset;
inputting the first image dataset into the preset channel model for model training to obtain the trained preset channel model.
Further, the processor 1001 may invoke the adversarial sample generation program for tampered recaptured images stored in the memory 1005, and further perform the following operations:
acquiring, based on the first image, the noise-reduced first image and the edge image of the noise-reduced first image, and acquiring the noise-reduced original-region image and the edge image of the noise-reduced original-region image;
performing data augmentation on the noise-reduced original-region image, the noise-reduced first image, the edge image of the noise-reduced original-region image and the third image respectively to obtain a third image dataset;
inputting the third image dataset into the halftone restoration network so as to stitch the first image and the third image and obtain the second image.
Further, the processor 1001 may invoke the adversarial sample generation program for tampered recaptured images stored in the memory 1005, and further perform the following operations:
inputting the third image dataset into the coarse reconstruction network of the halftone restoration network to obtain a coarse reconstruction image set, and inputting the third image dataset into the edge contour network of the halftone restoration network to obtain an edge contour image set;
inputting the edge contour image set and the coarse reconstruction image set into the detail enhancement network of the halftone restoration network to obtain the second image.
Further, the processor 1001 may invoke the adversarial sample generation program for tampered recaptured images stored in the memory 1005, and further perform the following operations:
performing scanning and recapture operations on each second image to obtain the recaptured-image adversarial sample set.
The present invention further provides a method for generating adversarial samples of tampered recaptured images. Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of the method for generating adversarial samples of tampered recaptured images according to the present invention.
In this embodiment, the method for generating adversarial samples of tampered recaptured images includes:
Step S101, acquiring a tampered-region image, and processing the image to be processed based on the tampered region corresponding to the tampered-region image in the image to be processed, so as to obtain a first image in which the tampered region has been removed from the image to be processed;
In this embodiment, a tampered-region image is acquired and the tampered region in the image to be processed is determined. The tampered region matches the tampered-region image, i.e., the tampered-region image is used to cover the tampered region in the image to be processed. The image to be processed is then processed according to the tampered region: the content of the tampered region in the image to be processed is taken as the original-region image, and the image to be processed with the tampered region removed is taken as the first image; that is, the first image is obtained by removing the tampered region from the image to be processed.
In this embodiment, the image to be processed may be an image containing a text area and a document-photo area, such as a certificate photo or an identity card. The tampered-region image includes a replacement photo that matches the photo area of the image to be processed; the replacement photo can be matched to the photo area with image editing software. The tampered-region image also includes a replacement text picture that matches the text area of the image to be processed. Specifically, the target text image can be produced with Photoshop: the background of the text area is erased and refilled with color, the font and font size are estimated from the text in the image, the font most similar to the original text is selected for editing, the character spacing and line spacing of the original text are matched, and finally the text color is filled in, yielding the replacement text picture.
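For illustration only, the following sketch (not part of the patent; it assumes NumPy arrays and a hypothetical binary mask of the tampered region) shows one way the first image and the original-region image could be obtained from the image to be processed.

```python
import numpy as np

def split_by_tampered_region(image, mask):
    """Hypothetical helper: split an image into (first_image, original_region_image)
    given a binary mask of the tampered region (1 inside the region, 0 elsewhere)."""
    mask3 = mask[..., None].astype(image.dtype)       # broadcast mask to H x W x 1
    original_region_image = image * mask3             # pixels originally covering the tampered region
    first_image = image * (1 - mask3)                 # image to be processed with the tampered region removed
    return first_image, original_region_image

# Usage with placeholder data:
image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)   # stand-in for a scanned document
mask = np.zeros((512, 512), dtype=np.uint8)
mask[100:200, 150:350] = 1                                          # assumed tampered region
first_image, original_region_image = split_by_tampered_region(image, mask)
```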
Step S102, processing the tampered-region image, the first image and the original-region image with a halftone restoration network to obtain a second image;
In this embodiment, after the tampered-region image and the original-region image are acquired, a third image is determined from the original-region image, the first image and the tampered-region image. After the third image is acquired, the first image and the third image are processed by the halftone restoration network: specifically, the first image and the third image are input into the halftone restoration network, which merges and stitches them, and the second image is produced as the output of the halftone restoration network.
It should be noted that the trained preset channel model can be determined from the original-region image and the first image, and the tampered-region image is then input into the trained preset channel model so as to reduce its color distortion and pixel distortion, thereby resolving the inconsistency between the sources of the tampered region and the first image; the resulting third image is the output of the trained preset channel model.
Step S103, determining a recaptured-image adversarial sample set based on the second image.
In this embodiment, after the second image is acquired, the recaptured-image adversarial sample set is determined from the second image; specifically, the sample set is obtained by recapturing the second image.
Further, step S103 includes:
performing scanning and recapture operations on each second image to obtain the recaptured-image adversarial sample set.
Further, in an embodiment, step S102 includes:
Step a, determining a trained preset channel model based on the original-region image and the first image, and inputting the tampered-region image into the trained preset channel model so as to reduce the color distortion and pixel distortion of the tampered-region image and obtain a third image;
Step b, inputting the first image and the third image into the halftone restoration network so as to stitch the first image and the third image and obtain the second image.
In this embodiment, after the tampered-region image and the original-region image are obtained, the tampered-region image is input into the trained preset channel model so as to reduce its color distortion and pixel distortion, thereby resolving the inconsistency between the sources of the tampered region and the first image; the resulting third image is the output of the trained preset channel model. After the third image is acquired, the first image and the third image are merged and stitched: specifically, they are input into the halftone restoration network, and the second image is produced as its output.
In general, an attacker cannot obtain the original certificate or identity card. To increase the threat level of the tampering, noise reduction is performed on the original-region image and the first image, yielding the noise-reduced original-region image and the noise-reduced first image. The original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image are input into the preset channel model for model training to obtain the trained preset channel model, and the tampered-region image is then input into the trained preset channel model to obtain the third image, which resolves the inconsistency between different regions of the edited scanned image.
In this embodiment, stitching the target area directly with image editing software destroys the integrity of the image, so the recaptured-image adversarial sample set is obtained through printing and scanning: each sample therefore undergoes a complete print-and-scan process, the tampering happens before the recapture operation, and no digital image processing is applied during the second print-and-scan pass. The images in the recaptured-image adversarial sample set are thus genuine digital images, and traditional digital image detection algorithms cannot accurately judge the authenticity of their content.
According to the method for generating adversarial samples of tampered recaptured images described above, the tampered-region image is acquired, and the image to be processed is processed based on the tampered region corresponding to the tampered-region image in the image to be processed, so as to obtain a first image in which the tampered region has been removed from the image to be processed; the tampered-region image, the first image and the original-region image are then processed with a halftone restoration network to obtain a second image; and a recaptured-image adversarial sample set is determined based on the second image. Because the halftone restoration network removes most of the halftone dots in the images, image detection algorithms have difficulty detecting the images in the recaptured-image adversarial sample set, which increases the recapture-attack threat of the sample set; training a recapture detection model on this adversarial sample set can then improve its detection performance and generalization ability.
Based on the first embodiment, a second embodiment of the method for generating adversarial samples of tampered recaptured images according to the present invention is proposed. In this embodiment, step a includes:
Step S201, performing noise reduction on the original-region image and the first image to obtain a noise-reduced original-region image and a noise-reduced first image;
Step S202, inputting the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image into a preset channel model for model training to obtain a trained preset channel model;
Step S203, inputting the tampered-region image into the trained preset channel model to obtain a third image.
In this embodiment, since an attacker generally cannot obtain the original certificate or identity card, noise reduction is performed on the original-region image and the first image in order to increase the threat level of the tampering, yielding the noise-reduced original-region image and the noise-reduced first image. Specifically, noise reduction can be performed on the original-region image and the first image with a noise plug-in in Photoshop.
The original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image are input into the preset channel model for model training to obtain the trained preset channel model, and the tampered-region image is input into the trained preset channel model to obtain the third image, which resolves the inconsistency between different regions of the edited scanned image and removes the distortion in the tampered-region image.
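The embodiment performs the noise reduction with a Photoshop noise plug-in; as a rough, non-authoritative stand-in, the step could be approximated in code with OpenCV's non-local means denoiser, as sketched below (function choice and parameters are assumptions).

```python
import cv2

def denoise(img_bgr, strength=10):
    """Approximate noise reduction; the embodiment uses a Photoshop noise plug-in,
    so this OpenCV call is only an illustrative substitute."""
    return cv2.fastNlMeansDenoisingColored(img_bgr, None, strength, strength, 7, 21)

# noise-reduced inputs for training the preset channel model:
# original_region_dn = denoise(original_region_image)
# first_image_dn = denoise(first_image)
```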
Further, in an embodiment, step S202 includes:
Step S204, performing data augmentation on the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image respectively to obtain a first image dataset;
Step S205, inputting the first image dataset into the preset channel model for model training to obtain the trained preset channel model.
In this embodiment, in order to increase the number of adversarial samples in the recaptured-image adversarial sample set, data augmentation is performed on the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image to obtain a first image dataset. The first image dataset contains the original-region image and its augmented images, the first image and its augmented images, the noise-reduced first image and its augmented images, and the noise-reduced original-region image and its augmented images. Specifically, augmentation methods such as rotation, cropping and mirroring can be applied so that the number of images in the first image dataset reaches a preset number, for example 20,000 images per source: e.g., the original-region image together with its augmented images totals 20,000 images, and the noise-reduced original-region image together with its augmented images totals 20,000 images.
The first image dataset is then input into the preset channel model for model training, which yields the third images and increases their number.
It should be noted that 80% of the image data in the first image dataset is used for training, 10% for validation and 10% for testing.
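A minimal sketch of the augmentation and the 80/10/10 split described above, assuming NumPy; the rotation, cropping and mirroring operations and the target of roughly 20,000 images per source come from the text, while everything else is an assumption.

```python
import random
import numpy as np

def augment(img):
    """Simple mirroring/rotation augmentation (cropping could be added similarly)."""
    if random.random() < 0.5:
        img = img[:, ::-1]                            # horizontal mirror
    return np.rot90(img, random.randint(0, 3))        # rotate by a multiple of 90 degrees

def split_dataset(samples, train=0.8, val=0.1):
    """80% training, 10% validation, 10% test, as stated for the first image dataset."""
    random.shuffle(samples)
    n = len(samples)
    n_tr, n_va = int(n * train), int(n * val)
    return samples[:n_tr], samples[n_tr:n_tr + n_va], samples[n_tr + n_va:]
```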
In this embodiment, the preset channel model comprises a U-net network. Since the tampered text area and the document area are both ordinary digital images, and in order to keep the quality of the tampered content consistent with the whole image during stitching, the preset channel model in this embodiment essentially performs image reconstruction. The preset channel model is a Print-Scan (PS) channel model that accounts for both color distortion and pixel distortion; it aims to map the original document image to the image that has gone through the print-scan process. The preset channel model adopts an auto-encoder with a U-net structure, and the skip connections between encoder and decoder ensure that the final reconstructed image fuses features at different scales, so that multi-scale prediction can be performed and the reconstructed content is finer. The PS channel model can then be modeled as I_p = P(I_0), where I_0 is the original-region image and the first image, and I_p is the third image, i.e., the image output by the preset channel model.
The L1 term is determined from the original-region image and the first image together with the noise-reduced original-region image and the noise-reduced first image, and the loss function of the preset channel model is determined from this L1 term and an L2 term over the VGG features of the original-region image and the first image and the VGG features of the noise-reduced original-region image and the noise-reduced first image.
Specifically, the loss function of the preset channel model is:
L_p = ||I_1 - I_d||_1 + w_p * ||VGG(I_1) - VGG(I_d)||_2
where L_p is the loss function, I_1 denotes the original-region image and the first image, I_d denotes the noise-reduced original-region image and the noise-reduced first image, and VGG(·) is the VGG loss function; using the VGG loss improves the performance of the preset channel model. To maintain the visual quality of the network output, a perceptual-loss regularizer is added to L_p with weight w_p = 0.001. The preset channel model is trained for 20 epochs with an ADAM optimizer, a learning rate of 0.001 and no weight decay.
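The loss L_p above could be written, for example, as follows; this is a hedged sketch assuming PyTorch and torchvision VGG-16 features (the patent does not name a framework, and the feature depth is an assumption). Here `pred` stands for the channel-model output and `target` for the corresponding noise-reduced image.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ChannelModelLoss(nn.Module):
    """Sketch of L_p = ||I_1 - I_d||_1 + w_p * ||VGG(I_1) - VGG(I_d)||_2, w_p = 0.001."""
    def __init__(self, w_p=0.001):
        super().__init__()
        self.w_p = w_p
        self.vgg = vgg16(pretrained=True).features[:16].eval()   # assumed feature depth
        for p in self.vgg.parameters():
            p.requires_grad = False

    def forward(self, pred, target):
        l1 = torch.mean(torch.abs(pred - target))                          # pixel-wise L1 term
        perceptual = torch.norm(self.vgg(pred) - self.vgg(target), p=2)    # VGG feature L2 term
        return l1 + self.w_p * perceptual

# Training configuration stated in the embodiment (channel-model definition omitted):
# optimizer = torch.optim.Adam(channel_model.parameters(), lr=0.001, weight_decay=0)
# train for 20 epochs
```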
According to the method for generating adversarial samples of tampered recaptured images described above, noise reduction is performed on the original-region image and the first image to obtain the noise-reduced original-region image and the noise-reduced first image; the original-region image, the first image, the noise-reduced original-region image and the noise-reduced first image are input into the preset channel model for model training to obtain the trained preset channel model; and the tampered-region image is then input into the trained preset channel model to obtain the third image. This reduces the distortion of the tampered-region image and avoids the inconsistency with real imaging characteristics that would arise if the recapture operation were simulated directly in the digital domain, thereby increasing the recapture-attack threat of the recaptured-image adversarial sample set.
Based on the first embodiment, a third embodiment of the method for generating adversarial samples of tampered recaptured images according to the present invention is proposed. In this embodiment, step b includes:
Step S301, acquiring, based on the first image, the noise-reduced first image and the edge image of the noise-reduced first image, and acquiring the noise-reduced original-region image and the edge image of the noise-reduced original-region image;
Step S302, performing data augmentation on the noise-reduced original-region image, the noise-reduced first image, the edge image of the noise-reduced original-region image and the third image respectively to obtain a third image dataset;
Step S303, inputting the third image dataset into the halftone restoration network so as to stitch the first image and the third image and obtain the second image.
In this embodiment, to increase the number of adversarial samples in the recaptured-image adversarial sample set, the noise-reduced first image and its edge image are acquired based on the first image, and the noise-reduced original-region image and its edge image are acquired. Data augmentation is then performed on the noise-reduced original-region image, the noise-reduced first image, the edge image of the noise-reduced first image, the edge image of the noise-reduced original-region image and the third image respectively to obtain a third image dataset. The third image dataset contains the noise-reduced original-region image and its augmented images, the edge image of the noise-reduced original-region image and its augmented images, the noise-reduced first image and its augmented images, the edge image of the noise-reduced first image and its augmented images, and the third image and its augmented images. Specifically, augmentation methods such as rotation, cropping and mirroring can be applied so that the number of images in each group of the third image dataset reaches a preset number, for example 20,000 images.
The halftone restoration network is trained with an ADAM optimizer, an initial learning rate of 0.0001 and 10 epochs in total, and the learning rate is multiplied by 0.9 after each epoch.
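A minimal sketch of this optimizer and learning-rate schedule, assuming PyTorch; the one-layer network below is only a placeholder for the halftone restoration network.

```python
import torch
import torch.nn as nn

restoration_net = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))              # placeholder network
optimizer = torch.optim.Adam(restoration_net.parameters(), lr=1e-4)         # initial LR 0.0001
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)    # x0.9 per epoch

for epoch in range(10):          # 10 epochs in total
    # ... one pass over the third image dataset (training step omitted) ...
    scheduler.step()             # learning rate is multiplied by 0.9 after each epoch
```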
According to the method for generating adversarial samples of tampered recaptured images described above, data augmentation is performed on the noise-reduced original-region image, the noise-reduced first image, the edge image of the noise-reduced original-region image and the third image respectively to obtain a third image dataset; the third image dataset is then input into the halftone restoration network to stitch the first image and the third image and obtain the second image. Augmenting these images yields a sufficiently large recaptured-image adversarial sample set.
Based on the third embodiment, a fourth embodiment of the method for generating adversarial samples of tampered recaptured images according to the present invention is proposed. In this embodiment, the halftone restoration network comprises a coarse reconstruction network, an edge contour network and a detail enhancement network, and step S303 includes:
Step S401, inputting the third image dataset into the coarse reconstruction network of the halftone restoration network to obtain a coarse reconstruction image set, and inputting the third image dataset into the edge contour network of the halftone restoration network to obtain an edge contour image set;
Step S402, inputting the edge contour image set and the coarse reconstruction image set into the detail enhancement network of the halftone restoration network to obtain the second image.
In this embodiment, after the third image dataset is obtained, it is input into the coarse reconstruction network of the halftone restoration network for training to obtain a coarse reconstruction image set, and at the same time it is input into the edge contour network of the halftone restoration network for training to obtain an edge contour image set.
The edge contour image set and the coarse reconstruction image set are then input into the detail enhancement network of the halftone restoration network to obtain the second image, so as to merge the images in the third image dataset and remove the halftone dots of each image in that dataset. The second image is thus closer to the original image, which increases the recapture-attack threat of the recaptured-image adversarial sample set.
In this embodiment, the print-scan process converts the tampered-region image I_1 into a halftone image I_h = H(I_1), where H(·) is the halftoning function. Since halftoning loses information, and in order to make the spliced, illegally tampered image closer to the tampered-region image and thus obtain a better recapture effect, a restoration function R(·) is assumed to be the inverse of H(·), and a restored image I_r is generated from I_h such that I_r = R(I_h) ≈ I_1. In this embodiment, R(·) is decomposed into two consecutive sub-problems R_1(·) and R_2(·) implemented with deep convolutional neural networks. In R_1(·), the network extracts intrinsic features of the scene, such as its overall shape, color and tone, and its edges and contours; the low-frequency scene reconstruction and the high-frequency scene features are passed to the next stage R_2(·), in which the missing details, such as fine textures, are enhanced to complete the reconstruction. Accordingly, I_r = R_2(R_1(I_h)).
Here R_1(·) includes a coarse reconstruction network Coarse Net, which coarsely reconstructs shape, color and tone, and an edge contour network Edge Net, which extracts edge and contour information.
Coarse Net, used for low-frequency scene reconstruction, adopts a U-Net structure. Each image in the input third image dataset is converted by the encoder into a tensor of lower resolution; the downsampling operations increase the number of channels to represent abstract information about the scene, while the decoder expands each image by upsampling, gradually increasing the resolution and reducing the number of channels. Finally an intermediate result image I_c (an image of the coarse reconstruction image set) is generated, which essentially recovers the overall shape structure, color and tone and removes the halftone cells.
To improve the quality of detail reconstruction, edge information is detected and an edge map I_e is formed by the convolutional neural network Edge Net, which estimates, given a halftone image, the conditional likelihood that each pixel belongs to an edge.
Then the two outputs of R_1(·), namely I_c and I_e, are concatenated along the channel axis into a single feature tensor, which is passed to the next stage R_2(·) together with the input halftone image, i.e.:
I_f(x, y) ∈ R^7 is the fused feature at position (x, y), whose components are I_c(x, y) ∈ R^3, I_e(x, y) ∈ R^1 and I_h(x, y) ∈ R^3.
R_2(·) consists of the detail enhancement network Details Net, a convolutional neural network that enhances the details of the intermediate result image I_c output by Coarse Net; its structure adopts a residual network, that is, I_r = R_2(R_1(I_h)) = I_c + F(I_f),
where F(·) is the residual function of Details Net and I_r, the final restored image, is the second image.
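The structure I_r = I_c + F(I_f) can be sketched as below (assuming PyTorch). The single convolutions only stand in for the real sub-networks: Coarse Net is a U-Net, Edge Net a convolutional edge estimator and Details Net a residual network.

```python
import torch
import torch.nn as nn

class HalftoneRestorationSketch(nn.Module):
    """Structural sketch of the three-stage halftone restoration network."""
    def __init__(self):
        super().__init__()
        self.coarse_net = nn.Conv2d(3, 3, 3, padding=1)                              # placeholder for Coarse Net
        self.edge_net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())   # placeholder for Edge Net
        self.details_net = nn.Conv2d(7, 3, 3, padding=1)                             # placeholder for Details Net

    def forward(self, i_h):                            # i_h: halftone input, 3 channels
        i_c = self.coarse_net(i_h)                     # coarse reconstruction (shape, color, tone)
        i_e = self.edge_net(i_h)                       # per-pixel edge likelihood map, 1 channel
        i_f = torch.cat([i_c, i_e, i_h], dim=1)        # fused 7-channel feature I_f
        return i_c + self.details_net(i_f)             # I_r = I_c + F(I_f)

# i_r = HalftoneRestorationSketch()(torch.rand(1, 3, 224, 224))
```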
In this embodiment, the loss function of the coarse reconstruction network Coarse Net is:
L_C = w_1 * L_1(I_d, I_c) + w_2 * L_T(I_d, I_c),
where L_1(I_d, I_c) = ||I_d - I_c||_1, the ground truth is I_d (the noise-reduced original-region image), I_c is an image of the coarse reconstruction image set, and w_1 and w_2 are weights with w_1 = 50 and w_2 = 1. In the texture term L_T, w_l is a regularization weight for layer l of the VGG network and the Gram matrix of the layer-l feature map is used; the VGG loss function here is a VGG-16 network with 13 convolutional layers and 5 pooling layers.
The loss function of the edge contour network Edge Net is a per-pixel loss whose ground truth is the edge image of the noise-reduced original-region image I_d, where N denotes the number of pixels in the edge image.
The loss function of the detail enhancement network Details Net also takes I_d as ground truth, with weights w_3 = 100, w_4 = 0.1 and w_5 = 0.5.
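Of the three losses, only L_C is given in full above; the sketch below (assuming PyTorch/torchvision, with the VGG layer choice as an assumption) shows how it could be implemented with a Gram-matrix texture term, while the Edge Net and Details Net losses are left as comments because their exact forms are not fully reproduced in the text.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def gram(features):
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)   # normalized Gram matrix

class CoarseNetLoss(nn.Module):
    """Sketch of L_C = w1 * ||I_d - I_c||_1 + w2 * L_T(I_d, I_c), with w1 = 50, w2 = 1."""
    def __init__(self, w1=50.0, w2=1.0):
        super().__init__()
        self.w1, self.w2 = w1, w2
        self.vgg = vgg16(pretrained=True).features[:16].eval()   # assumed VGG-16 layer depth
        for p in self.vgg.parameters():
            p.requires_grad = False

    def forward(self, i_c, i_d):
        l1 = torch.mean(torch.abs(i_d - i_c))
        l_t = torch.mean((gram(self.vgg(i_c)) - gram(self.vgg(i_d))) ** 2)   # texture term on VGG features
        return self.w1 * l1 + self.w2 * l_t

# The Edge Net loss is a per-pixel term over the N pixels of the edge map, and the Details Net
# loss combines terms weighted by w3 = 100, w4 = 0.1, w5 = 0.5; they are omitted here.
```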
According to the method for generating adversarial samples of tampered recaptured images described above, the third image dataset is input into the coarse reconstruction network of the halftone restoration network to obtain a coarse reconstruction image set and into the edge contour network of the halftone restoration network to obtain an edge contour image set; the edge contour image set and the coarse reconstruction image set are then input into the detail enhancement network of the halftone restoration network to obtain the second image. By removing the halftone dots, the halftone restoration network makes the second image closer to the original document.
To verify whether the recaptured-image adversarial sample set provided by this application poses a threat to recaptured-document detection algorithms, a network verification experiment was carried out.
Specifically, DenseNet121, DenseNet169, DenseNet201, ResNet50 and ResNeXt101, which have high detection success rates, were selected as the pre-trained networks for detecting the recaptured-image adversarial sample set.
The dataset D1 used to train the recapture detection algorithms contains 84 first-capture document images and 588 recaptured images, and the dataset D2 contains 48 first-capture document images and 384 recaptured images; the image quality of D2 is higher than that of D1. The pre-trained networks were fine-tuned, and all pictures in D1 and D2 were cut into image blocks with a resolution of 224 x 224 to fit the input size of the networks. For the training parameters, the batch size is 64, 20 epochs are trained, the learning rate is 0.00001, and the optimizer is Adam.
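A hedged sketch of how one of these detectors could be fine-tuned with the stated settings (224 x 224 patches, batch size 64, 20 epochs, Adam with learning rate 0.00001), assuming PyTorch/torchvision; the patch loader is hypothetical.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(pretrained=True)                              # one of the pre-trained detectors
model.classifier = nn.Linear(model.classifier.in_features, 1)     # 0 = recaptured, 1 = legitimate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
criterion = nn.BCEWithLogitsLoss()

# for epoch in range(20):                          # 20 epochs, batch size 64
#     for patches, labels in patch_loader:         # hypothetical DataLoader of 224 x 224 patches
#         optimizer.zero_grad()
#         loss = criterion(model(patches).squeeze(1), labels.float())
#         loss.backward()
#         optimizer.step()
```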
When recapture detection is performed on an illegally tampered recaptured image, the classifier finally outputs a prediction probability. In this classification problem, 0 represents a recaptured image, 1 represents a legitimate scanned or printed image, and the threshold is set to 0.5. To verify the robustness of the recapture detection algorithms, the False Acceptance Rate (FAR) is adopted as the evaluation index of the experimental results. FAR is the proportion of false acceptances, i.e., the proportion of images in the illegally tampered recaptured dataset that the recapture detection algorithm identifies as legitimate printed or scanned images: FAR = NFA / N x 100%.
Here NFA denotes the number of false acceptances and N the total number of detections. The specific verification results are given in Table 1.
Table 1. Verification results of the recaptured-image adversarial sample set
As can be seen from Table 1, for the general-purpose deep networks trained on the recapture datasets, in the best attack 9 of the 10 adversarial samples (tampered recaptured document images) produced by this application attack successfully. Moreover, the rate at which deep networks trained on the recapture datasets successfully detect this application's recaptured-image adversarial sample set is very low. Therefore, if the recaptured-image adversarial sample set of this application is used to train a recapture detection network, the detection success rate of that network will be greatly improved.
An embodiment of the invention also provides a device for generating countermeasure samples of tampered flipped images. Referring to fig. 3, the device for generating countermeasure samples of tampered flipped images includes:
the obtaining module 100 is configured to obtain a tampered region image, and process the image to be processed based on a tampered region corresponding to the tampered region image in the image to be processed, so as to obtain a first image after the tampered region is removed from the image to be processed;
a training module 200, configured to process the tampered region image, the first image, and the original region image based on a halftone restoration network, so as to obtain a second image;
a determining module 300 is configured to determine, based on the second image, a flipped image against the sample set.
Further, training module 200 is also configured to:
determining a trained preset channel model based on the original region image and the first image, inputting the tampered region image into the trained preset channel model for model training so as to reduce color distortion and pixel distortion of the tampered region image and obtain a third image;
inputting the first image and the third image into a halftone restoration network to perform graphic stitching on the first image and the third image to obtain a second image.
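The stitching step can be pictured as a per-pixel composite of the two inputs over the tampered region; in the patent the stitching is performed inside the halftone restoration network, so the mask-based paste below, including the function name stitch_region and the explicit binary mask argument, is only an assumed simplification.

    import numpy as np

    def stitch_region(first_image: np.ndarray, third_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # first_image: image to be processed with the tampered region removed (H x W x 3)
        # third_image: channel-corrected tampered region rendered on the same canvas (H x W x 3)
        # mask: binary map of the tampered region (H x W), 1 inside the region
        m = mask.astype(first_image.dtype)[..., None]  # broadcast over the colour channels
        return first_image * (1.0 - m) + third_image * m

    # Usage with random data, just to show the shapes involved.
    h, w = 224, 224
    composite = stitch_region(np.random.rand(h, w, 3), np.random.rand(h, w, 3),
                              np.zeros((h, w), dtype=np.uint8))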
Further, training module 200 is also configured to:
carrying out noise reduction treatment on the original region image and the first image to obtain a noise-reduced original region image and a noise-reduced first image;
inputting the original region image, the first image, the original region image after noise reduction and the first image after noise reduction into a preset channel model for model training to obtain a trained preset channel model;
and inputting the tampered region image into a trained preset channel model to obtain a third image.
Further, training module 200 is also configured to:
performing data enhancement operation on the original region image, the first image, the original region image after noise reduction and the first image after noise reduction respectively to obtain a first image data set;
and inputting the first image data set into a preset channel model for model training so as to obtain a trained preset channel model.
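The patent does not list the concrete data enhancement operations, so the torchvision pipeline below (random flips, a small rotation and a random crop) is only one plausible way to expand the four inputs into a first image data set.

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomVerticalFlip(),
        transforms.RandomRotation(10),
        transforms.RandomCrop(224, pad_if_needed=True),
        transforms.ToTensor(),
    ])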
Further, training module 200 is also configured to:
acquiring a first image after noise reduction and an edge image of the first image after noise reduction based on the first image, and acquiring an original area image after noise reduction and an edge image of the original area image after noise reduction;
performing data enhancement operation on the original area image after noise reduction, the first image after noise reduction, the edge image of the original area image after noise reduction and the third image respectively to obtain a third image data set;
inputting the third image data set into a halftone restoration network to perform graphic stitching on the first image and the third image to obtain a second image.
Further, training module 200 is also configured to:
inputting the third image dataset into a coarse reconstruction network of the halftone restoration network to obtain a coarse reconstruction image set, and inputting the third image dataset into an edge profile network of the halftone restoration network to obtain an edge profile image set;
the edge contour image set and the coarse reconstruction image set are input to a detail enhancement network of the halftone restoration network to obtain the second image.
Further, the determining module 300 is further configured to:
and carrying out scanning and flipping operation on each image in the second image to obtain the flipped image countermeasure sample set.
In addition, an embodiment of the invention also provides a storage medium, wherein the storage medium stores a countermeasure sample generation program for tampered flipped images, and the countermeasure sample generation program for tampered flipped images, when executed by a processor, implements the steps of the countermeasure sample generation method of a tampered flipped image described above.
For the method implemented when the countermeasure sample generation program for tampered flipped images running on the processor is executed, reference may be made to the embodiments of the countermeasure sample generation method of a tampered flipped image of the present invention, which are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (7)

1. A method for generating a countermeasure sample of a tampered flipped image, the method comprising the steps of:
acquiring a tampered region image, and processing the image to be processed based on a tampered region corresponding to the tampered region image in the image to be processed so as to acquire a first image with the tampered region removed from the image to be processed and an original region image corresponding to the tampered region;
processing the tampered region image, the first image and the original region image based on a halftone restoration network to obtain a second image;
determining a flipped image against a sample set based on the second image;
the halftone restoration network includes: a coarse reconstruction network, an edge contour network, and a detail enhancement network; the step of processing the tampered region image, the first image, and the original region image based on the halftone restoration network to obtain a second image includes:
determining a trained preset channel model based on the original region image and the first image, inputting the tampered region image into the trained preset channel model for model training so as to reduce color distortion and pixel distortion of the tampered region image and obtain a third image;
acquiring a first image after noise reduction and an edge image of the first image after noise reduction based on the first image, and acquiring an original area image after noise reduction and an edge image of the original area image after noise reduction;
performing data enhancement operation on the original area image after noise reduction, the first image after noise reduction, the edge image of the original area image after noise reduction and the third image respectively to obtain a third image data set;
inputting the third image dataset into a coarse reconstruction network of the halftone restoration network to obtain a coarse reconstruction image set, and inputting the third image dataset into an edge profile network of the halftone restoration network to obtain an edge profile image set;
inputting the edge contour image set and the coarse reconstructed image set into a detail enhancement network of the halftone restoration network to obtain the second image;
The loss function of the coarse reconstruction network is:
L_C = w_1 * L_1(I_d, I_c) + w_2 * L_T(I_d, I_c);
wherein
L_1(I_d, I_c) = ‖I_d - I_c‖_1;
wherein the ground-truth is I_d, I_d is the noise-reduced original region image, I_c is an image in the coarse reconstruction image set, and w_1, w_2 are weights; w_l is the regularization weight of layer l of the VGG network, Gram(·) is the Gram matrix of the layer-l feature map, and VGG(·) is the VGG loss function;
the loss function of the edge contour network is:
wherein the ground-truth is the edge image of the noise-reduced original region image I_d, and N represents the number of pixels in the edge image;
the loss function of the detail enhancement network Details Net is:
wherein the ground-truth is I_d, and w_3, w_4, w_5 are weights.
2. The method for generating a countermeasure sample of a tampered flipped image according to claim 1, wherein the step of determining a trained preset channel model based on the original region image and the first image, and inputting the tampered region image into the trained preset channel model for model training so as to reduce color distortion and pixel distortion of the tampered region image and obtain a third image comprises:
carrying out noise reduction treatment on the original region image and the first image to obtain a noise-reduced original region image and a noise-reduced first image;
inputting the original region image, the first image, the original region image after noise reduction and the first image after noise reduction into a preset channel model for model training to obtain a trained preset channel model;
and inputting the tampered region image into a trained preset channel model to obtain a third image.
3. The method for generating a countermeasure sample of a tampered flipped image according to claim 2, wherein the step of inputting the original region image, the first image, the noise-reduced original region image, and the noise-reduced first image into a preset channel model for model training to obtain a trained preset channel model comprises:
performing data enhancement operation on the original region image, the first image, the original region image after noise reduction and the first image after noise reduction respectively to obtain a first image data set;
and inputting the first image data set into a preset channel model for model training so as to obtain a trained preset channel model.
4. The method for generating a countermeasure sample of a tampered flipped image according to claim 3, wherein the preset channel model comprises a U-net network; an L1 regularization term is determined based on the original region image and the first image and on the noise-reduced original region image and the noise-reduced first image; an L2 regularization term is determined based on the VGG loss function corresponding to the original region image and the first image and the VGG loss function corresponding to the noise-reduced original region image and the noise-reduced first image; and the loss function of the preset channel model is determined based on the L1 regularization and the L2 regularization.
5. The method for generating a countermeasure sample of a tampered flipped image according to any one of claims 1 to 4, wherein the step of determining a flipped-image countermeasure sample set based on the second image includes:
and carrying out scanning and flipping operation on each image in the second image to obtain the flipped image countermeasure sample set.
6. A device for generating countermeasure samples of tampered flipped images, characterized in that the device comprises: a memory, a processor, and a countermeasure sample generation program for tampered flipped images stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the method for generating a countermeasure sample of a tampered flipped image according to any one of claims 1 to 5.
7. A storage medium having stored thereon a countermeasure sample generation program for tampered flipped images which, when executed by a processor, implements the steps of the method for generating a countermeasure sample of a tampered flipped image according to any one of claims 1 to 5.
CN202010920099.2A 2020-09-03 2020-09-03 Method, apparatus and storage medium for generating countersamples for falsifying a flip image Active CN112116565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010920099.2A CN112116565B (en) 2020-09-03 2020-09-03 Method, apparatus and storage medium for generating countersamples for falsifying a flip image

Publications (2)

Publication Number Publication Date
CN112116565A (en) 2020-12-22
CN112116565B (en) 2023-12-05

Family

ID=73801769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010920099.2A Active CN112116565B (en) 2020-09-03 2020-09-03 Method, apparatus and storage medium for generating countersamples for falsifying a flip image

Country Status (1)

Country Link
CN (1) CN112116565B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609900B (en) * 2021-06-25 2023-09-12 南京信息工程大学 Face positioning method and device for local generation, computer equipment and storage medium
CN113705620B (en) * 2021-08-04 2023-08-15 百度在线网络技术(北京)有限公司 Training method and device for image display model, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537654A (en) * 2014-12-19 2015-04-22 大连理工大学 Printed image tampering forensic methods based on half-tone dot location distortion
CN109543674A (en) * 2018-10-19 2019-03-29 天津大学 A kind of image copy detection method based on generation confrontation network
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN110414670A (en) * 2019-07-03 2019-11-05 南京信息工程大学 A kind of image mosaic tampering location method based on full convolutional neural networks
CN110728629A (en) * 2019-09-03 2020-01-24 天津大学 Image set enhancement method for resisting attack
CN110992238A (en) * 2019-12-06 2020-04-10 上海电力大学 Digital image tampering blind detection method based on dual-channel network
CN111080628A (en) * 2019-12-20 2020-04-28 湖南大学 Image tampering detection method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An image inpainting algorithm based on generative adversarial networks; Li Tiancheng; He Jia; Computer Applications and Software (Issue 12); pp. 201-206 *

Also Published As

Publication number Publication date
CN112116565A (en) 2020-12-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant